Re: dynamic partition import
Can you check that at least one datanode is running and is not part of the
blacklisted nodes?
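
A quick way to check, as a rough sketch (assuming a 1.x-era hadoop CLI on the
path; the exclude-file path below is only illustrative, i.e. whatever
dfs.hosts.exclude points to in hdfs-site.xml):

  # show live/dead/decommissioned datanodes and remaining capacity,
  # as seen by the namenode
  hadoop dfsadmin -report

  # check overall HDFS health and block replication
  hadoop fsck /

  # if dfs.hosts.exclude is configured, make sure none of the datanodes
  # you expect to be live is listed in that file (path is illustrative)
  cat /etc/hadoop/conf/dfs.exclude

If dfsadmin reports no live datanodes, the "could only be replicated to 0
nodes, instead of 1" error below is exactly what you would expect to see.
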
On Tue, May 29, 2012 at 3:01 PM, Nimra Choudhary <[EMAIL PROTECTED]> wrote:

>
> We are using dynamic partitioning and facing a similar problem. Below is
> the jobtracker error log. We have a Hadoop cluster of 6 nodes, 1.16 TB
> capacity with over 700 GB still free.
>
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hive-nimrac/hive_2012-05-29_10-32-06_332_4238693577104368640/_tmp.-ext-10000/createddttm=2011-04-24/_tmp.000001_2 could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1421)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
>         at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
>
>         at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:576)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
>         at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
>         at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
>         at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>         at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
>         ...
>
> Is there any workaround or fix for this?
>
> Regards,
> Nimra
>

--
Nitin Pawar