Hive >> mail # user >> Dynamic partition raising exception while creation


Hamza Asad 2013-06-17, 06:14
Re: Dynamic partition raising exception while creation
In any search engine you like, search for "could only be replicated to 0
nodes, instead of 1".
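
That message usually means the NameNode could not find any DataNode willing to accept a new block, most often because all DataNodes are down, unregistered, or out of disk space. A minimal way to check, using Hadoop 1.x-era commands to match the stack trace below (the local data directory path is an assumption; adjust to your dfs.data.dir):

```shell
# Ask the NameNode how many DataNodes are alive and how much space
# each reports. Look for "Datanodes available: 0" or "DFS Remaining: 0".
hadoop dfsadmin -report

# On each DataNode host, confirm the local disks backing dfs.data.dir
# are not full (the path here is an assumption).
df -h /data/dfs
```

If the report shows zero live DataNodes or no remaining DFS space, the Hive/Shark job will keep failing regardless of the query.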

On Mon, Jun 17, 2013 at 11:44 AM, Hamza Asad <[EMAIL PROTECTED]> wrote:

> I'm trying to create a partitioned table (dynamically) from an old
> non-partitioned table. The query is as follows:
>
> INSERT OVERWRITE TABLE new_events_details PARTITION (event_date) SELECT
> id, event_id, user_id, intval_1, intval_2, intval_3, intval_4, intval_5,
> intval_6, intval_7, intval_8, intval_9, intval_10, intval_11, intval_12,
> intval_13, intval_14, intval_15, intval_16, intval_17, intval_18,
> intval_19, intval_20, intval_21, intval_22, intval_23, intval_24,
> intval_25, intval_26, to_date(event_date) FROM events_details;
>
> After waiting for more than 2 hours, the following exception was raised
> and execution stopped:
>
> spark.SparkException: Job failed: ResultTask(0, 1063) failed:
> ExceptionFailure(org.apache.hadoop.hive.ql.metadata.HiveException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /tmp/hive-hadoop/hive_2013-06-16_15-08-26_985_3160022698353542666/_task_tmp.-ext-10000/event_date=2013-02-22/_tmp.001063_0
> could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> )
>     at
> spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:529)
>     at
> spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:527)
>     at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:527)
>     at
> spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:497)
>     at spark.scheduler.DAGScheduler.run(DAGScheduler.scala:269)
>     at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:90)
> FAILED: Execution Error, return code -101 from shark.execution.SparkTask
> Why is it giving me this exception?
>
> --
> *Muhammad Hamza Asad*
>

--
Nitin Pawar
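
For reference, a dynamic-partition insert like the one quoted above typically needs these Hive session settings; the numeric limits shown are illustrative values, not recommendations (property names as in the Hive configuration documentation):

```sql
-- Enable dynamic partitioning; "nonstrict" mode allows every
-- partition column to be determined dynamically.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Raise the caps if the insert creates many event_date partitions
-- (the values here are illustrative).
SET hive.exec.max.dynamic.partitions=10000;
SET hive.exec.max.dynamic.partitions.pernode=1000;
```

Note these settings govern Hive-level partition limits only; the "replicated to 0 nodes" failure itself is an HDFS capacity/availability problem, not a partitioning misconfiguration.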