Hive user mailing list: hive jobs pending so long


Thread:
- Neil Guo 2012-12-19, 16:27
- Nitin Pawar 2012-12-19, 17:03
- Neil Guo 2012-12-19, 17:27

Re: hive jobs pending so long
I hope the Hadoop replication factor is set to 1. Apart from that, can you
check how much disk space is free on the datanode?

How are the memory stats on the JobTracker (JT) and NameNode (NN)?
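The checks above can be run with commands along these lines. This is a sketch for a Hadoop 0.20.x-era cluster (the era of the versions mentioned later in the thread); the warehouse file path is a hypothetical example and should be replaced with a real file on your cluster.

```shell
# Cluster-wide capacity plus per-datanode used/remaining disk space:
hadoop dfsadmin -report

# Actual replication factor of an existing file (%r prints replication).
# The path below is an example only; point it at a real HDFS file.
hadoop fs -stat %r /user/hive/warehouse/neiltest/000000_0

# Configured default replication, read from hdfs-site.xml:
grep -A1 'dfs.replication' "$HADOOP_HOME/conf/hdfs-site.xml"
```

With a single datanode, `dfs.replication` higher than 1 would leave every block under-replicated, which is worth ruling out before looking at the JobTracker.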

On Wed, Dec 19, 2012 at 10:57 PM, Neil Guo <[EMAIL PROTECTED]> wrote:

> No, actually there's only one datanode.
>
> On Thu, Dec 20, 2012 at 1:03 AM, Nitin Pawar <[EMAIL PROTECTED]>wrote:
>
>> Did you retire/remove few datanodes from your cluster in hurry ?
>>
>>
>> On Wed, Dec 19, 2012 at 9:57 PM, Neil Guo <[EMAIL PROTECTED]> wrote:
>>
>>> hi,
>>>
>>> My Hive jobs became very slow starting yesterday: each job now sits
>>> pending for about 5 minutes, where it used to take only about 40 seconds.
>>> I didn't modify any configuration, and there are no other jobs on the
>>> Hadoop cluster.
>>>
>>>
>>> My environment:
>>> hadoop-0.20.203.0
>>> hive-0.8.1
>>>
>>> [neil@host logs@master]$ hive -e 'select count(*) from neiltest;'
>>>
>>> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated.
>>> Please use org.apache.hadoop.log.metrics.EventCounter in all the
>>> log4j.properties files.
>>> Logging initialized using configuration in
>>> jar:file:/opt/hadoop/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties
>>> Hive history file=/tmp/neil/hive_job_log_neil_201212192350_640804332.txt
>>> Total MapReduce jobs = 1
>>> Launching Job 1 out of 1
>>> Number of reduce tasks determined at compile time: 1
>>> In order to change the average load for a reducer (in bytes):
>>>   set hive.exec.reducers.bytes.per.reducer=<number>
>>> In order to limit the maximum number of reducers:
>>>   set hive.exec.reducers.max=<number>
>>> In order to set a constant number of reducers:
>>>   set mapred.reduce.tasks=<number>
>>> Starting Job = job_201212191724_0012, Tracking URL =
>>> http://localhost:50030/jobdetails.jsp?jobid=job_201212191724_0012
>>> Kill Command = /opt/hadoop/hadoop/bin/../bin/hadoop job
>>>  -Dmapred.job.tracker=localhost:9001 -kill job_201212191724_0012
>>> Hadoop job information for Stage-1: number of mappers: 1; number of
>>> reducers: 1
>>> 2012-12-19 23:53:21,894 Stage-1 map = 0%,  reduce = 0%
>>> 2012-12-19 23:53:27,940 Stage-1 map = 100%,  reduce = 0%
>>> 2012-12-19 23:53:37,001 Stage-1 map = 100%,  reduce = 33%
>>> 2012-12-19 23:53:40,026 Stage-1 map = 100%,  reduce = 100%
>>> Ended Job = job_201212191724_0012
>>> MapReduce Jobs Launched:
>>> Job 0: Map: 1  Reduce: 1   HDFS Read: 22131 HDFS Write: 4 SUCCESS
>>> Total MapReduce CPU Time Spent: 0 msec
>>> OK
>>> 100
>>> Time taken: 305.199 seconds
>>>
>>>
>>> While the job was running, the datanode log showed the following:
>>>
>>> 2012-12-20 00:00:18,408 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_-5694413837496700253_48905
>>> 2012-12-20 00:01:11,408 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_1151904922466051083_48776
>>> 2012-12-20 00:01:51,855 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 39205
>>> blocks took 7578 msec to generate and 124 msecs for RPC and NN processing
>>> 2012-12-20 00:02:04,608 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_-7641590199699424252_48775
>>> 2012-12-20 00:02:57,607 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_-7818849592980221590_48773
>>> 2012-12-20 00:03:50,608 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_9074790179047774257_48905
>>> 2012-12-20 00:04:43,608 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_-3172554843058932436_17003
>>> 2012-12-20 00:05:36,608 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_8421435495571059078_48776
>>> 2012-12-20 00:06:29,808 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
>>> succeeded for blk_-4414438232543819004_48775
>>> 2012-12-20 00:07:22,808 INFO
Nitin Pawar
Later replies in this thread:
- Neil Guo 2012-12-19, 18:15
- Nitin Pawar 2012-12-19, 18:58
- Neil Guo 2012-12-20, 04:40
- Neil Guo 2012-12-20, 07:21
- Mark Grover 2012-12-20, 17:29
- Neil Guo 2012-12-21, 06:22
- Neil Guo 2012-12-21, 07:53
- Neil Guo 2012-12-20, 10:10