You may want to go to the JobTracker web interface and look at the
failed task logs for more information.
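Something like the following may help, assuming a default Hadoop 1.x layout (the log path and the attempt IDs below are guesses; adjust to your install):

```shell
# On the TaskTracker node that ran the failed attempt, the per-attempt
# logs usually live under the userlogs directory (default path shown;
# yours may differ):
ls /var/log/hadoop/userlogs/job_201212161709_0002/
cat /var/log/hadoop/userlogs/job_201212161709_0002/attempt_*/syslog

# Re-running the query with Hive's root logger pointed at the console
# can also surface the underlying exception directly:
hive --hiveconf hive.root.logger=DEBUG,console \
  -e "select count(1) from u_data where userid=1;"
```

The syslog file for the failed attempt will typically contain the full stack trace rather than the bare NullPointerException the client prints.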
On Sun, Dec 16, 2012 at 1:36 PM, Sékine Coulibaly <[EMAIL PROTECTED]> wrote:
> Hi there,
> I loaded data from the MovieLens dataset into Hive, into a table named
> u_data, and I would like to count the total number of rows in that table.
> Although the MapReduce job starts, it ends with a NullPointerException.
> I must admit I don't know what to do next or how to investigate this issue.
> The standard output is as follows:
> hive> describe u_data;
> userid int
> movieid int
> rating int
> unixtime string
> Time taken: 3.421 seconds
> hive> select count(1) from u_data where userid=1;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapred.reduce.tasks=<number>
> Starting Job = job_201212161709_0002, Tracking URL =
> http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201212161709_0002
> Kill Command = /usr/lib/hadoop/bin/hadoop job
> -Dmapred.job.tracker=localhost.localdomain:8021 -kill job_201212161709_0002
> Hadoop job information for Stage-1: number of mappers: 1; number of
> reducers: 1
> 2012-12-16 18:07:55,502 Stage-1 map = 0%, reduce = 0%
> 2012-12-16 18:08:15,627 Stage-1 map = 100%, reduce = 100%
> Ended Job = job_201212161709_0002 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_201212161709_0002_m_000002 (and more) from job
> Exception in thread "Thread-23" java.lang.NullPointerException
> at java.lang.Thread.run(Thread.java:662)
> FAILED: Execution Error, return code 2 from
> MapReduce Jobs Launched:
> Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> Thank you !