Pig, mail # user - Re: YARN hangs while computing pig?


Mark Grover 2012-11-29, 18:11
Re: YARN hangs while computing pig?
Johnny Kowalski 2012-11-30, 08:43
Please do not forward it. I suppose it might be a Cloudera Manager problem
with running MapReduce?

I've installed CDH4.1 with YARN separately, and I've made a virtual machine
with Cloudera Manager. The MapReduce testing example from
"Running an example application with YARN" at
https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode#InstallingCDH4onaSingleLinuxNodeinPseudo-distributedMode-ComponentsThatRequireAdditionalConfiguration

simply doesn't work on the machine configured by Cloudera Manager, but works on
the pure CDH4 installation...

By "not working" I mean it stops at the map phase.
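The smoke test in question boils down to something like the following (the
examples-jar path below is a guess at the usual CDH4 package location, not
taken from the docs, so adjust it to your install):

```shell
# Roughly the YARN example job from the Cloudera single-node guide.
# EXAMPLES_JAR is an assumed path -- verify it on your machine.
EXAMPLES_JAR=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
CMD="hadoop jar $EXAMPLES_JAR pi 2 100"
# Printed rather than executed here; run it as a user that has an HDFS home dir,
# e.g. prefixed with: sudo -u hdfs
echo "$CMD"
```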
On Thursday, November 29, 2012 at 19:11:46 UTC+1, Mark Grover
wrote:
>
> Redirecting to Apache pig user list
>
> On Thu, Nov 29, 2012 at 1:01 AM, Johnny Kowalski <[EMAIL PROTECTED]> wrote:
>
>> Another hint? Is this some permission issue?
>>
>> MY EXAMPLE:
>> in = LOAD '/user/myUser/aTest' USING PigStorage();
>> DUMP in;
>>
>> When I try to run the above Pig script in the grunt shell with "pig -x local"
>> it successfully prints the output.
>>
>> And when I do it in the MapReduce-mode grunt shell, it hangs on this line:
>>
>> 2012-11-28 15:24:27,709 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
>>
>>
>> On Wednesday, November 28, 2012 at 15:44:22 UTC+1, Johnny Kowalski
>> wrote:
>>
>>> Hi, after configuring the whole stack I've got another issue. My Pig jobs
>>> hang.
>>> I've created a completely new user and added a directory for him:
>>> sudo -u hdfs hadoop fs -mkdir /user/newUser
>>> sudo -u hdfs hadoop fs -chown newUser:newUser /user/newUser
>>>
>>> and wanted to run a Pig script that runs fine on another YARN-configured
>>> cluster. And got something like this:
>>>
>>>
>>> PIG CONSOLE OUTPUT
>>>
>>> 2012-11-28 15:24:26,759 [Thread-4] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
>>> 2012-11-28 15:24:26,760 [Thread-4] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
>>> 2012-11-28 15:24:26,783 [Thread-4] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
>>> 2012-11-28 15:24:26,872 [Thread-4] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
>>> 2012-11-28 15:24:26,889 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
>>> 2012-11-28 15:24:26,890 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - mapreduce.job.counters.limit is deprecated. Instead, use mapreduce.job.counters.max
>>> 2012-11-28 15:24:26,891 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
>>> 2012-11-28 15:24:26,892 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - dfs.https.address is deprecated. Instead, use dfs.namenode.https-address
>>> 2012-11-28 15:24:26,893 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
>>> 2012-11-28 15:24:26,894 [Thread-4] WARN  org.apache.hadoop.conf.Configuration - mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
>>> 2012-11-28 15:24:27,563 [Thread-4] INFO  org.apache.hadoop.mapred.ResourceMgrDelegate - Submitted application application_1354100898821_0011 to ResourceManager at userver/192.168.56.101:8032
>>> 2012-11-28 15:24:27,629 [Thread-4] INFO  org.apache.hadoop.mapreduce.Job - The url to track the job: http://userver:8088/proxy/application_1354100898821_0011/
>>> 2012-11-28 15:24:27,709 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
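
When a submitted job sits at "0% complete" after being accepted by the
ResourceManager, the application ID in the log is the handle for digging
further. A minimal sketch, using the exact "Submitted application" line quoted
above (the yarn commands in the comments assume a working client configuration
and are a suggestion, not something the thread confirms was run):

```shell
# Pull the YARN application ID out of the Pig client log line quoted above.
log='2012-11-28 15:24:27,563 [Thread-4] INFO  org.apache.hadoop.mapred.ResourceMgrDelegate - Submitted application application_1354100898821_0011 to ResourceManager at userver/192.168.56.101:8032'
app_id=$(echo "$log" | grep -o 'application_[0-9]*_[0-9]*' | head -n1)
echo "$app_id"
# With the ID in hand, one would typically check:
#   yarn node -list                      # are any NodeManagers registered at all?
#   yarn application -status "$app_id"   # is the app ACCEPTED but never RUNNING?
# An app stuck in ACCEPTED usually means no NodeManager has enough free
# memory/vcores to start the ApplicationMaster container.
```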