HDFS >> mail # user >> InvalidProtocolBufferException while submitting crunch job to cluster


Narlin M 2013-08-30, 20:04
Harsh J 2013-08-31, 16:12
Narlin M 2013-09-03, 13:36
Re: InvalidProtocolBufferException while submitting crunch job to cluster
The <server_address> mentioned in my original post does not point to
bdatadev. I should have noted this earlier; sorry I missed that.
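One way to confirm the symptom outside Hadoop is a bare InetAddress lookup, which is the same name-resolution step that fails inside SecurityUtil.buildTokenService in the trace quoted below. A minimal sketch — the HostCheck class is illustrative, not part of this thread:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    // Returns true if the name resolves via DNS or /etc/hosts,
    // false when lookup fails with UnknownHostException -- the same
    // exception surfaced in the Crunch job submission trace.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "bdatadev";
        System.out.println(host + " -> "
                + (resolves(host) ? "resolves" : "does not resolve"));
    }
}
```

Running it on the client machine with `bdatadev` as the argument should print "does not resolve", matching the UnknownHostException in the stack trace.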

On 8/31/13 8:32 AM, "Narlin M" <[EMAIL PROTECTED]> wrote:

>I would, but bdatadev is not one of my servers; it seems like a random
>host name. I can't figure out how or where this name got generated. That's
>what's puzzling me.
>
>On 8/31/13 5:43 AM, "Shekhar Sharma" <[EMAIL PROTECTED]> wrote:
>
>>: java.net.UnknownHostException: bdatadev
>>
>>
>>edit your /etc/hosts file
>>Regards,
>>Som Shekhar Sharma
>>+91-8197243810
>>
>>
>>On Sat, Aug 31, 2013 at 2:05 AM, Narlin M <[EMAIL PROTECTED]> wrote:
>>> Looks like I was pointing to incorrect ports. After correcting the port
>>> numbers,
>>>
>>> conf.set("fs.defaultFS", "hdfs://<server_address>:8020");
>>> conf.set("mapred.job.tracker", "<server_address>:8021");
>>>
>>> I am now getting the following exception:
>>>
>>> 2880 [Thread-15] INFO
>>> org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob -
>>> java.lang.IllegalArgumentException: java.net.UnknownHostException: bdatadev
>>> at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
>>> at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
>>> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:389)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:356)
>>> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:124)
>>> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2218)
>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
>>> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2252)
>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2234)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:300)
>>> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
>>> at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:902)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:305)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:180)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:209)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:100)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:51)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:75)
>>> at java.lang.Thread.run(Thread.java:680)
>>> Caused by: java.net.UnknownHostException: bdatadev
>>> ... 27 more
>>>
>>> However, nowhere in my code is a host named "bdatadev" mentioned, and I
>>> cannot ping this host.
>>>
>>> Thanks for the help.
>>>
>>>
>>> On Fri, Aug 30, 2013 at 3:04 PM, Narlin M <[EMAIL PROTECTED]> wrote:
>>>>
>>>> I am getting the following exception while trying to submit a crunch
>>>> pipeline job to a remote hadoop cluster:
>>>>
>>>> Exception in thread "main" java.lang.RuntimeException: Cannot create
>>>> job output directory /tmp/crunch-324987940
>>>> at
>>>>
>>>
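Shekhar's /etc/hosts suggestion amounts to adding an entry like the one below on the client machine. This is a sketch only: the IP address is a placeholder, and the underlying question of where the remote cluster picks up the name bdatadev still needs answering before hard-coding a mapping.

```
# /etc/hosts -- placeholder entry; replace 10.0.0.5 with the address
# of the node that actually answers to the name bdatadev.
10.0.0.5    bdatadev
```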