Hadoop >> mail # user >> DFSClient error


Re: DFSClient error
After all the jobs fail I can't run anything. Once I restart the cluster I
am able to run other jobs with no problems; hadoop fs and other I/O-intensive
jobs run just fine.
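[The sanity checks suggested in the reply quoted below can be scripted as a quick smoke test. The paths, the HDFS test locations, and the examples-jar name here are assumptions that vary by Hadoop version; adjust for your install.]

```shell
# Quick HDFS / MapReduce smoke test (illustrative; paths and jar
# names are version-dependent assumptions).
hadoop fs -ls /                              # basic namenode round-trip
hadoop fs -put /etc/hosts /tmp/smoke.in      # write a small file to HDFS
hadoop fs -get /tmp/smoke.in /tmp/smoke.out  # read it back through a datanode
hadoop jar $HADOOP_HOME/hadoop-*examples*.jar wordcount /tmp/smoke.in /tmp/smoke.wc
```

If the put/get pair works but the wordcount job fails, the problem is more likely on the MapReduce side than in HDFS itself.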

On Fri, Apr 27, 2012 at 3:12 PM, John George <[EMAIL PROTECTED]> wrote:

> Can you run a regular 'hadoop fs' (put, ls, or get) command?
> If yes, how about a wordcount example?
> '<path>/hadoop jar <path>hadoop-*examples*.jar wordcount input output'
>
>
> -----Original Message-----
> From: Mohit Anchlia <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Fri, 27 Apr 2012 14:36:49 -0700
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: DFSClient error
>
> >I even tried reducing the number of jobs, but it didn't help. This is what I see:
> >
> >datanode logs:
> >
> >Initializing secure datanode resources
> >Successfully obtained privileged resources (streaming port ServerSocket[addr=/0.0.0.0,localport=50010]) (http listener port sun.nio.ch.ServerSocketChannelImpl[/0.0.0.0:50075])
> >Starting regular datanode initialization
> >26/04/2012 17:06:51 9858 jsvc.exec error: Service exit with a return value of 143
> >
> >userlogs:
> >
> >2012-04-26 19:35:22,801 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library is available
> >2012-04-26 19:35:22,801 INFO org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library loaded
> >2012-04-26 19:35:22,808 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
> >2012-04-26 19:35:22,903 INFO org.apache.hadoop.hdfs.DFSClient: Failed to connect to /125.18.62.197:50010, add to deadNodes and continue
> >java.io.EOFException
> >        at java.io.DataInputStream.readShort(DataInputStream.java:298)
> >        at org.apache.hadoop.hdfs.DFSClient$RemoteBlockReader.newBlockReader(DFSClient.java:1664)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockReader(DFSClient.java:2383)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2056)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2170)
> >        at java.io.DataInputStream.read(DataInputStream.java:132)
> >        at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:97)
> >        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:87)
> >        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
> >        at java.io.InputStream.read(InputStream.java:85)
> >        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:205)
> >        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:169)
> >        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:114)
> >        at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:109)
> >        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:187)
> >        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
> >        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
> >        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
> >        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
> >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
> >        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:396)
> >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
> >        at org.apache.hadoop.mapred.Child.main(Child.java:264)
> >2012-04-26 19:35:22,906 INFO org.apache.hadoop.hdfs.DFSClient: Failed to
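[One hint in the datanode log above: exit code 143 follows the standard shell convention of 128 + signal number, i.e. 128 + 15 (SIGTERM). So jsvc did not crash on its own; something sent it a TERM signal (a shutdown script or the kernel's OOM killer are common suspects; which one applies here is not confirmed by the log). The convention itself is easy to verify locally:]

```shell
# A process terminated by SIGTERM exits with status 128 + 15 = 143,
# matching the "Service exit with a return value of 143" line above.
sh -c 'kill -TERM $$'   # the subshell sends SIGTERM to itself
echo $?                 # prints 143
```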