Hadoop user mailing list

Re: Socket timeout for BlockReaderLocal
Robert Molina 2012-12-04, 19:54
Hi Haitao,
To help isolate the problem, what happens if you run a different job?  Also,
if you look at the namenode web UI, or the web UI of the specific datanode
having the issue, are there any indicators that the node is down?

Regards,
Robert
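
For reference, the web-UI check suggested above can also be done from the
shell. A minimal sketch, assuming Hadoop 1.x defaults (namenode web UI on
port 50070, datanode web UI on 50075; host names are placeholders):

    # Ask the namenode for its view of the datanodes; the report's summary
    # line shows how many datanodes are live vs. dead.
    hadoop dfsadmin -report

    # Or query the web UIs directly (default Hadoop 1.x ports):
    curl -s http://<namenode-host>:50070/dfshealth.jsp | grep -i dead
    curl -s http://10.130.110.80:50075/    # the datanode from the stack trace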

On Tue, Dec 4, 2012 at 12:49 AM, panfei <[EMAIL PROTECTED]> wrote:

> I noticed that you are using JDK 1.7; personally I prefer 1.6.x.
> If your firewall is OK, you can check your RPC service to see if it is also
> OK, and test it with telnet 10.130.110.80 50020 (see the sketch below).
> I suggested Hive because HQL (SQL-like) is familiar to most people, and the
> learning curve is gentle.
>
>
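A minimal sketch of the checks panfei suggests, using the host and port from
the stack trace (jps ships with the JDK):

    telnet 10.130.110.80 50020    # the DataNode IPC port from the error
    jps | grep DataNode           # on that host: is the DataNode JVM running?
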
> 2012/12/4 Haitao Yao <[EMAIL PROTECTED]>
>
>> The firewall is OK.
>> Well, personally I prefer Pig. And this is a big project; switching from
>> Pig to Hive would not be easy.
>> Thanks.
>>
>>   Haitao Yao
>> [EMAIL PROTECTED]
>> weibo: @haitao_yao
>> Skype:  haitao.yao.final
>>
>> On 2012-12-4, at 3:14 PM, panfei <[EMAIL PROTECTED]> wrote:
>>
>> Please check your firewall settings. And why not use Hive to do the work?
>>
>>
>> 2012/12/4 Haitao Yao <[EMAIL PROTECTED]>
>>
>>> Hi all,
>>> I'm using Hadoop 1.2.0, java version "1.7.0_05".
>>> When running my Pig script, the workers always report this error, and
>>> the MR jobs run very slowly.
>>> Increasing the dfs.socket.timeout value does not help (see the sketch
>>> after the stack trace); the network is OK, and telnet to port 50020
>>> always succeeds.
>>> Here's the stack trace:
>>>
>>> 2012-12-04 14:29:41,323 INFO org.apache.hadoop.hdfs.DFSClient: Failed to read blk_-2337696885631113108_11054058 on local machine
>>> java.net.SocketTimeoutException: Call to /10.130.110.80:50020 failed on socket timeout exception: java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.130.110.80:57689 remote=/10.130.110.80:50020]
>>> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1140)
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1112)
>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>>> at $Proxy3.getProtocolVersion(Unknown Source)
>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
>>> at org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:212)
>>> at org.apache.hadoop.hdfs.BlockReaderLocal$LocalDatanodeInfo.getDatanodeProxy(BlockReaderLocal.java:90)
>>> at org.apache.hadoop.hdfs.BlockReaderLocal$LocalDatanodeInfo.access$200(BlockReaderLocal.java:65)
>>> at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:224)
>>> at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:145)
>>> at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:509)
>>> at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:78)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2231)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2384)
>>> at java.io.DataInputStream.read(DataInputStream.java:149)
>>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>> at org.apache.pig.impl.io.BufferedPositionedInputStream.read(BufferedPositionedInputStream.java:52)
>>> at org.apache.pig.impl.io.InterRecordReader.nextKeyValue(InterRecordReader.java:86)
>>> at org.apache.pig.impl.io.InterStorage.getNext(InterStorage.java:77)
>>> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:187)
>>> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
>>> at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
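
For reference, a minimal sketch of one way to raise the timeout mentioned
above for a single Pig run. The property name dfs.socket.timeout is taken
from the mail; the 60000 ms value is illustrative, and passing Hadoop
properties to Pig with -D is standard but worth verifying against your Pig
version:

    # dfs.socket.timeout is the HDFS client socket read timeout in
    # milliseconds; -D properties must come before the script name.
    pig -Ddfs.socket.timeout=60000 myscript.pig

The same property can be set cluster-wide in hdfs-site.xml, though note
that, per the original post, raising it did not resolve this particular
problem.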