Re: Re: Socket does not have a channel
Hi Shashwat,

As already mentioned in my mail, setting dfs.client.use.legacy.blockreader to true fixes the problem.
This looks like a workaround, or rather, disabling a feature.
I would like to know: what is the exact problem?

Cheers,
Subroto Sanyal
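
[Editor's note: the workaround discussed in this thread is an HDFS client property. A minimal sketch of how it might be set in the client's hdfs-site.xml — the property name comes from the thread; the file placement and comment are assumptions, not confirmed by the thread:]

```xml
<!-- hdfs-site.xml on the client side (assumed location for this setting) -->
<property>
  <name>dfs.client.use.legacy.blockreader</name>
  <value>true</value>
  <!-- Presumably falls back to the pre-2.0 block reader path instead of
       RemoteBlockReader2, which appears in the stack trace below -->
</property>
```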
On Mar 5, 2013, at 6:33 PM, shashwat shriparv wrote:

> Try setting dfs.client.use.legacy.blockreader to true
>
> ∞
> Shashwat Shriparv
>
>
>
> On Tue, Mar 5, 2013 at 8:39 PM, 卖报的小行家 <[EMAIL PROTECTED]> wrote:
> Yes, it's from Hadoop 2.0. I just read the 1.1.1 code; the classes mentioned in the log do not exist there. Maybe you can read the code first.
>
>
> ------------------ Original Mail ------------------
> From: "Subroto"<[EMAIL PROTECTED]>;
> Date: Tue, Mar 5, 2013, 10:56 PM
> To: "user"<[EMAIL PROTECTED]>;
> 主题: Re: Socket does not have a channel
>
> Hi Julian,
>
> This is from CDH4.1.2 and I think it's based on Apache Hadoop 2.0.
>
> Cheers,
> Subroto Sanyal
> On Mar 5, 2013, at 3:50 PM, 卖报的小行家 wrote:
>
>> Hi,
>> Which revision of Hadoop?
>> And in what situation is the exception reported?
>> BRs//Julian
>>
>> ------------------ Original ------------------
>> From:  "Subroto"<[EMAIL PROTECTED]>;
>> Date:  Tue, Mar 5, 2013 04:46 PM
>> To:  "user"<[EMAIL PROTECTED]>;
>> Subject:  Socket does not have a channel
>>
>> Hi
>>
>> java.lang.IllegalStateException: Socket Socket[addr=/10.86.203.112,port=1004,localport=35170] does not have a channel
>> at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
>> at org.apache.hadoop.net.SocketInputWrapper.getReadableByteChannel(SocketInputWrapper.java:83)
>> at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
>> at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:82)
>> at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:832)
>> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:444)
>>
>> While accessing HDFS I keep getting the above-mentioned error.
>> Setting dfs.client.use.legacy.blockreader to true fixes the problem.
>> I would like to know what exactly the problem is. Is it a problem/bug in Hadoop?
>> Is there a JIRA ticket for this?
>>
>>
>> Cheers,
>> Subroto Sanyal
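
[Editor's note: on the exception itself — in the JDK, java.net.Socket.getChannel() returns null unless the socket was created via an NIO SocketChannel, which is the condition the Preconditions.checkState call in the stack trace appears to guard. A minimal JDK-only illustration of that distinction (not Hadoop's actual code):]

```java
import java.net.Socket;
import java.nio.channels.SocketChannel;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        // A Socket constructed directly has no associated NIO channel.
        Socket plain = new Socket();
        System.out.println(plain.getChannel());            // null

        // A socket obtained from SocketChannel.open() does have one.
        SocketChannel ch = SocketChannel.open();
        System.out.println(ch.socket().getChannel() != null); // true

        plain.close();
        ch.close();
    }
}
```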
>
>