Re: Exception in data node log
So you might want to test your HDFS installation first before loading
HBase. Try teragen/terasort and similar tools to produce some load and
see if it's stable.
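For example, something along these lines (assuming the stock Hadoop 1.x
examples jar that ships with 1.1.2; the jar path, row count and HDFS
output paths below are only placeholders to adapt):

  # generate ~1 GB of test data (10M rows x 100 bytes each)
  hadoop jar $HADOOP_HOME/hadoop-examples-1.1.2.jar teragen 10000000 /benchmarks/teragen
  # sort it, which pushes reads and writes through the datanode
  hadoop jar $HADOOP_HOME/hadoop-examples-1.1.2.jar terasort /benchmarks/teragen /benchmarks/terasort
  # verify the sorted output
  hadoop jar $HADOOP_HOME/hadoop-examples-1.1.2.jar teravalidate /benchmarks/terasort /benchmarks/teravalidate

Then watch the datanode log while this runs and see whether the same
SocketTimeoutExceptions come back under load.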

JM
2014-02-05 Vimal Jain <[EMAIL PROTECTED]>:

> Hi Jean,
> These are from the data node logs (HDFS).
>
>
> On Sat, Feb 1, 2014 at 7:26 AM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
>
> > Hi Vimal,
> >
> > Are those from the HBase logs or from the HDFS logs? It sounds like
> > you have some HDFS issues (slow disks, or running on a VM, etc.)
> >
> > JM
> >
> >
> > 2014-01-31 Vimal Jain <[EMAIL PROTECTED]>:
> >
> > > Hi,
> > > I have set up HBase in pseudo-distributed mode.
> > > I keep getting the exceptions below in the data node log.
> > > Is this a problem?
> > >
> > > (Hadoop version - 1.1.2, HBase version - 0.94.7)
> > >
> > > Please help.
> > >
> > >
> > > java.net.SocketTimeoutException: 480000 millis timeout while waiting for
> > > channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
> > > local=/192.168.20.30:50010 remote=/192.168.20.30:38188]
> > >         at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> > >         at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> > >         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> > >         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> > >         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> > >         at java.lang.Thread.run(Thread.java:662)
> > >
> > > 2014-01-31 00:10:28,951 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> > > DatanodeRegistration(192.168.20.30:50010,
> > > storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
> > > infoPort=50075, ipcPort=50020):DataXceiver
> > > java.net.SocketTimeoutException: 480000 millis timeout while waiting for
> > > channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
> > > local=/192.168.20.30:50010 remote=/192.168.20.30:38188]
> > >         at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> > >         at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> > >         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> > >         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
> > >         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
> > >         at java.lang.Thread.run(Thread.java:662)
> > >
> > > --
> > > Thanks and Regards,
> > > Vimal Jain
> > >
> >
>
>
>
> --
> Thanks and Regards,
> Vimal Jain
>

 