HBase >> mail # user >> Re: HBase Multiget taking more time


Ankit Jain 2013-06-25, 13:34
Hi Ankit,

Attachments do not come through well on the mailing list. Can you post
them on pastebin? Also, can you please post the server (region and
master) logs?

thanks,

JM

2013/6/25 Ankit Jain <[EMAIL PROTECTED]>:
> Hi Jean-Marc/Michael,
>
> Thanks for the reply.
>
> Hardware detail:
> Processor: 8 core
> RAM: 16 GB.
>
> We have allotted 4 GB of RAM to HBase, and we are also ingesting data
> into HBase in parallel at a rate of 50 records (each record is 20 KB)
> per second.
>
> Please find the attached GC log.
>
> Thanks,
> Ankit
>
>
> On Tue, Jun 25, 2013 at 6:03 PM, Ankit Jain <[EMAIL PROTECTED]> wrote:
>> On Tue, Jun 25, 2013 at 5:43 PM, Michael Segel <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Double your timeout from 60K to 120K.
>>> While I don't think it's the problem... it's just a good idea.
>>>
>>> What happens if you drop the 50 down to 25?  Do you still fail?
>>> If not, go to 35, etc... until you hit a point where you fail.
>>>
>>> As Jean-Marc said, we need a bit more information. Including some
>>> hardware info too.
>>>
>>> Thx
>>>
>>> -Mike
>>>
>>> On Jun 25, 2013, at 4:03 AM, Ankit Jain <[EMAIL PROTECTED]> wrote:
>>>
>>> > Hi All,
>>> >
>>> > The HBase multiget call is taking a long time and throwing a timeout
>>> > exception. I am retrieving only 50 records in one call; each record
>>> > is 20 KB.
>>> >
>>> > java.net.SocketTimeoutException: 60000 millis timeout while waiting
>>> > for channel to be ready for read. ch :
>>> > java.nio.channels.SocketChannel[connected
>>> > local=/192.168.50.122:48695 remote=ct-0096/192.168.50.177:60020]
>>> >
>>> > hTable = new HTable(conf, tableName);
>>> > results = hTable.get(rows);
>>> >
>>> > Cluster Detail:
>>> > 1 master, 1 regionserver and 8 regions
>>> >
>>> > --
>>> > Thanks,
>>> > Ankit Jain
>>>
>
> --
> Thanks,
> Ankit Jain
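
The 60000 ms in the stack trace above is the HBase client's default RPC timeout, so Michael's "double your timeout from 60K to 120K" corresponds to raising `hbase.rpc.timeout` on the client. A minimal sketch of the `hbase-site.xml` entry, assuming the client actually loads that file from its classpath:

```xml
<!-- Client-side hbase-site.xml: raise the RPC timeout from the default
     60000 ms (the value seen in the SocketTimeoutException) to 120000 ms. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
```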
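
Michael's other suggestion, stepping the multiget size down from 50 until the failures stop, amounts to splitting the row list into smaller batches and issuing one `hTable.get(batch)` per chunk. A minimal sketch of the chunking step in plain Java (the HBase call itself is left as a comment, and the batch size of 25 is just his suggested starting point):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {

    // Split a list of Gets (or row keys) into sub-lists of at most
    // batchSize elements. Each sub-list would then be fetched with its
    // own hTable.get(batch) call instead of one 50-row multiget, so a
    // slow region server only stalls one small batch at a time.
    static <T> List<List<T>> chunk(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    rows.subList(i, Math.min(i + batchSize, rows.size()))));
        }
        return batches;
    }
}
```

With 50 rows and a batch size of 25 this gives two round trips; if those still time out, the same helper lets you probe smaller sizes until the failures disappear, as Michael describes.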
Other messages in this thread:
- Ankit Jain 2013-06-25, 16:25
- Ankit Jain 2013-06-25, 09:03
- Michael Segel 2013-06-25, 12:13
- Jean-Marc Spaggiari 2013-06-25, 11:59