HBase >> mail # user >> Re: High Full GC count for Region server


Re: High Full GC count for Region server
The "responseTooSlow" message is logged whenever a batch of operations takes
more than a configured amount of time. In your case, processing took 15827 ms
("processingtimems" in the log line), so a long response time is expected;
there is no need to worry about this message itself.
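For reference, the threshold behind these warnings is configurable; a minimal
hbase-site.xml sketch (hbase.ipc.warn.response.time is the standard property
name, and 10000 ms is my understanding of the default, not a value from your
setup):

```xml
<!-- Sketch only: this property controls when the (responseTooSlow)
     warning is logged. Raising it quiets the warning; it does not
     make the RPCs any faster. -->
<property>
  <name>hbase.ipc.warn.response.time</name>
  <value>10000</value> <!-- milliseconds -->
</property>
```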

However, your SocketTimeoutException might be due to long GC pauses. I
guess it might also be due to network failures or RS contention (too many
requests on this RS, no free IPC slot...).
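To confirm whether long GC pauses are the cause, you could enable GC logging
on the region server and correlate pause timestamps with the
SocketTimeoutException timestamps. A sketch for conf/hbase-env.sh (the flags
are standard HotSpot options; the log path is just an example):

```shell
# Sketch for conf/hbase-env.sh: log every GC with date stamps so that
# multi-second pauses can be lined up against the exception timestamps
# in the datanode and region server logs.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/regionserver-gc.log"
```

After a restart, any single GC entry approaching your timeout values (e.g.
the 480000 ms channel timeout below) would point clearly at GC as the cause.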
On Thu, Oct 31, 2013 at 9:52 AM, Vimal Jain <[EMAIL PROTECTED]> wrote:

> Hi,
> Can anyone please reply to the above query?
>
>
> On Tue, Oct 29, 2013 at 10:48 AM, Vimal Jain <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> > Here is my analysis of this problem. Please correct me if I am wrong
> > somewhere.
> > I have assigned 2 GB to the region server process. I think that is
> > sufficient to handle around 9 GB of data.
> > I have not changed many of the parameters, especially the memstore size,
> > which is 128 MB by default in 0.94.7.
> > Also, as per my understanding, each column family has one memstore
> > associated with it, so my memstores are taking 128*3 = 384 MB (I have 3
> > column families).
> > So I think I should reduce the memstore size to something like 32/64 MB
> > so that data is flushed to disk at a higher frequency than it is
> > currently. This will save some memory.
> > Is there any other parameter, other than the memstore size, which
> > affects memory utilization?
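The memstore change reasoned through above could be sketched as an
hbase-site.xml edit (the property name is the standard one; the 64 MB value
is only the figure floated in the mail, not a recommendation):

```xml
<!-- Sketch: lower the per-memstore flush threshold from the 128 MB
     default so each memstore is flushed to disk sooner and holds
     less heap at any one time. Value is in bytes. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>67108864</value> <!-- 64 MB -->
</property>
```

Note that more frequent flushes mean more small HFiles and therefore more
compaction work, so this trades memory for I/O.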
> >
> > Also, I am getting the exceptions below in the data node log and region
> > server log every day. Are they due to long GC pauses?
> >
> > Data node logs :-
> >
> > hadoop-hadoop-datanode-woody.log:2013-10-29 00:12:13,127 WARN
> > org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> > 192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
> > infoPort=50075, ipcPort=50020):Got exception while serving
> > blk_-560908881317618221_58058 to /192.168.20.30:
> > hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 480000
> > millis timeout while waiting for channel to be ready for write. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010
> > remote=/192.168.20.30:39413]
> > hadoop-hadoop-datanode-woody.log:2013-10-29 00:12:13,127 ERROR
> > org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> > 192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
> > infoPort=50075, ipcPort=50020):DataXceiver
> > hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 480000
> > millis timeout while waiting for channel to be ready for write. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010
> > remote=/192.168.20.30:39413]
> >
> >
> > Region server logs :-
> >
> > hbase-hadoop-regionserver-woody.log:2013-10-29 01:01:16,475 WARN
> > org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> > {"processingtimems":15827,
> > "call":"multi(org.apache.hadoop.hbase.client.MultiAction@2918e464),
> > rpc version=1, client version=29, methodsFingerPrint=-1368823753",
> > "client":"192.168.20.31:50619","starttimems":1382988660645,
> > "queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
> > hbase-hadoop-regionserver-woody.log:2013-10-29 06:01:27,459 WARN
> > org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
> > {"processingtimems":14745,"client":"192.168.20.31:50908",
> > "timeRange":[0,9223372036854775807],"starttimems":1383006672707,
> > "responsesize":55,"class":"HRegionServer","table":"event_data",
> > "cacheBlocks":true,"families":{"oinfo":["clubStatus"]},"row":"1752869",
> > "queuetimems":1,"method":"get","totalColumns":1,"maxVersions":1}
> >
> > On Mon, Oct 28, 2013 at 11:55 PM, Asaf Mesika <[EMAIL PROTECTED]> wrote:
> >
> >> Check through the HDFS UI that your cluster hasn't reached maximum
> >> disk capacity.
> >>
> >> On Thursday, October 24, 2013, Vimal Jain wrote:
> >>
> >> > Hi Ted/Jean,
> >> > Can you please help here?
> >> >
> >> >
> >> > On Tue, Oct 22, 2013 at 10:29 PM, Vimal Jain <[EMAIL PROTECTED]

Adrien Mogenet
http://www.borntosegfault.com