Re: Regionserver goes down while endpoint execution
Hi Deepak

"this.table.put(putInIndexTable);""

I think this is the problem.

Your HTable is held at the instance level. See the documentation of HTable:
"This class is not thread safe for reads nor writes."

So if you create a new HTable instance on every invocation, this problem
should be avoided.
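
For example, using the coprocessor environment's getTable() rather than
constructing the HTable yourself. This is only a rough sketch: the
elf_log_index table name is taken from your log output, and the "ref"
family/qualifier and the index row key are placeholders for your own schema.

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public void postPut(final ObserverContext<RegionCoprocessorEnvironment> e,
                    final Put put, final WALEdit edit,
                    final boolean writeToWAL) throws IOException {
    // Take a fresh table handle per invocation instead of keeping one
    // HTable in an instance field shared by all handler threads.
    HTableInterface indexTable =
            e.getEnvironment().getTable(Bytes.toBytes("elf_log_index"));
    try {
        // Placeholder index row key and column; build your real ones here.
        Put putInIndexTable = new Put(put.getRow());
        putInIndexTable.add(Bytes.toBytes("ref"), Bytes.toBytes("row"),
                put.getRow());
        indexTable.put(putInIndexTable);
    } finally {
        indexTable.close();
    }
}

Each getTable() call hands back its own handle, so the write buffer is no
longer shared between handler threads and no synchronized block is needed.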

Regards
Ram

On Wed, Mar 20, 2013 at 1:11 PM, Kumar, Deepak8 <[EMAIL PROTECTED]> wrote:

> Hi Anoop,
>
> Quite inspired by your coprocessor secondary indexing document, I am trying
> to implement one for better response times :)
>
>
>
> The coprocessor executes for some time, but later on (say after 400-500
> inserts) it throws an IndexOutOfBoundsException.
>
>
>
> The stack trace is:
>
> 2013-03-20 02:40:42,074 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Counter: 2408
> 2013-03-20 02:40:42,098 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Added in elf_log_index: vm-ab1f-dd21.nam.nsroot.net:/var/log/flume/flume-root-node-vm-ab1f-dd21.log::153299:1363758015261:3913805817870658:vm-ab1f-dd21.nam.nsroot.net:
> 2013-03-20 02:40:42,098 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Counter: 2410
> 2013-03-20 02:40:42,122 ERROR org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Removing coprocessor 'org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment@2f86780d' from environment because it threw: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>         at java.util.ArrayList.RangeCheck(ArrayList.java:547)
>         at java.util.ArrayList.remove(ArrayList.java:387)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:960)
>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:826)
>         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:801)
>         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:394)
>         at com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor.postPut(SecondaryIndexCoprocessor.java:91)
>
> If I synchronize on the current object in postPut then it works fine. If I
> use a counter variable in postPut, the counter seems to miss a value just
> before this exception; as you can see above, 2409 is missing. But if I
> synchronize the postPut block then it executes fine and the counter stays
> in serial order, too. I think using synchronized would slow down insertion
> into the secondary index, though. Could you guide me on the exact reason
> for this?
>
> Here is the complete code for postPut:
>
> public void postPut(final ObserverContext<RegionCoprocessorEnvironment> e,
>                     final Put put, final WALEdit edit,
>                     final boolean writeToWAL) throws IOException {
>
>     List<KeyValue> hostName = put.get(Bytes.toBytes("sysInfo"), Bytes.toBytes("hostName"));
>     List<KeyValue> logFilePath = put.get(Bytes.toBytes("content"), Bytes.toBytes("logFilePath"));
>     List<KeyValue> logFileName = put.get(Bytes.toBytes("content"), Bytes.toBytes("logFileName"));
>
>     if (hostName.size() > 0 && logFilePath.size() > 0 && logFileName.size() > 0) {
>         byte[] hostNameVal = hostName.get(0).getValue();
>         byte[] logFilePathVal = logFilePath.get(0).getValue();
>         byte[] logFileNameVal = logFileName.get(0).getValue();
>         byte[] rowKeyBody = hostName.get(0).getRow();
>
>         synchronized (this) {
>             byte[] rowKey = (Bytes.toString(hostNameVal) + ":"
>                     + Bytes.toString(logFilePathVal) + Bytes.toString(logFileNameVal)
>                     + LOG_INDEX_DELIM + Bytes.toString(rowKeyBody)).getBytes();
>
>             logger.debug("Row Key Secondary Index: " + Bytes.toString(rowKey));