HBase user mailing list: Regionserver goes down while endpoint execution


Kumar, Deepak8 - 2013-03-12, 05:51
lars hofhansl - 2013-03-12, 06:01
Kumar, Deepak8 - 2013-03-12, 06:27
Kumar, Deepak8 - 2013-03-12, 06:59
Kumar, Deepak8 - 2013-03-12, 11:46
Ted Yu - 2013-03-12, 16:29
Gary Helmling - 2013-03-12, 18:13
Kumar, Deepak8 - 2013-03-13, 15:19
Ted Yu - 2013-03-13, 16:01
Himanshu Vashishtha - 2013-03-13, 16:08
Kumar, Deepak8 - 2013-03-14, 17:09
Ted Yu - 2013-03-14, 17:15
Himanshu Vashishtha - 2013-03-14, 17:45
Anoop Sam John - 2013-03-15, 06:55
Kumar, Deepak8 - 2013-03-20, 07:41
Re: Regionserver goes down while endpoint execution
Hi Deepak,

"this.table.put(putInIndexTable);"

I think this is the problem.

Your table is held at the instance level. See the documentation of HTable:
"This class is not thread safe for reads nor write."

So if you create a new HTable for every call, this problem should be
avoided.

Regards
Ram
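
A minimal sketch of that suggestion, assuming the 0.94-era coprocessor API:
getTable() on the coprocessor environment hands back a fresh table handle per
call, and buildIndexPut is a hypothetical stand-in for the row-key
construction quoted further down in Deepak's mail.

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class SecondaryIndexCoprocessor extends BaseRegionObserver {

  private static final byte[] INDEX_TABLE = Bytes.toBytes("elf_log_index");

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> e,
      Put put, WALEdit edit, boolean writeToWAL) throws IOException {
    // Get a fresh table handle per invocation instead of sharing one
    // HTable field across the region server's handler threads; each
    // handle has its own write buffer, so nothing is mutated concurrently.
    HTableInterface indexTable = e.getEnvironment().getTable(INDEX_TABLE);
    try {
      indexTable.put(buildIndexPut(put));
    } finally {
      indexTable.close();
    }
  }

  // Hypothetical helper standing in for Deepak's row-key construction
  // (hostName + logFilePath + logFileName + the data row key).
  private Put buildIndexPut(Put dataPut) {
    Put indexPut = new Put(dataPut.getRow());
    indexPut.add(Bytes.toBytes("content"), Bytes.toBytes("sourceRow"),
        dataPut.getRow());
    return indexPut;
  }
}

Plain new HTable(conf, "elf_log_index") per invocation, which is literally
what Ram describes, gives the same isolation; the point either way is that no
HTable instance is shared between handler threads.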

On Wed, Mar 20, 2013 at 1:11 PM, Kumar, Deepak8 <[EMAIL PROTECTED]> wrote:

> Hi Anoop,
>
> Quite inspired by your coprocessor secondary indexing document, I am
> trying to implement one for better response times :)
>
> The coprocessor executes for some time, but later on (say after 400-500
> inserts) it throws an IndexOutOfBoundsException.
>
> The stack trace is:
>
> 2013-03-20 02:40:42,074 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Counter: 2408
> 2013-03-20 02:40:42,098 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Added in elf_log_index: vm-ab1f-dd21.nam.nsroot.net:/var/log/flume/flume-root-node-vm-ab1f-dd21.log::153299:1363758015261:3913805817870658:vm-ab1f-dd21.nam.nsroot.net:
> 2013-03-20 02:40:42,098 INFO com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor: Counter: 2410
> 2013-03-20 02:40:42,122 ERROR org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Removing coprocessor 'org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment@2f86780d' from environment because it threw: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>         at java.util.ArrayList.RangeCheck(ArrayList.java:547)
>         at java.util.ArrayList.remove(ArrayList.java:387)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:960)
>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:826)
>         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:801)
>         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:394)
>         at com.citi.sponge.hbase.coprocessor.secondaryindex.SecondaryIndexCoprocessor.postPut(SecondaryIndexCoprocessor.java:91)
>
> If I synchronize on the current object in postPut, it works fine. If I
> use a counter variable in postPut, the counter misses some values just
> before this exception; as you can see above, 2409 is missing. But if I
> synchronize the postPut block, it executes fine and the counter stays in
> serial order as well. I think using synchronized would slow down
> insertion into the secondary index, though. Could you guide me to the
> exact reason for this?
>
> Here is the complete code for postPut:
>
> public void postPut(final ObserverContext<RegionCoprocessorEnvironment> e,
>     final Put put, final WALEdit edit, final boolean writeToWAL)
>     throws IOException {
>
>   List<KeyValue> hostName = put.get(Bytes.toBytes("sysInfo"),
>       Bytes.toBytes("hostName"));
>   List<KeyValue> logFilePath = put.get(Bytes.toBytes("content"),
>       Bytes.toBytes("logFilePath"));
>   List<KeyValue> logFileName = put.get(Bytes.toBytes("content"),
>       Bytes.toBytes("logFileName"));
>
>   if (hostName.size() > 0 && logFilePath.size() > 0
>       && logFileName.size() > 0) {
>     byte[] hostNameVal = hostName.get(0).getValue();
>     byte[] logFilePathVal = logFilePath.get(0).getValue();
>     byte[] logFileNameVal = logFileName.get(0).getValue();
>     byte[] rowKeyBody = hostName.get(0).getRow();
>
>     synchronized (this) {
>       byte[] rowKey = (Bytes.toString(hostNameVal) + ":"
>           + Bytes.toString(logFilePathVal) + Bytes.toString(logFileNameVal)
>           + LOG_INDEX_DELIM + Bytes.toString(rowKeyBody)).getBytes();
>
>       logger.debug("Row Key Secondary Index: " + Bytes.toString(rowKey));
Anoop Sam John - 2013-03-20, 08:36
Kumar, Deepak8 - 2013-03-20, 12:44
Anoop Sam John - 2013-03-20, 12:58
Kumar, Deepak8 - 2013-03-20, 13:18
Kumar, Deepak8 - 2013-03-25, 16:53
Anoop Sam John - 2013-03-26, 06:20
Kumar, Deepak8 - 2013-03-26, 07:27
Adrien Mogenet - 2013-03-26, 07:42
Kumar, Deepak8 - 2013-03-26, 08:27
Anoop John - 2013-03-26, 17:17
Kumar, Deepak8 - 2013-03-28, 10:50
ramkrishna vasudevan - 2013-03-28, 10:53
Agarwal, Saurabh - 2013-03-28, 12:26
Anoop Sam John - 2013-04-02, 06:51
Kumar, Deepak8 - 2013-03-28, 12:11
Himanshu Vashishtha - 2013-03-12, 16:59