Re: Coprocessor tests under busy insertions
Hi, Anoop.

This is my implementation, using a coprocessor RegionObserver:


@Override
public void prePut(ObserverContext<RegionCoprocessorEnvironment> e, Put put, WALEdit edit,
        boolean writeToWAL) throws IOException {
    String tableName = e.getEnvironment().getRegion().getRegionInfo().getTableNameAsString()
            .toLowerCase();

    // Only act on puts against the main 'blog' table; the index table (SIDX)
    // fires this same hook, and its puts must not be re-indexed.
    if (tableName.equals(BLOG)) {
        //HTableInterface table = pool.getTable(SIDX);
        HTableInterface table = e.getEnvironment().getTable(Bytes.toBytes(SIDX));

        if (table == null) {
            log.error("failed to get a connection.");
            return;
        }

        try {
            // Look only at the column family being indexed.
            Map<byte[], List<KeyValue>> familyMap = put.getFamilyMap();
            List<KeyValue> kvs = familyMap.get(COLUMN_FAMILY_BYTES);

            if (kvs != null) {
                for (KeyValue kv : kvs) {
                    // Mirror the 'field0' qualifier into the secondary index table.
                    if (StringUtils.equals(Bytes.toString(kv.getQualifier()), "field0")) {
                        byte[] row = put.getRow();
                        Put idx = new Put(row);
                        idx.add(COLUMN_FAMILY_BYTES, "field0".getBytes(), kv.getValue());
                        table.put(idx);
                    }
                }
            }
        } catch (Exception ex) {
            log.error("coprocessor error : ", ex);
        } finally {
            table.close();
        }
    }
}
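
For reference, below is a minimal sketch of one way to attach such a RegionObserver to the 'blog' table only (rather than loading it cluster-wide via the hbase.coprocessor.region.classes property), using the HBase 0.92 admin API. The class name com.example.blog.SecondaryIndexObserver is only a placeholder, not the class actually used in this thread, and the observer jar is assumed to already be on the region server classpath.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class AttachObserver {
    public static void main(String[] args) throws Exception {
        // Connects using the hbase-site.xml found on the classpath.
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        byte[] blog = Bytes.toBytes("blog");

        // Fetch the current descriptor and register the observer class on it.
        HTableDescriptor desc = admin.getTableDescriptor(blog);
        desc.addCoprocessor("com.example.blog.SecondaryIndexObserver"); // placeholder class name

        // The table must be disabled before its descriptor can be modified.
        admin.disableTable(blog);
        admin.modifyTable(blog, desc);
        admin.enableTable(blog);
    }
}

If the observer is instead loaded cluster-wide, the tableName.equals(BLOG) check at the top of prePut above is what keeps puts against the SIDX index table from re-triggering the indexing, since the same hooks fire for the index table as well (as Anoop points out below).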


Thanks for your response.

- Henry

On Aug 13, 2012, at 1:01 PM, Anoop Sam John <[EMAIL PROTECTED]> wrote:

> Can you paste your CP implementation here [prePut/postPut?]
> Are you doing a check for the table in the CP hook? You need to handle the hooks only when they are called for your table. Remember that your index table also has these same hooks.
>
> -Anoop-
> ________________________________________
> From: Henry JunYoung KIM [[EMAIL PROTECTED]]
> Sent: Monday, August 13, 2012 7:18 AM
> To: [EMAIL PROTECTED]
> Subject: Coprocessor tests under busy insertions
>
> Hi, HBase users.
>
> Now I am testing coprocessors to create secondary indexes in the background.
> The coprocessor framework itself is packaged in the HBase 0.92.1 I am using.
>
> The scenario I want to describe is this:
>
> The main table is 'blog', which has a field named 'userId'.
> From this field I want to create a secondary index that maps 'userId' to its 'url'.
>
> I put a RegionObserver implementation in my secondary index creator.
>
> The situation I got from HBase is this log:
>
> ------------
> 12/08/13 10:37:08 WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=blog,user6447991910946051755,1344821177585.7d4cbd4a9817ab7cb5c6219498d854a4., hostname=search-ddm-test5, port=60020
> java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to search-ddm-test5/xx.xx.xx.xx:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/xx.xx.xx.xx:53733 remote=search-ddm-test5/xx.xx.xx.xx:60020]
>        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
>        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
>        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
>        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
>        at com.yahoo.ycsb.db.HBaseClient.update(HBaseClient.java:321)
>        at com.yahoo.ycsb.DBWrapper.update(DBWrapper.java:126)
>        at com.yahoo.ycsb.workloads.CoreWorkload.doTransactionUpdate(CoreWorkload.java:628)