HBase, mail # user - Some regions keep using an older version of coprocessor.


Re: Some regions keep using an older version of coprocessor.
iain wright 2013-07-26, 18:56
Filed bug report for confirmation:
https://issues.apache.org/jira/browse/HBASE-9046

Happy to supply any additional info as requested

Best,
iain

--
Iain Wright
Cell: (562) 852-5916

<http://www.labctsi.org/>
On Thu, Jul 25, 2013 at 8:25 PM, Kim Chew <[EMAIL PROTECTED]> wrote:

> Hello Iain,
>
> Glad I am not alone 8-)
>
> I ended up putting the coprocessor in a new location, and now all the
> regions are able to run with the latest cp.
>
> I suspect there are some caching hiccups in CoprocessorClassLoader?
>
> Kim
>
>
> On Thu, Jul 25, 2013 at 7:22 PM, iain wright <[EMAIL PROTECTED]> wrote:
>
> > Hi Kim,
> >
> > My devs ran into the same thing when doing iterative dev on coprocessors
> > and were constantly re-uploading them to HDFS & re-enabling the HBase
> > table.
> >
> > After banging my head against a keyboard for two days trying to find some
> > kind of caching culprit (and failing), I ended up writing a little deploy
> > script that appends an epoch to the jar name on each deployment, so no
> > cached copy is ever picked up.
> >
> > It's hacky, but yes, using a different coprocessor jar name and uploading
> > that to HDFS / re-enabling the table should solve your problem. I pasted
> > the script below for your reference.
> >
> > Cheers,
> >
> > iain wright
> > sysadmin @ Telescope
> >
> > #! /bin/bash
> >
> > # must pass in a jar
> > if test -z "$1"
> > then
> >   echo "ERROR: Pass a jar as the parameter to this script"
> >   echo "IE: ./script.sh HbaseCoprocessors.jar"
> >   exit 1
> > fi
> >
> > # must be run as root
> > if [ "$(id -u)" != "0" ]; then
> >    echo "This script must be run as root" 1>&2
> >    exit 1
> > fi
> >
> > # append an epoch to the jar name so each deploy gets a unique name
> > EPOCH_NOW=`date +%s`
> > mkdir -p ARCHIVED/${EPOCH_NOW}
> > INPUT_JAR=`echo $1 | cut -f1 -d.`
> > OUTPUT_JAR=${INPUT_JAR}${EPOCH_NOW}.jar
> > mv $1 ${OUTPUT_JAR}
> >
> > # get name of last jar loaded for hdfs cleanup
> > OLD_JAR=`cat last_load.txt`
> >
> > # clean up the previous jar from HDFS
> > echo "Loading ${OUTPUT_JAR} into HDFS"
> > if [ "${OLD_JAR}" == "" ]; then
> >   echo "last_load.txt is empty, I don't know what to clean up from the last run"
> > else
> >   hadoop fs -rm /${OLD_JAR}
> > fi
> >
> > # load into hdfs
> > # (double quotes so ${OUTPUT_JAR} expands before su runs the command)
> > hadoop fs -put ${OUTPUT_JAR} /${OUTPUT_JAR}
> > su -m hdfs -c "hadoop fs -chmod 775 /${OUTPUT_JAR}"
> > su -m hdfs -c "hadoop fs -chown hbase:hbase /${OUTPUT_JAR}"
> > hadoop fs -ls /${OUTPUT_JAR}
> > echo "${OUTPUT_JAR} loaded into HDFS"
> >
> > # record this jar name so the next run knows what to clean up
> > echo ${OUTPUT_JAR} > last_load.txt
> >
> > # load into hbase: unset the old coprocessor attrs, then set them
> > # against the new jar path
> > echo "Loading ${OUTPUT_JAR} into HBASE"
> > cat > hbase_script <<- _EOF1_
> > disable 'test_table'
> > disable 'test_table2'
> >
> > alter 'test_table', METHOD => 'table_att_unset', NAME => 'coprocessor\$1'
> > alter 'test_table', METHOD => 'table_att', 'coprocessor'=>'hdfs:///${OUTPUT_JAR}|telescope.hbase.coprocessors.test|1001|'
> >
> > alter 'test_table2', METHOD => 'table_att_unset', NAME => 'coprocessor\$1'
> > alter 'test_table2', METHOD => 'table_att_unset', NAME => 'coprocessor\$2'
> > alter 'test_table2', METHOD => 'table_att', 'coprocessor'=>'hdfs:///${OUTPUT_JAR}|telescope.hbase.coprocessors.observers.Exporter1|1001|source_family=c'
> > alter 'test_table2', METHOD => 'table_att', 'coprocessor'=>'hdfs:///${OUTPUT_JAR}|telescope.hbase.coprocessors.observers.Expander2|1002|families=c'
> >
> > enable 'test_table'
> > enable 'test_table2'
> > exit
> > _EOF1_
> > hbase shell hbase_script
> > echo "${OUTPUT_JAR} loaded into HBASE"
> > echo "Cleaning up"
> >
> > # archive the generated shell script
> > mv hbase_script ARCHIVED/${EPOCH_NOW}/hbase_script
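
[Editor's note: the cache-busting rename at the heart of the script above can be sketched in isolation. This is a minimal, standalone illustration, not part of the original thread; the epoch is hard-coded here so the result is deterministic, whereas the real script uses `date +%s`.]

```shell
#!/bin/sh
# Strip the .jar extension from the input name, append an epoch,
# and rebuild the jar name — so each deploy gets a unique HDFS path
# and no class-loader cache can serve a stale coprocessor.
input_jar="HbaseCoprocessors.jar"
epoch="1374864000"                # real script: epoch=`date +%s`

base="${input_jar%.jar}"          # parameter expansion: drop suffix
output_jar="${base}${epoch}.jar"

echo "$output_jar"                # prints HbaseCoprocessors1374864000.jar
```

Using `${input_jar%.jar}` only strips a trailing `.jar`, which is slightly safer than the script's `cut -f1 -d.`, since `cut` would truncate a jar name containing extra dots (e.g. `my.coprocessors.jar` becomes `my`).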