HBase >> mail # user >> issue about search speed and rowkey


Re: issue about search speed and rowkey
Lars checked in HBASE-6580 today, which deprecates HTablePool.

Please take a look.
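[Editor's note: a minimal sketch of the connection-based pattern that HBASE-6580 points to, assuming an HBase release where HConnection.getTable is available; the table, row, and column names here are placeholders, not from this thread, and running it requires a live cluster:]

```java
// Sketch: create one shared HConnection instead of an HTablePool, and get
// lightweight HTableInterface handles from it per operation.
Configuration conf = HBaseConfiguration.create();
HConnection connection = HConnectionManager.createConnection(conf);
try {
    HTableInterface table = connection.getTable("mytable"); // placeholder name
    try {
        Put put = new Put(Bytes.toBytes("rowkey"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("col"), Bytes.toBytes("value"));
        table.put(put);
    } finally {
        table.close(); // cheap: releases the handle, the shared connection stays open
    }
} finally {
    connection.close(); // close once, at application shutdown
}
```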

On Wed, Aug 7, 2013 at 6:08 PM, ch huang <[EMAIL PROTECTED]> wrote:

>  table.close(); does not close the table, it just returns the connection to
> the pool, because
>
> the putTable method in HTablePool is deprecated, see
>
>
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html#putTable(org.apache.hadoop.hbase.client.HTableInterface)
>
>
>
>
> On Wed, Aug 7, 2013 at 9:46 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>
> > Table is closed for each Put in writeRow(). This is not efficient.
> >
> > Take a look at http://hbase.apache.org/book.html#client , 9.3.1
> > Connections
> >
> > On Wed, Aug 7, 2013 at 5:11 AM, Lu, Wei <[EMAIL PROTECTED]> wrote:
> >
> > > Decrease the cache size (say, to 1000) and increase the batch size, or
> > > just leave batch at the default if the number of qualifiers per row is
> > > not too large.
> > >
> > >
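[Editor's note: a rough illustration of why the original settings time out. The 120000 ms timeout and the caching value of 10000 come from the trace and code quoted below; the idea that per-row processing time is the bottleneck is an inference, not stated in the thread.]

```java
// Why scan caching of 10000 can trip the scanner timeout: the client must
// finish processing a whole cached block of rows before it calls the region
// server again, and the server expires the scanner lease if that gap exceeds
// the timeout. The per-row time budget is therefore timeout / caching.
public class ScannerTimeoutBudget {

    // Milliseconds the client may spend per row, on average, between
    // scanner RPCs before the server-side lease expires.
    static long perRowBudgetMs(long timeoutMs, int caching) {
        return timeoutMs / caching;
    }

    public static void main(String[] args) {
        long timeoutMs = 120000; // timeout reported in the stack trace
        // caching = 10000 (as in the quoted code): only 12 ms per row allowed
        System.out.println(perRowBudgetMs(timeoutMs, 10000));
        // caching = 1000 (as suggested above): 120 ms per row allowed
        System.out.println(perRowBudgetMs(timeoutMs, 1000));
    }
}
```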
> > > -----Original Message-----
> > > From: ch huang [mailto:[EMAIL PROTECTED]]
> > > Sent: Wednesday, August 07, 2013 5:18 PM
> > > To: [EMAIL PROTECTED]
> > > Subject: issue about search speed and rowkey
> > >
> > > hi, all:
> > >
> > >         I have a problem running the following code. The MoveData method
> > > is used to get data from a source table, modify each row's rowkey, and
> > > insert it into a destination table, and I always get an error. Can anyone
> > > help?
> > >
> > >  public static void writeRow(HTablePool htp,String tablename, String
> > > rowkey,String cf,String col,String value) {
> > >         try {
> > >          HTableInterface table = htp.getTable(Bytes.toBytes(tablename));
> > >             Put put = new Put(Bytes.toBytes(rowkey));
> > >                 put.add(Bytes.toBytes(cf),
> > >                   Bytes.toBytes(col),
> > >                   Long.parseLong(rowkey),
> > >                   Bytes.toBytes(value));
> > >
> > >                 table.put(put);
> > >             table.close();
> > >         } catch (IOException e) {
> > >             e.printStackTrace();
> > >         }
> > >     }
> > >
> > > public static void MoveData(String src_t,String dest_t){
> > >
> > >     try{
> > >      HTable tabsrc = new HTable(conf, src_t);
> > >      HTable tabdest = new HTable(conf, dest_t);
> > >      tabsrc.setAutoFlush(false);
> > >      tabdest.setAutoFlush(false);
> > >      HTablePool tablePool = new HTablePool(conf, 5);
> > >      Scan scan = new Scan();
> > >      scan.setCaching(10000);
> > >         scan.setBatch(10);
> > >         ResultScanner rs = tabsrc.getScanner(scan);
> > >
> > >            for (Result r : rs){
> > >             ArrayList<String> al = new ArrayList<String>();
> > >             HashMap<String, String> hm = new HashMap<String, String>();
> > >             for (KeyValue kv : r.raw()){
> > >
> > >              hm.put(new String(kv.getQualifier()), new String(kv.getValue()));
> > >              al.add(new String(kv.getQualifier()));
> > >             }
> > >
> > >                for (int i = 0; i < al.size(); i++) {
> > >
> > >                  writeRow(tablePool, dest_t, hm.get("date").toString(), "info",
> > >                          al.get(i).toString(), hm.get(al.get(i)).toString());
> > >                }
> > >          }
> > >
> > >            rs.close();
> > >            tabsrc.close();
> > >            tabdest.close();
> > >
> > >     }catch(IOException e){
> > >      e.printStackTrace();
> > >     }
> > >
> > >    }
> > >
> > >
> > >
> > > 2013-08-07 16:43:31,250 WARN  [main] conf.Configuration
> > > (Configuration.java:warnOnceIfDeprecated(824)) - hadoop.native.lib is
> > > deprecated. Instead, use io.native.lib.available
> > > java.lang.RuntimeException:
> > > org.apache.hadoop.hbase.client.ScannerTimeoutException: 123891ms passed
> > > since the last invocation, timeout is currently set to 120000
> > >  at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> > >  at com.testme.demo.HBaseTest.MoveData(HBaseTest.java:186)
> > >  at com.testme.demo.HBaseTest.main(HBaseTest.java:314)
> > > Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException:
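
[Editor's note: besides lowering scan caching, the scanner timeout itself is configurable. A hedged hbase-site.xml fragment follows; the property name varies by HBase version (`hbase.regionserver.lease.period` in older releases, `hbase.client.scanner.timeout.period` in newer ones), and the value of 300000 ms is only an example, so check the docs for your version:]

```xml
<property>
  <!-- Scanner timeout in ms. Set it on the region servers as well as the
       client, since the server-side lease expiry uses the same setting. -->
  <name>hbase.client.scanner.timeout.period</name>
  <value>300000</value>
</property>
```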