HBase, mail # user - Is "synchronized" required?


Thread:
- Bing Li 2013-02-04, 20:20
- Harsh J 2013-02-04, 20:21
- Ted Yu 2013-02-04, 20:25
- Bing Li 2013-02-04, 20:32
- Haijia Zhou 2013-02-04, 20:42
- Adrien Mogenet 2013-02-04, 21:13
- Nicolas Liochon 2013-02-04, 21:31
- Bing Li 2013-02-04, 22:40
- Nicolas Liochon 2013-02-04, 22:49
- Bing Li 2013-02-05, 16:54
- lars hofhansl 2013-02-06, 05:05
- Bing Li 2013-02-07, 08:10
- lars hofhansl 2013-02-07, 17:24
- Bing Li 2013-02-06, 06:36
Re: Is "synchronized" required?
Adrien Mogenet 2013-02-06, 07:45
I probably don't know your application well enough to give an accurate answer,
but you could have a look at asynchbase [
https://github.com/OpenTSDB/asynchbase] if you have thread-safety issues
and need to control how resources are shared across your threads.
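A bounded pool of the kind being debated in this thread (Bing's hand-rolled pool, or HBase's HTablePool) can be sketched in a few lines of plain Java. This is an illustrative stand-in (`SimplePool` is a made-up class), not asynchbase's or HTablePool's actual API:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

// Minimal bounded resource pool: hands out an idle instance when one is
// available, creates a new one otherwise, and keeps at most maxIdle
// instances around when they are returned.
class SimplePool<T> {
    private final LinkedBlockingQueue<T> idle;
    private final Supplier<T> factory;

    SimplePool(int maxIdle, Supplier<T> factory) {
        this.idle = new LinkedBlockingQueue<>(maxIdle);
        this.factory = factory;
    }

    T borrow() {
        T t = idle.poll();               // reuse an idle instance if any
        return (t != null) ? t : factory.get();
    }

    void release(T t) {
        idle.offer(t);                   // silently dropped if pool is full
    }

    int idleCount() {
        return idle.size();
    }
}
```

The key property is that each borrowed instance is used by only one thread at a time, so the pooled object itself never needs to be thread-safe.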
On Wed, Feb 6, 2013 at 7:36 AM, Bing Li <[EMAIL PROTECTED]> wrote:

> Lars,
>
> I found that the exceptions have nothing to do with a shared HTable,
> at least.
>
> To save resources, I designed a pool for the classes that write to
> and read from HBase. The primary resource consumed in those classes
> is HTable. The pool has some bugs.
>
> My question is whether it is necessary to design such a pool. Is it
> fine to create an instance of HTable for each thread?
>
> I noticed that HBase has a class, HTablePool. Maybe the pool I
> designed is NOT required?
>
> Thanks so much!
>
> Best wishes!
> Bing
>
> On Wed, Feb 6, 2013 at 1:05 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> > Are you sharing this.rankTable between threads? HTable is not thread safe.
> >
> > -- Lars
> >
> >
> >
> > ________________________________
> >  From: Bing Li <[EMAIL PROTECTED]>
> > To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>; user
> <[EMAIL PROTECTED]>
> > Sent: Tuesday, February 5, 2013 8:54 AM
> > Subject: Re: Is "synchronized" required?
> >
> > Dear all,
> >
> > After I removed "synchronized" from the write method, I get the
> > following exceptions when reading. Before the removal, there were no
> > such exceptions.
> >
> > Could you help me figure out how to solve this?
> >
> > Thanks so much!
> >
> > Best wishes,
> > Bing
> >
> >      [java] Feb 6, 2013 12:21:31 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
> >      [java] WARNING: Unexpected exception receiving call responses
> >      [java] java.lang.NullPointerException
> >      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
> >      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
> >      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
> >      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> >      [java] Feb 6, 2013 12:21:31 AM org.apache.hadoop.hbase.client.ScannerCallable close
> >      [java] WARNING: Ignore, probably already closed
> >      [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020 failed on local exception: java.io.IOException: Unexpected exception receiving call responses
> >      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
> >      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
> >      [java]     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
> >      [java]     at $Proxy6.close(Unknown Source)
> >      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
> >      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
> >      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
> >      [java]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
> >      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
> >      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
> >      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
> >      [java]     at com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
> >      [java]     at com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)

Adrien Mogenet
06.59.16.64.22
http://www.mogenet.me
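The "one HTable instance per thread" option raised above is simplest to express with a ThreadLocal. The sketch below is runnable without HBase: `FakeTable` is a stand-in class (a real HTable needs a live cluster and configuration), and the table name "rank" is only assumed from the `this.rankTable` field mentioned earlier in the thread:

```java
// Stand-in for a non-thread-safe client object such as HTable.
class FakeTable {
    final String name;
    FakeTable(String name) { this.name = name; }
}

// One instance per thread: each thread lazily creates its own FakeTable
// on first access, so no instance is ever shared between threads.
class PerThreadTables {
    static final ThreadLocal<FakeTable> TABLE =
        ThreadLocal.withInitial(() -> new FakeTable("rank"));
}
```

Since each thread only ever sees its own instance, no "synchronized" is needed around the table calls themselves; the trade-off is one instance's worth of resources per live thread.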
- lars hofhansl 2013-02-06, 07:44
- Bing Li 2013-02-06, 10:31
- lars hofhansl 2013-02-06, 18:54