HBase user mailing list: Batch returned value and exception handling


Amit Sela 2013-03-14, 17:35
Jean-Marc Spaggiari 2013-03-14, 17:52
Amit Sela 2013-03-14, 18:34
Jean-Marc Spaggiari 2013-03-14, 18:42
Jean-Marc Spaggiari 2013-03-14, 18:55
Ted Yu 2013-03-14, 20:37
Jean-Marc Spaggiari 2013-03-14, 22:36
Ted Yu 2013-03-14, 22:51
Jean-Marc Spaggiari 2013-03-15, 00:14
Re: Batch returned value and exception handling.
I am not very familiar with ReplicationSink.
Maybe J-D can tell us why the results from the batch() call are not checked.

Now I understand your argument better.

On Thu, Mar 14, 2013 at 5:14 PM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> ReplicationSink.batch() calls HTable.batch(list) without looking at
> the results. The idea is: should we allow something like
> HTableInterface.batch(List<>, null) for the cases where we don't need
> to retrieve the results of the calls?
>
> Today, in HConnectionManager.processBatchCallback() you will get an
> NPE right from the first line: "if (results.length != list.size())".
>
> If I have 1000 increments to send and have nothing planned in case
> they fail, do I really want to create an array of 1000 objects for
> nothing, iterate over it, etc. when it might have been possible to
> simply drop it?
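For illustration, a minimal sketch of the caller-side pattern being discussed here; the helper class and the list of increments are hypothetical, and the batch() signatures are the HTableInterface ones quoted further down in the thread:

  import java.io.IOException;
  import java.util.List;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.Row;

  // Hypothetical fire-and-forget helper: the caller has no use for the
  // per-operation results, yet today it must still allocate them.
  class FireAndForget {
    static void send(HTableInterface table, List<? extends Row> increments)
        throws IOException, InterruptedException {
      Object[] ignored = new Object[increments.size()]; // allocated only to satisfy the API
      table.batch(increments, ignored);                 // results are filled in, then dropped
      // The proposal above: table.batch(increments, null). With the current code
      // that would NPE in HConnectionManager.processBatchCallback() at
      // "if (results.length != list.size())".
    }
  }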
>
> If results == null, in HConnectionManager.processBatchCallback we can
> use workingList in step 3 instead of iterating again in step 4.
>
> I will try to explain that a bit more in the JIRA.
>
> JM
>
>
> 2013/3/14 Ted Yu <[EMAIL PROTECTED]>:
> > bq.  Should we mark it as deprecated in the interface too?
> >
> > Yes. That was my intention.
> >
> > I am not clear about your second suggestion, though.
> >
> > Cheers
> >
> > On Thu, Mar 14, 2013 at 3:36 PM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >
> >> I agree.
> >>
> >> This method is also in the interface declaration. Should we mark it as
> >> deprecated in the interface too?
> >>
> >> Also, if someone doesn't want to get the results, should we find a way
> >> to allow the user to pass null for results?
> >>
> >> 2013/3/14 Ted Yu <[EMAIL PROTECTED]>:
> >> > Looking at this batch() method in HTable:
> >> >
> >> >   Object[] batch(final List<? extends Row> actions)
> >> >       throws IOException, InterruptedException;
> >> >
> >> > I think the above method should be deprecated due to the issue raised
> >> > by Amit.
> >> > The following method is more reliable:
> >> >
> >> >   void batch(final List<? extends Row> actions, final Object[] results)
> >> >       throws IOException, InterruptedException;
> >> >
> >> > I plan to raise a JIRA for deprecating the first method, if I don't
> >> > hear objections.
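As a rough sketch only (not the actual HBase change, and the interface name here is a stand-in for the relevant slice of HTableInterface), the deprecation being proposed could look like this:

  import java.io.IOException;
  import java.util.List;
  import org.apache.hadoop.hbase.client.Row;

  // Illustration only: stand-in for the two batch() declarations.
  interface BatchMethods {
    /**
     * @deprecated If the call throws RetriesExhaustedWithDetailsException,
     * the returned array is lost to the caller; prefer
     * {@link #batch(List, Object[])}, which fills a caller-supplied array.
     */
    @Deprecated
    Object[] batch(List<? extends Row> actions)
        throws IOException, InterruptedException;

    void batch(List<? extends Row> actions, Object[] results)
        throws IOException, InterruptedException;
  }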
> >> >
> >> > Cheers
> >> >
> >> > On Thu, Mar 14, 2013 at 11:55 AM, Jean-Marc Spaggiari <
> >> > [EMAIL PROTECTED]> wrote:
> >> >
> >> >> Amit, do it this way:
> >> >>
> >> >>       Object[] res = new Object[batch.size()];
> >> >>       try {
> >> >>         table.batch(batch, res);
> >> >>
> >> >> Then res will contain the results, and the exceptions, even if you
> >> >> catch a RetriesExhaustedWithDetailsException because one of the
> >> >> operations in your batch failed.
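Spelled out, that pattern might look like the sketch below (the batch list and the printouts are made up); the point is that res is populated even when the call throws, with each slot holding either a Result or, for a failed operation, the exception:

  import java.io.IOException;
  import java.util.List;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
  import org.apache.hadoop.hbase.client.Row;

  // Illustrative sketch: run a batch, then inspect per-operation outcomes.
  class BatchWithResults {
    static void run(HTableInterface table, List<? extends Row> batch)
        throws IOException, InterruptedException {
      Object[] res = new Object[batch.size()];
      try {
        table.batch(batch, res);
      } catch (RetriesExhaustedWithDetailsException e) {
        // res has already been filled in; fall through and inspect it.
      }
      for (int i = 0; i < res.length; i++) {
        if (res[i] instanceof Result) {
          System.out.println("action " + i + " succeeded");         // res[i] is the Result
        } else {
          System.out.println("action " + i + " failed: " + res[i]); // Throwable (or null)
        }
      }
    }
  }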
> >> >>
> >> >> JM
> >> >>
> >> >> 2013/3/14 Jean-Marc Spaggiari <[EMAIL PROTECTED]>:
> >> >> > Can you paste the complete stack trace here, with the causes too?
> >> >> >
> >> >> > I will try your piece of code locally to try to reproduce it.
> >> >> >
> >> >> > JM
> >> >> >
> >> >> > 2013/3/14 Amit Sela <[EMAIL PROTECTED]>:
> >> >> >> I did look at HConnectionManager, and that is the reason I expected
> >> >> >> the scenario you just described. But running the test from my
> >> >> >> development environment (IntelliJ IDEA), I did not get any returned
> >> >> >> value; instead the exception is thrown, and after I catch it the
> >> >> >> result is null...
> >> >> >>
> >> >> >> Object[] res = null;
> >> >> >> try {
> >> >> >>       res = table.batch(batch);
> >> >> >> } catch (RetriesExhaustedWithDetailsException
> >> >> >> retriesExhaustedWithDetailsException) {
> >> >> >>       retriesExhaustedWithDetailsException.printStackTrace();
> >> >> >> }
> >> >> >> if (res == null) {
> >> >> >>       System.out.println("No results - returned null.");
> >> >> >>       return;
> >> >> >> }
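If all you have is the thrown exception, it also carries per-row details. A small sketch, assuming the usual accessors on RetriesExhaustedWithDetailsException (getNumExceptions(), getRow(int), getCause(int)):

  import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;

  // Sketch: print one line per failed action carried by the exception.
  class FailureDump {
    static void dump(RetriesExhaustedWithDetailsException e) {
      for (int i = 0; i < e.getNumExceptions(); i++) {
        System.out.println("row=" + e.getRow(i) + " cause=" + e.getCause(i));
      }
    }
  }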
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Thu, Mar 14, 2013 at 7:52 PM, Jean-Marc Spaggiari <
> >> >> >> [EMAIL PROTECTED]> wrote:
> >> >> >>
> >> >> >>> Hi Amit,
> >> >> >>>
>
Ted Yu 2013-03-14, 22:28
Amit Sela 2013-03-14, 19:11