HBase >> mail # user >> Coprocessor end point vs MapReduce?

Re: Coprocessor end point vs MapReduce?
Hi all,

First, sorry for my slow reply to this thread; it went to my spam
folder and I lost sight of it.

I don’t have good knowledge of RDBMS, so I don’t have good knowledge
of triggers either. That’s why I looked at the endpoints too: they are
pretty new to me.

First, I can’t really use multiple tables. I have one process writing
to this table in near real-time. Another one is deleting from this
table too. But some rows are never deleted: they time out and need to
be moved by the process I’m building here.

I was not aware that the priority can be set for an MR job (any link
showing how?). That’s something I will dig into. I was a bit worried
about the network load if I do the deletes row by row instead of in
bulk.
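For what it’s worth, a job can be pointed at a low-priority scheduler queue at submission time. The sketch below is only illustrative: the jar, the main class, and the `low-priority` queue name are made up, and the queue has to exist in the fair/capacity scheduler configuration first. With the 0.94-era `mapred.*` property names it would look roughly like:

```shell
# Submit the weekly move/delete job into a hypothetical "low-priority" queue.
# The queue itself must be defined in fair-scheduler.xml or
# capacity-scheduler.xml before this works.
hadoop jar weekly-move.jar com.example.WeeklyMoveJob \
  -D mapred.job.queue.name=low-priority \
  -D mapred.job.priority=LOW \
  input-table output-table
```

Later Hadoop versions renamed these properties to `mapreduce.job.queuename` and `mapreduce.job.priority`.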

What I still don’t understand is: since the CP and MR both run on the
region side, why is MR better than the CP? Is it because the Hadoop
framework takes care of it and guarantees that it runs on all the
regions?

Also, are there some sort of “pre” and “post” methods I can override
for MR jobs, to build up the list of puts/deletes and submit them at
the end? Or should I do that one by one in the map method?
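For what it’s worth, Hadoop’s `org.apache.hadoop.mapreduce.Mapper` exposes exactly this: `setup(Context)` runs once before the first `map()` call and `cleanup(Context)` runs once after the last. The plain-Java sketch below has no Hadoop dependency (the class name and batch size are made up for illustration); it only shows the buffer-in-map, flush-in-cleanup pattern one would use to batch Deletes instead of submitting them row by row:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a Hadoop Mapper: buffers "deletes" in map() and flushes
// them in batches, with a final flush in cleanup(), rather than issuing
// one delete per row.
class BufferingMapper {
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();
    private final int batchSize;

    BufferingMapper(int batchSize) { this.batchSize = batchSize; }

    void setup() {                 // runs once, before any map() call
        buffer.clear();
    }

    void map(String rowKey) {      // called once per input row
        buffer.add(rowKey);
        if (buffer.size() >= batchSize) flush();  // spill full batches early
    }

    void cleanup() {               // runs once, after the last map() call
        if (!buffer.isEmpty()) flush();
    }

    private void flush() {         // stands in for table.delete(List<Delete>)
        flushed.add(new ArrayList<>(buffer));
        buffer.clear();
    }

    List<List<String>> batches() { return flushed; }
}
```

In a real job the flush would hand the accumulated `Delete` objects to the HBase client in one call instead of one RPC per row.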


2012/10/18, lohit <[EMAIL PROTECTED]>:
> I might be a little off here. If rows are moved to another table on a weekly
> or daily basis, why not create a per-week or per-day table?
> That way you don't need to copy and delete. Of course, it will not work if
> you are selectively filtering between timestamps, and clients have to have a
> notion of multiple tables.
> 2012/10/18 Anoop Sam John <[EMAIL PROTECTED]>
>> A CP and endpoints operate at a region level. Any operation within one
>> region we can perform using this. I have seen in the use case below that
>> along with the delete there was a need for inserting data into some other
>> table as well. Also, this was a kind of periodic action. I really doubt
>> whether the endpoints alone can be used here; I also tend towards the MR.
>>   The idea behind the bulk delete CP is simple. We have a use case of
>> deleting a bulk of rows, and this needs to be an online delete. I have
>> also seen many people on the mailing list ask questions about that. In
>> all cases people were using scans, getting the row keys to the client
>> side, and then doing the deletes. Yes, most of the time the complaint was
>> the slowness. One bulk delete performance improvement was done in
>> HBASE-6284. Still, we thought we could do the whole operation
>> (scan + delete) on the server side, and we can make use of the endpoints
>> here. This will be much faster and can be used for online bulk deletes.
>> -Anoop-
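For context, the client-side pattern Anoop describes (scan, pull the row keys to the client, then send the deletes back) looks roughly like the sketch below with the 0.94-era HBase client API. The table name is a placeholder, and the scan would normally carry the timestamp filter; this is an illustration of the round trips, not a recommended implementation:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ClientSideBulkDelete {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");  // hypothetical table name
        Scan scan = new Scan();                       // add time-range/filters here
        List<Delete> deletes = new ArrayList<Delete>();
        ResultScanner scanner = table.getScanner(scan);
        try {
            // Every matching row key crosses the network to the client...
            for (Result r : scanner) {
                deletes.add(new Delete(r.getRow()));
            }
        } finally {
            scanner.close();
        }
        // ...and the Deletes cross it again on the way back. These round
        // trips are what a server-side endpoint (scan + delete inside the
        // region) avoids.
        table.delete(deletes);
        table.close();
    }
}
```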
>> ________________________________________
>> From: Michael Segel [[EMAIL PROTECTED]]
>> Sent: Thursday, October 18, 2012 11:31 PM
>> Subject: Re: Coprocessor end point vs MapReduce?
>> Doug,
>> One thing that concerns me is that a lot of folks are gravitating to
>> Coprocessors and may be using them for the wrong thing.
>> Has anyone done any sort of research as to some of the limitations and
>> negative impacts on using coprocessors?
>> While I haven't really toyed with the idea of bulk deletes, periodic
>> deletes are probably not a good use of coprocessors... however, using them
>> to synchronize tables would be a valid use case.
>> Thx
>> -Mike
>> On Oct 18, 2012, at 7:36 AM, Doug Meil <[EMAIL PROTECTED]>
>> wrote:
>> >
>> > To echo what Mike said about KISS, would you use triggers for a large
>> > time-sensitive batch job in an RDBMS?  It's possible, but probably not.
>> > Then you might want to think twice about using co-processors for such a
>> > purpose with HBase.
>> >
>> > On 10/17/12 9:50 PM, "Michael Segel" <[EMAIL PROTECTED]> wrote:
>> >
>> >> Run your weekly job in a low priority fair scheduler/capacity
>> >> scheduler
>> >> queue.
>> >>
>> >> Maybe it's just me, but I look at Coprocessors as a similar structure