Thank you very much for your answer! I really appreciate you thinking this through with me.
Regarding the number of mappers for the export: yes, we can keep it low, but as you said, Sqoop will still try its best for the highest throughput, so even one mapper can cause replication lag.
Your idea of the non-replicated tables could work, but I'm almost sure we'll need to discard it, because it would be impossible to maintain across a few hundred machines that are all constantly changing: adding new servers, creating new exports, and so on.
The solutions we had in mind so far:
One option is an unofficial project for MySQL, whose development seems to have stopped. It doesn't support throttling out of the box, but in theory one could use Lua scripts to limit the number of queries. That, however, is no guarantee of limited data throughput (imagine one huge insert with thousands of rows...), and it doesn't seem ready for production.
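This is the core of the problem with any per-query limit: throughput is measured in bytes, not statements. A throttling feature, wherever it ends up living, would more likely look like a byte-based token bucket. A minimal sketch (all names here are hypothetical, not Sqoop or proxy code; the clock/sleep parameters exist only to make it testable):

```python
import time

class ByteThrottle:
    """Token bucket that caps throughput in bytes per second rather than
    queries per second, so a single huge INSERT cannot blow past the limit."""

    def __init__(self, bytes_per_sec, clock=time.monotonic, sleep=time.sleep):
        self.rate = bytes_per_sec
        self.tokens = bytes_per_sec  # start with one second's worth of budget
        self.last = clock()
        self.clock = clock
        self.sleep = sleep

    def acquire(self, nbytes):
        """Block until nbytes of budget is available, then consume it."""
        while True:
            now = self.clock()
            # Refill proportionally to elapsed time, capped at one second's budget.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            self.sleep((nbytes - self.tokens) / self.rate)
```

The caller would do something like `throttle.acquire(len(statement))` before each `cursor.execute(statement)`, which bounds the insert rate the replication thread has to absorb regardless of how the rows are batched.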
Another option is to discard Sqoop completely and write our own solution that puts exported lines from Hive onto a message queue, where we can process them however we want. I see this as a very complex and costly solution.
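To make the idea concrete, here is a rough sketch of the producer/consumer split, using Python's in-process `queue.Queue` as a stand-in for a real message broker (all function names are hypothetical). The point of the design is that the consumer, not the exporter, decides the pace at which rows hit MySQL:

```python
import queue

def export_lines(lines, out_queue, batch_size=100):
    """Read exported lines (e.g. from a Hive export file) and push them
    to a message queue in batches."""
    batch = []
    for line in lines:
        batch.append(line)
        if len(batch) >= batch_size:
            out_queue.put(batch)
            batch = []
    if batch:
        out_queue.put(batch)
    out_queue.put(None)  # sentinel: export finished

def consume(in_queue, apply_batch):
    """Drain batches and apply them to the database; any throttling or
    pacing lives here, decoupled from how fast Hive produced the data."""
    while True:
        batch = in_queue.get()
        if batch is None:
            break
        apply_batch(batch)
```

With a durable broker in the middle, the export can finish quickly while the consumer trickles batches into MySQL at whatever rate replication tolerates, which is exactly the decoupling Sqoop's direct-export model doesn't offer.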
Contributing to Sqoop
This is what I currently see as the best option: creating our own branch of Sqoop and adding the throttling feature.
If anyone has something else in mind, it's really appreciated.
From: Jarek Jarcec Cecho [[EMAIL PROTECTED]]
Sent: Thursday, September 13, 2012 12:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Throttling inserts to avoid replication lags
Sqoop tries for the best throughput when moving data from source to destination, so your issue might be tricky to solve. I was thinking about it and I have a couple of ideas:
1) Did you try limiting the number of concurrent connections using the "-m" parameter?
2) I can imagine that Sqoop's heavy parallelism can be hard on MySQL's single-threaded replication. Thinking out of the box: what about creating a table that won't be replicated (MySQL can limit replication at both the database and the table level) on all your nodes, and performing your load into it on all of them (sequentially or in parallel)? Once every node has the data, you can atomically switch the table on all nodes at once. I'm not sure whether it's feasible or whether it will actually work; I'm just trying to help.
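The "atomic switch" in point 2 maps naturally onto MySQL's `RENAME TABLE live TO old, staging TO live`, which swaps tables in a single atomic statement. A small sqlite3 sketch of the staging-then-swap flow (sqlite3 is used only so the example is self-contained; the table names and the single-table rename are illustrative, and on MySQL the swap would be the one atomic RENAME above):

```python
import sqlite3

def load_and_swap(conn, rows):
    """Load rows into a staging table that readers never see, then swap it
    in as the live table. On MySQL, the last two statements would instead be
    a single atomic:  RENAME TABLE live TO old, staging TO live;"""
    cur = conn.cursor()
    cur.execute("CREATE TABLE staging (id INTEGER, val TEXT)")
    cur.executemany("INSERT INTO staging VALUES (?, ?)", rows)
    cur.execute("DROP TABLE IF EXISTS live")
    cur.execute("ALTER TABLE staging RENAME TO live")
    conn.commit()
```

Because queries only ever see the old table or the fully loaded new one, the load itself can run as slowly or as fast as each node likes without readers observing a half-loaded state.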
On Thu, Sep 13, 2012 at 08:41:13AM +0000, Zoltán Tóth-Czifra wrote:
> Thank you for your answers!
> I have been reading about Sqoop 2, but since it's still under development it doesn't really serve me yet. Besides, my problem is not limiting connections, but somehow limiting the throughput of even a single connection.
> This problem might not be Sqoop-specific, but I wondered if anyone has faced this and solved it somehow.
> Thank you!
> From: Kathleen Ting [[EMAIL PROTECTED]]
> Sent: Thursday, September 13, 2012 1:27 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Throttling inserts to avoid replication lags
> Chuck, Zoltán,
> In Sqoop 2, it has been discussed that connections will allow the
> specification of a resource policy: resources will be managed by
> limiting the total number of physical Connections open at one time,
> with an option to disable Connections.
> More info: https://blogs.apache.org/sqoop/entry/apache_sqoop_highlights_of_sqoop
> Regards, Kathleen
> On Wed, Sep 12, 2012 at 8:08 AM, Connell, Chuck
> <[EMAIL PROTECTED]> wrote:
> > In my opinion, this is not a Sqoop problem. It is related to the RDBMS and
> > the way it handles high-volume updates. Those updates might be coming from
> > Sqoop, or they might be coming from a realtime stock market price feed.
> > I would go ahead and test the system as is. Let Sqoop do all its updates. If
> > you actually have a problem with inconsistencies or poor performance, then I