Chukwa, mail # user - Re: network compression between agent and collector


Re: network compression between agent and collector
Ariel Rabkin 2012-07-29, 03:29
I would do the DataOutputBuffer level -- in general, compressing
bigger blocks is more efficient since the compression algorithm has
more room to find duplicates. But trying to stripe across buffers
would leave you with awkwardness in the presence of missing data.
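
As a rough illustration of that point (the BlockSizeDemo class below is a
made-up toy, not Chukwa code), compressing the same records one-by-one and
then as a single block with plain java.util.zip shows the gap:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

public class BlockSizeDemo {
  // Deflate-compress a byte array and return the compressed size.
  static int compressedSize(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DeflaterOutputStream dos = new DeflaterOutputStream(bos);
    dos.write(input);
    dos.close();
    return bos.size();
  }

  public static void main(String[] args) throws IOException {
    byte[] record = "2012-07-28 20:19:00 INFO agent: heartbeat ok\n".getBytes();
    int separately = 0;
    byte[] block = new byte[record.length * 100];
    for (int i = 0; i < 100; i++) {
      separately += compressedSize(record);
      System.arraycopy(record, 0, block, i * record.length, record.length);
    }
    // One big block compresses far smaller than the sum of the
    // individually compressed records, because the compressor can
    // reference earlier copies of the repeated bytes.
    System.out.println("100 records compressed one by one: " + separately);
    System.out.println("same 100 records as one block:     " + compressedSize(block));
  }
}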

I would start with the DataOutputBuffer strategy, since it's easy to
do and not obviously the wrong thing -- if it seems to work
satisfactorily, declare victory and contribute the patch.
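
For concreteness, here is a minimal sketch of that strategy -- the
BufferCompressor name is hypothetical, and it assumes the collector side
symmetrically wraps the request stream in a GZIPInputStream:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import org.apache.hadoop.io.DataOutputBuffer;

public final class BufferCompressor {
  // Compress only the valid prefix of the buffer: getData() returns the
  // whole backing array, so getLength() is what bounds the write.
  public static byte[] compress(DataOutputBuffer buf) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(buf.getLength());
    GZIPOutputStream gz = new GZIPOutputStream(bos);
    gz.write(buf.getData(), 0, buf.getLength());
    gz.close();
    return bos.toByteArray();
  }
}

Gating both ends behind a config option would keep old agents and
collectors compatible during a rolling upgrade.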

On Sat, Jul 28, 2012 at 8:19 PM, Sourygna Luangsay
<[EMAIL PROTECTED]> wrote:
> Hi Ari,
>
> Yes, we do need such a feature for a project of ours, so we plan to develop it.
> When I come back from holidays, I'll create a JIRA.
>
> Meanwhile, don't hesitate to tell me more if you have any ideas for
> interesting features linked to such compression, or any advice on
> implementing it. For instance, I am not really sure right now at which
> level I should add the compression in the Chukwa Agent:
> - at the whole DataOutputBuffer level?
> - at the "data" field of every Chunk?
> - or at the adaptor level?
> (at first sight, the DataOutputBuffer level seems the easiest to implement).
>
> Thanks,
>
> Sourygna

--
Ari Rabkin [EMAIL PROTECTED]
Princeton Computer Science Department