Hi Joel

Thanks for the hints. It turned out to be a configuration error at the
operating system level.

We are using Debian Linux. Kafka calls setsockopt with SO_SNDBUF to set the
buffer size (socket.send.buffer). The operating system then sets the real
buffer size to min(socket.send.buffer, net.core.wmem_max). net.core.wmem_max
was set to a really small value (131071), which is the Debian Linux default.
However, if you delegate the buffer size to the operating system (basically,
you never call setsockopt with SO_SNDBUF), the kernel sizes the buffer using
the configuration in net.ipv4.tcp_wmem. The default values on our hosts were
4096, 16384, 4194304, meaning a buffer can automatically grow up to 4 MB (if
there is enough memory available).
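
To illustrate the difference from the JVM side, here is a minimal sketch (my
own test snippet, not Kafka code). Setting the buffer explicitly ends up as a
setsockopt(SO_SNDBUF) call, which the kernel caps per net.core.wmem_max; not
setting it leaves the socket subject to tcp_wmem autotuning:

    import java.net.Socket;

    public class SendBufferCheck {
        public static void main(String[] args) throws Exception {
            // Explicit buffer: becomes setsockopt(SO_SNDBUF); the kernel caps
            // the effective size at net.core.wmem_max (131071 on stock Debian).
            try (Socket explicitSocket = new Socket()) {
                explicitSocket.setSendBufferSize(4 * 1024 * 1024); // ask for 4 MB
                System.out.println("requested 4 MB, kernel reports "
                        + explicitSocket.getSendBufferSize() + " bytes");
            }

            // No setSendBufferSize call: SO_SNDBUF is never set, so once the
            // socket is connected the kernel autotunes the buffer within
            // net.ipv4.tcp_wmem (4096 16384 4194304 on our hosts, i.e. up to 4 MB).
            try (Socket autotunedSocket = new Socket()) {
                System.out.println("default initial size "
                        + autotunedSocket.getSendBufferSize() + " bytes");
            }
        }
    }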

Having such a small window was the main reason for the low performance.
With the new configuration we increased performance by an order of
magnitude.

The interesting thing is that, at least on Linux, the default buffer works
really well if you don't call setsockopt. I don't notice any difference
between calling setsockopt and not calling it (after fixing the machine's
configuration). So why call setsockopt at all?

Anyway, I see value in having a configuration option to turn the setsockopt
call on or off. I will send a patch.
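
Something along these lines (just a sketch of the idea; the "value <= 0 means
delegate to the OS" convention is a placeholder I am proposing, not existing
0.7.1 behavior):

    import java.net.Socket;
    import java.net.SocketException;

    public class SocketBufferConfig {
        // Sketch only: a configured value <= 0 would mean "don't touch
        // SO_SNDBUF and let the kernel autotune via net.ipv4.tcp_wmem".
        static void applySendBuffer(Socket socket, int configuredSendBufferBytes)
                throws SocketException {
            if (configuredSendBufferBytes > 0) {
                socket.setSendBufferSize(configuredSendBufferBytes); // explicit SO_SNDBUF
            }
            // else: no setsockopt call at all, so TCP autotuning stays in effect
        }

        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket()) {
                applySendBuffer(s, -1); // -1 = delegate buffer sizing to the OS
                System.out.println("send buffer: " + s.getSendBufferSize() + " bytes");
            }
        }
    }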

Regards,

Pablo

PS: We are using 0.7.1 and Linux 2.6.32.