Ross Black 2013-03-19, 04:28
Yes, your understanding is correct. The reason we have to recompress the
messages is to assign a unique offset to each message inside a compressed
message set. Some preliminary load testing shows a 30% increase in CPU
usage, but that is with GZIP, which is known to be CPU intensive. Later
this week we will know the CPU usage for a lighter compression codec such
as Snappy, and will post the results on the mailing list.
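To make the cost concrete, here is a minimal, hypothetical sketch of the decompress/renumber/recompress cycle using plain `java.util.zip` GZIP streams. This is not Kafka source code: the class and method names (`OffsetReassignSketch`, `assignOffsets`) and the "offset:payload" wire format are illustrative assumptions, but the shape of the work is the same as what the 0.8 broker does when it assigns offsets to a compressed set.

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

// Hypothetical sketch (not Kafka source): why assigning offsets to a
// compressed message set forces a full decompress/recompress on the broker.
public class OffsetReassignSketch {

    // A message paired with its broker-assigned offset.
    static final class Message {
        final long offset;
        final String payload;
        Message(long offset, String payload) { this.offset = offset; this.payload = payload; }
    }

    // Compress messages as "offset:payload" lines inside one GZIP stream.
    static byte[] compress(List<Message> messages) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(new GZIPOutputStream(buf), "UTF-8")) {
            for (Message m : messages) {
                w.write(m.offset + ":" + m.payload + "\n");
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf.toByteArray();
    }

    static List<Message> decompress(byte[] data) {
        List<Message> out = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(data)), "UTF-8"))) {
            String line;
            while ((line = r.readLine()) != null) {
                int sep = line.indexOf(':');
                out.add(new Message(Long.parseLong(line.substring(0, sep)),
                                    line.substring(sep + 1)));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out;
    }

    // The offsets inside the compressed blob cannot be rewritten in place:
    // the whole set is inflated, renumbered from nextOffset, and deflated again.
    static byte[] assignOffsets(byte[] compressedFromProducer, long nextOffset) {
        List<Message> renumbered = new ArrayList<>();
        for (Message m : decompress(compressedFromProducer)) {
            renumbered.add(new Message(nextOffset++, m.payload));
        }
        // CPU cost: one full recompression per compressed produce request.
        return compress(renumbered);
    }
}
```

The recompression step is the one whose codec choice dominates CPU: the inflate/renumber part is cheap, so swapping GZIP for a lighter codec like Snappy on the deflate side is where the savings would come from.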
On Monday, March 18, 2013, Ross Black wrote:
> I have just started looking at moving from 0.7 to 0.8 and wanted to confirm
> my understanding of code in the message server/broker.
> In the code for 0.8, KafkaApis.appendToLocalLog calls log.append(...,
> assignOffsets = true), which then calls ByteBufferMessageSet.assignOffsets.
> This method seems to uncompress and then re-compress the entire set of
> messages.
> Is my understanding of the code correct?
> Has any testing been done on the CPU consumption / performance of the
> message server to determine whether this adversely impacts message
> throughput under high load?
Ross Black 2013-03-19, 10:28