Kafka >> mail # dev >> having problem with 0.8 gzip compression


Re: having problem with 0.8 gzip compression
Jun,

I did a test this morning and got a very interesting result with your
command.  I started by wiping all the log files and cleaning up all the
ZooKeeper data files.

Once I restarted the server, the producer, and the consumer, and then
executed your command, what I got was an empty log, as follows:

Dumping /Users/scott/Temp/kafka/test-topic-0/00000000000000000000.log
Starting offset: 0

One observation: the 00000000000000000000.index file was getting huge, but
there was nothing in the 00000000000000000000.log file.

Thanks,
Scott
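
Jun's note further down in the thread points out that, by default, the 0.8
consumer only picks up new data. Below is a minimal sketch of a 0.8 high-level
consumer that reads test-topic from the beginning (auto.offset.reset=smallest);
the ZooKeeper address and group id are assumptions, not values from the thread.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TestKafka08Consumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // assumed ZooKeeper address
        props.put("group.id", "test-group");               // hypothetical consumer group
        // Read from the earliest offset so messages produced before the
        // consumer started (including compressed batches) are not skipped.
        props.put("auto.offset.reset", "smallest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for the single-partition test topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("test-topic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);

        ConsumerIterator<byte[], byte[]> it =
                streams.get("test-topic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}

If messages show up with this setting but not with the default, the missing
data is an offset-default issue rather than a compression issue.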
On Tue, Jul 9, 2013 at 8:40 PM, Jun Rao <[EMAIL PROTECTED]> wrote:

> Could you run the following command on one of the log files of your topic
> and attach the output?
>
> bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /tmp/kafka-logs/testtopic-0/00000000000000000000.log
>
> Thanks,
>
> Jun
>
>
> On Tue, Jul 9, 2013 at 3:23 PM, Scott Wang <
> [EMAIL PROTECTED]> wrote:
>
> > Another piece of information: Snappy compression also does not work.
> >
> > Thanks,
> > Scott
> >
> >
> > On Tue, Jul 9, 2013 at 11:07 AM, Scott Wang <
> > [EMAIL PROTECTED]> wrote:
> >
> > > I just tried it and it is still not showing up; thanks for looking into
> > > this.
> > >
> > > Thanks,
> > > Scott
> > >
> > >
> > > On Tue, Jul 9, 2013 at 8:06 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > >
> > >> Could you try starting the consumer first (and enable gzip in the
> > >> producer)?
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >>
> > >> On Mon, Jul 8, 2013 at 9:37 PM, Scott Wang <
> > >> [EMAIL PROTECTED]> wrote:
> > >>
> > >> > No, I did not start the consumer before the producer.  I actually
> > >> > started the producer first and nothing showed up in the consumer
> > >> > unless I commented out this line -- props.put("compression.codec",
> > >> > "gzip").  If I commented out the compression codec, everything just
> > >> > works.
> > >> >
> > >> >
> > >> > On Mon, Jul 8, 2013 at 9:07 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > >> >
> > >> > > Did you start the consumer before the producer? By default, the
> > >> > > consumer gets only the new data.
> > >> > >
> > >> > > Thanks,
> > >> > >
> > >> > > Jun
> > >> > >
> > >> > >
> > >> > > On Mon, Jul 8, 2013 at 2:53 PM, Scott Wang <
> > >> > > [EMAIL PROTECTED]> wrote:
> > >> > >
> > >> > > > I am testing with Kafka 0.8 beta and having a problem receiving
> > >> > > > messages in the consumer.  There is no error, so does anyone have
> > >> > > > any insights?  When I commented out the "compression.codec"
> > >> > > > setting, everything works fine.
> > >> > > >
> > >> > > > My producer:
> > >> > > > public class TestKafka08Prod {
> > >> > > >
> > >> > > >     public static void main(String [] args) {
> > >> > > >
> > >> > > >         Producer<Integer, String> producer = null;
> > >> > > >         try {
> > >> > > >             Properties props = new Properties();
> > >> > > >             props.put("metadata.broker.list", "localhost:9092");
> > >> > > >             props.put("serializer.class",
> > >> > > > "kafka.serializer.StringEncoder");
> > >> > > >             props.put("producer.type", "sync");
> > >> > > >             props.put("request.required.acks","1");
> > >> > > >             props.put("compression.codec", "gzip");
> > >> > > >             ProducerConfig config = new ProducerConfig(props);
> > >> > > >             producer = new Producer<Integer, String>(config);
> > >> > > >             int j=0;
> > >> > > >             for(int i=0; i<10; i++) {
> > >> > > >                 KeyedMessage<Integer, String> data = new
> > >> > > > KeyedMessage<Integer, String>("test-topic", "test-message: "+i+"
> > >> > > > "+System.currentTimeMillis());
> > >> > > >                 producer.send(data);
> > >> > > >
> > >> > > >             }
> > >> > > >
> > >> > > >         } catch (Exception e) {
> > >> > > >             System.out.println("Error happened: ");
> > >> > > >             e.printStackTrace();
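
The quoted producer code is cut off by the archive at this point. For
reference, here is a minimal self-contained sketch of an equivalent 0.8
producer with gzip enabled that also closes the producer when done; the class
name and message format mirror the quoted code and are illustrative only.

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestKafka08ProdSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");
        props.put("request.required.acks", "1");
        props.put("compression.codec", "gzip");   // the setting under discussion

        Producer<Integer, String> producer =
                new Producer<Integer, String>(new ProducerConfig(props));
        try {
            for (int i = 0; i < 10; i++) {
                producer.send(new KeyedMessage<Integer, String>(
                        "test-topic",
                        "test-message: " + i + " " + System.currentTimeMillis()));
            }
        } catch (Exception e) {
            System.out.println("Error happened: ");
            e.printStackTrace();
        } finally {
            producer.close();   // flush outstanding requests and release sockets
        }
    }
}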

 