We are currently using kafka-0.7.1.
I have two questions:
1. We use SimpleConsumer to aggregate messages into log files, without ZooKeeper. Sometimes we see kafka.common.OffsetOutOfRangeException,
and this exception happens when we start our consumer program. We do not know why this happens.
How can I get a valid latest message offset in kafka-0.7.1 when this exception happens?
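To show what we have in mind, here is a minimal sketch of the recovery step we are considering (the broker query itself is only referenced in the comment; the class and method names below are our own placeholders, not Kafka API):

```java
// Sketch of how we plan to pick a restart offset after catching
// kafka.common.OffsetOutOfRangeException in kafka-0.7.1. The broker query
// we would issue is roughly:
//   long[] offsets = consumer.getOffsetsBefore(topic, partition, -1L, 1);
// where -1L is kafka.api.OffsetRequest.LatestTime. We believe the reply
// is sorted newest-first, but that is part of what we are asking about.
// Only the offset-selection logic is shown here.
public class OffsetRecovery {
    /**
     * Pick a restart offset from a getOffsetsBefore reply.
     * If the reply is non-empty, take its first entry (assumed to be the
     * latest valid offset); otherwise fall back to a caller-supplied value.
     */
    public static long restartOffset(long[] offsets, long fallback) {
        return (offsets != null && offsets.length > 0) ? offsets[0] : fallback;
    }
}
```

Is this the right way to obtain a valid latest offset, or is there a better approach in 0.7.1?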
2. Before we start the consumer, we call the getOffsetsBefore function to get a list of valid offsets (up to maxSize) before a given time.
How can we interpret this list?
For example, this function returns an array [offset1, offset2].
Does this mean that offsets from offset1 to offset2 are valid, and that offsets from offset2 up to the current offset are also valid? We are confused about the meaning of this array.

   Sining Ma
