Kafka, mail # user - Re: hadoop-consumer code in contrib package - 2013-01-17, 14:21
Thread:
navneet sharma 2013-01-14, 18:35
Felix GV 2013-01-14, 22:43
navneet sharma 2013-01-15, 17:06
Felix GV 2013-01-15, 18:17
navneet sharma 2013-01-17, 00:41
Jun Rao 2013-01-17, 05:12
Re: hadoop-consumer code in contrib package
That makes sense.

I tried an alternate approach: I am using the high-level consumer, going
through the Hadoop HDFS APIs, and pushing the data into HDFS.

I am not creating any jobs for that.
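
For context, a minimal sketch of this kind of setup (not the actual code; it
assumes the 0.8-style Java high-level consumer API and made-up ZooKeeper,
topic, and HDFS names -- 0.7 uses slightly different property names such as
zk.connect and groupid):

import java.net.URI;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class KafkaToHdfs {
    public static void main(String[] args) throws Exception {
        // High-level consumer configuration (0.8-style property names).
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // placeholder
        props.put("group.id", "hdfs-writer");          // placeholder
        ConsumerConnector connector =
            kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for the topic being archived.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 1);              // placeholder topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCountMap);
        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

        // Write straight to HDFS through the FileSystem API -- no MapReduce job.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        FSDataOutputStream out = fs.create(new Path("/kafka/my-topic/part-0"));

        // hasNext() blocks waiting for messages, so this loop runs forever
        // unless something shuts the connector down or times the iterator out.
        while (it.hasNext()) {
            out.write(it.next().message());
            out.write('\n');
        }

        connector.shutdown();
        out.close();
        fs.close();
    }
}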

The only problem I am seeing here is that the consumer is designed to run
forever, which means I need to find out how to close the HDFS file and stop
the consumer.

Is there any way to kill or close the high-level consumer gracefully?

I am running v0.7.0. I don't mind upgrading to a higher version if that
allows this kind of consumer handling.
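
To make the question concrete, something along these lines is what I am
hoping for (again only a sketch against the 0.8-style API, reusing the names
from the sketch above; that consumer.timeout.ms / ConsumerTimeoutException
and connector.shutdown() behave this way is an assumption, and the 10-second
value is arbitrary):

// Option A: bound the blocking iterator so the loop can end on its own.
props.put("consumer.timeout.ms", "10000"); // hasNext() throws after 10s idle

try {
    while (it.hasNext()) {
        out.write(it.next().message());
        out.write('\n');
    }
} catch (kafka.consumer.ConsumerTimeoutException e) {
    // No messages for 10 seconds -- treat the batch as finished.
}

// Option B: from another thread (e.g. a JVM shutdown hook), call
// connector.shutdown() so the blocked iterator returns and the loop exits.

// Either way, clean up in order: stop the consumer, then close the file.
connector.shutdown();
out.close();
fs.close();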

Thanks,
Navneet
On Thu, Jan 17, 2013 at 10:41 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
 
Reply: Jun Rao 2013-01-17, 15:29