Flume not moving data help !!!
Hi team, I created a Flume source and sink as follows on Hadoop YARN, but data is not getting transferred from the source to the sink. In HDFS it doesn't create any file, and on the local filesystem, every time I start the agent it creates one empty file. Below are my source and sink configs.

Source:
agent.sources = logger1
agent.sources.logger1.type = exec
agent.sources.logger1.command = tail -f /var/log/messages
agent.sources.logger1.batchsSize = 0
agent.sources.logger1.channels = memoryChannel
agent.channels = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.sinks = AvroSink
agent.sinks.AvroSink.type = avro
agent.sinks.AvroSink.channel = memoryChannel
agent.sinks.AvroSink.hostname = 192.168.147.101
agent.sinks.AvroSink.port = 4545
agent.sources.logger1.interceptors = itime ihost
agent.sources.logger1.interceptors.itime.type = TimestampInterceptor
agent.sources.logger1.interceptors.ihost.type = host
agent.sources.logger1.interceptors.ihost.useIP = false
agent.sources.logger1.interceptors.ihost.hostHeader = host
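Two entries above look suspect when checked against the Flume 1.x User Guide (my reading, so treat it as an assumption rather than a confirmed fix): the exec source property is spelled batchSize, not batchsSize, and the shorthand alias for the timestamp interceptor is timestamp, not TimestampInterceptor. A sketch of just those two lines corrected (20 is the documented default batch size):

agent.sources.logger1.batchSize = 20
agent.sources.logger1.interceptors.itime.type = timestamp

If the interceptor type cannot be resolved, the source typically fails to start, which by itself would explain why nothing ever reaches the collector.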

Sink at one of the slaves (a datanode in my YARN cluster):
collector.sources = AvroIn
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 4545
collector.sources.AvroIn.channels = mc1 mc2
collector.channels = mc1 mc2
collector.channels.mc1.type = memory
collector.channels.mc1.capacity = 100
collector.channels.mc2.type = memory
collector.channels.mc2.capacity = 100
collector.sinks = LocalOut HadoopOut
collector.sinks.LocalOut.type = file_roll
collector.sinks.LocalOut.sink.directory = /home/hadoop/flume
collector.sinks.LocalOut.sink.rollInterval = 0
collector.sinks.LocalOut.channel = mc1
collector.sinks.HadoopOut.type = hdfs
collector.sinks.HadoopOut.channel = mc2
collector.sinks.HadoopOut.hdfs.path = /flume
collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 10000
collector.sinks.HadoopOut.hdfs.rollInterval = 600
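One possible gap here (an assumption on my part; the NameNode host and port below are placeholders): with hdfs.path = /flume the HDFS sink resolves the path against whatever fs.defaultFS the collector machine's Hadoop client config supplies, so a missing or wrong client config on that node would point the sink somewhere unexpected. Spelling out the full URI removes that ambiguity:

collector.sinks.HadoopOut.hdfs.path = hdfs://namenode.example.com:8020/flume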

Can somebody point me to what I am doing wrong?
This is what I get in my local directory:
[hadoop@node1 flume]$ ls -lrt
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:25 1383243942803-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:28 1383244097923-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:31 1383244302225-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:33 1383244404929-1

When I restart the collector, it creates one 0-byte file.
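That matches my reading of the file_roll sink documentation (again an assumption): sink.rollInterval = 0 disables rolling, so the sink opens exactly one new file each time it starts. One file per restart is therefore expected, and the 0-byte size just means no events are arriving from the Avro source. To rule the roll settings out entirely, a periodic roll could be tried instead (300 seconds is an arbitrary example value):

collector.sinks.LocalOut.sink.rollInterval = 300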
Please help
*------------------------*

Cheers !!!

Siddharth Tiwari

Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God.”

"Maybe other people will try to limit me but I don't limit myself"
     