If it's just files, why not use the `hadoop fs -put` command instead? If you want to play around with Flume, take a look at the spooling directory source or the exec source; you should be able to put something together that pushes data through Flume into Hadoop.
# Roll based on the block size only
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.rollInterval = 0
tier1.sinks.sink1.hdfs.rollSize = 120000000
# Seconds to wait before closing the file
tier1.sinks.sink1.hdfs.idleTimeout = 60
tier1.sinks.sink1.channel = channel1
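For context, here is a sketch of a minimal end-to-end agent config that these sink settings would slot into. The sink/channel names (tier1, sink1, channel1) and the HDFS path come from this thread; the spool directory path and the channel capacities are assumptions for illustration, not values from the thread:

```properties
# Hypothetical minimal Flume agent config; spoolDir and channel
# capacities are illustrative assumptions.
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1

# Spooling directory source: must point at a directory, not a file
tier1.sources.source1.type = spooldir
tier1.sources.source1.spoolDir = /tmp/flume-spool
tier1.sources.source1.channels = channel1

tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /flume/messages/
tier1.sinks.sink1.channel = channel1
```

An agent like this would be launched with something like `flume-ng agent --conf conf --conf-file flume.conf --name tier1`; note that the agent name passed with -n/--name must exactly match the property prefix (tier1 here), or none of the components will be created.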
Thanks for the reply. After I ran the agent with -n tier, I got a "source is not a directory" error, so I changed the source to /tmp/ and hdfs.path to /flume/messages/ in the config file and re-ran the command. Now the INFO message I get is "Spooling Directory Source runner has shutdown". What could be the problem? Please help me.

On Sun, Jun 15, 2014 at 10:21 PM, Mohit Durgapal <[EMAIL PROTECTED]> wrote:
I created the /flume/messages directories, but still nothing is written by Flume into those directories. Please help me.

On Mon, Jun 16, 2014 at 10:15 AM, kishore alajangi <[EMAIL PROTECTED]> wrote:

Thanks,
Kishore.