Re: tail source exec unable to HDFS sink.
Can you write something to the file continuously after you start flume-ng?

If you do tail -f it will only pick up new entries.
Or you can just change the command in the config file from tail -f to
tail, so that each time it brings the last 10 lines of the file by default.
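
For example, just a sketch, reusing the file path from your flume.conf:

agent1.sources.tail.command = tail /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt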

~nitin

On Tue, Sep 18, 2012 at 2:51 PM, prabhu k <[EMAIL PROTECTED]> wrote:
> Hi Nitin,
>
> While executing flume-ng, I have updated the flume_test.txt file, but it is
> still unable to sink to HDFS.
>
> Thanks,
> Prabhu.
>
> On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar <[EMAIL PROTECTED]>
> wrote:
>>
>> Hi Prabhu,
>>
>> Are you sure there is continuous text being written to your file
>> flume_test.txt?
>>
>> If nothing is written to that file, Flume will not write anything into
>> HDFS.
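>>
>> For example, something like this in another terminal would keep appending
>> lines while the agent runs (just a sketch; adjust the path to wherever your
>> flume_test.txt lives):
>>
>> while true; do echo "test line $(date)" >> flume_test.txt; sleep 1; done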
>>
>> On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <[EMAIL PROTECTED]> wrote:
>> > Hi Brock,
>> >
>> > Thanks for the reply.
>> >
>> > As per your suggestion, I have modified it, but I still have the same issue.
>> >
>> > My Hadoop version is 1.0.3 and my Flume version is 1.2.0. Please let us
>> > know if there is any version incompatibility.
>> >
>> > On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <[EMAIL PROTECTED]>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> I believe this line:
>> >> agent1.sinks.HDFS.hdfs.type = hdfs
>> >>
>> >> should be:
>> >> agent1.sinks.HDFS.type = hdfs
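>> >>
>> >> I would also double-check the channel binding; if I remember correctly it
>> >> is spelled without the hdfs prefix as well, e.g.:
>> >>
>> >> agent1.sinks.HDFS.channel = MemoryChannel-2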
>> >>
>> >> Brock
>> >>
>> >> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <[EMAIL PROTECTED]>
>> >> wrote:
>> >> > Hi Users,
>> >> >
>> >> > I have followed the link below for sinking a sample text file to HDFS
>> >> > using a tail source.
>> >> >
>> >> >
>> >> >
>> >> > http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more
>> >> >
>> >> > I have executed flume-ng with the command below, and it seems to have
>> >> > gotten stuck. The flume.conf file is attached.
>> >> >
>> >> > #bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf
>> >> >
>> >> >
>> >> > flume.conf
>> >> > =========
>> >> > agent1.sources = tail
>> >> > agent1.channels = MemoryChannel-2
>> >> > agent1.sinks = HDFS
>> >> >
>> >> > agent1.sources.tail.type = exec
>> >> > agent1.sources.tail.command = tail -F /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> >> > agent1.sources.tail.channels = MemoryChannel-2
>> >> >
>> >> > agent1.sources.tail.interceptors = hostint
>> >> > agent1.sources.tail.interceptors.hostint.type = org.apache.flume.interceptor.HostInterceptor$Builder
>> >> > agent1.sources.tail.interceptors.hostint.preserveExisting = true
>> >> > agent1.sources.tail.interceptors.hostint.useIP = false
>> >> >
>> >> > agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
>> >> > agent1.sinks.HDFS.hdfs.type = hdfs
>> >> > agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
>> >> >
>> >> > agent1.sinks.HDFS.hdfs.fileType = dataStream
>> >> > agent1.sinks.HDFS.hdfs.writeFormat = text
>> >> > agent1.channels.MemoryChannel-2.type = memory
>> >> >
>> >> >
>> >> >
>> >> > flume.log
>> >> > =========
>> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
>> >> > 12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
>> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
>> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
>> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
>> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:conf/flume.conf
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS

Nitin Pawar