Flume >> mail # user >> Getting "Checking file:conf/flume.conf for changes" message in loop


Vikas Kanth 2013-04-29, 15:31
Re: Getting "Checking file:conf/flume.conf for changes" message in loop
Vikas,

This message is normal and harmless.

2013-04-29 08:26:11,868 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:conf/flume.conf for changes

If you raise your log level to INFO, it will not show up at all.
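If it helps, the DEBUG output can also be suppressed at startup. A sketch, assuming the stock conf/log4j.properties layout that ships with the Flume distribution (flume.root.logger is the standard property there):

```properties
# conf/log4j.properties -- raise the root level so the DEBUG poller message is hidden
flume.root.logger=INFO,LOGFILE
```

The same override works per-run from the command line with -Dflume.root.logger=INFO,console.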

Regarding the reason you do not see the contents of your file in HDFS:
with the exec source and tail, events are buffered until 20 events have
accumulated. One way to work around this is to change the default from
20 to 1.

batchSize    20    The max number of lines to read and send to the channel at a time
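Applied to the configuration quoted below, that override is a one-line addition (the source name agent1.sources.tail is taken from the original mail):

```properties
# send each line to the channel immediately instead of buffering 20 events
agent1.sources.tail.batchSize = 1
```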

Alternatively, a recent patch adds a batchTimeout setting for the exec
source that flushes the buffer after a period of elapsed time. That fix is
available in the latest version of trunk.
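On a build that includes that patch, the setting would look something like this sketch (assuming the property is exposed as batchTimeout in milliseconds; check the documentation for your version):

```properties
# flush buffered events after 3 seconds even if fewer than batchSize lines arrived
agent1.sources.tail.batchTimeout = 3000
```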

-Jeff

On Mon, Apr 29, 2013 at 8:31 AM, Vikas Kanth <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am getting the following message in a loop, and the source file hasn't
> been copied to the destination.
>
> 2013-04-29 08:24:41,346 (lifecycleSupervisor-1-0) [INFO -
> org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:73)]
> Component type: CHANNEL, name: Channel-2 started
> 2013-04-29 08:24:41,846 (conf-file-poller-0) [INFO -
> org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents(DefaultLogicalNodeManager.java:141)]
> Starting Sink HDFS
> 2013-04-29 08:24:41,847 (conf-file-poller-0) [INFO -
> org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents(DefaultLogicalNodeManager.java:152)]
> Starting Source tail
> 2013-04-29 08:24:41,847 (lifecycleSupervisor-1-3) [INFO -
> org.apache.flume.source.ExecSource.start(ExecSource.java:155)] Exec source
> starting with command:tail -F /home/vkanth/temp/Sample2.txt
> 2013-04-29 08:24:41,850 (lifecycleSupervisor-1-3) [DEBUG -
> org.apache.flume.source.ExecSource.start(ExecSource.java:173)] Exec source
> started
> 2013-04-29 08:24:41,850 (lifecycleSupervisor-1-0) [INFO -
> org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:89)]
> Monitoried counter group for type: SINK, name: HDFS, registered
> successfully.
> 2013-04-29 08:24:41,851 (lifecycleSupervisor-1-0) [INFO -
> org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:73)]
> Component type: SINK, name: HDFS started
> 2013-04-29 08:24:41,852 (SinkRunner-PollingRunner-DefaultSinkProcessor)
> [DEBUG -
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:143)] Polling
> sink runner starting
> 2013-04-29 08:25:11,855 (conf-file-poller-0) [DEBUG -
> org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)]
> Checking file:conf/flume.conf for changes
> 2013-04-29 08:25:41,861 (conf-file-poller-0) [DEBUG -
> org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)]
> Checking file:conf/flume.conf for changes
> 2013-04-29 08:26:11,868 (conf-file-poller-0) [DEBUG -
> org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)]
> Checking file:conf/flume.conf for changes
> .......
> .......
>
>
> Flume.conf:
> agent1.sources = tail
> agent1.channels = Channel-2
> agent1.sinks = HDFS
>
> agent1.sources.tail.type = exec
> agent1.sources.tail.command = tail -F /home/vikas/temp/Sample2.txt
> agent1.sources.tail.channels = Channel-2
>
> agent1.sinks.HDFS.channel = Channel-2
> agent1.sinks.HDFS.type = hdfs
> agent1.sinks.HDFS.hdfs.path = hdfs://dev-pub01.xyz.abc.com:8020/tmp
> agent1.sinks.HDFS.hdfs.file.fileType = DataStream
>
> agent1.channels.Channel-2.type = memory
> agent1.channels.Channel-2.capacity = 1000
> agent1.channels.Channel-2.transactionCapacity=10
>
> Command:
> bin/flume-ng agent --conf ./conf/ -f conf/flume.conf
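For completeness: flume-ng agent is normally also given the agent name with -n/--name, matching the property prefix used in the file (agent1 here), so it knows which configuration block to load. A sketch of the full invocation, assuming that flag was simply left out of the mail:

```shell
bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -n agent1
```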
Vikas Kanth 2013-05-01, 15:00
Alexander Alten-Lorenz 2013-05-02, 09:54