Re: flume-ng agent startup problem
Jagadish Bihani 2012-08-11, 09:09
In my case flume is not transferring data to HDFS (my hadoop version is
0.20.1), and it doesn't show any error even in DEBUG log mode.
It works fine for other sinks.
Is there any known compatibility problem with hadoop 0.20.1? Or
can there be a problem due to a particular hadoop version?
(I know it's an old version, but it is on a production machine and can't be
upgraded as of now...)
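A quick check in such cases is whether the source ever bound its listening port at all. A minimal sketch, assuming a netcat source on port 44444 (the port number is an assumption here; substitute the one from your own configuration):

```shell
# Port the netcat source is configured to listen on (an assumed value;
# use the port from your flume_hdfs.conf).
PORT=44444

# lsof reports any process holding the port; no output means the source
# never started listening, even though the agent process is running.
if /usr/sbin/lsof -i:"$PORT" > /dev/null 2>&1; then
    echo "source is listening on port $PORT"
else
    echo "nothing bound to port $PORT - the source did not start"
fi
```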
Details of the configuration and log records are in the mail below.
On 08/10/2012 03:30 PM, Jagadish Bihani wrote:
> Thanks all for the inputs. After the initial problem I was able to
> start flume in every scenario except one, in
> which I use HDFS as the sink.
> I have a production machine with hadoop-0.20.1 installed. I have
> installed the latest flume, 1.2.0.
> It works fine for all the configurations (at least those I tried)
> except when the HDFS sink is used.
> I used a netcat listener as the source of the agent and HDFS as the
> sink. Then I start the agent using
> the command *"bin/flume-ng agent -n agent1 -c conf -f
> conf/flume_hdfs.conf --classpath
> with DEBUG logging mode enabled. I don't get any error/exception. I
> use the *"/usr/sbin/lsof -i:<port_no>"* command to check whether the source
> is actually bound to that port, and it doesn't return any port. But
> when I use the *file sink instead of the HDFS sink* and run lsof, it
> correctly shows me the port on which it is listening.
> Thus when the HDFS sink is used, even the source part of the agent
> doesn't work, it doesn't give any exception, and nothing is written to
> the HDFS sink.
> P.S. I have checked the user and permission details on HDFS. They are fine.
> I have run flume on my other machines with different versions of hadoop
> (0.23 & 1.0). The HDFS sink has run properly there.
> Does flume support hadoop-0.20.1, or is there something I am missing?
> This is my Configuration:
> agent1.sources = sequencer
> agent1.sinks =hdfsSink fileSink
> agent1.sinks =fileSink
> agent1.channels =memoryChannel fileChannel
> agent1.sources.sequencer.channels = fileChannel
> agent1.sinks.hdfsSink.channel = fileChannel
> This is the log which I get:
> bin/flume-ng agent -n agent1 -c conf -f conf/flume_hdfs.conf
> --classpath /MachineLearning/OTFA/hadoop-0.20.1-cluster1/hadoop-0.20.1-core.jar
> -Dflume.root.logger=DEBUG,console
> + exec /usr/java/jdk1.6.0_12/bin/java -Xmx20m
> -Dflume.root.logger=DEBUG,console -cp
> -Djava.library.path= org.apache.flume.node.Application -n agent1 -f
> 2012-08-10 10:56:50,604 (main) [INFO -
> Starting lifecycle supervisor 1
> 2012-08-10 10:56:50,607 (main) [INFO -
> org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node
> starting - agent1
> 2012-08-10 10:56:50,611 (lifecycleSupervisor-1-2) [INFO -
> Configuration provider starting
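For comparison, the quoted configuration above shows channel wiring but no source type or sink settings. A minimal netcat-to-HDFS agent for Flume 1.2.0 might look like the sketch below; the bind address, port, and HDFS path are assumptions for illustration, not values from this thread:

```properties
# Hypothetical minimal netcat -> HDFS agent. The component names follow
# the thread; the source type/bind/port and the HDFS path are assumed.
agent1.sources = sequencer
agent1.channels = fileChannel
agent1.sinks = hdfsSink

agent1.sources.sequencer.type = netcat
agent1.sources.sequencer.bind = localhost
agent1.sources.sequencer.port = 44444
agent1.sources.sequencer.channels = fileChannel

agent1.channels.fileChannel.type = file

agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode:9000/flume/events
agent1.sinks.hdfsSink.channel = fileChannel
```

Note that each sink must be bound to exactly one channel, and the `agent1.sinks` list should name every sink that has settings defined.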