Patrick Wendell 2012-08-08, 16:49
One possibility is that Flume is not finding the Hadoop classpath correctly
and is silently failing when trying to create the HDFS sink. I've run into
something like this before and thought we had fixed it.
Do you have HADOOP_HOME set in your environment? If you run "$> hadoop
classpath" on the command line, does it correctly print out the Hadoop
classpath? Flume uses these to find the correct Hadoop directories to
include in its classpath.
Also, can you run ./flume-ng with the -d option to print out the classpath
that is being used to launch Flume? You want to verify that your Hadoop
directory is in there.
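A quick way to run the checks suggested above is a short script like the following. This is only a sketch: it assumes a typical Linux shell, and that the hadoop launcher is (or should be) on the PATH.

```shell
# Sanity-check the Hadoop environment that Flume will see.
# Prints HADOOP_HOME (or a placeholder if unset), then the Hadoop
# classpath if the hadoop command is available.
echo "HADOOP_HOME=${HADOOP_HOME:-<not set>}"
if command -v hadoop >/dev/null 2>&1; then
    hadoop classpath
else
    echo "hadoop command not found on PATH"
fi
```

If `hadoop classpath` prints nothing useful (or the command is missing), flume-ng has nothing to derive the Hadoop jars from, which matches the silent-failure symptom described here.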
On Sat, Aug 11, 2012 at 2:09 AM, Jagadish Bihani <
[EMAIL PROTECTED]> wrote:
> In my case Flume is not transferring data to HDFS (my Hadoop version
> is 0.20.1), and it doesn't show any error even in DEBUG log mode.
> It works fine for other sinks.
> Is there any known compatibility problem with Hadoop 0.20.1? Or
> can there be a problem due to a particular Hadoop version?
> (I know it's an old version, but it is on a production machine and can't be
> upgraded as of now...)
> Details of the configuration and log records are in the mail below.
> Thanks,
> On 08/10/2012 03:30 PM, Jagadish Bihani wrote:
> Thanks all for the inputs. After the initial problem I was able to start
> Flume in every scenario except one, in which I use HDFS as the sink.
> I have a production machine with hadoop-0.20.1 installed, and I have
> installed the latest Flume, 1.2.0.
> It works fine for all the configurations (at least the ones I tried) except
> when the HDFS sink is used.
> I used a netcat listener as the source of the agent, with HDFS as the sink.
> Then I start the agent using the command "bin/flume-ng agent -n agent1 -c conf -f
> conf/flume_hdfs.conf --classpath
> with DEBUG logging enabled. I don't get any error/exception. I use the
> "/usr/sbin/lsof -i:<port_no>" command to check whether the source is
> actually bound to that port, and it doesn't return anything. But when I
> use a file sink instead of the HDFS sink and run lsof, it correctly shows
> me the port on which it is listening.
> Thus when the HDFS sink is used, even the source part of the agent doesn't
> work, it doesn't give any exception, and nothing is written to the
> HDFS sink.
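The port check described above can be scripted as below. This is a sketch, not from the thread: the port number 44444 is a stand-in for whatever the netcat source is configured with, and the lsof path varies by distro (hence the lookup).

```shell
# Hypothetical check: is any process listening on the source port?
# Substitute PORT with the port from your netcat source config.
PORT=44444
LSOF=$(command -v lsof || echo /usr/sbin/lsof)
"$LSOF" -i:"$PORT" 2>/dev/null || echo "nothing bound to port $PORT"
```

An empty result (the fallback message) with the HDFS sink configured, versus a LISTEN line with the file sink, would confirm the symptom that the whole agent silently fails to start its source.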
> P.S. I have checked the user and permission details on HDFS. They are fine.
> I have run Flume on my other machines with different versions of Hadoop
> (0.23 & 1.0), and it ran the HDFS sink properly there.
> Does Flume support hadoop-0.20.1, or is there something I am missing?
> This is my configuration:
> agent1.sources = sequencer
> agent1.sinks = hdfsSink fileSink
> agent1.sinks = fileSink
> agent1.channels = memoryChannel fileChannel
> agent1.sources.sequencer.channels = fileChannel
> agent1.sinks.hdfsSink.channel = fileChannel
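The quoted configuration above is missing the source, channel, and sink type definitions (presumably trimmed from the mail). For comparison, a minimal complete netcat-to-HDFS agent for Flume 1.x looks roughly like the sketch below; the bind address, port, and HDFS path are placeholders, not values taken from this thread.

```properties
# Minimal sketch of a netcat -> file channel -> HDFS agent (Flume 1.x).
agent1.sources = sequencer
agent1.channels = fileChannel
agent1.sinks = hdfsSink

agent1.sources.sequencer.type = netcat
agent1.sources.sequencer.bind = 0.0.0.0
agent1.sources.sequencer.port = 44444
agent1.sources.sequencer.channels = fileChannel

# The file channel uses default checkpoint/data dirs under ~/.flume
# unless overridden.
agent1.channels.fileChannel.type = file

agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/events
agent1.sinks.hdfsSink.channel = fileChannel
```

Note that in the quoted config, `agent1.sinks` is set twice; the second assignment (`fileSink` only) overrides the first, so the hdfsSink would not even be instantiated as written.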
> This is the log which I get:
> bin/flume-ng agent -n agent1 -c conf -f conf/flume_hdfs.conf --classpath
> -0.20.1-core.jar -Dflume.root.logger=DEBUG,console
> + exec /usr/java/jdk1.6.0_12/bin/java -Xmx20m
> -Dflume.root.logger=DEBUG,console -cp