Flume user mailing list: flume-ng agent startup problem


Thread:
  Jagadish Bihani 2012-08-08, 07:45
  alo alt 2012-08-08, 07:49
  Patrick Wendell 2012-08-08, 16:49
  Hari Shreedharan 2012-08-08, 16:57
  Jagadish Bihani 2012-08-10, 10:00
  Jagadish Bihani 2012-08-11, 09:09
  Patrick Wendell 2012-08-11, 20:49
Re: flume-ng agent startup problem
Hi Patrick

I verified that Flume is finding the Hadoop directory on the classpath,
and HADOOP_HOME is also set.
When I run the flume-ng command it prints:

apache-flume-1.2.0]$ bin/flume-ng agent  -n agent -c conf -f
conf/flume_socket.conf --classpath
/MachineLearning/OTFA/hadoop-0.20.1-cluster1/hadoop-0.20.1-core.jar
-Dflume.root.logger=DEBUG,console
Info: Including Hadoop libraries found via
(/MachineLearning/OTFA/hadoop-0.20.1-cluster1/bin/hadoop) for HDFS access
+ exec /usr/java/jdk1.6.0_12/bin/java -Xmx20m
-Dflume.root.logger=DEBUG,console -cp
'/home/hadoop/flume/apache-flume-1.2.0/conf:/home/hadoop/flume/apache-flume-1.2.0/lib/*:/MachineLearning/OTFA/hadoop-0.20.1-cluster1/hadoop-0.20.1-core.jar'
-Djava.library.path=:/MachineLearning/OTFA/hadoop-0.20.1-cluster1/bin/../lib/native/Linux-i386-32
org.apache.flume.node.Application -n agent -f conf/flume_socket.conf

So it finds the Hadoop libraries. I also tried adding other jars like
commons-cli and commons-codec to the classpath, but the HDFS sink is
still not working.
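
For reference (the conf/flume_socket.conf used above is not shown in this
thread), a minimal netcat-to-HDFS configuration for Flume 1.2 typically
looks something like the sketch below; all component names, the port, and
the hdfs.path are illustrative, not taken from the original setup:

# Hypothetical sketch: netcat source -> memory channel -> HDFS sink
# (agent name "agent" matches the -n agent flag above; everything else is made up)
agent.sources = netcatSrc
agent.channels = memCh
agent.sinks = hdfsSink

# Netcat source: listens on a TCP port and turns each line into an event
agent.sources.netcatSrc.type = netcat
agent.sources.netcatSrc.bind = 0.0.0.0
agent.sources.netcatSrc.port = 44444
agent.sources.netcatSrc.channels = memCh

# In-memory channel between source and sink
agent.channels.memCh.type = memory
agent.channels.memCh.capacity = 1000

# HDFS sink: the component that needs the Hadoop jars on the classpath
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memCh
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode:9000/flume/events
agent.sinks.hdfsSink.hdfs.fileType = DataStream

If the HDFS sink silently fails to start with a configuration along these
lines while a file or logger sink works, that matches the failure mode
Patrick describes below: the Hadoop classes are not reachable at runtime
rather than the configuration itself being wrong.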

Regards,
Jagadish
On 08/12/2012 02:19 AM, Patrick Wendell wrote:
> Jagadish,
>
> One possibility is that flume is not finding the Hadoop classpath
> correctly and silently failing when trying to create the HDFS sink.
> I've run into something like this before and thought we had fixed it.
>
> Do you have HADOOP_HOME set in your environment? If you run "$> hadoop
> classpath" on the command line, does it correctly print out the hadoop
> classpath? Flume uses these to find the correct hadoop
> directories to include on the classpath.
>
> Also, can you run ./flume-ng with the -d option to print out the
> classpath that is being used to launch flume? You want to verify that
> your hadoop directory is in there.
>
> - Patrick
>
> On Sat, Aug 11, 2012 at 2:09 AM, Jagadish Bihani
> <[EMAIL PROTECTED]>
> wrote:
>
>     Hi
>
>     In my case flume is not transferring data to HDFS (my hadoop version
>     is 0.20.1), and it doesn't show any error even in DEBUG log mode.
>     It works fine for other sinks.
>
>     Is there any known compatibility problem with hadoop 0.20.1? Or
>     can there be a problem due to a particular hadoop version?
>     (I know it's an old version, but it is on a production machine and
>     can't be upgraded as of now...)
>
>     Details of configuration and log records are in the following mail
>
>     Thanks,
>     Jagadish
>
>
>     On 08/10/2012 03:30 PM, Jagadish Bihani wrote:
>>     Hi
>>
>>     Thanks all for the inputs. After the initial problem I was able
>>     to start flume, except in one scenario in which I use HDFS as the sink.
>>
>>     I have a production machine with hadoop-0.20.1 installed. I have
>>     installed the latest Flume, 1.2.0.
>>     It works fine for all the configurations (at least the ones I tried)
>>     except when the HDFS sink is used.
>>
>>     Test:
>>     ---------
>>     I used a netcat listener as the source of the agent and HDFS as
>>     the sink. Then I start the agent using
>>     the command "bin/flume-ng agent -n agent1 -c conf -f
>>     conf/flume_hdfs.conf --classpath
>>     /MachineLearning/OTFA/hadoop-0.20.1-cluster1/hadoop-0.20.1-core.jar
>>     -Dflume.root.logger=DEBUG,console"
>>     with DEBUG logging enabled. I don't get any error/exception.
>>     I use the "/usr/sbin/lsof -i:<port_no>" command to check whether
>>     the source is actually bound to that port, and it doesn't return
>>     any port.
>>     But when I use a file sink instead of the HDFS sink and run lsof,
>>     it correctly shows me the port on which it is listening.
>>     Thus when the HDFS sink is used, even the source part of the agent
>>     doesn't work, and it doesn't give any exception. And nothing is
>>     written to the HDFS sink.
>>
>>     P.S. I have checked the user/permission details of HDFS. They are
>>     fine.
>>
>>     I have run Flume on my other machines with different versions of
>>     hadoop (0.23 & 1.0). The HDFS sink has run properly there.
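
For completeness, the checks Patrick suggests above can be run from the
shell roughly like this (a sketch; the agent name and conf file are the
ones from the flume-ng command earlier in the thread):

$ echo $HADOOP_HOME
$ hadoop classpath
$ bin/flume-ng agent -n agent -c conf -f conf/flume_socket.conf -d

The first two confirm that HADOOP_HOME is set and that the hadoop script
prints a usable classpath; the -d invocation should make flume-ng print
the java command and classpath it would use to launch the agent, which is
what Patrick asks to verify, without actually starting it.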
Other replies in this thread:
  ashutosh 2012-08-09, 08:48
  Hari Shreedharan 2012-08-10, 17:07