Flume, mail # user - Configure Flume HDFS Sink


Re: Configure Flume HDFS Sink
Rajesh Jain 2013-07-19, 16:14
I formatted the namenode, recreated the hadoop fs directories, and
everything seems to be working fine.
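
For anyone who hits the same thing, the recovery amounted to roughly the
following (commands are approximate, and a NameNode format wipes all HDFS
metadata, so only do this on a disposable setup):

  # on the NameNode host, with HDFS stopped
  hdfs namenode -format        # older releases: hadoop namenode -format

  # restart HDFS, then recreate the sink target directories
  # (-p needs a Hadoop 2.x shell; older FsShell creates parents by default)
  hadoop fs -mkdir -p /home/cloudera/btbridge/data/visit
  hadoop fs -mkdir -p /home/cloudera/btbridge/data/pp
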
On Fri, Jul 19, 2013 at 5:35 AM, Serega Sheypak <[EMAIL PROTECTED]> wrote:

> You should also check the bind address for your NameNode in hdfs-site.xml.
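> For example (assuming a pseudo-distributed, single-node setup), you can check
> which address the NameNode RPC port is actually bound to:
>
>   sudo netstat -tlnp | grep 8020
>
> If it is listening only on 127.0.0.1, remote clients cannot reach it; the host
> in fs.defaultFS (core-site.xml, fs.default.name on older releases) or
> dfs.namenode.rpc-address (hdfs-site.xml) determines where it listens.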
> On 19.07.2013 at 10:35, "Ripon Nandi" <[EMAIL PROTECTED]> wrote:
>
>> Rajesh,
>>
>> The issue appears to be related to HDFS rather than Flume. Check the
>> connectivity to Hadoop from your Flume box.
>>
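>> For example, from the Flume host (assuming the Hadoop client shell is
>> installed there), try listing the target path directly:
>>
>>   hadoop fs -ls hdfs://localhost:8020/home/cloudera/btbridge/data
>>
>> If that fails with the same broken-pipe / connection error, the problem is
>> between that box and HDFS rather than in the Flume sink configuration.
>>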
>> Thanks
>>
>> On Thursday, July 18, 2013 3:50:50 AM UTC+5:30, Rajesh Jain wrote:
>>
>>> I have configured Flume with an HDFS sink and I am facing an issue: no
>>> matter which port I run the NameNode on, I still get an HDFS IO error.
>>>
>>> 17 Jul 2013 22:09:12,508 INFO  [hdfs-VisitSink-call-runner-4]
>>> (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating
>>> /home/cloudera/btbridge/data/visit/export.1374113279142.txt.tmp
>>>
>>> 17 Jul 2013 22:09:12,532 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor]
>>> (org.apache.flume.sink.hdfs.HDFSEventSink.process:456)  - HDFS IO error
>>>
>>> java.io.IOException: Failed on local exception: java.io.IOException:
>>> Broken pipe; Host Details : local host is: "localhost.localdomain/127.0.0.1";
>>> destination host is: "localhost.localdomain":8020;
>>>
>>> I have tried the following in flume-conf.properties
>>>
>>>
>>> agent1.sinks.PurePathSink.hdfs.path = /home/cloudera/btbridge/data/pp
>>>
>>> or
>>>
>>> agent1.sinks.PurePathSink.hdfs.path = hdfs://localhost:8020/home/cloudera/btbridge/data/pp
>>>
>>> or
>>>
>>> agent1.sinks.PurePathSink.hdfs.path = hdfs://localhost:9000/home/cloudera/btbridge/data/pp
>>>
>>> etc
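>>>
>>> For completeness, the rest of that sink definition looks roughly like this
>>> (the channel name and roll settings below are illustrative placeholders,
>>> not my exact values):
>>>
>>> agent1.sinks.PurePathSink.type = hdfs
>>> agent1.sinks.PurePathSink.channel = memoryChannel1
>>> agent1.sinks.PurePathSink.hdfs.filePrefix = export
>>> agent1.sinks.PurePathSink.hdfs.fileType = DataStream
>>> agent1.sinks.PurePathSink.hdfs.rollInterval = 300
>>> agent1.sinks.PurePathSink.hdfs.rollCount = 0
>>> agent1.sinks.PurePathSink.hdfs.rollSize = 0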
>>>
>>> Any pointers on what is missing? Is this a Flume issue or a Hadoop issue?
>>>
>>>
>>> Thanks,
>>>
>>> Rajesh
>>>
>>