Re: Unable to setup HDFS sink
Hi,

Check your HDFS cluster; it's not responding on localhost/127.0.0.1:50030.

- Alex
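
A likely cause: in a stock Hadoop 1.x setup, 50030 is the JobTracker's web UI port, not the NameNode RPC port, so a Hadoop RPC client that connects there gets a non-RPC reply back, which is consistent with the java.io.EOFException quoted below. The sink path should point at the NameNode RPC address configured as fs.default.name in core-site.xml; assuming the common default of port 8020, the corrected line would be:

# port 8020 is an assumption; use the value of fs.default.name from core-site.xml
agent1.sinks.hdfssink.hdfs.path = hdfs://localhost:8020/flume/events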

On Jan 14, 2013, at 7:43 AM, Vikram Kulkarni <[EMAIL PROTECTED]> wrote:

> I am trying to set up an HDFS sink for an HTTPSource, but I get the following exception when I try to send a simple JSON event. I am also using a logger sink and can clearly see the event output to the console window, but it fails to write to HDFS. I have also successfully written to an HDFS sink in a separate conf file.
>
> Thanks,
> Vikram
>
> Exception:
> [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:456)] HDFS IO error
> java.io.IOException: Call to localhost/127.0.0.1:50030 failed on local exception: java.io.EOFException
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1144)
>
> My conf file is as follows:
> # flume-httphdfs.conf: a single-node Flume agent with an HTTP source and an HDFS sink
>
> # Name the components on this agent
> agent1.sources = r1
> agent1.channels = c1
>
> # Describe/configure the source
> agent1.sources.r1.type = org.apache.flume.source.http.HTTPSource
> agent1.sources.r1.port = 5140
> agent1.sources.r1.handler = org.apache.flume.source.http.JSONHandler
> agent1.sources.r1.handler.nickname = random props
>
> # Describe the sink
> agent1.sinks = logsink hdfssink
> agent1.sinks.logsink.type = logger
>
> agent1.sinks.hdfssink.type = hdfs
> agent1.sinks.hdfssink.hdfs.path = hdfs://localhost:50030/flume/events
> agent1.sinks.hdfssink.hdfs.file.Type = DataStream
>
> # Use a channel which buffers events in memory
> agent1.channels.c1.type = memory
> agent1.channels.c1.capacity = 1000
> agent1.channels.c1.transactionCapacity = 100
>
> # Bind the source and sink to the channel
> agent1.sources.r1.channels = c1
> agent1.sinks.logsink.channel = c1
> agent1.sinks.hdfssink.channel = c1
>
>

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF
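
For reference, a minimal corrected sink section, assuming the NameNode listens on the Hadoop 1.x default RPC port 8020 (check fs.default.name in core-site.xml for the actual address). Note also that the HDFS sink property is hdfs.fileType, not hdfs.file.Type; with the misspelled key the sink would fall back to its default SequenceFile format:

agent1.sinks = logsink hdfssink
agent1.sinks.logsink.type = logger

agent1.sinks.hdfssink.type = hdfs
# NameNode RPC port 8020 is an assumption; match fs.default.name in core-site.xml
agent1.sinks.hdfssink.hdfs.path = hdfs://localhost:8020/flume/events
# correct key is hdfs.fileType (values: SequenceFile, DataStream, CompressedStream)
agent1.sinks.hdfssink.hdfs.fileType = DataStream

Once the agent is running, a test event can be posted to the HTTPSource on port 5140; the JSONHandler expects a JSON array of events, each with a headers map and a body string:

curl -X POST -d '[{"headers": {"host": "test"}, "body": "hello flume"}]' http://localhost:5140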