Flume >> mail # user >> Flume - Avro source to avro sink


JR 2013-03-24, 12:41
Alexander Alten-Lorenz 2013-03-25, 07:28
Re: Flume - Avro source to avro sink
Hi Alex,

   Thanks for your reply. I changed the hostname to the actual IP of the
machine, but I still don't know how to resolve this problem.

Thanks much!!
sincerely,
Jayashree
-------------------------------------
# first sink - avro
 agent1.sinks.avroSink.type = avro
 agent1.sinks.avroSink.hostname = x.yy.194.180
 agent1.sinks.avroSink.port = 41415
 agent1.sinks.avroSink.channel = ch1

# second source - avro
 agent2.sources.avroSource2.type = avro
 agent2.sources.avroSource2.bind = x.yy.194.180
 # (not sure if I am allowed to give our IP addresses... so writing x.yy.)
 agent2.sources.avroSource2.port = 41415
 agent2.sources.avroSource2.channels = ch2
And I get this error:
64/bin/../lib/hadoop-1.1.1/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
13/03/29 07:56:23 ERROR avro.AvroCLIClient: Unable to open connection to
Flume. Exception follows.
org.apache.flume.FlumeException: NettyAvroRpcClient { host: node1, port:
41415 }: RPC connection error
        at
org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:117)
        at
org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:93)
        at
org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:507)
        at
org.apache.flume.api.RpcClientFactory.getDefaultInstance(RpcClientFactory.java:169)
        at
org.apache.flume.client.avro.AvroCLIClient.run(AvroCLIClient.java:180)
        at
org.apache.flume.client.avro.AvroCLIClient.main(AvroCLIClient.java:71)
Caused by: java.io.IOException: Error connecting to node1/x.yy.194.180:41415
        at
org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
        at
org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
        at
org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
        at
org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:106)
        ... 5 more
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
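For what it's worth, `Connection refused` at this point usually means nothing is listening on the target host/port yet (e.g. the receiving agent's avro source is not up, or a firewall is in the way). A quick socket probe can confirm that before digging further into the Flume config. A minimal sketch; the host and port in the comment are placeholders, not real endpoints:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. check that the downstream avro source is listening before
# starting the sending agent:
#     port_open("node2", 41415)
```

If this returns False from the sending machine, the problem is reachability (agent not started, wrong port, or firewall), not the Flume configuration itself.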
On Mon, Mar 25, 2013 at 3:28 AM, Alexander Alten-Lorenz <[EMAIL PROTECTED]
> wrote:

> agent1.sinks.avroSink.hostname = 0.0.0.0
>
> => Avro sink avroSink: Building RpcClient with hostname: 0.0.0.0, port:
> 41415
>
> Hostname is the FQHN of the system, not a wildcard ip.
>
> - Alex
>
>
> On Mar 24, 2013, at 1:41 PM, JR <[EMAIL PROTECTED]> wrote:
>
> > Hello,
> >
> >    I am trying to configure a two node work flow.
> >
> > Avro source ---> mem Channel ----> Avro sink --> (next node) avro source
> --> mem channel ---> hdfs sink
> >
> > #agent1 on  node1
> >  agent1.sources = avroSource
> >  agent1.channels = ch1
> >  agent1.sinks = avroSink
> >
> > #agent2 on node2
> >  agent2.sources = avroSource2
> >  agent2.channels = ch2
> >  agent2.sinks = hdfsSink
> >
> > # first source - avro
> >  agent1.sources.avroSource.type = avro
> >  agent1.sources.avroSource.bind = 0.0.0.0
> >  agent1.sources.avroSource.port = 41414
> >  agent1.sources.avroSource.channels = ch1
> >
> > # first sink - avro
> >  agent1.sinks.avroSink.type = avro
> >  agent1.sinks.avroSink.hostname = 0.0.0.0
> >  agent1.sinks.avroSink.port = 41415
> >  agent1.sinks.avroSink.channel = ch1
> >
> > # second source - avro
> >  agent2.sources.avroSource2.type = avro
> >  agent2.sources.avroSource2.bind = node2 ip
> >  agent2.sources.avroSource2.port = 41415
> >  agent2.sources.avroSource2.channels = ch2
> >
> > # second sink - hdfs
> >  agent2.sinks.hdfsSink.type = hdfs
> >  agent2.sinks.hdfsSink.channel = ch2
> >  agent2.sinks.hdfsSink.hdfs.writeFormat = Text
> >  agent2.sinks.hdfsSink.hdfs.filePrefix = testing
> >  agent2.sinks.hdfsSink.hdfs.path = hdfs://node2:9000/flume/
> >
> > # channels
> >  agent1.channels.ch1.type = memory
> >  agent1.channels.ch1.capacity = 1000
> >  agent2.channels.ch2.type = memory
> >  agent2.channels.ch2.capacity = 1000
> >
> >
> > I am getting errors with the ports. Could someone please check if I have
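For comparison, a minimal shape for the avro hop described in this thread might look like the sketch below. It is only a sketch (node1/node2 stand in for real fully-qualified hostnames), but two details matter: Flume sources take a plural `channels` property while sinks take the singular `channel`, and the avro sink's `hostname` must point at the remote host where the next avro source is listening, never a wildcard address. Flume configs are Java properties files, so comments must sit on their own lines, not after a value.

```
# agent1 on node1: avro source -> memory channel -> avro sink to node2.
# A source may bind the wildcard address; sources take "channels" (plural).
agent1.sources.avroSource.type = avro
agent1.sources.avroSource.bind = 0.0.0.0
agent1.sources.avroSource.port = 41414
agent1.sources.avroSource.channels = ch1

# The avro sink's hostname is the remote host where the next avro source
# listens (FQHN or routable IP, not 0.0.0.0); sinks take "channel" (singular).
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.hostname = node2
agent1.sinks.avroSink.port = 41415
agent1.sinks.avroSink.channel = ch1

# agent2 on node2: avro source listening on the port agent1's sink targets.
agent2.sources.avroSource2.type = avro
agent2.sources.avroSource2.bind = 0.0.0.0
agent2.sources.avroSource2.port = 41415
agent2.sources.avroSource2.channels = ch2
```

Start order also matters: agent2 has to be running and listening on 41415 before agent1's avro sink can connect, otherwise the sink sees exactly the connection-refused error shown earlier in the thread.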