Exception while syncing from multiple sources
Hi Team,

  I'm getting the exception below when I try to send multiple logs to a
single port. Does anyone know whether this is a problem with my
configuration, or should I raise it as a bug? I tried configuring
multiple ports as well and still got the same exception.

[WARN - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.exceptionCaught(NettyServer.java:201)] Unexpected exception from downstream.
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
    at sun.nio.ch.IOUtil.read(IOUtil.java:193)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
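
(As a quick isolation test, a stripped-down sender with a single exec source and a
single Avro sink can be pointed at the same collector port, to see whether the
warning only shows up once two sinks share the port. This is just a sketch: the
agent name, memory channel, and settings below are placeholders, not my actual setup.)

# Hypothetical single-sink sender used only for isolating the problem
test-agent.sources = tail-test
test-agent.channels = memChannel
test-agent.sinks = avro-test-sink

test-agent.sources.tail-test.type = exec
test-agent.sources.tail-test.command = tail -f /server/default/log/server.log
test-agent.sources.tail-test.channels = memChannel

test-agent.sinks.avro-test-sink.type = avro
test-agent.sinks.avro-test-sink.hostname = <<Host IP>>
test-agent.sinks.avro-test-sink.port = 41414
test-agent.sinks.avro-test-sink.channel = memChannel

# memory channel is only for the test; the real agents use a file channel
test-agent.channels.memChannel.type = memory
test-agent.channels.memChannel.capacity = 10000
test-agent.channels.memChannel.transactionCapacity = 1000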

---------------------------------------
Collector Configuration
---------------------------------------

hdfs-agent.sources = avro-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = fileChannel
hdfs-agent.sources.avro-collect.type = avro
hdfs-agent.sources.avro-collect.bind = <<System IP>>
hdfs-agent.sources.avro-collect.port = 41414
hdfs-agent.sources.avro-collect.channels = fileChannel

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://hadoop:54310/flume/%{host}/%Y%m%d/%{logFileType}
hdfs-agent.sinks.hdfs-write.hdfs.rollSize = 209715200
hdfs-agent.sinks.hdfs-write.hdfs.rollCount = 6000
hdfs-agent.sinks.hdfs-write.hdfs.fileType = DataStream
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat = Text
hdfs-agent.sinks.hdfs-write.hdfs.filePrefix = %{host}
hdfs-agent.sinks.hdfs-write.hdfs.maxOpenFiles = 100000
hdfs-agent.sinks.hdfs-write.hdfs.batchSize = 5000
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval = 75
hdfs-agent.sinks.hdfs-write.hdfs.callTimeout = 5000000
hdfs-agent.sinks.hdfs-write.channel = fileChannel
hdfs-agent.channels.fileChannel.type=file
hdfs-agent.channels.fileChannel.dataDirs=/u01/Collector/flume_channel/dataDir13
hdfs-agent.channels.fileChannel.checkpointDir=/u01/Collector/flume_channel/checkpointDir13
hdfs-agent.channels.fileChannel.transactionCapacity = 50000
hdfs-agent.channels.fileChannel.capacity = 9000000
hdfs-agent.channels.fileChannel.write-timeout = 250000

-------------------------------
Sender Configuration
-------------------------------

app-agent.sources = tail tailapache
app-agent.channels = fileChannel
app-agent.sinks = avro-forward-sink avro-forward-sink-apache

app-agent.sources.tail.type = exec
app-agent.sources.tail.command = tail -f /server/default/log/server.log
app-agent.sources.tail.channels = fileChannel

app-agent.sources.tailapache.type = exec
app-agent.sources.tailapache.command = tail -f /logs/access_log
app-agent.sources.tailapache.channels = fileChannel

app-agent.sources.tail.interceptors = ts st stt
app-agent.sources.tail.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tail.interceptors.st.type = static
app-agent.sources.tail.interceptors.st.key = logFileType
app-agent.sources.tail.interceptors.st.value = jboss
app-agent.sources.tail.interceptors.stt.type = static
app-agent.sources.tail.interceptors.stt.key = host
app-agent.sources.tail.interceptors.stt.value = Mart

app-agent.sources.tailapache.interceptors = ts1 i1 st1
app-agent.sources.tailapache.interceptors.ts1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tailapache.interceptors.i1.type = static
app-agent.sources.tailapache.interceptors.i1.key = logFileType
app-agent.sources.tailapache.interceptors.i1.value = apache
app-agent.sources.tailapache.interceptors.st1.type = static
app-agent.sources.tailapache.interceptors.st1.key = host
app-agent.sources.tailapache.interceptors.st1.value = Mart

app-agent.sinks.avro-forward-sink.type = avro
app-agent.sinks.avro-forward-sink.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink.port = 41414
app-agent.sinks.avro-forward-sink.channel = fileChannel

app-agent.sinks.avro-forward-sink-apache.type = avro
app-agent.sinks.avro-forward-sink-apache.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink-apache.port = 41414
app-agent.sinks.avro-forward-sink-apache.channel = fileChannel

app-agent.channels.fileChannel.type=file
app-agent.channels.fileChannel.dataDirs=/usr/local/lib/flume-ng/flume_channel/dataDir13
app-agent.channels.fileChannel.checkpointDir=/usr/local/lib/flume-ng/flume_channel/checkpointDir13
app-agent.channels.fileChannel.transactionCapacity = 50000
app-agent.channels.fileChannel.capacity = 9000000
app-agent.channels.fileChannel.write-timeout = 250000
app-agent.channels.fileChannel.keep-alive=600
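
(In case it helps to show what I mean by the multiple-port attempt: a sketch of
grouping the two Avro sinks into a load-balancing sink group, with the collector
listening on a second port. The second port 41415 and the group name are
hypothetical, not something I have actually run.)

app-agent.sinkgroups = forward-group
app-agent.sinkgroups.forward-group.sinks = avro-forward-sink avro-forward-sink-apache
app-agent.sinkgroups.forward-group.processor.type = load_balance
app-agent.sinkgroups.forward-group.processor.selector = round_robin
# hypothetical: assumes a second Avro source on the collector bound to 41415
app-agent.sinks.avro-forward-sink-apache.port = 41415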

Thanks in Advance,
-Divya