Re: Uncaught Exception When Using Spooling Directory Source
Attached is the log file.

The content of the conf file:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /disk2/mahy/FLUME_TEST/source
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /disk2/mahy/FLUME_TEST/sink
a1.sinks.k1.sink.rollInterval = 0

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 99999
#a1.channels.c1.checkpointDir = /disk2/mahy/FLUME_TEST/check
#a1.channels.c1.dataDirs = /disk2/mahy/FLUME_TEST/channel-data

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
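
For reference, the debug logging Brock asks for below can be enabled by starting the agent with the root logger set to DEBUG. A minimal sketch, assuming a standard Flume NG 1.3.1 tarball layout and that the config above is saved as a1.conf (that file name is hypothetical):

  $ bin/flume-ng agent --conf conf --conf-file a1.conf --name a1 \
      -Dflume.root.logger=DEBUG,console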
On Fri, Jan 18, 2013 at 12:39 PM, Brock Noland <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Would you mind turning logging to debug and then posting your full
> log/config?
>
> Brock
>
> On Thu, Jan 17, 2013 at 8:24 PM, Henry Ma <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > When using Spooling Directory Source in Flume NG 1.3.1, this exception
> > happens:
> >
> > 13/01/18 11:37:09 ERROR source.SpoolDirectorySource: Uncaught exception in Runnable
> > java.io.IOException: Stream closed
> >     at java.io.BufferedReader.ensureOpen(BufferedReader.java:97)
> >     at java.io.BufferedReader.readLine(BufferedReader.java:292)
> >     at java.io.BufferedReader.readLine(BufferedReader.java:362)
> >     at org.apache.flume.client.avro.SpoolingFileLineReader.readLines(SpoolingFileLineReader.java:180)
> >     at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:135)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >     at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> >     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >     at java.lang.Thread.run(Thread.java:662)
> >
> > It usually happens when dropping some new files into the spooling dir,
> > after which Flume stops collecting files. Does anyone know the reason
> > and how to avoid it?
> >
> > Thanks very much!
> > --
> > Best Regards,
> > Henry Ma
>
>
>
> --
> Apache MRUnit - Unit testing MapReduce -
> http://incubator.apache.org/mrunit/
>

--
Best Regards,
Henry Ma
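
A likely culprit, per the Flume 1.3.1 user guide: files placed into the spooling directory must be immutable and uniquely named, and if a file is written to after being dropped in, Flume prints an error and stops processing, which matches the behavior described above. A common pattern is to stage the file elsewhere on the same filesystem and then rename it into the spool directory. A minimal sketch (the staging directory and file names are hypothetical):

  # stage outside the spool dir, then rename into it atomically
  $ cp /var/log/app.log /disk2/mahy/FLUME_TEST/staging/app-20130118.log
  $ mv /disk2/mahy/FLUME_TEST/staging/app-20130118.log /disk2/mahy/FLUME_TEST/source/

Because both paths are on the same filesystem, the mv is an atomic rename, so the source never observes a partially written file.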