Flume >> mail # dev >> Backoff returned from HDFSEventSink after IOException


Re: Backoff returned from HDFSEventSink after IOException
It is easy to reproduce: for example, an IOException is thrown from
BucketWriter after all DataNodes in the cluster become unavailable. I believe
that in that case the BucketWriter won't return to an operational state even
after the cluster is back and ready to use. It is probably a good idea to
discard BucketWriters after an IOException. Do you agree?
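The discard-on-failure idea can be sketched as follows. This is a toy model, not Flume's actual HDFSEventSink code: the class and method names below (`WriterCacheSketch`, `tryAppend`, `failWriter`) are illustrative stand-ins, and only `sfWriters` echoes the real field name from the thread. The point is that evicting a broken writer from the cache lets the next event for that bucket open a fresh writer once HDFS recovers.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the sfWriters cache discussed in this thread.
public class WriterCacheSketch {
    static class BucketWriter {
        private final String path;
        private boolean healthy = true;
        BucketWriter(String path) { this.path = path; }
        void append(String event) throws IOException {
            if (!healthy) throw new IOException("DataNodes unavailable for " + path);
        }
        void markFailed() { healthy = false; }
        void close() { /* release HDFS handles */ }
    }

    private final Map<String, BucketWriter> sfWriters = new HashMap<>();

    // On IOException, close and evict the writer so a later event for the
    // same bucket creates a fresh one instead of reusing a broken handle.
    boolean tryAppend(String path, String event) {
        BucketWriter w = sfWriters.computeIfAbsent(path, BucketWriter::new);
        try {
            w.append(event);
            return true;
        } catch (IOException e) {
            w.close();
            sfWriters.remove(path);  // discard the broken writer
            return false;
        }
    }

    // Test hook: simulate the DataNodes going away for one bucket.
    void failWriter(String path) {
        BucketWriter w = sfWriters.get(path);
        if (w != null) w.markFailed();
    }

    int cachedWriters() { return sfWriters.size(); }
}
```

Without the `sfWriters.remove(path)` line, the unhealthy writer would stay cached and every later append to that bucket would keep failing, which matches the non-recovering behavior described above.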

I've filed a JIRA issue with the stack trace attached:
https://issues.apache.org/jira/browse/FLUME-1779

Regards,
J. Grabowski
2012/12/12 Mike Percy <[EMAIL PROTECTED]>

> Under what circumstances are you seeing an IOException? Can you post the
> full stack trace?
>
> Best,
> Mike
>
>
> On Tue, Dec 11, 2012 at 5:22 AM, Jaroslaw Grabowski
> <[EMAIL PROTECTED]>wrote:
>
> > Hello,
> >
> > could someone please tell me why HDFSEventSink returns Status.BACKOFF in
> > case of IOException? Because of this implementation,
> > FailoverSinkProcessor will never push events to the next sink in case of
> > any HDFS failure.
> >
> > Another thing is that BucketWriters should be removed from the sfWriters
> > map after an IOException; can anyone please confirm that?
> >
> > --
> > Regards,
> > J. Grabowski
> >
>
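The failover concern in the quoted question can be illustrated with a toy model. This is not Flume's actual FailoverSinkProcessor implementation; the `deliver` loop and sink list below are simplified assumptions. It shows why a sink that swallows an IOException and returns BACKOFF never gives a failover processor the chance to try the next sink, whereas a thrown exception does.

```java
import java.util.List;

public class FailoverSketch {
    enum Status { READY, BACKOFF }

    interface Sink {
        Status process() throws Exception;
    }

    // Simplified failover loop: only a thrown exception moves delivery on
    // to the next sink; a BACKOFF return means "retry this sink later",
    // so no failover happens.
    static String deliver(List<Sink> sinks) {
        for (int i = 0; i < sinks.size(); i++) {
            try {
                Status s = sinks.get(i).process();
                if (s == Status.READY) return "delivered by sink " + i;
                return "backoff on sink " + i;  // failover never triggers
            } catch (Exception e) {
                // hard failure: fall through and try the next sink
            }
        }
        return "all sinks failed";
    }
}
```

In this model, an HDFS sink that catches the IOException internally and returns BACKOFF pins delivery to itself, while rethrowing (e.g. as EventDeliveryException in Flume terms) would let the processor demote it and try a backup sink.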

--
Regards,
Jarosław Grabowski