Flume >> mail # dev >> Backoff returned from HDFSEventSink after IOException


Jaroslaw Grabowski 2012-12-11, 13:22
Brock Noland 2012-12-12, 15:38
Mike Percy 2012-12-12, 18:22
Re: Backoff returned from HDFSEventSink after IOException
It is easy to reproduce. For example, an IOException will be thrown from
BucketWriter after all DataNodes in the cluster become unavailable. I believe
that in that case the BucketWriter won't return to an operational state even
after the cluster is back and ready to use. It is probably a good idea to
discard BucketWriters after an IOException. Do you agree?

I've filed a JIRA with the stack trace in an attached file:
https://issues.apache.org/jira/browse/FLUME-1779
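The discard-after-IOException idea can be sketched roughly as below. This is a simplified stand-in, not Flume's actual code: the `WriterCache` class, its `Writer` interface, and the method names are hypothetical, modeled loosely on the role the sfWriters map plays for BucketWriters.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a per-path writer cache like the sfWriters map.
class WriterCache {

    // Stand-in writer type; Flume's real BucketWriter is far more involved.
    interface Writer {
        void append(String event) throws IOException;
        void close() throws IOException;
    }

    private final Map<String, Writer> writers = new HashMap<>();

    void put(String path, Writer w) {
        writers.put(path, w);
    }

    boolean contains(String path) {
        return writers.containsKey(path);
    }

    // Append via the cached writer; on IOException, discard the writer so a
    // fresh one is created on the next attempt instead of reusing a broken
    // one that may never recover even after the cluster is healthy again.
    boolean append(String path, String event) {
        Writer w = writers.get(path);
        if (w == null) {
            return false; // caller would create and cache a new writer here
        }
        try {
            w.append(event);
            return true;
        } catch (IOException e) {
            writers.remove(path); // discard the broken writer
            try {
                w.close();
            } catch (IOException ignored) {
                // best-effort close; the writer is already considered dead
            }
            return false;
        }
    }
}
```

The key point is only the `writers.remove(path)` in the catch block: without it, subsequent appends keep going through the same failed writer.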

Regards,
J. Grabowski
2012/12/12 Mike Percy <[EMAIL PROTECTED]>

> Under what circumstances are you seeing an IOException? Can you post the
> full stack trace?
>
> Best,
> Mike
>
>
> On Tue, Dec 11, 2012 at 5:22 AM, Jaroslaw Grabowski
> <[EMAIL PROTECTED]>wrote:
>
> > Hello,
> >
> > could sameone please tell me why HDFSEventSink returns Status.BACKOFF in
> > case of IOException? Because of this implementation,
> FailoverSinkProcessor
> > will never push event to next sink in case of any hdfs failure.
> >
> > Another thing is that BucketWriters should be removed from the sfWriters
> > map after an IOException; can anyone please confirm that?
> >
> > --
> > Regards,
> > J. Grabowski
> >
>

--
Regards,
Jarosław Grabowski
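The failover concern raised above can be illustrated with a stripped-down model. This is a hypothetical sketch, not Flume's FailoverSinkProcessor: the `Sink` interface, `Status` enum, and `DeliveryException` (unchecked here for brevity, unlike Flume's checked EventDeliveryException) are stand-ins. It shows why a sink that swallows an IOException and returns BACKOFF never gives the lower-priority sink a chance, while a thrown exception does.

```java
import java.util.List;

// Hypothetical, stripped-down model of failover sink selection: only a
// thrown exception moves processing on to the next sink, while a BACKOFF
// result is propagated as-is without failing over.
class FailoverSketch {

    enum Status { READY, BACKOFF }

    static class DeliveryException extends RuntimeException {}

    interface Sink {
        Status process();
    }

    // Try sinks in priority order. A returned status (READY or BACKOFF)
    // ends the attempt immediately; only an exception triggers failover.
    static Status process(List<Sink> sinksInPriorityOrder) {
        DeliveryException last = null;
        for (Sink s : sinksInPriorityOrder) {
            try {
                return s.process(); // BACKOFF returns here: no failover
            } catch (DeliveryException e) {
                last = e; // failover: try the next sink
            }
        }
        throw last != null ? last : new DeliveryException();
    }
}
```

Under this model, an HDFS sink that caught the IOException internally and returned BACKOFF would short-circuit the loop at the first sink, which matches the behavior questioned in the thread.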