Flume >> mail # user >> Architecting Flume for failover


Re: Architecting Flume for failover
No, it does not mean that. To write to different HDFS clusters, you specify each sink's hdfs.path as hdfs://namenode:port/<path>. You don't need to specify a bind parameter or anything similar.
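As a rough sketch of what this looks like (the agent, channel, and host names below are hypothetical, not from the thread), two HDFS sinks on one agent can each target a different cluster purely through hdfs.path:

```properties
# Hypothetical agent "a1" with two HDFS sinks writing to different clusters.
# The target cluster is selected entirely by hdfs.path; no bind parameter exists.
a1.sinks = k1 k2
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode-a:8020/flume/events
a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c2
a1.sinks.k2.hdfs.path = hdfs://namenode-b:8020/flume/events
```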

Hope this helps.

Hari

--
Hari Shreedharan
On Tuesday, February 19, 2013 at 8:18 PM, Noel Duffy wrote:

> Hari Shreedharan [mailto:[EMAIL PROTECTED]] wrote:
>
> > The "bind" configuration param does not really exist for HDFS Sink (it is only for the IPC sources).
>
> Does this mean that failover for sinks on different hosts cannot work for HDFS sinks at all? Does it require Avro sinks, which seem to have a hostname parameter?
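For context on the failover question above: Flume's failover sink processor groups sinks (HDFS sinks included) and fails over by priority, so failover does not depend on a bind or hostname parameter on the sink itself. A minimal sketch, reusing the hypothetical sink names k1 and k2:

```properties
# Hypothetical failover group: k1 is preferred; on failure, events go to k2.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
```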
