

Re: a question about controlling roles (read, write) of data node
Kyungyong Lee,

One way: this may be possible if you inflate the
"dfs.datanode.du.reserved" property on that specific DataNode to a
very large byte value (greater than its maximum volume size). This
way your NN will still consider the DN a valid one that carries
readable blocks, but when writing files this DN will never be
selected, thanks to its false lack-of-space report.
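
For illustration, here is a rough sketch of what that could look like
in the hdfs-site.xml on dn1 alone (the byte value below is an arbitrary
~20 TB figure chosen as an assumption; use anything larger than the
biggest volume on that node):

  <!-- hdfs-site.xml on dn1 only: reserve more bytes per volume than
       the volume can actually hold, so dn1 always reports zero
       remaining space and is never chosen as a target for new blocks. -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>21990232555520</value>
  </property>

The DN would need a restart to pick up the change, and you can confirm
the effect with "hadoop dfsadmin -report", which should then show dn1's
DFS Remaining at (or near) zero. Note this also prevents replicas from
being re-placed onto dn1, not just new file writes.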

On Mon, May 21, 2012 at 12:37 AM, Kyungyong Lee <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I would like to ask if I can do the following. Assume that I have a
> datanode, say dn1, which already contains some useful blocks. I do
> not want to save new data blocks to this datanode, but I still want
> to use the blocks that already exist on it.
> I considered using the exclude file (dfs.hosts.exclude). However, if
> I add "dn1" to the exclude file list, I cannot use the blocks that
> are already stored on dn1. If that is right, can you please give me
> some guidance on how to achieve this with HDFS?
>
> Thanks,

--
Harsh J