Re: How can I add a new hard disk in an existing HDFS cluster?
You can add the new disk to dfs.data.dir (note: dfs.data.dir, not data.dfs.dir) in hdfs-site.xml if your version is 1.x. The value is a comma-separated list of directories on the local filesystem, so mount /dev/vdb first and list its mount point rather than the raw device, for example /mnt/vdb/dfs/data if you mount the disk at /mnt/vdb:

<property>
    <name>dfs.data.dir</name>
    <value>/usr/hadoop/tmp/dfs/data,/mnt/vdb/dfs/data</value>
</property>
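
A minimal sketch of preparing the disk before editing the config, assuming the device is /dev/vdb, an ext4 filesystem (XFS works as well), a mount point of /mnt/vdb, and that the DataNode runs as the hadoop user; adjust these to your own layout:

# create a filesystem on the new disk (ext4 here; XFS is fine too)
mkfs.ext4 /dev/vdb

# mount it and make the mount persistent across reboots
mkdir -p /mnt/vdb
mount /dev/vdb /mnt/vdb
echo '/dev/vdb /mnt/vdb ext4 defaults 0 0' >> /etc/fstab

# create the data directory and hand it to the user running the DataNode
# (hadoop:hadoop is an assumption; use whatever user owns your cluster)
mkdir -p /mnt/vdb/dfs/data
chown -R hadoop:hadoop /mnt/vdb/dfs/data

# after updating dfs.data.dir in hdfs-site.xml, restart the DataNode
# (path assumed from the install location mentioned below)
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh stop datanode
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh start datanode

Note that the DataNode will only place new blocks on the added directory; existing blocks are not redistributed across the disks automatically.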
2013/5/3 Joarder KAMAL <[EMAIL PROTECTED]>

> Hi,
>
>  I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and
> the initial hard disk (/dev/vda1) is only 10GB. Now I have a second hard
> drive, /dev/vdb, of 60GB and want to add it to my existing HDFS cluster.
> How can I format the new hard disk (and in which format? XFS?) and mount
> it to work with HDFS?
>
> The default HDFS directory is located at
> /usr/local/hadoop-1.0.4/hadoop-datastore,
> and I followed this link for installation:
>
> http://ankitasblogger.blogspot.com.au/2011/01/hadoop-cluster-setup.html
>
> Many thanks in advance :)
>
>
> Regards,
> Joarder Kamal
>

--
From Good To Great