How can I add a new hard disk in an existing HDFS cluster?
Hi,

I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and the
initial hard disk (/dev/vda1) is only 10 GB. Now I have a second hard
drive, /dev/vdb, of 60 GB that I want to add to my existing HDFS
cluster. How can I format the new hard disk (and with which filesystem? XFS?) and
mount it so it works with HDFS?

The default HDFS directory is located in
/usr/local/hadoop-1.0.4/hadoop-datastore,
and I followed this link for the installation:

http://ankitasblogger.blogspot.com.au/2011/01/hadoop-cluster-setup.html
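
To make it concrete, is something along these lines the right approach? This is just a sketch of what I have in mind: ext4, the /data/dfs mount point, and the hduser:hadoop owner are only example values on my side, and I'm assuming the new directory simply gets appended to dfs.data.dir in conf/hdfs-site.xml.

    # Format the new disk (ext4 here; presumably mkfs.xfs would do the same job for XFS)
    sudo mkfs.ext4 /dev/vdb

    # Create a mount point, mount the disk, and make the mount permanent
    sudo mkdir -p /data/dfs
    sudo mount /dev/vdb /data/dfs
    echo '/dev/vdb  /data/dfs  ext4  defaults  0 0' | sudo tee -a /etc/fstab

    # Give the Hadoop user ownership of the new directory
    sudo chown -R hduser:hadoop /data/dfs

    # Then append the new directory to dfs.data.dir in conf/hdfs-site.xml
    # (comma-separated, keeping the existing value), e.g.:
    #   <property>
    #     <name>dfs.data.dir</name>
    #     <value>EXISTING_DATA_DIR,/data/dfs</value>
    #   </property>
    # ...and restart the DataNode on that node.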

Many thanks in advance :)
Regards,
Joarder Kamal