Re: Uploading file to HDFS
Can you not simply do a fs -put from the location where the 2 TB file
currently resides? HDFS should be able to consume it just fine, as the
client chunks it into fixed-size blocks.
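
For example, a minimal sketch, assuming the file sits at /data/bigfile.dat
on one of the cluster machines and should end up under /user/hadoop/ (both
paths are just placeholders):

  # run on the machine that holds the 2 TB file; the HDFS client splits it
  # into blocks and distributes them across the DataNodes as it uploads
  hadoop fs -put /data/bigfile.dat /user/hadoop/bigfile.dat

No single 1 TB disk has to hold the whole file, since the blocks are spread
across the DataNodes in the cluster.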

On Fri, Apr 19, 2013 at 10:05 AM, 超级塞亚人 <[EMAIL PROTECTED]> wrote:
> I have a problem. Our cluster has 32 nodes, each with a 1 TB disk. I want to
> upload a 2 TB file to HDFS. How can I put the file on the namenode and upload it to HDFS?

--
Harsh J