MapReduce >> mail # user >> Re: HDFS Block location verification


Re: HDFS Block location verification
Hi Samir,

Thanks so much. This is exactly what I needed.

tech@dvcliftonhera150:~$ hadoop fsck -locations -blocks -files /user/tech/pkg.tar.gz
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Connecting to namenode via http://dvcliftonhera122:50070
FSCK started by tech (auth:SIMPLE) from /172.16.30.150 for path
/user/tech/pkg.tar.gz at Tue Feb 05 10:33:23 EST 2013
/user/tech/pkg.tar.gz 165 bytes, 1 block(s):  OK
0. BP-1936777173-172.16.30.122-1343141974879:blk_8828079455224016541_10294868 len=165 repl=3 [172.16.30.144:50010, 172.16.30.135:50010, 172.16.30.134:50010]

Status: HEALTHY
 Total size:    165 B
 Total dirs:    0
 Total files:    1
 Total blocks (validated):    1 (avg. block size 165 B)
 Minimally replicated blocks:    1 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:    0 (0.0 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:    3
 Average block replication:    3.0
 Corrupt blocks:        0
 Missing replicas:        0 (0.0 %)
 Number of data-nodes:        47
 Number of racks:        1
FSCK ended at Tue Feb 05 10:33:23 EST 2013 in 3 milliseconds
The filesystem under path '/user/tech/pkg.tar.gz' is HEALTHY
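The bracketed list in the block line above is the set of datanodes holding the replicas. As a quick sanity check that the replicas really sit on distinct machines, the host:port entries can be counted. A minimal sketch, assuming the fsck output format shown above (the sample line is copied from this output; in practice you would pipe live `hdfs fsck <path> -files -blocks -locations` output instead):

```shell
# Count distinct datanodes holding a block, from one fsck -locations line.
# The sample line below stands in for live fsck output.
line='0. BP-1936777173-172.16.30.122-1343141974879:blk_8828079455224016541_10294868 len=165 repl=3 [172.16.30.144:50010, 172.16.30.135:50010, 172.16.30.134:50010]'

# Extract ip:port replica locations and count the unique datanodes.
nodes=$(printf '%s\n' "$line" | grep -oE '([0-9]+\.){3}[0-9]+:50010' | sort -u | wc -l)
echo "distinct datanodes: $nodes"
```

If the count matches the `repl=` value, every replica is on a different node.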

Did I learn something today? If not, I wasted it.
On Tue, Feb 5, 2013 at 10:18 AM, Samir Ahmic <[EMAIL PROTECTED]> wrote:

> Hi,
> You may try with:
> hadoop fsck -locations -blocks -files <hdfs_path>. It will print
> detailed info about the blocks and their locations.
>
>
> On Tue, Feb 5, 2013 at 4:00 PM, Dhanasekaran Anbalagan <[EMAIL PROTECTED]
> > wrote:
>
>> Hi Guys,
>>
>> I have configured HDFS with a replication factor of 3, and we have 1 TB of
>> data. How can I verify that a particular block is available on 3 machines?
>>
>> That is, how do I confirm that the same block of data is stored on three
>> different nodes?
>>
>> -Dhanasekaran.
>> Did I learn something today? If not, I wasted it.
>>
>
>
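To answer the original question at scale, the `repl=` field in the same fsck output can be scanned for blocks that fall below the expected replica count. A minimal sketch, again assuming the fsck output format shown earlier in the thread (the sample text stands in for live `hdfs fsck` output over a whole directory):

```shell
# Flag blocks whose replica count is below the target of 3.
# fsck_out stands in for live output of something like:
#   hdfs fsck /user/tech -files -blocks -locations
fsck_out='/user/tech/pkg.tar.gz 165 bytes, 1 block(s):  OK
0. BP-1936777173-172.16.30.122-1343141974879:blk_8828079455224016541_10294868 len=165 repl=3 [172.16.30.144:50010, 172.16.30.135:50010, 172.16.30.134:50010]'

# Pull out every repl=N field and count those with N < 3.
under=$(printf '%s\n' "$fsck_out" | grep -o 'repl=[0-9]*' | awk -F= '$2 < 3' | wc -l)
echo "blocks below target replication: $under"
```

Note that fsck's own summary already reports an "Under-replicated blocks" total; a per-block scan like this is only useful when you want to identify the offending blocks themselves.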