Thanks so much.
This is exactly what I wanted.
tech@dvcliftonhera150:~$ hadoop fsck -locations -blocks -files
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting to namenode via http://dvcliftonhera122:50070
FSCK started by tech (auth:SIMPLE) from /172.16.30.150 for path
/user/tech/pkg.tar.gz at Tue Feb 05 10:33:23 EST 2013
/user/tech/pkg.tar.gz 165 bytes, 1 block(s): OK
len=165 repl=3 [*172.16.30.144:50010, 172.16.30.135:50010,
Total size: 165 B
Total dirs: 0
Total files: 1
Total blocks (validated): 1 (avg. block size 165 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 47
Number of racks: 1
FSCK ended at Tue Feb 05 10:33:23 EST 2013 in 3 milliseconds
The filesystem under path '/user/tech/pkg.tar.gz' is HEALTHY
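If you want to verify replication programmatically rather than eyeballing the report, one option is to parse the fsck output for the per-block `repl=` fields. This is a minimal sketch, assuming the output format shown above (`len=... repl=N [host:port, ...]`); `check_replication` is a hypothetical helper, not a Hadoop API:

```python
import re

def check_replication(fsck_output, expected_repl=3):
    """Return True if every block line in the fsck output reports
    at least the expected replication factor."""
    # Block lines look like: "len=165 repl=3 [172.16.30.144:50010, ...]"
    repls = [int(m) for m in re.findall(r"repl=(\d+)", fsck_output)]
    return bool(repls) and all(r >= expected_repl for r in repls)

sample = (
    "/user/tech/pkg.tar.gz 165 bytes, 1 block(s): OK\n"
    "len=165 repl=3 [172.16.30.144:50010, 172.16.30.135:50010]\n"
)
print(check_replication(sample))  # True: the only block reports repl=3
```

You could feed this the captured stdout of `hdfs fsck <path> -files -blocks -locations` (the non-deprecated form hinted at by the DEPRECATED warning above) in a cron job to flag under-replicated files.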
Did I learn something today? If not, I wasted it.
On Tue, Feb 5, 2013 at 10:18 AM, Samir Ahmic <[EMAIL PROTECTED]> wrote:
> You may try with:
> hadoop fsck -locations -blocks -files [hdfs_path]. It will print detailed
> info about blocks and their locations.
> On Tue, Feb 5, 2013 at 4:00 PM, Dhanasekaran Anbalagan <[EMAIL PROTECTED]
> > wrote:
>> Hi Guys,
>> I have configured HDFS with a replication factor of 3. We have 1TB of data.
>> How do I find on which 3 machines a particular block is available?
>> How do I verify that the same block of data is available on 3 machines?
>> Please guide me on how to check that my data is available in three different locations.
>> Did I learn something today? If not, I wasted it.