Re: empty file
That file is still in an 'open' state. Running the command below should confirm it:

/opt/local/hadoop/bin/hadoop fsck /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -openforwrite
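As an aside (not part of the original reply): the fsck summary quoted later in this thread reports open files in its "Files currently being written" counter, and that counter can be pulled out programmatically when checking many paths. A minimal sketch in Python, assuming the Hadoop 1.x fsck summary format shown in this thread; the function name is my own:

```python
import re

def count_open_files(fsck_output: str) -> int:
    """Return the number of files fsck reports as currently open
    for write, parsed from its plain-text summary."""
    m = re.search(r"Files currently being written:\s*(\d+)", fsck_output)
    return int(m.group(1)) if m else 0

# Summary lines as quoted in this thread.
summary = """Status: HEALTHY
Total size: 0 B (Total open files size: 1123927 B)
Total dirs: 0
Total files: 0 (Files currently being written: 1)
Total blocks (validated): 0 (Total open file blocks (not validated): 1)"""

print(count_open_files(summary))  # prints 1
```

A non-zero count here explains a "healthy" fsck report with zero validated blocks: open-for-write files are excluded from block validation.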

On Thu, Dec 12, 2013 at 9:22 AM, chenchun <[EMAIL PROTECTED]> wrote:
> Nothing is still writing to it. I can't read that file.
> I'm using hadoop 1.0.1.
>
> $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to /10.64.32.14:50010, add to deadNodes and continue
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:51390, remote=/10.64.32.14:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
> 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to /10.64.32.36:50010, add to deadNodes and continue
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:41277, remote=/10.64.32.36:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
> 13/12/12 11:48:56 INFO hdfs.DFSClient: Could not obtain block blk_-5402857470524491959_58312275 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
> 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to /10.64.32.14:50010, add to deadNodes and continue
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:51403, remote=/10.64.32.14:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
> 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to /10.64.32.36:50010, add to deadNodes and continue
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:41290, remote=/10.64.32.36:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
> 13/12/12 11:48:59 INFO hdfs.DFSClient: Could not obtain block blk_-5402857470524491959_58312275 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
>
> --
> chenchun
>
> On Thursday, 12 December, 2013 at 5:51 AM, John Meagher wrote:
>
> Is something still writing to it?
> ...
> Total files: 0 (Files currently being written: 1)
> Total blocks (validated): 0 (Total open file blocks (not validated): 1)
>
>
>
> On Wed, Dec 11, 2013 at 2:37 PM, Adam Kawa <[EMAIL PROTECTED]> wrote:
>
> I have never seen anything like that.
>
> Can you read that file?
>
> $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
>
> 2013/12/11 chenchun <[EMAIL PROTECTED]>
>
>
> Hi,
> I found some files on HDFS that the command "hadoop fs -ls" reports as
> non-empty, but "fsck" reports that these files have no replicas. Is
> that normal?
>
> $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
> Found 1 items
> -rw-r--r-- 3 sankuai supergroup 1123927 2013-12-06 03:22 /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
> $ /opt/local/hadoop/bin/hadoop fsck /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -files -blocks -locations -racks
> FSCK started by sankuai from /10.64.10.102 for path /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST 2013
> Status: HEALTHY
> Total size: 0 B (Total open files size: 1123927 B)
> Total dirs: 0
> Total files: 0 (Files currently being written: 1)
> Total blocks (validated): 0 (Total open file blocks (not validated): 1)
> Minimally replicated blocks: 0
> Over-replicated blocks: 0
> Under-replicated blocks: 0
> Mis-replicated blocks: 0
> Default replication factor: 3
> Average block replication: 0.0
> Corrupt blocks: 0
> Missing replicas: 0
> Number of data-nodes: 38
> Number of racks: 6
> FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
>
>
> The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo' is HEALTHY
>
> --
> chenchun
>
>

--
Harsh J