MapReduce >> mail # user >> Re: Hadoop fs -getmerge


Re: Hadoop fs -getmerge
If -getmerge is updated, explicitly deleting the .crc files wouldn't be necessary. Adding a command-line option that invokes fs.setWriteChecksum(false) should do the trick.

On Apr 18, 2013, at 2:41 AM, Fabio Pitzolu wrote:

Hi Hemanth,
I guess the only solution is to delete the .crc files after the export.
Does any of you know whether someone has filed a JIRA to add a parameter to -getmerge that deletes the .crc files afterwards?

Fabio Pitzolu
Consultant - BI & Infrastructure

Mob. +39 3356033776
Telefono 02 87157239
Fax. 02 93664786

Gruppo Consulenza Innovazione - http://www.gr-ci.com<http://www.gr-ci.com/>
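The delete-after-export workaround is easy to script: LocalFileSystem writes the checksums as hidden side-files with a fixed .crc extension, so a find over the local output directory is enough. A minimal sketch — the directory and file names below are made up for illustration, and the touch lines only simulate what -getmerge would leave behind:

```shell
# After something like: hadoop fs -getmerge /user/fabio/output merged.txt
# the local directory also holds a hidden checksum side-file (.merged.txt.crc).
# Simulate that layout here (paths are illustrative):
outdir=$(mktemp -d)
touch "$outdir/merged.txt" "$outdir/.merged.txt.crc"

# Delete every CRC side-file by its fixed extension:
find "$outdir" -name '*.crc' -type f -delete

ls -A "$outdir"   # only merged.txt remains
```

Since the extension is fixed, the same `-name '*.crc'` pattern also works for simply skipping the files instead of deleting them, as Hemanth suggests below.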
2013/4/18 Hemanth Yamijala <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>>
I don't think that is possible. When we use -getmerge, the destination filesystem is a LocalFileSystem, which extends ChecksumFileSystem. I believe that's why the CRC files are being created.

Would it not be possible for you to ignore them, since they have a fixed extension?

Thanks
Hemanth
On Wed, Apr 17, 2013 at 8:09 PM, Fabio Pitzolu <[EMAIL PROTECTED]<mailto:[EMAIL PROTECTED]>> wrote:
Hi all,
is there a way to use the "getmerge" fs command without generating the .crc files in the local output directory?

Thanks,

Fabio Pitzolu
