Thanks a lot! hadoop fs -cat did the trick.
2013/2/18 Harsh J <[EMAIL PROTECTED]>:
> The command you're looking for is not -copyToLocal (it doesn't
> emit the file to stdout, which is what you need here), but rather a
> simple -cat. Something like the below would make your command work:
> $ hadoop fs -cat FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"
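> If you want to sanity-check the transfer afterwards, comparing
> checksums on both ends should also work (same placeholder names as
> above; an untested sketch):
> $ hadoop fs -cat FILE_IN_HDFS | md5sum
> $ ssh REMOTE_HOST "md5sum TARGET_FILE"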
> On Mon, Feb 18, 2013 at 10:46 PM, Julian Wissmann
> <[EMAIL PROTECTED]> wrote:
>> we're running a Hadoop cluster with HBase to evaluate it as a
>> database for a research project, and we've more or less decided to
>> go with it.
>> So now I'm exploring backup mechanisms and have decided to experiment
>> with Hadoop's export functionality for that.
>> What I am trying to achieve is to get data out of HBase and into HDFS
>> via the export tool, and then copy it out of HDFS onto a backup system.
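>> The export step itself is HBase's bundled Export MapReduce job; the
>> invocation I'm using looks roughly like this (table name and output
>> path are placeholders):
>> hbase org.apache.hadoop.hbase.mapreduce.Export TABLE_NAME /backups/TABLE_NAME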
>> However, while copying data out of HDFS to the backup machine, I am
>> running into problems.
>> What I am trying to do is the following:
>> hadoop fs -copyToLocal FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"
>> It creates a file on the remote host, but the file is 0 KB in size;
>> instead of any data being copied over there, the file just lands in
>> my home folder.
>> The command output looks like this: hadoop fs -copyToLocal
>> FILE_IN_HDFS | ssh REMOTE_HOST "dd of=FILE_ON_REMOTE_HOST"
>> 0+0 records in
>> 0+0 records out
>> 0 bytes (0 B) copied, 1.10011 s, 0.0 kB/s
>> I cannot think of any reason why this command would behave this way.
>> Is this some Java-ism that I'm missing (like stdout not being handled
>> correctly), or am I actually doing it wrong?
>> The Hadoop version is 2.0.0-cdh4.1.2.
> Harsh J