Re: transfer large file onto hdfs
Harsh J 2012-07-06, 14:55
Hadoop in general should work fine on Solaris, minus the native code,
which may or may not need some tweaking to build for Solaris, but has
to be built manually in any case.
For CDH-specific questions though, please mail to the CDH users list
instead at http://groups.google.com/a/cloudera.org/group/cdh-user.
On Fri, Jul 6, 2012 at 7:28 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> Thanks for the reply, Harsh.
> BTW Does anyone deploy cdh3u4 on Solaris ?
> On Thu, Jul 5, 2012 at 11:47 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Should we assume that the non-hadoop system has no way to get onto the
>> network of the hadoop cluster and its clients? If it can, you could
>> grant it temporary access and perform the write directly from it. If not:
>> Run a small FTP server pointed at that file and then do a 'hadoop fs
>> -cp ftp://location hdfs://location', since FTPFileSystem is bundled
>> with Hadoop. Or, if the file is reachable over an NFS/etc. mount on a
>> cluster node, a file:/// URI will work (or copyFromLocal/put).
>> Essentially you're bringing the file in remotely but performing the
>> copy via the CLI.
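The two pull-based options above can be sketched as small shell helpers. The function names, hosts, and paths below are placeholders of mine, not anything from the thread:

```shell
# Cluster-side pull of a remote file into HDFS. Both helpers are
# illustrative wrappers; host names and paths are made up.

# Option 1: the source box runs a small FTP server exporting the dump,
# and a cluster-side machine copies straight across schemes
# (FTPFileSystem ships with Hadoop, so ftp:// is a valid source URI).
pull_over_ftp() {
  # $1: ftp://user:pass@host/path/dump.bin  $2: HDFS destination path
  hadoop fs -cp "$1" "$2"
}

# Option 2: the source directory is NFS-mounted on a cluster node, so a
# plain local-to-HDFS copy works.
pull_from_mount() {
  # $1: mounted local path  $2: HDFS destination path
  hadoop fs -copyFromLocal "$1" "$2"
}

# Usage (all placeholders):
# pull_over_ftp ftp://user:pass@source-host/dump.bin hdfs:///user/ted/dump.bin
# pull_from_mount /mnt/source/dump.bin /user/ted/dump.bin
```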
>> Or you can copy it over in chunks, either keeping the destination
>> file's writer open, if that is possible, or appending to it (depending
>> on what version of Hadoop you're using).
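A minimal sketch of the chunked variant, assuming a Hadoop release whose shell has 'hadoop fs -appendToFile' (it reads stdin when given '-'); older append-capable versions would need the FileSystem append API instead. The function name, paths, and chunk size are all placeholders:

```shell
# Copy a file too large for local staging by streaming fixed-size chunks
# into an HDFS file via append. Illustrative only; nothing here comes
# from the thread.
copy_in_chunks() {
  # $1: local/mounted source file  $2: HDFS destination  $3: chunk size (MB)
  src=$1; dest=$2; chunk_mb=$3
  size=$(($(wc -c < "$src")))
  offset_mb=0
  while [ $((offset_mb * 1048576)) -lt "$size" ]; do
    # Read one chunk from the source and append it; '-' tells
    # -appendToFile to take its data from stdin. dd stops at EOF, so the
    # final short chunk is handled automatically.
    dd if="$src" bs=1048576 skip="$offset_mb" count="$chunk_mb" 2>/dev/null \
      | hadoop fs -appendToFile - "$dest"
    offset_mb=$((offset_mb + chunk_mb))
  done
}

# Usage (placeholders):
# copy_in_chunks /mnt/source/dump.bin /user/ted/dump.bin 512
```

Each chunk only ever exists in the pipe buffer, so local free space never limits the transfer.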
>> On Thu, Jul 5, 2012 at 11:54 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>> > Hi,
>> > One of our customers wants to transfer a dump file (size ~2 TB) from
>> > a non-hadoop machine onto HDFS.
>> > The file's size exceeds the free space on the CLI machine.
>> > I'd like to poll for best practices in this scenario.
>> > Thanks
>> Harsh J