Re: set reduced block size for a specific file
Ted Dunning 2011-08-27, 19:42
There is no way to do this for standard Apache Hadoop.

But other, otherwise Hadoop-compatible systems such as MapR do support
this operation.

Rather than push commercial systems on this mailing list, I would simply
recommend that anybody who is curious email me.

On Sat, Aug 27, 2011 at 12:07 PM, Uma Maheswara Rao G 72686 <
[EMAIL PROTECTED]> wrote:

> Hi Ben,
> Currently there is no way to specify the block size from the command
> line in Hadoop.
>
> Why can't you write the file from a Java program?
> Is there a use case where you need to write some files from the command
> line only?
>
> Regards,
> Uma
>
> ----- Original Message -----
> From: Ben Clay <[EMAIL PROTECTED]>
> Date: Saturday, August 27, 2011 10:03 pm
> Subject: set reduced block size for a specific file
> To: [EMAIL PROTECTED]
>
> > I'd like to set a lowered block size for a specific file. I.e., if
> > HDFS is configured to use 64 MB blocks, I'd like to use 32 MB blocks
> > for a specific file.
> >
> > Is there a way to do this from the command line, without writing a
> > jar which uses org.apache.hadoop.fs.FileSystem.create()?
> >
> > I tried the following, but it didn't work:
> >
> > hadoop fs -Ddfs.block.size=1048576 -put /local/path /remote/path
> >
> > I also tried -copyFromLocal.  It looks like the -D is being ignored.
> >
> > Thanks.
> >
> > -Ben
> >
>
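
[Editor's note: for reference, a minimal, untested sketch of the Java-program
route Uma suggests, using the org.apache.hadoop.fs.FileSystem.create()
overload that takes an explicit per-file block size. The class name, the
4096-byte buffer size, and the hard-coded paths (reused from Ben's example)
are illustrative choices, not anything from the thread.

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Copy a local file into HDFS with a per-file block size override.
public class PutWithBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        long blockSize = 32L * 1024 * 1024;  // 32 MB for this one file
        short replication = fs.getDefaultReplication();

        // create(path, overwrite, bufferSize, replication, blockSize)
        // sets the block size for this file instead of dfs.block.size.
        OutputStream out = fs.create(new Path("/remote/path"), true, 4096,
                                     replication, blockSize);
        InputStream in = new FileInputStream("/local/path");
        IOUtils.copyBytes(in, out, conf, true);  // closes both streams
    }
}

Once the file is written, "hadoop fsck /remote/path -files -blocks" should
confirm that it was stored in 32 MB blocks.]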