HDFS user mailing list


Re: Loading file to HDFS with custom chunk structure
Hello there,

    You don't have to split the file yourself. When you push anything into
HDFS, it automatically gets split into blocks of uniform size (usually
64 MB or 128 MB), and the MapReduce framework tries to ensure that each
block is processed locally on the node where it is stored.
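
For instance, here is a minimal sketch (not part of the original reply; the
output path, the 128 MB value, and the replication factor are illustrative
assumptions) of how the block size can even be chosen per file when writing
through Hadoop's FileSystem API:

    // Illustrative sketch only: write a file to HDFS with an explicit 128 MB block size.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteWithBlockSize {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            long blockSize = 128L * 1024 * 1024;            // 128 MB, illustrative value
            // create(path, overwrite, bufferSize, replication, blockSize)
            try (FSDataOutputStream out = fs.create(new Path("/user/hadoop/survey.sgy"),
                    true, 4096, (short) 3, blockSize)) {
                // ... stream the local file's bytes into 'out' here ...
            }
        }
    }

Note that this only controls how HDFS physically chunks the bytes into
blocks; it does not change the logical content of the file.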

Do you have any specific requirement?

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Wed, Jan 16, 2013 at 9:01 PM, Kaliyug Antagonist <[EMAIL PROTECTED]> wrote:

> I want to load a SegY <http://en.wikipedia.org/wiki/SEG_Y> file onto HDFS
> of a 3-node Apache Hadoop cluster.
>
> To summarize, the SegY file consists of:
>
>    1. 3200 bytes *textual header*
>    2. 400 bytes *binary header*
>    3. Variable bytes *data*
>
> Almost all (99.99%) of the file's size comes from the variable-length data,
> which is a collection of thousands of contiguous traces. For any SegY file
> to make sense, it must have the textual header + binary header + at least
> one trace of data. What I want to achieve is to split a large SegY file
> across the Hadoop cluster so that a smaller SegY file is available on each
> node for local processing.
>
> The scenario is as follows:
>
>    1. The SegY file is large (above 10 GB) and is resting on the local
>    file system of the NameNode machine.
>    2. The file is to be split across the nodes in such a way that each
>    node has a small SegY file with a strict structure - 3200 bytes *textual
>    header* + 400 bytes *binary header* + variable bytes *data*. Obviously,
>    I can't blindly use FSDataOutputStream or hadoop fs -copyFromLocal, as
>    this may not preserve the format in which the chunks of the larger file
>    are required.
>
> Please guide me as to how I must proceed.
>
> Thanks and regards!
>
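
For reference, here is a rough, hypothetical sketch of the kind of splitter
the question above asks about (not from the thread; the fixed trace length,
chunk size, and output paths are assumptions for illustration - real SEG-Y
traces can vary in length). It reads the 3200-byte textual and 400-byte
binary headers once, then writes a series of smaller SegY files to HDFS,
each starting with copies of those headers followed by a whole number of
traces:

    // Hypothetical sketch: split a local SegY file into smaller SegY files on HDFS,
    // each carrying its own copy of the 3200-byte textual and 400-byte binary headers.
    import java.io.BufferedInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SegySplitter {
        public static void main(String[] args) throws Exception {
            String localPath = args[0];                     // path to the large local SegY file
            int traceLength = Integer.parseInt(args[1]);    // bytes per trace, assumed fixed here
            long tracesPerChunk = Long.parseLong(args[2]);  // traces to pack into each small file

            FileSystem fs = FileSystem.get(new Configuration());

            try (InputStream in = new BufferedInputStream(new FileInputStream(localPath))) {
                byte[] textHeader = readFully(in, 3200);    // 3200-byte textual header
                byte[] binHeader  = readFully(in, 400);     // 400-byte binary header

                byte[] trace = new byte[traceLength];
                FSDataOutputStream out = null;
                int chunkNo = 0;
                long tracesInChunk = 0;

                // A trailing partial trace (if the file is truncated) is silently dropped.
                while (readTrace(in, trace) == traceLength) {
                    if (out == null || tracesInChunk == tracesPerChunk) {
                        if (out != null) out.close();
                        out = fs.create(new Path("/segy/chunk-" + (chunkNo++) + ".sgy"));
                        out.write(textHeader);              // every chunk is a self-contained SegY file
                        out.write(binHeader);
                        tracesInChunk = 0;
                    }
                    out.write(trace, 0, traceLength);
                    tracesInChunk++;
                }
                if (out != null) out.close();
            }
        }

        // Reads exactly n bytes or throws if the stream ends early.
        private static byte[] readFully(InputStream in, int n) throws IOException {
            byte[] buf = new byte[n];
            int off = 0;
            while (off < n) {
                int r = in.read(buf, off, n - off);
                if (r < 0) throw new EOFException("unexpected end of file");
                off += r;
            }
            return buf;
        }

        // Reads one trace; returns the number of bytes read (less than the trace length at EOF).
        private static int readTrace(InputStream in, byte[] buf) throws IOException {
            int off = 0;
            while (off < buf.length) {
                int r = in.read(buf, off, buf.length - off);
                if (r < 0) break;
                off += r;
            }
            return off;
        }
    }

Splitting on whole-trace boundaries like this keeps every output chunk a
valid SegY file; how many traces go into each chunk is a tuning choice that
depends on how much data each node should process.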