Re: Why other process can't see the change after calling hdfsHFlush unless hdfsCloseFile is called?
Peyman Mohajerian 2013-12-19, 23:28
Ok, I just read the book section on this (Hadoop: The Definitive Guide), just to
be sure: the length of a file is stored in the NameNode, and it's updated only
after the client calls the NameNode on close of the file. At that point, if the
NameNode has received all the ACKs from the DataNodes (i.e. the minimum
replication is met), it sets the length metadata. So it's one of the last steps,
and it's done for performance reasons; the client decides when it's done writing.
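
In libhdfs terms, the behavior looks roughly like this untested sketch (the
path is hypothetical and error handling is omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include "hdfs.h"

    int main(void) {
        hdfsFS fs = hdfsConnect("default", 0);     /* default configuration */
        const char *path = "/tmp/flush-demo.txt";  /* hypothetical test path */
        hdfsFile f = hdfsOpenFile(fs, path, O_WRONLY, 0, 0, 0);

        hdfsWrite(fs, f, "hello hdfs\n", 11);
        hdfsHFlush(fs, f);   /* new readers can see the bytes now... */

        /* ...but the NameNode still reports the old (zero) length */
        hdfsFileInfo *info = hdfsGetPathInfo(fs, path);
        printf("size after hflush: %lld\n", (long long)info->mSize);
        hdfsFreeFileInfo(info, 1);

        hdfsCloseFile(fs, f);   /* close is what updates the length metadata */
        info = hdfsGetPathInfo(fs, path);
        printf("size after close:  %lld\n", (long long)info->mSize);
        hdfsFreeFileInfo(info, 1);

        hdfsDisconnect(fs);
        return 0;
    }
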
On Thu, Dec 19, 2013 at 8:36 AM, Xiaobin She <[EMAIL PROTECTED]> wrote:

> To Devin,
>
> thank you very much for your explanation.
>
> I did find that I can read the data out of the file even though I had not
> closed the file I was writing to (the read operation is called on another
> file handle opened on the same file, but still in the same process), which
> made me more confused at the time, because I thought: since I can read the
> data from the file, why can't I get the length of the file correctly?
>
> But from the explanation you have given, I think I can understand it now.
>
> So it seems that, in order to do what I want (write some data to the file
> and then get the length of the file through the webhdfs interface), I have
> to open and close the file every time I do a write operation.
>
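> For example, something like this untested sketch (error handling mostly
> omitted, and O_APPEND requires append support to be enabled on the cluster):
>
>     #include <fcntl.h>
>     #include "hdfs.h"
>
>     /* Open, append, and close around every write so that the NameNode
>      * updates the length that the webhdfs interface reports. */
>     static int write_and_publish(hdfsFS fs, const char *path,
>                                  const void *buf, tSize len) {
>         /* O_WRONLY|O_APPEND keeps the earlier data; plain O_WRONLY
>          * would overwrite the file */
>         hdfsFile f = hdfsOpenFile(fs, path, O_WRONLY | O_APPEND, 0, 0, 0);
>         if (!f) return -1;
>         tSize n = hdfsWrite(fs, f, buf, len);
>         /* closing completes the file, so its new length becomes visible */
>         int rc = hdfsCloseFile(fs, f);
>         return (rc == 0 && n == len) ? 0 : -1;
>     }
>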
> Thank you very much again.
>
> xiaobinshe
>
> 2013/12/19 Devin Suiter RDX <[EMAIL PROTECTED]>
>
>> Hello,
>>
>> In my experience with Flume, watching the HDFS Sink's verbose output, I
>> know that even after a file has been flushed, as long as it is still open
>> it reads as a 0-byte file, even if there is actually data contained in it.
>>
>> An HDFS "file" is a meta-location that can accept streaming input for as
>> long as it is open, so the length cannot be mathematically defined until a
>> start and an end are in place.
>>
>> The flush operation moves data from a buffer to a storage medium, but I
>> don't think that necessarily means it tells the HDFS RecordWriter to
>> place the "end of stream/EOF" marker down, since the "file" meta-location
>> in HDFS is a pile of actual files around the cluster on physical disk that
>> HDFS presents to you as one file. The HDFS "file" and the physical file
>> splits on disk are distinct, and I would suspect that your HDFS flush calls
>> are forcing Hadoop to move the physical file splits from their datanode
>> buffers to disk, but are not telling HDFS that you expect no further
>> input - that is what the HDFS close will do.
>>
>> One thing you could try - instead of asking for the length property,
>> which is probably unavailable until the close call, try asking for/viewing
>> the contents of the file.
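>>
>> With libhdfs, that could look like this untested sketch (the path is
>> hypothetical and error handling is omitted):
>>
>>     #include <fcntl.h>
>>     #include <stdio.h>
>>     #include "hdfs.h"
>>
>>     /* read whatever a concurrent writer has already hflush'ed */
>>     static void dump_open_file(hdfsFS fs, const char *path) {
>>         char buf[4096];
>>         hdfsFile r = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
>>         tSize n = hdfsRead(fs, r, buf, sizeof(buf));
>>         if (n > 0)
>>             fwrite(buf, 1, (size_t)n, stdout);  /* flushed bytes show up */
>>         hdfsCloseFile(fs, r);
>>     }
>>
>> With webhdfs, that would mean using op=OPEN to fetch the bytes rather than
>> asking for the file status.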
>>
>> Your scenario step 3 says "according to the header hdfs.h, after this
>> call returns, *new readers should be able to see the data*" which isn't
>> the same as "new readers can obtain an updated property value from the file
>> metadata" - one is looking at the data inside the container, and the other
>> is asking the container to describe itself.
>>
>> I hope that helps with your problem!
>>
>>
>> *Devin Suiter*
>> Jr. Data Solutions Software Engineer
>> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
>> Google Voice: 412-256-8556 | www.rdx.com
>>
>>
>> On Thu, Dec 19, 2013 at 7:50 AM, Xiaobin She <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> sorry to reply to my own thread.
>>>
>>> Does anyone know the answer to this question?
>>> If so, can you please tell me if my understanding is right or wrong?
>>>
>>> thanks.
>>>
>>>
>>>
>>> 2013/12/17 Xiaobin She <[EMAIL PROTECTED]>
>>>
>>>> hi,
>>>>
>>>> I'm using libhdfs to deal with HDFS in a C++ program.
>>>>
>>>> And I have encountered a problem.
>>>>
>>>> here is the scenario:
>>>> 1. first I call hdfsOpenFile with the O_WRONLY flag to open a file
>>>> 2. call hdfsWrite to write some data
>>>> 3. call hdfsHFlush to flush the data; according to the header hdfs.h,
>>>> after this call returns, new readers should be able to see the data
>>>> 4. I use an HTTP GET request to get the file list of that directory