Re: exceptions i got in HDFS - append problem?
On Fri, Apr 9, 2010 at 3:07 AM, Gokulakannan M <[EMAIL PROTECTED]> wrote:
> Hi,
>  I got the following exceptions when using HDFS to write the logs
> coming from Scribe:
>  1. java.io.IOException: Filesystem closed
>
>      <stack trace>
>      ........
>      ........
>      call to org.apache.hadoop.fs.FSDataOutputStream::write failed!
>

The above seems to say that the filesystem is closed and, as a
consequence, you are not able to write to it.
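FYI, FileSystem.get() normally hands back a cached, JVM-wide instance, so if any component in the process calls close() on it, every stream obtained from it starts failing with exactly this error. Roughly that failure mode, sketched in plain Java (SharedHandle is just an illustration here, not a Hadoop class):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SharedHandleDemo {
    // Toy stand-in for a cached, shared filesystem handle: once any
    // user closes it, every subsequent write fails.
    static class SharedHandle extends FilterOutputStream {
        private boolean closed = false;
        SharedHandle(OutputStream out) { super(out); }
        @Override public void write(int b) throws IOException {
            if (closed) throw new IOException("Filesystem closed");
            super.write(b);
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        SharedHandle fs = new SharedHandle(new ByteArrayOutputStream());
        fs.close();      // some other component shuts "the" filesystem down
        try {
            fs.write('x');   // this writer still holds the same handle
        } catch (IOException e) {
            System.out.println(e.getMessage());  // prints "Filesystem closed"
        }
    }
}
```

So the thing to chase is who else in the Scribe process is closing the shared FileSystem underneath your writer.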

>  2. org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to
> create
>       file xxx-2010-04-01-12-40_00000 for DFSClient_1355960219 on client
> 10.18.22.55 because current leaseholder is trying to recreate file
>       <stack trace>
>      ........
>      ........
>      call to
> org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)failed!
>

It looks like someone still holds the lease on the file you are trying to open.
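For background, the NameNode allows a single writer per file and tracks it with a lease; until the old lease expires or is recovered, a re-open for append fails with the AlreadyBeingCreatedException you pasted (even when the "other" holder is an earlier incarnation of the same client that died without closing the file). A toy model of that check, just for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class LeaseDemo {
    // Toy model of the NameNode's single-writer lease table:
    // file path -> client currently holding the lease.
    static final Map<String, String> leases = new HashMap<>();

    static void openForAppend(String path, String client) {
        String holder = leases.get(path);
        if (holder != null) {
            // Mirrors the error in the thread: the previous holder
            // still owns the lease on this path.
            throw new IllegalStateException(
                "failed to create file " + path + " for " + client
                + " because current leaseholder is trying to recreate file");
        }
        leases.put(path, client);
    }

    public static void main(String[] args) {
        openForAppend("xxx-2010-04-01-12-40_00000", "DFSClient_1355960219");
        try {
            // Same client comes back (e.g. after a crash) before the
            // old lease has been recovered.
            openForAppend("xxx-2010-04-01-12-40_00000", "DFSClient_1355960219");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```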

You mention Scribe.  Do you have HDFS-200 and friends applied to your cluster?

>   I haven't applied the HDFS-265 patch to my Hadoop yet.
>

What Hadoop version are you running?  HDFS-265 won't apply to Hadoop
0.20.x, if that is what you are running.

>
>   Are these exceptions due to bugs in the existing append feature, or some
> other reason?
>
>  Should I apply the complete append patch, or will a simpler patch
> solve this?
>
>
I haven't looked, but my guess is that the Scribe documentation
describes the patch set required to run against Hadoop.

St.Ack