Accumulo, mail # dev - WAL issues in 1.5.0


Re: WAL issues in 1.5.0
John Vines 2013-08-13, 22:23
CRAP, that was it. I wonder if that was the root of all of my VM issues
before...

Thanks Todd.
On Tue, Aug 13, 2013 at 6:09 PM, Todd Lipcon <[EMAIL PROTECTED]> wrote:

> Out of disk space? HDFS won't write to a volume if you don't have 5x the
> block size available.
>
> -Todd
>
>
> On Tue, Aug 13, 2013 at 3:06 PM, John Vines <[EMAIL PROTECTED]> wrote:
>
>> I had a few instances of it before, but I was never able to concretely
>> reproduce it in a non-virtual environment. Except today, I had a new clean
>> checkout of first 1.5.1-SNAPSHOT and then 1.5.0 from git, with a fresh
>> hdfs directory, and I got the never-ending stream of
>>
>> "       java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
>> java.io.IOException: File
>> /accumulo/wal/127.0.0.1+9997/86c798b2-2de3-4860-ba84-645cc9d38cc7
>> could only be replicated to 0 nodes, instead of 1"
>>
>>
>> Normally when this happens, restarting the namenode is all I need to do
>> to fix it, but not this time. I'm willing to bet it will be fine when I
>> restart my computer. But while this is happening, I'm seeing the number
>> of files in hdfs under the wal directory ever growing. I'm wondering if
>> we have an overly time-sensitive constraint or if there is a check we
>> need to do before giving up? I am seeing that error echoed in the
>> namenode, so I'm not quite sure. This is on hadoop 1.0.4.
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>
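Todd's rule of thumb can be made concrete. Assuming the Hadoop 1.x default dfs.block.size of 64 MB (check hdfs-site.xml; this is an assumption, not taken from the thread), the datanode's 5x heuristic means each data volume needs roughly 320 MB free before block writes succeed, which a quick back-of-the-envelope check confirms:

```shell
# Rough sketch of the 5x-block-size free-space heuristic.
# 64 MB is the Hadoop 1.x default dfs.block.size (an assumption here);
# compare the result against `df` output for your dfs.data.dir volume.
BLOCK_SIZE=$((64 * 1024 * 1024))
MIN_FREE=$((5 * BLOCK_SIZE))
echo "need at least ${MIN_FREE} bytes free per volume"  # 335544320 bytes, ~320 MB
```

If the volume sits below that threshold, the namenode can find no datanode willing to accept the block, which surfaces as the "could only be replicated to 0 nodes, instead of 1" error above.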