Accumulo dev mailing list: WAL issues in 1.5.0


Re: WAL issues in 1.5.0
Out of disk space? HDFS won't write to a volume if you don't have 5x the
block size available.
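
A quick local sanity check (rough sketch only; the data dir path and the
default 64 MB block size below are just guesses for your setup, and the 5x
figure is the headroom mentioned above):

    import java.io.File;

    public class VolumeSpaceCheck {
      public static void main(String[] args) {
        // Point this at your dfs.data.dir (the default path here is a guess)
        File dataDir = new File(args.length > 0 ? args[0] : "/tmp/hadoop/dfs/data");

        long blockSize = 64L * 1024 * 1024; // assumed default 64 MB block size
        long required = 5 * blockSize;      // ~5x headroom per volume
        long usable = dataDir.getUsableSpace();

        System.out.printf("usable: %d MB, needed: %d MB%n", usable >> 20, required >> 20);
        if (usable < required) {
          System.out.println("This volume would likely be skipped for new blocks.");
        }
      }
    }

If the usable space comes up short of that threshold, freeing disk (or
pointing dfs.data.dir at a bigger volume) is the first thing to try.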

-Todd

On Tue, Aug 13, 2013 at 3:06 PM, John Vines <[EMAIL PROTECTED]> wrote:

> I had a few instances of it before, but I was never able to reproduce it
> concretely in a non-virtual environment. Except today: I had a clean
> checkout of first 1.5.1-SNAPSHOT and then 1.5.0 from git, with a fresh HDFS
> directory, and I got a never-ending stream of
>
> "       java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File
> /accumulo/wal/127.0.0.1+9997/86c798b2-2de3-4860-ba84-645cc9d38cc7
> could only be replicated to 0 nodes, instead of 1"
>
>
> Normally when this happens, restarting the namenode is all I need to do to
> fix it, but not this time. I'm willing to bet it will be fine once I restart
> my computer. But while this is happening, I'm seeing the number of files in
> HDFS under the WAL directory growing without bound. I'm wondering if we have
> an overly time-sensitive constraint, or if there is a check we should do
> before giving up (roughly the sketch below)? I am seeing that error echoed
> in the namenode, so I'm not quite sure. This is on hadoop 1.0.4.
>
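> Just to illustrate what I mean by a check before giving up (rough sketch
> only, with made-up names; I'm not claiming this is what the code does
> today): bound the number of WAL create attempts and surface the error
> instead of retrying forever.
>
>     import java.io.IOException;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FSDataOutputStream;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public class BoundedWalCreate {
>       // Hypothetical helper: try the create a limited number of times with a
>       // simple backoff, then give up and rethrow the last failure.
>       static FSDataOutputStream createWithRetries(FileSystem fs, Path wal,
>           int maxAttempts) throws IOException, InterruptedException {
>         IOException last = null;
>         for (int attempt = 1; attempt <= maxAttempts; attempt++) {
>           try {
>             return fs.create(wal);
>           } catch (IOException e) {
>             last = e;                        // e.g. "replicated to 0 nodes"
>             if (attempt < maxAttempts) {
>               Thread.sleep(1000L * attempt); // simple linear backoff
>             }
>           }
>         }
>         throw last;                          // give up rather than loop forever
>       }
>
>       public static void main(String[] args) throws Exception {
>         FileSystem fs = FileSystem.get(new Configuration());
>         Path wal = new Path("/accumulo/wal/example"); // made-up path
>         createWithRetries(fs, wal, 10).close();
>       }
>     }
>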

--
Todd Lipcon
Software Engineer, Cloudera