I had a few instances of this before, but I was never able to reproduce it
concretely in a non-virtual environment. Today, though, with a clean
checkout of first 1.5.1-SNAPSHOT and then 1.5.0 from git, and a fresh hdfs
directory, I got a never-ending stream of
" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
could only be replicated to 0 nodes, instead of 1"
Normally when this happens, restarting the namenode is all I need to do to
fix it, but not this time. I'm willing to bet when I restart my computer it
will be fine. But while this is happening, I'm seeing the number of files
in hdfs under the wal directory growing steadily. I'm wondering if we have
an overly time-sensitive constraint, or if there is a check we should do
before giving up? I am seeing that error echoed in the namenode, so I'm not
quite sure. This is on hadoop 1.0.4.
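To make the "check before giving up" idea concrete, here is a rough sketch (not Accumulo's actual code, and the health check is a stand-in): wrap the failing write in a retry loop that only keeps retrying while some cluster-health probe still passes, and fails fast otherwise. The `healthy` supplier and the simulated WAL write are both hypothetical.

```java
import java.util.concurrent.Callable;
import java.util.function.Supplier;

public class RetrySketch {
    /** Retry op up to maxTries, but only keep retrying while the
     *  cluster-health check still passes; otherwise rethrow immediately. */
    static <T> T retryWhileHealthy(Callable<T> op, Supplier<Boolean> healthy,
                                   int maxTries) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxTries; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                // e.g. if no datanodes are live, giving up early is the right call
                if (!healthy.get()) throw e;
                Thread.sleep(100L << i); // simple exponential backoff
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        // Simulated write that fails twice with the replication error, then succeeds.
        String result = retryWhileHealthy(
            () -> {
                if (++attempts[0] < 3)
                    throw new RuntimeException(
                        "could only be replicated to 0 nodes, instead of 1");
                return "ok";
            },
            () -> true, // stand-in for a real probe, e.g. parsing `hadoop dfsadmin -report`
            5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

The point of the sketch is just the shape of the check: distinguish "transient, worth retrying" from "cluster is actually unhealthy, give up now", rather than retrying on a fixed timer.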