Re: RemoteException writing files
Hi Todd,

It might be useful to try the CDH user mailing list too. I'm afraid I
haven't used CDH, so I'm not entirely certain.
The fact that after you run your Java program, the NN has created a
directory and a 0-byte file means you were able to contact and interact
with the NN just fine. I'm guessing the problem is in streaming data to the
DN(s). Does the VM have its ports blocked, causing your client (presumably
outside the VM) to be unable to talk to the DNs? What happens when you run the
Java program from inside the VM?

After your Java program is unable to talk to the DN, it asks the NN for
another DN. I'm guessing that since there are no more left, you see the message
"*could only be replicated to 0 nodes, instead of 1*". So it's kind of a red
herring.
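
One quick way to check the port question from the Windows side is to try a
plain TCP connection to the DataNode's data-transfer port. A minimal sketch
(the 192.168.60.128:50010 address is the one mentioned further down in the
thread; adjust it for your setup):

import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodePortCheck {
    public static void main(String[] args) throws Exception {
        // Try to reach the DataNode's data-transfer port directly from the client machine.
        // 192.168.60.128:50010 is taken from this thread; yours may differ.
        Socket s = new Socket();
        try {
            s.connect(new InetSocketAddress("192.168.60.128", 50010), 5000);
            System.out.println("DataNode port is reachable");
        } catch (java.io.IOException e) {
            System.out.println("Could not reach DataNode port: " + e.getMessage());
        } finally {
            s.close();
        }
    }
}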

>Using my java program remotely, it simply doesn't work.  All I can think of
>is that there is some property on the Java side (in Windows 7) that is
>telling Hadoop (in VMware Linux) to do the block replication differently
>than what it does when the operation is run locally via the command line.
I would be very surprised if this were the issue.

Hope this helps,
Ravi.

On Sun, May 20, 2012 at 9:40 AM, Todd McFarland <[EMAIL PROTECTED]> wrote:

> Thanks for the links.  The behavior is as the links describe but bottom
> line it works fine if I'm copying these files on the Linux VMWare instance
> via the command line.
>
> Using my java program remotely, it simply doesn't work.  All I can think of
> is that there is some property on the Java side (in Windows 7) that is
> telling Hadoop (in VMware Linux) to do the block replication differently
> than what it does when the operation is run locally via the command line.
>
> This is a frustrating problem.  I'm sure it's a 10-second fix if I can find
> the right property to set in the Configuration class.
> This is what I have loaded into the Configuration class so far:
>
> config.addResource(new Path("c:/_bigdata/client_libs/core-site.xml"));
> config.addResource(new Path("c:/_bigdata/client_libs/hdfs-site.xml"));
> config.addResource(new Path("c:/_bigdata/client_libs/mapred-site.xml"));
> //config.set("dfs.replication", "1");
> //config.set("dfs.datanode.address", "192.168.60.128:50010");
>
> Setting "dfs.replication=1" is the default setting from hdfs-site.xml so
> that didn't do anything.  I tried to override the dfs.datanode.address in
> case "127.0.0.1:50010" was the issue but it gets overridden on the Linux
> end apparently.
>
> How do I override 127.0.0.1 to localhost?  What config file?
>
> -------------------------------------------------------------------------
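
For reference, a minimal end-to-end write from the remote client, using the
same Configuration resources as in the snippet above, would look roughly like
this; the target path and file contents are placeholders, and the replication
error would typically surface at close():

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsWrite {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Same client-side resources as in the snippet quoted above.
        config.addResource(new Path("c:/_bigdata/client_libs/core-site.xml"));
        config.addResource(new Path("c:/_bigdata/client_libs/hdfs-site.xml"));

        FileSystem fs = FileSystem.get(config);
        Path target = new Path("/user/cloudera/testdir/file1.txt");
        FSDataOutputStream out = fs.create(target);
        out.write("test data".getBytes("UTF-8"));
        // The "could only be replicated to 0 nodes" error usually shows up here,
        // when the client flushes/closes the write pipeline to the DataNode.
        out.close();
        fs.close();
    }
}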
>
>
> On Sat, May 19, 2012 at 2:00 PM, samir das mohapatra <
> [EMAIL PROTECTED]> wrote:
>
> > Hi
> >  This could be due to one of the following reasons:
> >
> > 1) The *NameNode <http://wiki.apache.org/hadoop/NameNode>* does not have
> > any available DataNodes
> >  2) The NameNode is not able to start properly
> >  3) Otherwise, some IP issue.
> >    Note: Please mention localhost instead of 127.0.0.1 (if it is
> > local)
> >
> >   Follow URL:
> >
> >
> http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
> >
> >
> > Thanks
> >  samir
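
To rule out cause (1) above without guessing, you can ask the NameNode how
many DataNodes it currently knows about, either with hadoop dfsadmin -report
on the VM or, from Java, with something along these lines (assuming
DistributedFileSystem#getDataNodeStats is available in this Hadoop version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class LiveDataNodeCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumes the loaded *-site.xml files (or fs.default.name) point at the VM's NameNode.
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
            System.out.println("DataNodes known to the NameNode: " + nodes.length);
        } else {
            System.out.println("Not talking to HDFS: " + fs.getUri());
        }
        fs.close();
    }
}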
> >
> >
> > On Sat, May 19, 2012 at 8:59 PM, Todd McFarland <[EMAIL PROTECTED]
> > >wrote:
> >
> > > Hi folks,
> > >
> > > (Resending to this group, sent to common-dev before, pretty sure that's
> > for
> > > Hadoop internal development - sorry for that..)
> > >
> > > I'm pretty stuck here.  I've been researching for hours and I haven't
> > made
> > > any forward progress on this one.
> > >
> > > I have a VMware installation of Cloudera Hadoop 0.20.  The following
> > > commands to create a directory and copy a file from the shared folder *work
> > > fine*, so I'm confident everything is set up correctly:
> > >
> > > [cloudera@localhost bin]$ hadoop fs -mkdir /user/cloudera/testdir
> > > [cloudera@localhost bin]$ hadoop fs -put /mnt/hgfs/shared_folder/file1.txt
> > > /user/cloudera/testdir/file1.txt