Accumulo >> mail # user >> Accumulo with NameNode HA: UnknownHostException for dfs.nameservices
Re: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices
This discussion seems to provide some insight:

https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/I_OmKdZOjVE

Please let us know if you get it working; I would like to test this for the
1.6.0 release.

-Eric
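
[Editorial note: the UnknownHostException on a nameservice name typically means the HDFS client fell back to treating "mycluster" as a plain hostname because the client-side HA properties were not visible on the classpath of the process. A minimal hdfs-site.xml sketch of the usual client-side HA settings follows; the hostnames nn1.example.com/nn2.example.com are placeholders, not values from this thread.]

```xml
<!-- Logical nameservice; must match the authority in fs.defaultFS (hdfs://mycluster) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

<!-- Logical NameNode IDs within the nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

<!-- RPC address for each logical NameNode (placeholder hosts) -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2.example.com:8020</value>
</property>

<!-- Without this property, clients resolve "mycluster" as a hostname
     and fail with java.net.UnknownHostException -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If these properties are present in the cluster's hdfs-site.xml but the error still occurs, the next thing to check is whether that file is actually on the classpath of the process that throws.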

On Tue, Sep 3, 2013 at 12:06 PM, Smith, Joshua D.
<[EMAIL PROTECTED]> wrote:

> All-
>
> I'm installing Accumulo 1.5 on CDH 4.3. I'm running Hadoop 2.0 (YARN) with
> High Availability (HA) for the NameNode. When I try to initialize Accumulo
> I get the following error message:
>
> >sudo -u accumulo accumulo init
>
> FATAL: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
> java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2324)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
>         at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:550)
>         at org.apache.accumulo.server.util.Initialize.main(Initialize.java:485)
> …
>
> "mycluster" is from my hdfs-site.xml and is part of the HA configuration:
>
> <property>
>   <name>dfs.nameservices</name>
>   <value>mycluster</value>
> </property>
>
> It's not a hostname, and I'm not sure why Accumulo would try to resolve it
> as if it were one.
>
> Any idea why I would get this error, or why Accumulo would have trouble
> running on Hadoop 2.0 with HA?
>
> Thanks,
> Josh
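
[Editorial note: on the Accumulo 1.5 side, the instance URI in accumulo-site.xml would point at the logical nameservice rather than a single NameNode host. A hedged sketch, assuming the nameservice name "mycluster" from the thread; the property name instance.dfs.uri is the Accumulo 1.5-era setting.]

```xml
<!-- Point Accumulo at the HA nameservice, not at one NameNode host -->
<property>
  <name>instance.dfs.uri</name>
  <value>hdfs://mycluster</value>
</property>
```

For this URI to resolve, the Hadoop configuration directory containing the HA-enabled hdfs-site.xml must also be on Accumulo's classpath (via the general.classpaths property or the environment set up in accumulo-env.sh); otherwise `accumulo init` sees only the bare nameservice name and fails exactly as shown in the trace above.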
>