Re: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices
Eric Newton 2013-09-03, 17:53
Try:

<property>
     <name>instance.dfs.uri</name>
     <value>hdfs://namenodehostname.domain:9000</value>
</property>

Use the port number for your configuration, of course.

-Eric
On Tue, Sep 3, 2013 at 1:33 PM, Smith, Joshua D. <[EMAIL PROTECTED]> wrote:

>  I tried adding the following property to the accumulo-site.xml file, but
> got the same results.
>
> <property>
>      <name>instance.dfs.uri</name>
>      <value>namenodehostname.domain</value>
> </property>
>
> Josh
>
> *From:* Eric Newton [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, September 03, 2013 1:22 PM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Accumulo with NameNode HA: UnknownHostException for
> dfs.nameservices
>
> Accumulo generally uses the settings in the HDFS configuration files, via
> FileSystem.get(new Configuration()).
>
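> (As a rough sketch of what that call does; the class name here is
> invented for illustration. With core-site.xml and hdfs-site.xml on the
> classpath, FileSystem.get(new Configuration()) resolves the default
> filesystem from those files.)
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> public class ShowDefaultFs {
>     public static void main(String[] args) throws Exception {
>         // Loads fs.defaultFS plus any dfs.nameservices / HA client
>         // settings from core-site.xml and hdfs-site.xml on the classpath.
>         Configuration conf = new Configuration();
>         FileSystem fs = FileSystem.get(conf);
>         // With HA configured, this prints the nameservice URI, e.g. hdfs://mycluster
>         System.out.println(fs.getUri());
>     }
> }
>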
> In 1.5 you can configure instance.dfs.uri to specify a NameNode URI.
>
> In 1.6 you can set instance.volumes to multiple URIs, but this is not the
> same as HA.
>
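> (For illustration, instance.volumes takes a comma-separated list of
> filesystem URIs; the hostnames and paths below are placeholders.)
>
> <property>
>      <name>instance.volumes</name>
>      <value>hdfs://nn1.example.com:9000/accumulo,hdfs://nn2.example.com:9000/accumulo</value>
> </property>
>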
> -Eric
>
> On Tue, Sep 3, 2013 at 12:43 PM, Smith, Joshua D. <[EMAIL PROTECTED]>
> wrote:
>
> Eric-
>
> The link you sent is directly relevant, but unfortunately it didn’t
> resolve the issue.
>
> I already had the following property set:
>
> <property>
>      <name>dfs.client.failover.proxy.provider.mycluster</name>
>      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
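>
> (For reference, that proxy provider is normally one piece of the HA
> client configuration in hdfs-site.xml; a minimal sketch, with
> placeholder hostnames:)
>
> <property>
>      <name>dfs.nameservices</name>
>      <value>mycluster</value>
> </property>
> <property>
>      <name>dfs.ha.namenodes.mycluster</name>
>      <value>nn1,nn2</value>
> </property>
> <property>
>      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>      <value>nn1.example.com:8020</value>
> </property>
> <property>
>      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>      <value>nn2.example.com:8020</value>
> </property>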
>
> The discussion at the link you sent was in a CDH forum, and it looked
> like HA required some changes before the hdfs command could resolve the
> active NameNode. That leads me to two questions:
>
> 1) Does Accumulo know how to resolve the active NameNode?
>
> 2) If it doesn’t, is there a way to explicitly specify it, like the
> user did for the hdfs command, as a workaround? (See the example below.)
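>
> (For illustration, that workaround presumably amounts to passing an
> explicit NameNode URI to the hdfs command instead of the nameservice ID;
> hostname and port here are placeholders:)
>
> hdfs dfs -ls hdfs://nn1.example.com:8020/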
>
> Josh
>
> *From:* Eric Newton [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, September 03, 2013 12:24 PM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Accumulo with NameNode HA: UnknownHostException for
> dfs.nameservices
>
> This discussion seems to provide some insight:
>
> https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/I_OmKdZOjVE
>
> Please let us know if you get it working; I would like to test this for
> the 1.6.0 release.
>
> -Eric
>
> On Tue, Sep 3, 2013 at 12:06 PM, Smith, Joshua D. <[EMAIL PROTECTED]>
> wrote:
>
> All-
>
> I’m installing Accumulo 1.5 on CDH 4.3. I’m running Hadoop 2.0 (YARN) with
> High Availability (HA) for the NameNode. When I try to initialize Accumulo
> I get the following error message:
>
> >sudo -u accumulo accumulo init
>
> FATAL: java.lang.IllegalArgumentException: java.net.UnknownHostException:
> mycluster
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
> mycluster
> at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
> at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2324)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)