HBase >> mail # user >> hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.


Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Update:
Henry tried my patch attached to HBASE-10029.

From the master log, it seems my patch worked.

I will get back to this thread after further testing / code review.

Cheers

On Nov 25, 2013, at 6:05 PM, Henry Hung <[EMAIL PROTECTED]> wrote:

> @Ted:
>
> I created the JIRA; is the information sufficient?
> https://issues.apache.org/jira/browse/HBASE-10029
>
> Best regards,
> Henry
>
> -----Original Message-----
> From: Ted Yu [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, November 26, 2013 9:30 AM
> To: [EMAIL PROTECTED]
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
>
> Henry:
> Thanks for the additional information.
>
> Looks like an HA namenode with QJM is not covered by the current code.
>
> Mind filing a JIRA with a summary of this thread?
>
> Cheers
>
>
> On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung <[EMAIL PROTECTED]> wrote:
>
>> @Ted
>> Yes, I use the hadoop-hdfs-2.2.0.jar.
>>
>> BTW, how are you certain that the namenode class is
>> ClientNamenodeProtocolTranslatorPB?
>>
>> From NameNodeProxies, I can only assume that
>> ClientNamenodeProtocolTranslatorPB is used only when connecting to a
>> single (non-HA) Hadoop namenode.
>>
>>  public static <T> ProxyAndInfo<T> createNonHAProxy(
>>      Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
>>      UserGroupInformation ugi, boolean withRetries) throws IOException {
>>    Text dtService = SecurityUtil.buildTokenService(nnAddr);
>>
>>    T proxy;
>>    if (xface == ClientProtocol.class) {
>>      proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
>>          withRetries);
>>
>>
>> But I'm using an HA configuration with QJM, so my guess is that
>> createProxy will go to the HA case, because I provide
>> failoverProxyProviderClass with
>> "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
>>
>>  public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
>>      URI nameNodeUri, Class<T> xface) throws IOException {
>>    Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
>>        getFailoverProxyProviderClass(conf, nameNodeUri, xface);
>>
>>    if (failoverProxyProviderClass == null) {
>>      // Non-HA case
>>      return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri), xface,
>>          UserGroupInformation.getCurrentUser(), true);
>>    } else {
>>      // HA case
>>      FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
>>          .createFailoverProxyProvider(conf, failoverProxyProviderClass,
>>              xface, nameNodeUri);
>>      Conf config = new Conf(conf);
>>      T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
>>          RetryPolicies.failoverOnNetworkException(
>>              RetryPolicies.TRY_ONCE_THEN_FAIL,
>>              config.maxFailoverAttempts, config.failoverSleepBaseMillis,
>>              config.failoverSleepMaxMillis));
>>
>>      Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
>>      return new ProxyAndInfo<T>(proxy, dtService);
>>    }
>>  }
>>
>> Here is the snippet of my hdfs-site.xml:
>>
>>  <property>
>>    <name>dfs.nameservices</name>
>>    <value>hadoopdev</value>
>>  </property>
>>  <property>
>>    <name>dfs.ha.namenodes.hadoopdev</name>
>>    <value>nn1,nn2</value>
>>  </property>
>>  <property>
>>    <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
>>    <value>fphd9.ctpilot1.com:9000</value>
>>  </property>
>>  <property>
>>    <name>dfs.namenode.http-address.hadoopdev.nn1</name>
>>    <value>fphd9.ctpilot1.com:50070</value>
>>  </property>
>>  <property>
>>    <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
>>    <value>fphd10.ctpilot1.com:9000</value>
>>  </property>
>>  <property>
>>    <name>dfs.namenode.http-address.hadoopdev.nn2</name>
>>    <value>fphd10.ctpilot1.com:50070</value>
>>  </property>
>>  <property>
>>    <name>dfs.namenode.shared.edits.dir</name>
>>    <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
>>  </property>
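The HA-vs-non-HA branch in the quoted createProxy can be sketched in isolation: the HA path is taken exactly when a failover proxy provider class is configured for the logical nameservice (the key HDFS looks up is dfs.client.failover.proxy.provider.&lt;nameservice&gt;). The sketch below is illustrative only, not HDFS source; the class name ProxyPathSketch and the method proxyPath are invented, and a plain Map stands in for Hadoop's Configuration.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not HDFS source): mimics the null check on
// failoverProxyProviderClass in NameNodeProxies.createProxy. The real
// code resolves the class named by the config key; here we only test
// whether the key is present for the logical nameservice.
public class ProxyPathSketch {
    static final String PROVIDER_KEY_PREFIX = "dfs.client.failover.proxy.provider.";

    // Returns "HA" when a failover proxy provider is configured for the
    // nameservice, otherwise "non-HA".
    public static String proxyPath(Map<String, String> conf, String nameservice) {
        return conf.containsKey(PROVIDER_KEY_PREFIX + nameservice) ? "HA" : "non-HA";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Mirrors Henry's setup: provider configured for nameservice "hadoopdev".
        conf.put(PROVIDER_KEY_PREFIX + "hadoopdev",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.out.println(proxyPath(conf, "hadoopdev"));  // HA
        System.out.println(proxyPath(conf, "other"));      // non-HA
    }
}
```

Under this reading, Henry's client with dfs.client.failover.proxy.provider.hadoopdev set would always take the HA branch, so the proxy handed back is a RetryProxy wrapper rather than a ClientNamenodeProtocolTranslatorPB, which is consistent with RPC.stopProxy complaining about a non-proxy object.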