HBase >> mail # dev >> Exception while using HBase trunk with hadoop - 2.0.3


Re: Exception while using HBase trunk with hadoop - 2.0.3
Just to add on:

As I said, I have two setups.  I verified the HBase lib dir in these two
setups.  The one compiled with profile 2.0 has the hadoop 2.0 jars, and the
one compiled with profile 1.0 has the hadoop 1.0 jars.

I used two ways of creating and compiling this package:
mvn clean install -Dhadoop.profile=2.0 -DskipTests assembly:assembly
mvn -X -DskipTests help:active-profiles package assembly:assembly -Prelease
-Dhadoop.profile=2.0

Neither of them helped.
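For anyone who wants to repeat the same check on their own build, this is roughly what I run against the lib dir. The default path below is only an example from my setup; point it at wherever your assembly actually lands:

```shell
#!/bin/sh
# List the hadoop-*.jar files that ended up in an HBase lib directory,
# to confirm which hadoop version the build actually bundled.
# The default path is only an example; pass your own lib dir as $1.
list_hadoop_jars() {
    dir="$1"
    found=0
    for jar in "$dir"/hadoop-*.jar; do
        [ -e "$jar" ] || continue   # glob matched nothing
        found=1
        basename "$jar"             # e.g. hadoop-common-2.0.3-alpha.jar
    done
    [ "$found" -eq 1 ] || echo "no hadoop jars found in $dir"
}

list_hadoop_jars "${1:-./hbase-assembly/target/hbase/lib}"
```

If the 2.0-profile build still shows 1.0 jars here, the assembly picked up the wrong profile.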

From the logs I can see that the FileSystem.get() works fine.
But when the DFSClient in the master and the NN talk to each other, the master
sends hostname/ip, whereas the NN replies with hostname:port.
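To rule out a plain name-resolution mismatch, I sanity-check how the two names resolve on the box. These are standard Linux commands (assuming getent is available), nothing HBase-specific:

```shell
#!/bin/sh
# Compare how "localhost" (the NN address in the config) and the
# machine's own hostname (what the master reports, "ram" here) resolve.
getent hosts localhost                        # should show a loopback address
hostname                                      # the name the master identifies as
getent hosts "$(hostname)" \
    || echo "no hosts entry for $(hostname)"  # a candidate for the mismatch
```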

Contents of core-site.xml
===================
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Contents of hdfs-site.xml
====================
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/ram/datadir</value>
  </property>
</configuration>
Contents of hbase-site.xml
======================
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>

This is just a single-node machine.  Also, I am trying out the HBase trunk
with hadoop 2.0 for the first time.

Regards
Ram
On Thu, Feb 21, 2013 at 8:54 PM, ramkrishna vasudevan <
[EMAIL PROTECTED]> wrote:

> Hi Devs
>
> I tried to run HBase current trunk snapshot with Hadoop 2.0.3 alpha.
>
> I got the following exception
> java.io.IOException: Failed on local exception:
> com.google.protobuf.InvalidProtocolBufferException: Message missing
> required fields: callId, status; Host Details : local host is:
> "ram/10.239.47.144"; destination host is: "localhost":9000;
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:760)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1168)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>     at $Proxy10.setSafeMode(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>     at $Proxy10.setSafeMode(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:514)
>     at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:1896)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:660)
>     at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:261)
>     at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:650)
>     at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:389)
>     at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:131)
>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:654)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:476)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Message
> missing required fields: callId, status
>     at com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81)
>     at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.buildParsed(RpcPayloadHeaderProtos.java:1094)