Pig >> mail # user >> Unable to typecast fields loaded from HBase


Re: Unable to typecast fields loaded from HBase
Looks like an issue either with your HBase configs that specify the ZK
quorum being wrong, or with ZK itself not responding. If you keep having
problems though, I'm sure the hbase users list would be able to help out
pretty quickly. I'd start by checking that the quorum is properly configured
and that you can connect to it manually from a node.
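One way to act on that advice, sketched here for a CDH-style layout (the config path and the `zk-node*` host names are assumptions; substitute your own), is to read the quorum out of hbase-site.xml and probe each server with ZooKeeper's four-letter `ruok` command:

```shell
# Where CDH usually puts the HBase client config; adjust for your install.
CONF=/etc/hbase/conf/hbase-site.xml

# Pull the quorum host list out of hbase-site.xml (assumes the <value>
# element sits on the line after the matching <name> element).
quorum=$(awk '/hbase.zookeeper.quorum/{getline; gsub(/<[^>]*>|[ \t]/,""); print}' "$CONF")
echo "configured quorum: $quorum"

# A healthy ZooKeeper server answers "imok" to the "ruok" probe.
for host in $(echo "$quorum" | tr ',' ' '); do
  printf '%s: ' "$host"
  echo ruok | nc -w 2 "$host" 2181 || echo "no response"
done
```

If a server responds, `hbase zkcli -server zk-node1:2181` opens an interactive session where you can inspect the znodes (e.g. `ls /hbase`) that the client is failing on.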
On Thu, Mar 28, 2013 at 3:25 AM, Praveen Bysani <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I set up all the nodes using Cloudera Manager, so I assumed all the
> classpaths and the environment were handled by the framework (Cloudera
> distro). Isn't that so? However, after trying to execute on each node, I
> found that one of my nodes has problems connecting to HBase. The IP
> address of this node was recently changed from what it was during
> installation. I updated the /etc/hosts file on all nodes and restarted all
> Hadoop services. The services tab in Cloudera Manager shows good health for
> all services, which made me believe everything was alright; apparently not so.
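Since the note above mentions a changed IP and hand-edited /etc/hosts files, a quick sanity check (the host name below is a placeholder for the renamed node) is to confirm that every node resolves that host consistently, and that no copy of /etc/hosts still carries the old address:

```shell
# Run on each node; node5.example.com is a placeholder host name.
getent hosts node5.example.com   # resolves via /etc/hosts first, then DNS
hostname -f                      # the FQDN this node reports for itself

# If the old address still shows up anywhere, grep for the host:
grep -n 'node5' /etc/hosts
```

Any node where `getent` returns the old address (or nothing) is a candidate for the ConnectionLoss errors below, even if the service health checks look green.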
>
> Trying to access hbase on that particular node gives,
>
> 13/03/28 16:28:14 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 3 retries
> 13/03/28 16:28:14 WARN zookeeper.ZKUtil: hconnection Unable to set watcher
> on znode /hbase/master
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>         at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>         at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
>         at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:176)
>         at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:418)
>         at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:589)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:648)
>         at
> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:121)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at
> org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
>         at
> org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
>         at
> org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
>         at
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
>         at
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
>         at
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182)
>         at
> org.jruby.java.proxies.ConcreteJavaProxy$2.call(ConcreteJavaProxy.java:47)
>         at
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
>         at
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
>
> I understand this is no longer a Pig issue, but it would be great if
> someone could give some pointers on configuring HBase on the node that
> has a new IP address.
>
> On 28 March 2013 12:54, Bill Graham <[EMAIL PROTECTED]> wrote:
>
>> Your initial exception shows ClassNotFoundExceptions for HBase. Are you
>> adding HBase to PIG_CLASSPATH on the client, or do you have it installed on
>> your Hadoop nodes? In the case of the latter, maybe some nodes are
>> configured differently from the others?
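For the client-side half of that question, one common approach (the install path below is an assumption; on CDH, running `hbase classpath` prints the authoritative jar list and is usually safer) is to put the HBase jars on PIG_CLASSPATH before launching Pig:

```shell
# Hypothetical HBase install location; `$(hbase classpath)` is usually safer.
HBASE_LIB=/usr/lib/hbase/lib

# Join every jar in that directory into a single colon-separated entry.
jars=$(echo "$HBASE_LIB"/*.jar | tr ' ' ':')
export PIG_CLASSPATH="$jars:${PIG_CLASSPATH:-}"
echo "$PIG_CLASSPATH"
```

If HBase is instead installed on the Hadoop nodes themselves, the same jars need to be present (and at the same version) on every node, which is one way nodes end up "different" from each other.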

*Note that I'm no longer using my Yahoo! email address. Please email me at
[EMAIL PROTECTED] going forward.*