Pig >> mail # user >> Unable to typecast fields loaded from HBase


Praveen Bysani 2013-03-27, 08:29
Praveen Bysani 2013-03-28, 04:20
Bill Graham 2013-03-28, 04:54
Re: Unable to typecast fields loaded from HBase
Hi,

I set up all the nodes using Cloudera Manager, so I assume all the
classpaths and the environment are handled by the framework (the Cloudera
distro), aren't they? However, after trying to execute on each node, I
found that one of my nodes has problems connecting to HBase. The IP
address of this node was recently changed from what it was during
installation. I updated the /etc/hosts file on all nodes and restarted all
Hadoop services. The Services tab in Cloudera Manager shows good health for
all services, which made me believe everything was alright; apparently it is not.
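
As a sanity check, something like the following can show what a JVM on that
node actually resolves hostnames to after the /etc/hosts change. This is only
a rough sketch, and the hostname is a placeholder, not one of the real node
names:

import java.net.InetAddress;

// Rough sketch: print what this JVM resolves a given hostname (and itself) to.
// "hbase-node-1" is a placeholder; pass the real node name as an argument.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "hbase-node-1";
        InetAddress byName = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + byName.getHostAddress());

        InetAddress local = InetAddress.getLocalHost();
        System.out.println("local host: " + local.getHostName()
                + " -> " + local.getHostAddress()
                + " (canonical: " + local.getCanonicalHostName() + ")");
    }
}

If the old IP still shows up here, the JVM is not seeing the updated mapping.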

Trying to access HBase on that particular node gives:

13/03/28 16:28:14 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
13/03/28 16:28:14 WARN zookeeper.ZKUtil: hconnection Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:176)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:418)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:589)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:648)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:121)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
        at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
        at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182)
        at org.jruby.java.proxies.ConcreteJavaProxy$2.call(ConcreteJavaProxy.java:47)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
        at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
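
Since the failure is in ZooKeeper.exists() on /hbase/master, a standalone
check against ZooKeeper from that node can separate a ZooKeeper connectivity
problem from an HBase configuration problem. A minimal sketch, assuming the
quorum host and client port from hbase-site.xml (the connect string below is
a placeholder):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Minimal sketch: repeat the failing exists() call directly against ZooKeeper.
// "zk-host-1:2181" is a placeholder; use the quorum from hbase-site.xml on that node.
public class ZkMasterCheck {
    public static void main(String[] args) throws Exception {
        String connect = args.length > 0 ? args[0] : "zk-host-1:2181";
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connect, 30000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        if (!connected.await(10, TimeUnit.SECONDS)) {
            System.err.println("could not connect to " + connect + " within 10s");
            zk.close();
            return;
        }
        Stat stat = zk.exists("/hbase/master", false);
        System.out.println(stat == null ? "/hbase/master znode is absent"
                : "/hbase/master znode exists");
        zk.close();
    }
}

If this cannot connect, the problem is reaching the ZooKeeper quorum from that
node (firewall, DNS, or a quorum member still known by its old address) rather
than anything in Pig or HBase itself.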

I understand this is no longer a Pig issue, but it would be great if
someone could give me some pointers on reconfiguring HBase for the node
that has a new IP address.
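
If ZooKeeper is reachable, a small client-side check that sets the quorum
explicitly and asks whether an HBase master is up can confirm whether the
client configuration on that node points at the right place. Again just a
sketch, with placeholder hostnames:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch: point the client explicitly at the ZooKeeper quorum and ask HBase
// whether a master is reachable. Hostnames are placeholders.
public class HBaseAvailableCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk-host-1,zk-host-2,zk-host-3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Throws MasterNotRunningException / ZooKeeperConnectionException on failure.
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase master is reachable from this node");
    }
}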

On 28 March 2013 12:54, Bill Graham <[EMAIL PROTECTED]> wrote:

> Your initial exception shows ClassNotFoundExceptions for HBase. Are you
> adding HBase to PIG_CLASSPATH on the client or do you have it installed on
> your Hadoop nodes? In the case of the latter, maybe some nodes are
> different than others?
>
>
> On Wed, Mar 27, 2013 at 9:20 PM, Praveen Bysani <[EMAIL PROTECTED]> wrote:
>
> > This is not about casting types. The scripts sometimes work without any
> > issue and sometimes fail with the error I specified before. I have no clue
> > what the issue might be. Network, probably? I run my cluster on VPS
> > machines, running CDH 4.2 installed using Cloudera Manager. I am
> > running Pig version 0.10.1, which is installed as a parcel.
> >
> > On 27 March 2013 16:29, Praveen Bysani <[EMAIL PROTECTED]> wrote:
> >

Regards,
Praveen Bysani
http://www.praveenbysani.com
Bill Graham 2013-03-28, 14:48
Praveen Bysani 2013-04-01, 02:42