Pig >> mail # user >> [pig-0.10.0 failed to load hbase table] Re: pig-0.10.0: TestJobSubmission failed with: Internal error creating job configuration.


Re: [pig-0.10.0 failed to load hbase table] pig-0.10.0: TestJobSubmission failed with: Internal error creating job configuration.
As far as I know, this is a known issue with Pig 0.10.0, so I usually
recommend the workaround that I suggested to you.

I haven't verified this myself, but I believe that the recent change to
HBaseStorage (PIG-2821 <https://issues.apache.org/jira/browse/PIG-2821>)
fixes this problem (PIG-2822 <https://issues.apache.org/jira/browse/PIG-2822>),
i.e. hbase-site.xml not being packed into job.xml.
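For context, the setting that fails to travel into job.xml is the ZooKeeper
quorum normally carried in hbase-site.xml; a minimal fragment (the host names
below are placeholders, not values from this thread) looks like:

```xml
<!-- Minimal hbase-site.xml fragment; host names are placeholders. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
```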

Regarding Pig 0.9.1, I haven't used that version, so I can't answer your
question.

Thanks,
Cheolsoo
On Thu, Sep 13, 2012 at 1:10 AM, lulynn_2008 <[EMAIL PROTECTED]> wrote:

> Hi Cheolsoo,
>
> Yes, you are right. I am running the ZK quorum on remote machines, and your
> way works.
> --  But I have put hbase-site.xml (which includes hbase.zookeeper.quorum) on
> the pig classpath, yet it seems pig did not store these hbase configurations
> in the job. Is this by design in pig-0.10.0?

> --  It seems pig only puts hadoop configurations (hadoop-site.xml,
> core-site.xml...) into job.xml during job creation. Is this correct?
> --  I remember that pig-0.9.1 stored hbase configurations in job.xml.
> Did I miss anything?
>
> Thanks.
>
> At 2012-09-13 02:24:18,"Cheolsoo Park" <[EMAIL PROTECTED]> wrote:
> >Hi,
> >
> >2012-09-12 00:30:07,198 INFO org.apache.zookeeper.ClientCnxn: Opening
> >> socket connection to server /127.0.0.1:2181
> >
> >
> >This message seems wrong. I assume that you're running the ZK quorum on
> >remote machines, but it is trying to connect to localhost. Can you try
> >setting "hbase.zookeeper.quorum" in "pig.properties" as follows:
> >"hbase.zookeeper.quorum=<your ZK quorum host:port>"?
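A minimal pig.properties sketch of that suggestion (the host names and port
below are placeholders for your actual quorum, not values from this thread):

```properties
# Placeholder quorum; replace with your actual ZooKeeper hosts.
hbase.zookeeper.quorum=zk1.example.com,zk2.example.com,zk3.example.com
# The client port is usually 2181; set it explicitly if yours differs.
hbase.zookeeper.property.clientPort=2181
```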
> >
> >Thanks,
> >Cheolsoo
> >
> >On Wed, Sep 12, 2012 at 12:39 AM, lulynn_2008 <[EMAIL PROTECTED]>
> wrote:
> >
> >> Hi Cheolsoo,
> >> TestJobSubmission and TestHBaseStorage currently pass. But if I run the
> >> following script against an hbase-0.94/zookeeper-3.4.3 cluster, the same
> >> issue happens:
> >>
> >> 2012-09-12 00:30:07,198 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
> >> 2012-09-12 00:30:07,199 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is [EMAIL PROTECTED]
> >> 2012-09-12 00:30:07,212 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
> >> 2012-09-12 00:30:07,213 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> >> 2012-09-12 00:30:07,221 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
> >> java.net.ConnectException: Connection refused
> >>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:610)
> >>         at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
> >>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
> >>
> >> Reproduce steps:
> >>
> >> Create hbase table:
> >> ./hbase shell:
> >> create 'employees', 'SN', 'department', 'address'
> >> put 'employees', 'Hong', 'address:country', 'China'
> >>
> >> Run following pig commands:
> >> ./pig
> >> A = load 'hbase://employees' using org.apache.pig.backend.hadoop.hbase.HBaseStorage('address:country', '-loadKey true') as (SN:bytearray, country:bytearray);
> >> B = filter A by SN == 'Hong';
> >> dump B;
> >>
> >> At 2012-08-22 05:51:38,"Cheolsoo Park" <[EMAIL PROTECTED]> wrote:
> >> >OK, I got TestJobSubmission passing. Please apply the following diff to