The Hadoop MR job is being started from one of the "slaves" on the cluster
(i.e. where a HBase RegionServer is running, along with a TaskTracker). The
hbase-site.xml file on this machine points to the correct ZK quorum.
One interesting thing I've noticed is that the correct ZK server is used by
the following code (which is in the same MR job, just a few lines earlier):
HBaseAdmin admin = new HBaseAdmin(conf);
Could the issue be with HTable or HFileOutputFormat (i.e., perhaps one of
them hard-codes a config or config path)?
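One thing I'm now suspecting (not yet verified on my cluster): a bare new
Configuration() never loads hbase-site.xml at all, whereas building the conf
with HBaseConfiguration.create() layers hbase-default.xml and hbase-site.xml
from the classpath on top of the Hadoop defaults. A minimal sketch of the
change I'm considering, assuming hbase-site.xml is on the client classpath
(the table name here is just a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadJobSetup {
    public static void main(String[] args) throws Exception {
        String tableName = "my_table"; // hypothetical table name, for illustration only

        // HBaseConfiguration.create() adds hbase-default.xml and
        // hbase-site.xml as resources, so the job should see the real
        // ZK quorum instead of falling back to localhost.
        Configuration conf = HBaseConfiguration.create();

        Job job = new Job(conf);
        HTable hTable = new HTable(conf, tableName);
        HFileOutputFormat.configureIncrementalLoad(job, hTable);
    }
}
```

If that theory is right, it would also explain why HBaseAdmin works: it may
be wrapping the conf it is given with the HBase resources itself, while
HTable uses the conf exactly as passed.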
On Mon, Sep 23, 2013 at 12:53 PM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
> From where are you running your job? From which machine? The client
> machine from which you are kicking off this job should have the
> hbase-site.xml with the correct ZK info in it. It seems that your
> client/job is having an issue picking up the right ZK, rather than the
> services running on your non-local cluster.
> On Mon, Sep 23, 2013 at 12:09 PM, Dolan Antenucci <[EMAIL PROTECTED]> wrote:
> > I'm having an issue where my Hadoop MR job for bulk loading data into
> > HBase is not reading my hbase-site.xml file -- thus it tries to connect to
> > Zookeeper on localhost. This is on a cluster using CDH4 on Ubuntu 12.04.
> > Here's the code where it attempts to connect to local zookeeper:
> > Configuration conf = new Configuration(); // from
> > org.apache.hadoop.conf
> > Job job = new Job(conf);
> > HTable hTable = new HTable(conf, tableName);
> > HFileOutputFormat.configureIncrementalLoad(job, hTable);
> > As suggested by another thread I came across, I've added
> > to my HADOOP_CLASSPATH (in /etc/hadoop/conf/hadoop-env.sh), restarted
> > services, but no improvement. Here is the full classpath:
> > Any thoughts on what the problem could be?