HBase >> mail # user >> Loading hbase-site.xml settings from Hadoop MR job


Re: Loading hbase-site.xml settings from Hadoop MR job
Hi Renato,

Can you clarify your recommendation?  Currently I've added the directory
where my hbase-site.xml file lives (/etc/hbase/conf/) to my Hadoop
classpath (as described above). Note: from the client machine (where I'm
starting my MR job), I generated the classpath shown above by running "hadoop
classpath".  Also worth noting that the /etc/hbase/conf/hbase-site.xml file
on this client machine points to the correct ZK quorum.

Thanks
On Mon, Sep 23, 2013 at 1:06 PM, Renato Marroquín Mogrovejo <[EMAIL PROTECTED]> wrote:

> Maybe you should put this configuration on your classpath, so it
> can be reached from your client's environment.
>
>
> 2013/9/23 Shahab Yunus <[EMAIL PROTECTED]>
>
> > From where are you running your job? From which machine? The client
> > machine from which you are kicking off this job should have the
> > hbase-site.xml with the correct ZK info in it. It seems that your
> > client/job is having an issue picking up the right ZK, rather than the
> > services running on your non-local cluster.
> >
> > Regards,
> > Shahab
> >
> >
> > On Mon, Sep 23, 2013 at 12:09 PM, Dolan Antenucci <[EMAIL PROTECTED]> wrote:
> >
> > > I'm having an issue where my Hadoop MR job for bulk loading data into
> > > HBase is not reading my hbase-site.xml file -- thus it tries to connect
> > > to Zookeeper on localhost.  This is on a cluster using CDH4 on Ubuntu
> > > 12.04.
> > >
> > > Here's the code where it attempts to connect to local zookeeper:
> > >     Configuration conf = new Configuration(); // from org.apache.hadoop.conf
> > >     Job job = new Job(conf);
> > >     HTable hTable = new HTable(conf, tableName);
> > >     HFileOutputFormat.configureIncrementalLoad(job, hTable);
> > >
> > > As suggested by another thread I came across, I've added "/etc/hbase/conf/"
> > > to my HADOOP_CLASSPATH (in /etc/hadoop/conf/hadoop-env.sh) and restarted
> > > the services, but with no improvement. Here is the full classpath:
> > > /usr/local/hadoop/lib/hadoop-lzo-0.4.17-SNAPSHOT.jar:/etc/hbase/conf/::/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
> > >
> > > Any thoughts on what the problem could be?
> > >
> >
>
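[Editor's note] A common cause of the symptom described above (not spelled out in this thread, so treat it as a hedged suggestion): a plain `new Configuration()` loads only the Hadoop *-site.xml files, never hbase-site.xml, so the ZK quorum falls back to localhost. The HBase client API of that era provides `HBaseConfiguration.create()`, which additionally reads hbase-default.xml and hbase-site.xml from the classpath. A minimal sketch of the job setup with that one-line change (the table name "myTable" is a placeholder, and this still assumes /etc/hbase/conf is on the client JVM's classpath, e.g. via HADOOP_CLASSPATH):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadJob {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() layers hbase-default.xml and
        // hbase-site.xml (found on the classpath) on top of the Hadoop
        // defaults, so the ZK quorum from /etc/hbase/conf is picked up
        // instead of the localhost default of a bare `new Configuration()`.
        Configuration conf = HBaseConfiguration.create();

        Job job = new Job(conf);                      // same setup as in the thread
        HTable hTable = new HTable(conf, "myTable");  // "myTable" is a placeholder
        HFileOutputFormat.configureIncrementalLoad(job, hTable);
    }
}
```

This sketch is cluster-dependent (it needs the HBase client jars and a reachable ZK quorum to actually run), so it is meant as a shape for the fix rather than a standalone program.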