Re: pipes(pydoop) and hbase classpath
Hi, I needed to add this as well:

<property>
  <name>hbase.mapred.tablecolumns</name>
  <value>col_fam:name</value>
</property>

-Håvard
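
Putting the pieces of this thread together, a minimal old-API
(org.apache.hadoop.hbase.mapred) job configuration looks roughly like the
sketch below. The table name is not set through a property here; as far as
I can tell, the old TableInputFormat takes it from the job's input path.

<!-- Sketch assembled from the properties mentioned in this thread. -->
<property>
  <!-- Old (mapred) API input format, as required by pipes. -->
  <name>mapred.input.format.class</name>
  <value>org.apache.hadoop.hbase.mapred.TableInputFormat</value>
</property>

<property>
  <!-- Let pipes use the Java-side record reader. -->
  <name>hadoop.pipes.java.recordreader</name>
  <value>true</value>
</property>

<property>
  <!-- Column list read by the old TableInputFormat in configure();
       leaving it unset matches the NullPointerException quoted below. -->
  <name>hbase.mapred.tablecolumns</name>
  <value>col_fam:name</value>
</property>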
On Wed, Aug 15, 2012 at 9:42 AM, Håvard Wahl Kongsgård
<[EMAIL PROTECTED]> wrote:
> Hi, my job config is
>
> <property>
> <name>mapred.input.format.class</name>
> <value>org.apache.hadoop.hbase.mapred.TableInputFormat</value>
> </property>
>
> <property>
>   <name>hadoop.pipes.java.recordreader</name>
>   <value>true</value>
> </property>
>
>
> Exception in thread "main" java.lang.RuntimeException: Error in
> configuring object
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>         at org.apache.hadoop.mapred.JobConf.getInputFormat(JobConf.java:596)
>         at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:977)
>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:969)
>         at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1248)
>         at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:248)
>         at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:479)
>         at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
>         ... 17 more
> Caused by: java.lang.NullPointerException
>         at org.apache.hadoop.hbase.mapred.TableInputFormat.configure(TableInputFormat.java:51)
>
>
> Should I include the column names? According to the API it's deprecated?
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapred/TableInputFormat.html
>
>
> -Håvard
>
>
> On Tue, Aug 14, 2012 at 11:17 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> Per:
>>
>>> org.apache.hadoop.hbase.mapreduce.TableInputFormat not
>>> org.apache.hadoop.mapred.InputFormat
>>
>> Pydoop seems to expect an old-API class for InputFormat etc., but
>> you've passed in the newer one. I can't tell which part of your code
>> is at fault since I don't have access to it, but you probably want
>> the deprecated org.apache.hadoop.hbase.mapred.* package classes,
>> such as org.apache.hadoop.hbase.mapred.TableInputFormat, rather than
>> the org.apache.hadoop.hbase.mapreduce.* classes you are using at the
>> moment.
>>
>> HTH!
>>
>> On Wed, Aug 15, 2012 at 2:39 AM, Håvard Wahl Kongsgård
>> <[EMAIL PROTECTED]> wrote:
>>> Hi, I'm trying to read hbase key-values with pipes (pydoop). Since
>>> hadoop is unable to find the hbase jar files, I get
>>>
>>> Exception in thread "main" java.lang.RuntimeException:
>>> java.lang.RuntimeException: class
>>> org.apache.hadoop.hbase.mapreduce.TableInputFormat not
>>> org.apache.hadoop.mapred.InputFormat
>>>
>>> I have added export
>>> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-0.90.6-cdh3u4.jar to my
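
On the classpath problem from the original message: the single hbase jar is
usually not enough on its own, since HBase also needs zookeeper, guava and
friends at runtime. A common workaround, sketched below, is to build
HADOOP_CLASSPATH from HBase's own classpath; this assumes your HBase release
ships the "hbase classpath" command.

# Sketch: expose the full HBase client classpath to the hadoop client
# before submitting the pipes job (assumes "hbase classpath" exists
# in this HBase release).
export HADOOP_CLASSPATH="$(hbase classpath):$HADOOP_CLASSPATH"

Depending on the setup, the task JVMs may also need these jars, for example
by installing HBase on the cluster nodes or shipping the jars with the job.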