Pig >> mail # user >> Unable to store data into HBase


Re: Unable to store data into HBase
I don't think there is any problem with that as I am able to execute other
queries, like loading data from an HBase table and storing it into another
HBase table.

Regards,
    Mohammad Tariq
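
[For reference, an HBase-to-HBase copy of the kind mentioned above might be written in Pig Latin roughly as follows. The table and column names here are hypothetical, not taken from this thread; the `-loadKey true` option makes the row key available as the first field.]

```pig
-- load rows from a source HBase table; with -loadKey, the row key
-- is prepended as the first field of each tuple
src = LOAD 'hbase://source_table'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
          'cf:id cf:age', '-loadKey true')
      AS (rowkey:chararray, id:int, age:float);

-- store into a destination table; the first field of each tuple
-- becomes the row key, the remaining fields map to the listed columns
STORE src INTO 'hbase://dest_table'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:id cf:age');
```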

On Mon, Sep 3, 2012 at 1:57 PM, shashwat shriparv <[EMAIL PROTECTED]
> wrote:

> What I can conclude from the error is that Pig is not able to run in
> distributed mode, as it is not able to connect to Hadoop. Just check
> whether other MapReduce tasks in Pig are working fine. Alternatively, Pig
> may be searching for a file that is not present; check whether the file
> exists at the path where Pig is looking for it.
>
> Regards
>
> ∞
> Shashwat Shriparv
>
>
>
> On Mon, Sep 3, 2012 at 1:00 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
> > Hello list,
> >
> >        I have a file in my Hdfs and I am reading this file and trying to
> > store the data into an HBase table through Pig Shell. Here are the
> commands
> > I am using:
> >
> > z = load '/mapin/testdata2.csv/part-m-00000' using PigStorage(',') as
> > (rowkey:int, id:int, age:float, gender:chararray, height:int, size:int,
> > color:chararray);
> > store z into 'hbase://csvdata' USING
> > org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:id, cf:age,
> cf:gender,
> > cf:height, cf:size, cf:color');
> >
> > I can see the data when I dump the relation 'z', but I am not able to
> > store 'z' in HBase using the above command. I am getting the following
> > error:
> >
> > HadoopVersion PigVersion UserId StartedAt FinishedAt Features
> > 1.0.3 0.10.0 cluster 2012-09-03 12:40:31 2012-09-03 12:41:04 UNKNOWN
> >
> > Failed!
> >
> > Failed Jobs:
> > JobId Alias Feature Message Outputs
> > job_201209031122_0009 z MAP_ONLY Message: Job failed! Error - JobCleanup
> > Task Failure, Task: task_201209031122_0009_m_000001 csvdata,
> >
> > Input(s):
> > Failed to read data from "/mapin/testdata2.csv/part-m-00000"
> >
> > Output(s):
> > Failed to produce result in "csvdata"
> >
> > Counters:
> > Total records written : 0
> > Total bytes written : 0
> > Spillable Memory Manager spill count : 0
> > Total bags proactively spilled: 0
> > Total records proactively spilled: 0
> >
> > Job DAG:
> > job_201209031122_0009
> >
> >
> > 2012-09-03 12:41:04,606 [main] INFO
> >
> >
>  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
> > - Failed!
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:id
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:age
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:gender
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:height
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:size
> > 2012-09-03 12:41:04,629 [main] INFO
> >  org.apache.pig.backend.hadoop.hbase.HBaseStorage - Adding
> > family:descriptor filters with values cf:color
> >
> > I do not understand why it shows "Failed to read data from
> > /mapin/testdata2.csv/part-m-00000" when I can already see the data in
> > relation 'z'. Any help would be much appreciated. Many thanks.
> >
> > Regards,
> >     Mohammad Tariq
> >
>
>
>
> --
>
>
> ∞
> Shashwat Shriparv
>
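
[For later readers: the checks suggested above can be sketched in the Grunt shell roughly as follows. The path is the one from the thread; whether it actually exists is precisely what needs verifying. In mapreduce mode, the DUMP launches a real MapReduce job, which also confirms that Pig can reach the Hadoop cluster.]

```pig
grunt> fs -ls /mapin/testdata2.csv/part-m-00000   -- confirm the input file exists in HDFS

grunt> z = LOAD '/mapin/testdata2.csv/part-m-00000' USING PigStorage(',');
grunt> lim = LIMIT z 5;
grunt> DUMP lim;   -- runs a MapReduce job; success means Pig can talk to Hadoop
```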