HBase, mail # user - delete rows from hbase


Oleg Ruchovets 2012-06-18, 22:08
Jean-Daniel Cryans 2012-06-18, 22:18
Oleg Ruchovets 2012-06-18, 23:13
Jean-Daniel Cryans 2012-06-18, 23:18
Amitanand Aiyer 2012-06-18, 23:36
shashwat shriparv 2012-06-19, 07:43
Mohammad Tariq 2012-06-19, 10:46
Kevin Odell 2012-06-19, 13:26
Oleg Ruchovets 2012-06-19, 16:17
RE: delete rows from hbase
Anoop Sam John 2012-06-20, 08:38
Hi
      Has anyone tried an Endpoint (coprocessor) implementation, so that the delete can be done directly with the scan on the server side?
In the samples below the pattern is:
Client -> Server - Scan for certain rows (we want the row keys satisfying our criteria)
Client <- Server - returns the Results
Client -> Server - Delete calls

Using an Endpoint instead, we can make one call from Client to Server in which both the scan and the delete happen.

-Anoop-
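
A minimal sketch of the Endpoint idea above, assuming the 0.92/0.94-era coprocessor API (CoprocessorProtocol / BaseEndpointCoprocessor); the ScanDeleteProtocol and ScanDeleteEndpoint names are made up for illustration, and exact HRegion method signatures differ between versions:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseEndpointCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.ipc.CoprocessorProtocol;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.InternalScanner;

// Protocol the client calls through HTable.coprocessorExec().
interface ScanDeleteProtocol extends CoprocessorProtocol {
    long deleteByPrefix(byte[] prefix) throws IOException;
}

// Server-side implementation: scan and delete inside each region, so the
// matching row keys never have to travel back to the client.
public class ScanDeleteEndpoint extends BaseEndpointCoprocessor
        implements ScanDeleteProtocol {

    @Override
    public long deleteByPrefix(byte[] prefix) throws IOException {
        HRegion region =
                ((RegionCoprocessorEnvironment) getEnvironment()).getRegion();

        Scan scan = new Scan();
        scan.setFilter(new PrefixFilter(prefix));

        long deleted = 0;
        InternalScanner scanner = region.getScanner(scan);
        try {
            List<KeyValue> kvs = new ArrayList<KeyValue>();
            boolean more;
            do {
                kvs.clear();
                more = scanner.next(kvs);
                if (!kvs.isEmpty()) {
                    // Delete the whole matching row. The (lockid, writeToWAL)
                    // form is the pre-0.96 signature; it differs in later versions.
                    region.delete(new Delete(kvs.get(0).getRow()), null, true);
                    deleted++;
                }
            } while (more);
        } finally {
            scanner.close();
        }
        return deleted;
    }
}

Once such a class is registered on the region servers (e.g. via hbase.coprocessor.region.classes), the client would make one call per key range through HTable.coprocessorExec(ScanDeleteProtocol.class, startRow, stopRow, ...) instead of pulling the row keys back and issuing the deletes itself.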
________________________________________
From: Oleg Ruchovets [[EMAIL PROTECTED]]
Sent: Tuesday, June 19, 2012 9:47 PM
To: [EMAIL PROTECTED]
Subject: Re: delete rows from hbase

Thank you all for the answers. I am trying to speed up my solution and use
map/reduce over hbase.

Here is the code:
I want to use Delete (the map function emits a Delete for each row) and I pass the same
tableName to TableMapReduceUtil.initTableMapperJob
and TableMapReduceUtil.initTableReducerJob.

Question: is it possible to pass Delete as I did in the map function?
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeleteRowByCriteria {

    final static Logger LOG = LoggerFactory.getLogger(DeleteRowByCriteria.class);

    // Emits a Delete for every row that the filtered scan returns.
    public static class MyMapper extends TableMapper<ImmutableBytesWritable, Delete> {

        public String account;
        public String lifeDate;

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            context.write(row, new Delete(row.get()));
        }
    }

    public static void main(String[] args) throws ClassNotFoundException,
            IOException, InterruptedException {

        String tableName = args[0];
        String filterCriteria = args[1];

        Configuration config = HBaseConfiguration.create();
        Job job = new Job(config, "DeleteRowByCriteria");
        job.setJarByClass(DeleteRowByCriteria.class);

        try {
            // Only scan rows whose keys start with the given prefix.
            Filter campaignIdFilter = new PrefixFilter(Bytes.toBytes(filterCriteria));
            Scan scan = new Scan();
            scan.setFilter(campaignIdFilter);
            scan.setCaching(500);
            scan.setCacheBlocks(false);    // recommended for MapReduce scans

            TableMapReduceUtil.initTableMapperJob(
                    tableName,
                    scan,
                    MyMapper.class,
                    null,
                    null,
                    job);
            // Null reducer: the mapper's Deletes are written back to the same
            // table through TableOutputFormat.
            TableMapReduceUtil.initTableReducerJob(
                    tableName,
                    null,
                    job);
            job.setNumReduceTasks(0);

            boolean b = job.waitForCompletion(true);
            if (!b) {
                throw new IOException("error with job!");
            }

        } catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
    }
}
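
A note on the question above, as a sketch rather than a definitive answer: TableMapReduceUtil.initTableReducerJob with a null reducer essentially wires the job's output to org.apache.hadoop.hbase.mapreduce.TableOutputFormat, and that output format accepts both Put and Delete values, so a map-only job that emits Deletes is a workable path. Roughly the equivalent explicit wiring inside main would be:

// Roughly what initTableReducerJob(tableName, null, job) sets up for this
// map-only job (TableOutputFormat here is the one from
// org.apache.hadoop.hbase.mapreduce and would need an import above):
job.setOutputFormatClass(TableOutputFormat.class);
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, tableName);
job.setNumReduceTasks(0);   // the mapper's Deletes go straight to the table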

On Tue, Jun 19, 2012 at 9:26 AM, Kevin O'dell <[EMAIL PROTECTED]> wrote:

> Oleg,
>
>  Here is some code that we used for deleting all rows with user name
> foo.  It should be fairly portable to your situation:
>
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Delete;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class HBaseDelete {
> public static void main(String[] args) throws IOException {
> Configuration conf = HBaseConfiguration.create();
> HTable t = new HTable(conf, "t");
>
> String user = "foo";
>
> // Scan only the key range that starts with the user name.
> byte[] startRow = Bytes.toBytes(user);
> byte[] stopRow = Bytes.toBytes(user);
> stopRow[stopRow.length - 1]++; // 'fop', exclusive stop row
> Scan scan = new Scan(startRow, stopRow);
> ResultScanner sc = t.getScanner(scan);
> for (Result r : sc) {
>  t.delete(new Delete(r.getRow()));
> }
> sc.close();
> t.close();
> }
> }
> /**
>  * Start row: foo
>  * HBase begins matching from this key, byte by byte.
>  * End row: foo
>  * With stop == start the scan would return nothing, because the stop row is exclusive.
>  * End row: fo[p] (p being o + 1)
>  * HBase stops matching at the first key that does not start with "foo".
>  */
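
A small self-contained sketch of the stop-row trick used above (the class and helper names are hypothetical); it assumes the prefix's last byte is not 0xFF, in which case a carry would be needed:

import java.util.Arrays;

import org.apache.hadoop.hbase.util.Bytes;

public class StopRowExample {

    // Copy the prefix and bump its last byte, so the exclusive stop row is the
    // first key that can no longer start with the prefix ("foo" -> "fop").
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stopRow = Arrays.copyOf(prefix, prefix.length);
        stopRow[stopRow.length - 1]++;
        return stopRow;
    }

    public static void main(String[] args) {
        byte[] startRow = Bytes.toBytes("foo");
        byte[] stopRow = stopRowForPrefix(startRow);
        System.out.println(Bytes.toString(stopRow)); // prints "fop"
    }
}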
Michael Segel 2012-06-20, 11:41
Oleg Ruchovets 2012-06-20, 11:56
Michael Segel 2012-06-20, 14:10