Accumulo user mailing list: FW: Deletes not committing from custom java API but work in shell interface


Thread:
Bell, Philip S CIV SPAWAR... 2013-04-10, 22:02
John Vines 2013-04-10, 22:06
Bell, Philip S CIV SPAWAR... 2013-04-10, 22:09
John Vines 2013-04-10, 22:16
Bell, Philip S CIV SPAWAR... 2013-04-10, 22:20
Bell, Philip S CIV SPAWAR... 2013-04-10, 22:42
Josh Elser 2013-04-10, 23:51
Billie Rinaldi 2013-04-11, 14:53
Re: FW: Deletes not committing from custom java API but work in shell interface
On Wed, Apr 10, 2013 at 6:06 PM, John Vines <[EMAIL PROTECTED]> wrote:
> Deletes will remove all entries which occur before the key. I believe there
> is undefined behavior when the delete key is identical. It should work if
> you set the delete key's timestamp to the entry's timestamp + 1.

The behavior is defined when keys are identical except for the delete flag.
The delete flag is the last thing a key is sorted on, and delete keys sort
before non-delete keys.  So a delete key with timestamp T will delete
all keys with the same row and column with timestamps <= T.
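
In other words, reusing the scanned entry's own timestamp for the delete should be enough; bumping it by one, as suggested above, is just the more conservative option. A minimal sketch against the 1.4 client API, using the same bw (BatchWriter) and key names that appear in the code quoted further down; this is an illustration, not the code from the thread:

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.security.ColumnVisibility;

// Delete the entry behind a scanned key, reusing its own timestamp.
static void deleteEntry(BatchWriter bw, Key k) throws MutationsRejectedException {
    Mutation del = new Mutation(k.getRow());
    // A delete at timestamp T covers every version of this row/column with a
    // timestamp <= T, because delete keys sort before non-delete keys.
    del.putDelete(k.getColumnFamily(), k.getColumnQualifier(),
            new ColumnVisibility(k.getColumnVisibility().toString()),
            k.getTimestamp());
    bw.addMutation(del);
    bw.flush(); // the delete is not visible to scans until the writer flushes
}

If the conservative route is preferred, pass k.getTimestamp() + 1 instead.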

>
> Sent from my phone, please pardon the typos and brevity.
>
> On Apr 10, 2013 6:02 PM, "Bell, Philip S CIV SPAWARSYSCEN-PACIFIC, 81320"
> <[EMAIL PROTECTED]> wrote:
>>
>> Using the following code, rows are never deleted, even when they are
>> identified and submitted for deletion to the BatchWriter.
>> After running the code the rows still show up in searches.
>>
>> This has been seen in 1.4.1 and 1.4.2.
>>
>> The same rows can be deleted from the Accumulo shell interface.
>>
>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>
>>
>> for( Entry<Key, Value> e : mainScanner )
>> {
>>         Text currentUUID = e.getKey().getRow();
>>
>>         Text colFam = e.getKey().getColumnFamily();
>>         Text colQual = e.getKey().getColumnQualifier();
>>
>>         String colVis = e.getKey().getColumnVisibility().toString();
>>
>>         System.out.println( currentUUID + ":" + colFam + ":" + colQual + ":" + colVis + ":" + e.getKey().getTimestamp() );
>>
>>         if( colFam.toString().equalsIgnoreCase( "root" ) || colVis.length() > 0 )
>>         {
>>                 Mutation delMutation = new Mutation( currentUUID );
>>                 delMutation.putDelete( colFam, colQual, new ColumnVisibility( colVis ), e.getKey().getTimestamp() );
>>
>>                 System.out.println( "removing" );
>>
>>                 try
>>                 {
>>                         bw.addMutation( delMutation );
>>                 }
>>                 catch( MutationsRejectedException e1 )
>>                 {
>>                         e1.printStackTrace();
>>                 }
>>
>>                 count++;
>>                 if( count % 1000000 == 0 )
>>                 {
>>                         System.out.println( this.getName() + ": " + count + " completed in " + getTimePassed() );
>>                 }
>>         }
>> }// for each entry found
>>
>> try
>> {
>>         bw.flush();
>>         bw.close();
>> }
>> catch( Exception e1 )
>> {
>>         e1.printStackTrace();
>> }
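
For reference, the snippet above uses mainScanner, bw, count, and getTimePassed() without showing how they were created. Below is a minimal sketch of one possible setup against the 1.4 client API; the instance name, ZooKeeper host, table name, credentials, and authorizations are placeholders, not values from this thread:

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.security.Authorizations;

// Placeholder setup; the enclosing method would declare the checked Accumulo
// exceptions (AccumuloException, AccumuloSecurityException, TableNotFoundException).
Connector conn = new ZooKeeperInstance("instanceName", "zkhost:2181")
        .getConnector("user", "password".getBytes());

// The scanner only returns entries whose column visibility is satisfied by
// these authorizations, so they must cover the labels being deleted.
Scanner mainScanner = conn.createScanner("mytable", new Authorizations("vis1"));

// 1.4-style BatchWriter arguments: max memory (bytes), max latency (ms), write threads.
BatchWriter bw = conn.createBatchWriter("mytable", 1000000L, 60000L, 2);

The count and getTimePassed() calls in the loop are just progress reporting from the original code and are left out here.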
John Vines 2013-04-11, 14:34