Re: BatchWriter not working
Is the code running to completion, or do you run it and it just stalls? If
the latter, please run jstack on it and share the output.

Alternatively, when you check to make sure the data was written, how are you
checking? Regardless, make sure the user you're checking with has the
authorization "public"; otherwise the data will not be returned even if it is
there.
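
For example, something along these lines should print the row if it was
actually written (untested sketch; the instance name, zookeepers, and
credentials are copied from your code below, and the scanning user must have
been granted the "public" auth, e.g. "setauths -u root -s public" in the
shell):

        import java.util.Map.Entry;
        import org.apache.accumulo.core.client.*;
        import org.apache.accumulo.core.data.Key;
        import org.apache.accumulo.core.data.Value;
        import org.apache.accumulo.core.security.Authorizations;

        // Scan "table" with the "public" authorization so entries written
        // with that ColumnVisibility are returned.
        Instance instance = new ZooKeeperInstance("instance", "localhost:2181");
        Connector connector = instance.getConnector("root", "secret");
        Scanner scanner = connector.createScanner("table", new Authorizations("public"));
        for (Entry<Key, Value> entry : scanner) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }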
On Mon, Dec 17, 2012 at 5:10 PM, Korb, Michael [USA]
<[EMAIL PROTECTED]> wrote:

>  I'm trying to write a simple Accumulo insertion with a BatchWriter,
> based on this document:
> http://accumulo.apache.org/1.4/user_manual/Writing_Accumulo_Clients.html.
> I have a running Accumulo instance with a table "table" and can add and
> remove records using the shell. But I run the following code, and there is
> no output at all and nothing inserted:
>
>          String instanceName = "instance";
>         String zooServers = "localhost:2181";
>         Instance instance = new ZooKeeperInstance(instanceName,
> zooServers);
>
>         Connector connector = instance.getConnector("root", "secret");
>
>         Text rowId = new Text("row1");
>         Text colFam = new Text("myColFam");
>         Text colQual = new Text("myColQual");
>         ColumnVisibility vis = new ColumnVisibility("public");
>         long timestamp = System.currentTimeMillis();
>         Value value = new Value("Hello World!".getBytes());
>
>         Mutation mutation = new Mutation(rowId);
>         mutation.put(colFam, colQual, vis, timestamp, value);
>
>         BatchWriterConfig config = new BatchWriterConfig();
>         long memBuf = 1000000L;
>         long timeout = 1000L;
>         int numThreads = 10;
>         config.setMaxMemory(memBuf);
>         config.setTimeout(timeout, TimeUnit.MILLISECONDS);
>         config.setMaxWriteThreads(numThreads);
>
>         BatchWriter writer = connector.createBatchWriter("table",
> config);
>
>         writer.addMutation(mutation);
>
>         writer.close();
>
>
>  Why isn't this working?
>
>  Thanks,
> Mike
>