HBase >> mail # user >> Tables & rows disappear
Re: Tables & rows disappear
Stack - Any thoughts on this?

On Mon, Jan 31, 2011 at 6:27 PM, Something Something <
[EMAIL PROTECTED]> wrote:

> 1)  Version numbers:
>
> hadoop-0.20.2
> hbase-0.20.6
>
>
> 2)  autoFlush to 'true' works, but wouldn't that slow down the insertion
> process?
>
> 3)  Here's how I had set it up:
>
> In my Mapper's setup method:
>
>         table = new HTable(new HBaseConfiguration(), XYZ_TABLE);
>
>         table.setAutoFlush(false);
>
>         table.setWriteBufferSize(1024 * 1024 * 12);
>
> In my Mappers' cleanup method:
>        table.flushCommits();
>
>     table.close();
>
> At the time of writing:
>
>     Put put = new Put(Bytes.toBytes(key));
>
>     put.setWriteToWAL(false);
>
>     put.add(Bytes.toBytes("info"), Bytes.toBytes("code"), Bytes.toBytes(
> code));
>
>     & so on... and at the end...
>
>
>
>     table.put(put);
>
>
> Is this not the right way to do it?  Please let me know.  Thanks for the
> help.
>
>
>
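[Editor's note: the write pattern described in the message above can be sketched as follows. This assumes the HBase 0.20.x client API; the table name, column family, and qualifier are placeholders taken from the mail, and the class wrapper is illustrative, not the poster's actual Mapper.]

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the batched-write pattern from the mail above (HBase 0.20.x API).
// "xyz_table" and the column names are placeholders.
public class BufferedTableWriter {
    private HTable table;

    // Mapper setup(): open the table and enable client-side write buffering.
    public void setup() throws IOException {
        table = new HTable(new HBaseConfiguration(), "xyz_table");
        table.setAutoFlush(false);                  // buffer Puts on the client
        table.setWriteBufferSize(1024 * 1024 * 12); // 12 MB write buffer
    }

    // Per-record write. Skipping the WAL trades durability for speed:
    // edits not yet flushed from the memstore are lost if a region server dies.
    public void write(String key, String code) throws IOException {
        Put put = new Put(Bytes.toBytes(key));
        put.setWriteToWAL(false);
        put.add(Bytes.toBytes("info"), Bytes.toBytes("code"),
                Bytes.toBytes(code));
        table.put(put); // held in the buffer until it fills or flushCommits()
    }

    // Mapper cleanup(): push any buffered Puts, then release resources.
    public void cleanup() throws IOException {
        table.flushCommits();
        table.close();
    }
}
```

With autoFlush off, each `table.put()` is a cheap in-memory append and RPCs are amortized over the buffer, which is why turning autoFlush back on slows insertion: every Put then becomes its own round trip.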
> On Sun, Jan 30, 2011 at 3:03 PM, Stack <[EMAIL PROTECTED]> wrote:
>
>> What version of hbase+hadoop?
>> St.Ack
>>
>> On Fri, Jan 28, 2011 at 8:37 PM, Something Something
>> <[EMAIL PROTECTED]> wrote:
>> > Apologies for my dumbness.  I know it's some property that I am not
>> > setting correctly.  But every time I stop & start HBase & Hadoop I
>> > either lose all my tables or lose rows from tables in HBase.
>> >
>> > Here's what various files contain:
>> >
>> > *core-site.xml*
>> > <configuration>
>> >  <property>
>> >    <name>fs.default.name</name>
>> >    <value>hdfs://localhost:9000</value>
>> >  </property>
>> >  <property>
>> >    <name>hadoop.tmp.dir</name>
>> >    <value>/usr/xxx/hdfs</value>
>> >  </property>
>> > </configuration>
>> >
>> > *hdfs-site.xml*
>> > <configuration>
>> >  <property>
>> >    <name>dfs.replication</name>
>> >    <value>1</value>
>> >  </property>
>> >  <property>
>> >    <name>dfs.name.dir</name>
>> >    <value>/usr/xxx/hdfs/name</value>
>> >  </property>
>> >
>> >  <property>
>> >    <name>dfs.data.dir</name>
>> >    <value>/usr/xxx/hdfs/data</value>
>> >  </property>
>> >
>> > *mapred-site.xml*
>> > <configuration>
>> >  <property>
>> >    <name>mapred.job.tracker</name>
>> >    <value>localhost:9001</value>
>> >  </property>
>> > </configuration>
>> >
>> > *hbase-site.xml*
>> > <configuration>
>> >  <property>
>> >    <name>hbase.rootdir</name>
>> >    <value>hdfs://localhost:9000/hbase</value>
>> >  </property>
>> >  <property>
>> >    <name>hbase.tmp.dir</name>
>> >    <value>/usr/xxx/hdfs/hbase</value>
>> >  </property>
>> > </configuration>
>> >
>> >
>> > What am I missing?  Please help.  Thanks.
>> >
>>
>
>
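[Editor's note: one way to narrow this down is to check whether the table data actually survives a restart by listing hbase.rootdir directly in HDFS. The path below follows the hbase-site.xml quoted above; the table name is a placeholder.]

```shell
# List what HBase has persisted under its root dir (hdfs://localhost:9000/hbase).
hadoop fs -ls /hbase

# A table that survived the restart should show up as a directory here, e.g.
#   /hbase/xyz_table
# If the directory exists before the restart but is gone afterwards, the
# NameNode storage (dfs.name.dir) is likely being re-formatted or wiped
# between runs, rather than HBase losing the data itself.
```

Since dfs.name.dir and dfs.data.dir are set under /usr/xxx/hdfs rather than the default /tmp location, also confirm that directory is writable and not cleaned by anything between restarts.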