Re: ANN: The third hbase-0.96.1 release candidate is available for download
Yeah. In that case you need to wipe the corresponding znode under /hbase/table.
We need a way to do so automatically.
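
For reference, doing that by hand might look roughly like the following, using the ZooKeeper shell bundled with HBase (TestTable is just a placeholder, and the paths assume the default zookeeper.znode.parent of /hbase):

    $ bin/hbase zkcli            # opens the ZooKeeper CLI against the cluster's quorum
    ls /hbase/table
    rmr /hbase/table/TestTable
    quit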

-- Lars

________________________________
 From: Jean-Marc Spaggiari <[EMAIL PROTECTED]>
To: dev <[EMAIL PROTECTED]>; lars hofhansl <[EMAIL PROTECTED]>
Sent: Friday, December 13, 2013 5:56 PM
Subject: Re: ANN: The third hbase-0.96.1 release candidate is available for download
 
Got it. Thanks. So I guess even if you wipe only a single table you will have the same issue, right? Not just if you wipe the entire /hbase folder?
2013/12/13 lars hofhansl <[EMAIL PROTECTED]>

This is (somewhat expected) after HBASE-7600. A wipe of HDFS needs to be followed by a wipe of the /hbase folder in ZK.
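
In practice a full reset along those lines might look roughly like this (assuming hbase.rootdir points at /hbase in HDFS and zookeeper.znode.parent is left at its default of /hbase):

    $ bin/stop-hbase.sh
    $ hdfs dfs -rm -r /hbase     # wipe the HBase data in HDFS
    $ bin/hbase zkcli            # then wipe HBase's state in ZooKeeper
    rmr /hbase
    quit
    $ bin/start-hbase.sh
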
>I filed HBASE-10145 for further discussion, yesterday.
>
>-- Lars
>
>
>
>________________________________
> From: Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>To: dev <[EMAIL PROTECTED]>
>Sent: Friday, December 13, 2013 2:27 PM
>Subject: Re: ANN: The third hbase-0.96.1 release candidate is available for download
>
>
>
>Facing something strange...
>
>I have been able to download and check the signature. I deployed on 4
>nodes, but I'm getting this on startup:
>
>2013-12-13 17:19:26,373 FATAL [master:hbasetest1:60000] master.HMaster: Unhandled exception. Starting shutdown.
>org.apache.hadoop.hbase.TableExistsException: hbase:namespace
>    at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:120)
>    at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:230)
>    at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:85)
>    at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1052)
>    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:920)
>    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
>    at java.lang.Thread.run(Thread.java:744)
>
>2013-12-13 17:19:26,506 ERROR [main] master.HMasterCommandLine: Master exiting
>java.lang.RuntimeException: HMaster Aborted
>    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
>    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2779)
>
>I have done a rm -r of the /hbase HDFS folder to get it clean.
>
>I have been able to start after clearing ZK too, but I'm not sure if this is something we want to address or not.
>
>I will continue my tests...
>
>JM
>
>
>
>2013/12/13 Aleksandr Shulman <[EMAIL PROTECTED]>
>
>> My previous note references Cloudera-internal infrastructure, so folks will
>> not be able to inspect it (sorry about that). However, I'm +1 on this RC.
>>
>>
>> On Fri, Dec 13, 2013 at 11:44 AM, Aleksandr Shulman <[EMAIL PROTECTED]
>> >wrote:
>>
>> > +1 smoke tests pass, with the exception of tests that we know will fail
>> > (involving snappy compression and Pig, since these things are not yet
>> > implemented in my setup). Everything that should work properly works.
>> > http://sandbox.jenkins.cloudera.com/job/Run-Smokes-Upstream/25/
>> >
>> > The cluster is a 5-node tarball-based cluster, mastered at
>> > tarball-target-2.ent.cloudera.com:60010
>> >
>> >
>> > On Fri, Dec 13, 2013 at 10:37 AM, Elliott Clark <[EMAIL PROTECTED]>
>> wrote:
>> >
>> >> +1
>> >>
>> >> Downloaded
>> >> Checked the signature of all the tar.gz's
>> >> Installed on a cluster (Hadoop 2.2 and Java 7u25)
>> >> Ran PE
>> >> Ran YCSB
>> >> * The performance of this release is much better than 0.96.0.
>> >> Ran IT tests for ~16 hours
>> >> * No data loss
>> >> * IntegrationTestBigLinkedList
>> >> * IntegrationTestIngest
>> >> * IntegrationTestLoadAndVerify
>> >> * IntegrationTestBulkLoad
>> >> * IntegrationTestImportTsv