HBase >> mail # user >> Migrate 0.94 to remote 0.96


Jean-Marc Spaggiari 2013-11-10, 19:34
Himanshu Vashishtha 2013-11-10, 20:24
Jean-Marc Spaggiari 2013-11-10, 20:28
Jean-Marc Spaggiari 2013-11-11, 13:44
Re: Migrate 0.94 to remote 0.96
Thanks for trying it out.

It would be great to have your input/suggestions on making it more
user-friendly, JM.

Himanshu
On Mon, Nov 11, 2013 at 5:44 AM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> So, it seems that it went well. The only issue is that I don't have Snappy
> on the new cluster, so one table has 0 regions deployed, but except for
> that, it seems pretty "simple". Thanks again.
>
>
> 2013/11/10 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>
> > Thanks Himanshu. I missed this section. So I will "simply" dump all HDFS
> > files to huge disks, move them, restore them on the other side, and run
> > the hbase upgrade... It seems pretty easy. I will update this thread with
> > the result...
> >
> > JM
> >
> >
> > 2013/11/10 Himanshu Vashishtha <[EMAIL PROTECTED]>
> >
> >> JM,
> >>
> >> Did you look at the upgrade section in the book?
> >> http://hbase.apache.org/upgrading.html#upgrade0.96
> >> It does an in-place upgrade of a 0.94 installation to 0.96.
> >>
> >> In case your 96 is fresh, you could dump/copy all your 94 data under the
> >> root dir, and run the upgrade script.
> >> No, 96 doesn't convert each table automatically; one needs to use the
> >> upgrade script.
> >>
> >>
> >> Himanshu
> >>
> >>
> >> On Sun, Nov 10, 2013 at 11:34 AM, Jean-Marc Spaggiari <
> >> [EMAIL PROTECTED]> wrote:
> >>
> >> > Hi,
> >> >
> >> > I have a 0.94 (Hadoop 1.0.3) cluster that I want to migrate to a 0.96
> >> > (Hadoop 2.2.0) cluster. However, there is no network connection between
> >> > the 2 clusters... What's the best way to do that?
> >> >
> >> > I tried with a single table first. I did an extract from 0.94 to the
> >> > local disk using hadoop get of all the files into /hbase/tablename and
> >> > tried to re-import on the 0.96 side. I was able to see the table name,
> >> > but not the content. I guess that because of namespaces and other
> >> > changes, it's not doable?
> >> >
> >> > Another option is to export in CSV format, then transfer the files and
> >> > re-import on the 0.96 side. But I would have liked to keep the region
> >> > splits, etc.
> >> >
> >> > So the only working option I see for now is the CSV one. Any other one?
> >> >
> >> > Next, if I'm able to get a network between the 2 clusters, would
> >> > CopyTable be the best option? Or can I "simply" dist-cp the entire
> >> > /hbase folder? I guess this last option is not really correct, since I
> >> > would have a 0.94 format in the 0.96 cluster. Or will the 0.96 cluster
> >> > automatically convert each table which is in the old format?
> >> >
> >> > Thanks,
> >> >
> >> > JM
> >> >
> >>
> >
> >
>
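[Editor's note] The offline migration JM describes above (dump the 0.94 HDFS files to disks, move them, restore them on the new cluster, then run the upgrade script) can be sketched roughly as below. This is a sketch, not a tested procedure: all hostnames and local paths (such as /mnt/bigdisk) are illustrative placeholders, not from the thread. The `hbase upgrade -check` / `-execute` steps are the 0.96 upgrade script referenced in the book section linked above.

```shell
# Rough sketch of the offline migration discussed in this thread.
# All paths below are illustrative placeholders.

# 1. On the old (0.94 / Hadoop 1.0.3) cluster: dump the HBase root dir
#    from HDFS onto large local/portable disks.
hadoop fs -get /hbase /mnt/bigdisk/hbase-094-dump

# 2. Physically move the disks; on the new (0.96 / Hadoop 2.2.0) cluster,
#    push the files back under the new cluster's HBase root dir.
hadoop fs -put /mnt/bigdisk/hbase-094-dump /hbase

# 3. With HBase NOT running on the new cluster, check and then execute
#    the 0.96 upgrade script, which rewrites the 0.94 layout in place.
hbase upgrade -check     # reports blockers such as leftover HFile v1 files
hbase upgrade -execute   # performs the namespace/layout upgrade
```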
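[Editor's note] If a network link between the two clusters does become available, the alternatives JM asks about look roughly like this. Table names, ZooKeeper quorum hosts, and output paths are placeholders. Note the hedge: 0.96 changed the wire protocol, so whether CopyTable works directly across the 0.94/0.96 boundary is exactly the open question in this thread; treat the commands below as syntax sketches only.

```shell
# Option A: CopyTable. Runs a MapReduce job on the source cluster that
# writes rows into the destination cluster via its ZooKeeper quorum.
# --peer.adr format: zk-quorum:zk-port:zk-parent-znode (placeholders).
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=new-zk1,new-zk2,new-zk3:2181:/hbase \
  mytable

# Option B: Export to sequence files, transfer, then Import on the other
# side. Moves the data but does not preserve region split boundaries.
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable
# ...transfer /export/mytable to the new cluster's HDFS, then:
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable
```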
Jean-Marc Spaggiari 2013-11-11, 17:27