

Re: Migrate 0.94 to remote 0.96
Sure!

I have 2 comments.

1) In the documentation, we expect the user to have an existing 0.94
cluster and to upgrade it in place. However, some might prefer to move the
data to a new cluster, so as not to risk the golden data, and migrate that
new one instead. We might want to add some links on DistCp, CopyTable or
other tools to help with that. Also, there might be different versions of
HDFS underneath (1.2 to 2.2, etc.), so we might also tell them about the
need to use hftp (I know, it's related to Hadoop and not HBase, but it took
me some time to find the right command line).
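
For example, a minimal sketch of the kind of command line I mean (host
names, ports and paths are placeholders for your own clusters; hftp reads
from the old NameNode's HTTP port, and distcp is run from the destination
2.x cluster since hftp is read-only):

  hadoop distcp hftp://old-nn.example.com:50070/hbase \
      hdfs://new-nn.example.com:8020/hbase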

2) -check should verify that the compression codecs configured in the
tables are available and working. This is not required when upgrading in
place on the same cluster, but it is useful when you move the data and
upgrade on another cluster.
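
For now, a manual equivalent is the CompressionTest tool (a sketch; the
HDFS path is just a scratch location and snappy is the codec to verify):

  hbase org.apache.hadoop.hbase.util.CompressionTest \
      hdfs://new-nn.example.com:8020/tmp/compression-test snappy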

Apart from that (and some YARN vs. MapReduce challenges to get rowcounter
working fine), everything else went very smoothly.
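
For reference, the rowcounter job itself is just the following (table name
is a placeholder; the same job can be run on both sides to compare counts):

  hbase org.apache.hadoop.hbase.mapreduce.RowCounter mytable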

JM
2013/11/11 Himanshu Vashishtha <[EMAIL PROTECTED]>

> Thanks for trying it out.
>
> It would be great to have your input/suggestions on making it more
> user-friendly, JM.
>
> Himanshu
>
>
> On Mon, Nov 11, 2013 at 5:44 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> > So, it seems that it went well. The only thing is that I don't have
> > Snappy on the new cluster, so one table has 0 regions deployed, but I
> > expected that. Seems pretty "simple". Thanks again.
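> >
> > (If anyone hits the same thing: besides installing the Snappy native
> > libraries on the new cluster, another option is to drop the compression
> > on that table from the shell; table and family names below are just
> > examples:)
> >
> >   hbase shell
> >   hbase> disable 'mytable'
> >   hbase> alter 'mytable', {NAME => 'cf', COMPRESSION => 'NONE'}
> >   hbase> enable 'mytable'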
> >
> >
> > 2013/11/10 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> >
> > > Thanks Himanshu. I missed this section. So I will "simply" dump all
> > > HDFS files to huge disks, move them, restore them on the other side
> > > and run hbase upgrade... It seems to be pretty easy. I will update
> > > this thread with the result...
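> > >
> > > Roughly, something like this (a sketch; the local path is a
> > > placeholder, HBase is fully stopped on both sides, and /hbase does
> > > not exist yet on the new cluster):
> > >
> > >   # on the 0.94 cluster, with HBase stopped
> > >   hadoop fs -get /hbase /mnt/bigdisk/hbase-094
> > >
> > >   # on the 0.96 cluster, after moving the disks over
> > >   hadoop fs -put /mnt/bigdisk/hbase-094 /hbase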
> > >
> > > JM
> > >
> > >
> > > 2013/11/10 Himanshu Vashishtha <[EMAIL PROTECTED]>
> > >
> > >> JM,
> > >>
> > >> Did you look at the upgrade section in the book?
> > >> http://hbase.apache.org/upgrading.html#upgrade0.96
> > >> It does an in-place upgrade of a 0.94 installation to 0.96.
> > >>
> > >> In case your 0.96 is fresh, you could dump/copy all your 0.94 data
> > >> under the root dir and run the upgrade script.
> > >> No, 0.96 doesn't convert each table automatically; one needs to run
> > >> the upgrade script.
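> > >>
> > >> Sketch of the two steps from the upgrade section (-execute is run
> > >> with the HBase daemons down):
> > >>
> > >>   hbase upgrade -check
> > >>   hbase upgrade -execute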
> > >>
> > >>
> > >> Himanshu
> > >>
> > >>
> > >> On Sun, Nov 10, 2013 at 11:34 AM, Jean-Marc Spaggiari <
> > >> [EMAIL PROTECTED]> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I have a 0.94 (Hadoop 1.0.3) cluster that I want to migrate to a
> > >> > 0.96 (Hadoop 2.2.0) cluster. However, there is no network
> > >> > connection between the 2 clusters... What's the best way to do that?
> > >> >
> > >> > I tried with a single table first. I did an extract from 0.94 to
> > >> > the local disk using hadoop get of all the files under
> > >> > /hbase/tablename and tried to re-import them on the 0.96 side. I
> > >> > was able to see the table name, but not the content. I guess
> > >> > because of namespaces and other changes, it's not doable?
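> > >> >
> > >> > (My guess on the layout difference, as an illustration with a
> > >> > placeholder table name: on 0.94 the table data sits directly under
> > >> > /hbase/mytable, while on 0.96 it seems to be expected under
> > >> > /hbase/data/default/mytable.)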
> > >> >
> > >> > Another option is to export in CSV format, then transfer the files
> > >> > and re-import on the 0.96 side. But I would have liked to keep the
> > >> > region splits, etc.
> > >> >
> > >> > So the only working option I see for now is the CSV. Any other one?
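> > >> >
> > >> > (Or maybe the Export/Import MapReduce jobs instead of CSV; a
> > >> > sketch, with the table name and HDFS path as placeholders, and the
> > >> > table pre-created on the 0.96 side, possibly with the same splits:)
> > >> >
> > >> >   hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable
> > >> >   # ...move /export/mytable to the new cluster...
> > >> >   hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable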
> > >> >
> > >> > Next, if I'm able to get a network between the 2 clusters, then
> > >> > CopyTable should be the best option? Or can I "simply" distcp the
> > >> > entire /hbase folder? I guess this last option is not really correct
> > >> > since I will have the 0.94 format in the 0.96 cluster. Or will the
> > >> > 0.96 cluster automatically convert each table that is in the old
> > >> > format?
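> > >> >
> > >> > (For the CopyTable route, I am thinking of something like this,
> > >> > pointing --peer.adr at the new cluster's ZooKeeper quorum; all
> > >> > names are placeholders:)
> > >> >
> > >> >   hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
> > >> >       --peer.adr=new-zk1,new-zk2,new-zk3:2181:/hbase mytable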
> > >> >
> > >> > Thanks,
> > >> >
> > >> > JM
> > >> >
> > >>
> > >
> > >
> >
>