HBase user mailing list: Import HBase snapshots possible?


Messages in this thread:
Siddharth Karandikar 2013-08-01, 13:35
Matteo Bertozzi 2013-08-01, 13:38
Siddharth Karandikar 2013-08-01, 13:45
Matteo Bertozzi 2013-08-01, 13:47
Siddharth Karandikar 2013-08-01, 13:54
Matteo Bertozzi 2013-08-01, 14:01
Siddharth Karandikar 2013-08-01, 14:09
Re: Import HBase snapshots possible?
You have to use 3 slashes, otherwise it is interpreted as a local file-system path:
-Dhbase.rootdir=hdfs:///10.209.17.88:9000/hbase

Matteo
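
For reference, a minimal sketch of the corrected invocation, applying the three-slash rootdir above to the command quoted below (the class name, snapshot name s1, cluster address, and copy target are the ones reported in this thread; untested):

$ ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -Dhbase.rootdir=hdfs:///10.209.17.88:9000/hbase \
    -snapshot s1 \
    -copy-to /root/siddharth/tools/hbase-0.95.1-hadoop1/data/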

On Thu, Aug 1, 2013 at 3:09 PM, Siddharth Karandikar <
[EMAIL PROTECTED]> wrote:

> Tried what you suggested. Here is what I get -
>
> ssk01:~/siddharth/tools/hbase-0.95.1-hadoop1 # ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase -snapshot s1 -copy-to /root/siddharth/tools/hbase-0.95.1-hadoop1/data/
> Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.209.17.88:9000/hbase/.hbase-snapshot/s1/.snapshotinfo, expected: file:///
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
>         at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:393)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
>         at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:296)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.getSnapshotFiles(ExportSnapshot.java:371)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:618)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:690)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:694)
>
>
> Am I missing something?
>
>
> Thanks,
> Siddharth
>
>
> On Thu, Aug 1, 2013 at 7:31 PM, Matteo Bertozzi <[EMAIL PROTECTED]>
> wrote:
> > Ok, so to export a snapshot from your HBase cluster, you can do:
> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> > -snapshot MySnapshot -copy-to hdfs:///srv2:8082/my-backup-dir
> >
> > Now on cluster2, hdfs:///srv2:8082, you have your my-backup-dir that
> > contains the exported snapshot (note that the snapshot is under the
> > hidden dirs .snapshots and .archive).
> >
> > Now if you want to restore the snapshot, you have to export it back to
> > an HBase cluster. So on cluster2, you can do:
> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> > -D hbase.rootdir=hdfs:///srv2:8082/my-backup-dir -snapshot MySnapshot
> > -copy-to hdfs:///hbaseSrv:8082/hbase
> >
> >
> > So, to recap:
> >  - You take a snapshot.
> >  - You export the snapshot from HBase Cluster-1 to a simple HDFS dir
> >    in Cluster-2.
> >  - Then, when you want to restore, you export the snapshot from the
> >    HDFS dir in Cluster-2 to an HBase cluster (it can be a different one
> >    from the original).
> >  - From the hbase shell you can just: clone_snapshot 'snapshotName',
> >    'newTableName' if the table does not exist, or use restore_snapshot
> >    'snapshotName' if there's already a table with the same name.
> >
> >
> > Matteo
> >
> >
> >
> > On Thu, Aug 1, 2013 at 2:54 PM, Siddharth Karandikar <
> > [EMAIL PROTECTED]> wrote:
> >
> >> Yeah, that's right. But the issue is, the HDFS that I am exporting to is
> >> not under HBase.
> >> Can you please provide some example command to do this...
> >>
> >>
> >> Thanks,
> >> Siddharth
> >>
> >> On Thu, Aug 1, 2013 at 7:17 PM, Matteo Bertozzi <[EMAIL PROTECTED]>
> >> wrote:
> >> > Yes, the export goes to an HDFS path:
> >> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> >> > -snapshot MySnapshot -copy-to hdfs:///srv2:8082/hbase
> >> >
> >> > so you can export to some /my-backup-dir on your HDFS,
> >> > and then you have to export back to an hbase cluster, when you want to
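
Putting Matteo's recap together, the end-to-end flow described in this thread would look roughly like the sketch below. It is untested: the srv2/hbaseSrv addresses, ports, and the MySnapshot / my-backup-dir names are the illustrative ones from the quoted messages, the table name myTable is a hypothetical placeholder, and the ExportSnapshot class name is the one that appears in the stack trace above.

# On cluster 1: take a snapshot from the hbase shell (myTable is a placeholder).
hbase> snapshot 'myTable', 'MySnapshot'

# Export the snapshot from the HBase cluster to a plain HDFS dir on cluster 2.
$ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot MySnapshot -copy-to hdfs:///srv2:8082/my-backup-dir

# To restore, export it back from that HDFS dir into an HBase cluster's rootdir
# (the target can be a different cluster from the original).
$ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -D hbase.rootdir=hdfs:///srv2:8082/my-backup-dir \
    -snapshot MySnapshot -copy-to hdfs:///hbaseSrv:8082/hbase

# Finally, on the target cluster, from the hbase shell:
hbase> clone_snapshot 'MySnapshot', 'newTableName'   # if the table does not exist yet
hbase> restore_snapshot 'MySnapshot'                  # if a table with the same name exists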

Siddharth Karandikar 2013-08-01, 14:25
Matteo Bertozzi 2013-08-01, 14:44
Siddharth Karandikar 2013-08-02, 07:34
Jignesh Patel 2013-08-03, 09:55
Siddharth Karandikar 2013-08-01, 13:36