HBase user mailing list - LoadIncrementalHFiles always run with "hbase" user


anil gupta 2013-01-24, 01:09
Harsh J 2013-01-24, 14:30

Re: LoadIncrementalHFiles always run with "hbase" user
anil gupta 2013-01-24, 17:09
Hi Harsh,

Thanks for your response. If I understand you correctly, it is the RS, not
the process (Java program) that I started, that is trying to write to the
directory. Hence the error. Right?

If that's the case, then it kinda makes sense.

~Anil
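
For readers hitting the same error: below is a minimal sketch of the usual
workaround, recursively opening up the HFile staging directory before the
load so that the "hbase" user can rename the files out of it. The class name
and argument layout here are illustrative, not from this thread; the
LoadIncrementalHFiles invocation is the same one quoted further down.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.util.ToolRunner;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    String hfileDir = args[0];   // e.g. /tmp/hfile_txn_subset
    String tableName = args[1];

    // Recursively open up the staging dir so the RS user ("hbase") can
    // rename the HFiles out of it; equivalent to
    // `hadoop fs -chmod -R 777 <dir>` from the shell.
    new FsShell(conf).run(new String[] { "-chmod", "-R", "777", hfileDir });

    // Trigger the bulk load, as in the quoted mail below.
    ToolRunner.run(new LoadIncrementalHFiles(conf),
        new String[] { hfileDir, tableName });
  }
}

Opening the directory up to 777 is the blunt approach; on an unsecured
cluster you could instead chown the output to "hbase", though plain HDFS
only lets the superuser change file ownership.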

On Thu, Jan 24, 2013 at 6:30 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> The exception is remote and seems to indicate that your RS is running
> as the 'hbase' user. The RS will attempt a mv/rename operation when
> you provide it a bulk-loadable file, and that rename is attempted as
> the user the RS itself runs as - hence this error.
>
> On Thu, Jan 24, 2013 at 6:39 AM, anil gupta <[EMAIL PROTECTED]> wrote:
> > Hi All,
> >
> > I am generating HFiles by running the bulk loader with a custom mapper.
> > Once the MR job for generating the HFiles is finished, I trigger the
> > loading of the HFiles into HBase with the following Java code:
> >
> > ToolRunner.run(new LoadIncrementalHFiles(HBaseConfiguration.create()),
> >     new String[]{conf.get("importtsv.bulk.output"), otherArgs[0]});
> >
> > However, while loading I am getting permission-related errors, since the
> > loading is being attempted by the "hbase" user even though the process
> > (Java program) was started by "root". This seems like a bug, since the
> > loading of data into HBase should also be done as "root". Is there any
> > reason for only using the "hbase" user while loading?
> > The HBase cluster is not secured. I am using 0.92.1 on a fully
> > distributed cluster. Please help me in resolving this error.
> >
> > Here is the error message:
> > 13/01/23 17:02:16 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://ihubcluster/tmp/hfile_txn_subset/_SUCCESS
> > 13/01/23 17:02:16 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 241.7m
> > 13/01/23 17:02:16 INFO mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://ihubcluster/tmp/hfile_txn_subset/t/344d58edc7d74e7b9a35ef5e1bf906cc first=\x00\x0F(\xC7F\xAD2\xB4\x00\x00\x02\x87\xE1\xB9\x9F\x18\x00\x0C\x1E\x1A\x00\x00\x01<j\x14\x95d last=\x00\x12\xA4\xC6$IP\x9D\x00\x00\x02\x88+\x11\xD2 \x00\x0C\x1E\x1A\x00\x00\x01<j\x14\x04A
> > 13/01/23 17:02:55 ERROR mapreduce.LoadIncrementalHFiles: Encountered unrecoverable error from region server
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
> > Wed Jan 23 17:02:16 PST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@7b4189d0, org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/tmp/hfile_txn_subset/t":root:hadoop:drwxr-xr-x
> >     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
> >     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkParentAccess(FSNamesystem.java:4231)
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:2347)
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2315)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:579)
> >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:374)
> >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42612)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
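
The "Permission denied" on the renameTo frames above is exactly the
mv/rename Harsh describes: the RS, running as "hbase", moves the staged
HFile into the region's column-family directory, and an HDFS rename
requires WRITE access on the source file's parent directory. A simplified
sketch of that server-side step follows (not the actual RS code; the class,
method, and cfDir parameter are illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class BulkLoadRenameSketch {
  // Illustrative stand-in for the RS-side bulk-load step.
  // cfDir = the region's column-family directory inside the HBase root.
  static void bulkLoadRename(Configuration conf, Path stagedHFile, Path cfDir)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // Executed as the RS user ("hbase"); HDFS requires WRITE on the
    // source file's parent directory, hence the AccessControlException
    // when that directory is root:hadoop:drwxr-xr-x.
    fs.rename(stagedHFile, new Path(cfDir, stagedHFile.getName()));
  }
}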

Thanks & Regards,
Anil Gupta