HBase >> mail # user >> Dual Hadoop/HBase configuration through same client


Re: Dual Hadoop/HBase configuration through same client
You are right; it is not a proper strategy to try to connect a secure
cluster to a non-secure one. We were just trying to see if we could somehow
connect them without changing the current security configurations
of these existing clusters, but I guess not. Thanks for your input and
replies.

Shahab
On Sat, Apr 27, 2013 at 6:38 PM, Ted Yu <[EMAIL PROTECTED]> wrote:

> Shahab:
> Can you enable Kerberos-based security in the other cluster ?
>
> Exporting information from a secure cluster to an insecure cluster doesn't
> seem right.
>
> Cheers
>
> On Sun, Apr 28, 2013 at 12:54 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
>
> > Interesting lead, thanks.
> >
> > Meanwhile, I was also thinking of using distcp. With the help of hftp we
> > can overcome the Hadoop version mismatch issue as well, but I think the
> > mismatch in security configuration will still be a problem. I tried it as
> > follows, where the source has Kerberos configured and the destination
> > does not, but it failed with the exception below. This was kicked off from
> > the destination server, of course.
> >
> > hadoop distcp hftp://<dfs.http.address>/<path> hdfs://<dfs.http.address>/<path>
> >
> > org.apache.hadoop.ipc.RemoteException(java.io.IOException): Security
> > enabled but user not authenticated by filter
> >   at org.apache.hadoop.ipc.RemoteException.valueOf(RemoteException.java:97)
> >   at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.startElement(HftpFileSyste...
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Apr 27, 2013 at 2:51 AM, Damien Hardy <[EMAIL PROTECTED]>
> > wrote:
> >
> > > Hello
> > >
> > > Maybe you should look at the Export tool's source code, as it can export
> > > HBase data to a distant HDFS space (by setting a full hdfs:// URL in the
> > > command-line option for outputdir)
> > >
> > >
> > > https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
> > >
> > > Cheers,
> > >
> > >
> > > 2013/4/27 Shahab Yunus <[EMAIL PROTECTED]>
> > >
> > > > Thanks Ted for the response. But the issue is that I want to read from
> > > > one cluster and write to another. If I have two clients, how will they
> > > > communicate with each other? Essentially, what I am trying to do here
> > > > is inter-cluster data copy/exchange. Any other ideas or suggestions?
> > > > Even if both clusters have no security, or one has Kerberos, or both
> > > > have authentication, how do we exchange data between them?
> > > >
> > > > I was actually not expecting that I cannot load multiple Hadoop or
> > > > HBase configurations into 2 different Configuration objects in one
> > > > application. As mentioned, I have tried overwriting properties as well,
> > > > but the security/authentication properties get overwritten somehow.
> > > >
> > > > Regards,
> > > > Shahab
> > > >
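For context, the two-Configuration pattern being described can be sketched as below. This is a minimal sketch, not the poster's actual code, and the config file paths are hypothetical. One likely reason the security settings appear to overwrite each other is that Kerberos login state in Hadoop is held statically in UserGroupInformation, so it is process-wide even when each cluster gets its own Configuration object.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DualClusterConfigSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths: site files copied from each cluster.
        // Passing false skips loading the default resources, so each
        // object only sees its own cluster's settings.
        Configuration secureConf = new Configuration(false);
        secureConf.addResource(new Path("/etc/clusterA/core-site.xml"));
        secureConf.addResource(new Path("/etc/clusterA/hdfs-site.xml"));

        Configuration insecureConf = new Configuration(false);
        insecureConf.addResource(new Path("/etc/clusterB/core-site.xml"));
        insecureConf.addResource(new Path("/etc/clusterB/hdfs-site.xml"));

        // Two FileSystem handles can coexist in one JVM...
        FileSystem secureFs = FileSystem.get(secureConf);
        FileSystem insecureFs = FileSystem.get(insecureConf);

        // ...but the Kerberos login held by UserGroupInformation is
        // static (process-wide), which is why mixing a Kerberized and a
        // non-secure cluster in one client tends to fail regardless of
        // how the Configuration objects are separated.
    }
}
```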
> > > >
> > > > On Fri, Apr 26, 2013 at 7:43 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Looks like the easiest solution is to use separate clients, one for
> > > each
> > > > > cluster you want to connect to.
> > > > >
> > > > > Cheers
> > > > >
> > > > > On Sat, Apr 27, 2013 at 6:51 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > This is a follow-up to my previous post a few days back. I am
> > > > > > trying to connect to 2 different Hadoop cluster setups through the
> > > > > > same client, but I am running into the issue that the config of
> > > > > > one overwrites the other.
> > > > > >
> > > > > > The scenario is that I want to read data from an HBase table on
> > > > > > one cluster and write it as a file on HDFS on the other.
> > > > > > Individually, if I try to write to them they both work, but when I
> > > > > > try this through the same Java client, they fail.
> > > > > >
> > > > > > I have tried loading the core-site.xml through the addResource
> > > > > > method of
> >
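For completeness, the end-to-end scenario described in this thread (scan an HBase table on one cluster, write the rows to a file on the other cluster's HDFS) might be sketched as below, using the HTable client API of that era. The table name, file paths, and config locations are all hypothetical, and, as the thread concludes, this only works when the two clusters' security configurations are compatible.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class CrossClusterCopySketch {
    public static void main(String[] args) throws Exception {
        // HBase config for the source cluster (hypothetical resource file).
        Configuration hbaseConf = HBaseConfiguration.create();
        hbaseConf.addResource(new Path("/etc/clusterA/hbase-site.xml"));

        // HDFS config for the destination cluster (hypothetical resource file).
        Configuration hdfsConf = new Configuration();
        hdfsConf.addResource(new Path("/etc/clusterB/core-site.xml"));

        HTable table = new HTable(hbaseConf, "my_table"); // hypothetical table
        FileSystem destFs = FileSystem.get(hdfsConf);
        FSDataOutputStream out = destFs.create(new Path("/tmp/export.txt"));
        try {
            // Full-table scan on cluster A; one row key per output line
            // on cluster B.
            ResultScanner scanner = table.getScanner(new Scan());
            for (Result row : scanner) {
                out.write(row.getRow());
                out.write('\n');
            }
            scanner.close();
        } finally {
            out.close();
            table.close();
        }
    }
}
```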