MapReduce >> mail # user >> Problem with RPC encryption over wire


rab ra 2013-11-13, 12:05
Re: Problem with RPC encryption over wire
"No common protection layer between server and client" likely means the host used for job submission does not have hadoop.rpc.protection=privacy. For QOP to work, all client hosts (DNs and anything else used to access the cluster) must have an identical setting.
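For reference, the setting being described looks like this in core-site.xml, and the value must be identical on the NN/DN hosts and on every client host (the other legal values are "authentication" and "integrity"):

```xml
<!-- core-site.xml: must match on servers and all clients -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
```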

A few quick questions: I'm assuming you mis-posted your configs and the protection setting isn't really commented out?  Your configs don't show security being enabled, but you do have it enabled, correct?  Otherwise QOP shouldn't apply.  Perhaps a bit obvious, but did you restart your NN after changing the QOP?  Since your defaultFS is just "master", are you using HA?
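For context, QOP only takes effect once security is enabled. A minimal sketch of what "enabled" means here, assuming Kerberos (a real deployment also needs the principal and keytab properties, omitted here):

```xml
<!-- core-site.xml: security must be on for hadoop.rpc.protection to apply -->
<property>
  <name>hadoop.security.authentication</name>
  <!-- default is "simple", i.e. no SASL and no QOP negotiation -->
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```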

It's a bit concerning that you aren't consistently receiving the mismatch error. Is the client looping on retries and then timing out after 5 attempts? If so, we've got a major bug: 5 is the default number of RPC reader threads that handle SASL auth, which would mean the protection mismatch is killing off the reader threads and rendering the NN unusable. This shouldn't be possible, but what does your NN log show?

Daryn

On Nov 13, 2013, at 6:05 AM, rab ra <[EMAIL PROTECTED]> wrote:

Hello,

I am facing a problem using the RPC encryption over the wire feature of Hadoop 2.2.0. I have a 3-node cluster.

Services running on node 1 (master):
Resource manager
Namenode
DataNode
SecondaryNamenode

Services running on the slaves (nodes 2 & 3):
NodeManager

I am trying to make data transfer between the master and the slaves secure. For that, I wanted to use the data encryption over the wire (RPC encryption) feature of Hadoop 2.2.0.

When I run the job, I get the exception below:

Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read.
In another run, I saw the following error in the log:

No common protection layer between server and client

I am not sure whether my configuration is in line with what I want to achieve.

Can someone give me a hint on where I am going wrong?

By the way, I have the following configuration settings on all of these nodes.

core-site.xml

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
  </property>
<!--
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
-->
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

</configuration>

hdfs-site.xml
<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
   </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/dfs-2.2.0/name</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/dfs-2.2.0/data</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer.algorithm</name>
    <value>rc4</value>
  </property>

  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>

</configuration>

mapred-site.xml

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
<!--
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>master:8032</value>
  </property>
-->
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

</configuration>
yarn-site.xml

<configuration>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

</configuration>

With thanks and regards
Rab