Problem with RPC encryption over wire
Hello,

I am facing a problem using the RPC encryption over the wire feature of
Hadoop 2.2.0. I have a 3-node cluster.
Services running on node 1 (master):
ResourceManager
NameNode
DataNode
SecondaryNameNode

Services running on the slaves (nodes 2 & 3):
NodeManager

I am trying to make data transfer between the master and the slaves secure.
For that, I want to use the data encryption over the wire (RPC encryption)
feature of Hadoop 2.2.0, as sketched below.
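From my reading of the docs, wire encryption in Hadoop has two separate knobs:
hadoop.rpc.protection in core-site.xml, which sets the SASL quality of
protection (QOP) for RPC, and dfs.encrypt.data.transfer in hdfs-site.xml,
which encrypts the actual block data between clients and DataNodes. If I
understand correctly, enabling both would look roughly like this (a sketch of
my intent, not necessarily a working setup):

  <!-- core-site.xml: SASL QOP for RPC; "privacy" means authentication,
       integrity and encryption -->
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>

  <!-- hdfs-site.xml: encrypt block data on the wire -->
  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>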

When I ran the code, I got the exception below:

Caused by: java.net.SocketTimeoutException: 60000 millis timeout while
waiting for channel to be ready for read.
In another run, I saw the following error in the log:

No common protection layer between server and client
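
From searching around, I gather this SASL error means the client and the
server could not agree on a common quality of protection (QOP), i.e.
hadoop.rpc.protection does not resolve to the same value on both sides. If
that is right, every node (and client) would need a matching value, e.g.:

  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>

In my case this property is commented out in core-site.xml (see below), so I
suspect the two sides may be falling back to different protection levels.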

I am not sure whether my configuration is in line with what I want to achieve.

Can someone give me a hint as to where I am going wrong?

By the way, I have the following configuration settings on all of these nodes.

core-site.xml

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
  </property>
<!--
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
-->
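  <!-- NOTE: I left hadoop.rpc.protection commented out above; as far as I can
       tell it defaults to "authentication" when unset -->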
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

</configuration>

hdfs-site.xml
<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
   </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/dfs-2.2.0/name</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/dfs-2.2.0/data</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer.algorithm</name>
    <value>rc4</value>
  </property>
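  <!-- per the docs, valid values here are "3des" and "rc4"; 3des is the
       default when the property is unset -->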

  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>

</configuration>

mapred-site.xml

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
<!--
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>master:8032</value>
  </property>
-->
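  <!-- I believe the two tasktracker properties below are MRv1 settings and
       are ignored when mapreduce.framework.name is set to yarn -->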
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

</configuration>
yarn-site.xml

<configuration>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

</configuration>

With thanks and regards
Rab