HDFS >> mail # user >> Rack Awareness


Hi,

Try running 'hadoop dfsadmin -refreshNodes'! Your NN might have
cached the previously set values.
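For reference, the refresh plus a quick check of what the NameNode currently believes might look like the fragment below. These commands require a running cluster; `-printTopology` is available on recent Hadoop versions, and note that on some versions a NameNode restart is needed before new rack mappings take effect, since resolved racks are cached when datanodes register.

```shell
# Ask the NameNode to re-read its node lists
hadoop dfsadmin -refreshNodes

# Print the rack -> datanode mapping the NameNode has cached
hadoop dfsadmin -printTopology
```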

Thanks,
On Tue, Mar 26, 2013 at 10:31 AM, preethi ganeshan
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I used this script. In core-site.xml I have set
> net.topology.script.file.name to this file's path. Then I executed the
> script and passed my computer's IP address. It returned /dc1/rack1. However,
> when I ran my MapReduce job, it still says the job ran on default-rack.
> How can I change that?
> Thank you.
> Regards,
> Preethi Ganeshan
>
>
> (I have made the changes to fit my computer.)
>
> #!/bin/bash
> HADOOP_CONF=/etc/hadoop/conf
>
> while [ $# -gt 0 ] ; do
>   nodeArg=$1
>   exec < "${HADOOP_CONF}/topology.data"
>   result=""
>   while read line ; do
>     ar=( $line )
>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>       result="${ar[1]}"
>     fi
>   done
>   shift
>   if [ -z "$result" ] ; then
>     echo -n "/default/rack "
>   else
>     echo -n "$result "
>   fi
> done
>
> Contents of ${HADOOP_CONF}/topology.data:
>
> hadoopdata1.ec.com     /dc1/rack1
> hadoopdata1            /dc1/rack1
> 10.1.1.1               /dc1/rack2