Re: RE: please help in setting hadoop
You've set hadoop.tmp.dir to /home/hadoop/hadoop-${user.name}.

This means that on every node, you're going to need a directory named (e.g.)
/home/hadoop/hadoop-root/, since it seems as though you're running things as
root (in general, not a good policy; but ok if you're on EC2 or something
like that).
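
For reference, that setting lives in conf/hadoop-site.xml and would look roughly
like this (the value is just what you described; adjust it if your file says
something different):

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-${user.name}</value>
  </property>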

mapred.local.dir defaults to ${hadoop.tmp.dir}/mapred/local. You've
confirmed that this exists on the machine named 'master' -- what about on
slave?
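
An easy way to check (the host name 'slave' here is just shorthand for your
actual slave node):

  # does mapred.local.dir exist on the slave, and who owns it?
  ssh slave 'ls -ld /home/hadoop/hadoop-root/mapred/local'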

Then, what are the permissions of /home/hadoop/ on the slave node? Whichever
user started the Hadoop daemons (probably either 'root' or 'hadoop') will
need the ability to mkdir /home/hadoop/hadoop-root underneath
/home/hadoop. If that directory doesn't exist, or is chown'd to someone
else, this will probably be the result.
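
If it is missing or owned by someone else, something along these lines on the
slave should straighten it out (this assumes the daemons really are started as
root; swap in the right user otherwise):

  # check ownership of the parent directory
  ls -ld /home/hadoop
  # create the local dir tree Hadoop expects
  mkdir -p /home/hadoop/hadoop-root/mapred/local
  # make sure the daemon user owns it (root assumed here)
  chown -R root:root /home/hadoop/hadoop-root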

- Aaron
On Thu, Nov 26, 2009 at 10:22 PM, <[EMAIL PROTECTED]> wrote:

> Hi,
>   There should be a folder called logs in $HADOOP_HOME. Also try going
> through
>
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29
>
> This is a pretty good tutorial.
>
> Abhishek Agrawal
>
> SUNY- Buffalo
> (716-435-7122)
>
> On Fri 11/27/09  1:18 AM , "Krishna Kumar" [EMAIL PROTECTED] sent:
> > I have tried, but didn't get any success. Btw, can you please tell me the
> > exact path of the log file I have to refer to?
> >
> >
> > Thanks and Best Regards,
> >
> > Krishna Kumar
> >
> > Senior Storage Engineer
> >
> > Why do we have to die? If we had to die, and everything is gone after
> > that, then nothing else matters on this earth - everything is temporary,
> > at least relative to me.
> >
> >
> >
> >
> > -----Original Message-----
> >
> > From: [EMAIL PROTECTED] [aa225@buffalo.edu]
> > Sent: Friday, November 27, 2009 10:56 AM
> >
> > To: [EMAIL PROTECTED]
> > Subject: Re: please help in setting hadoop
> >
> >
> >
> > Hi,
> >
> > Just a thought, but you do not need to set up the temp directory in
> > conf/hadoop-site.xml, especially if you are running basic examples. Give
> > that a shot, maybe it will work out. Otherwise see if you can find
> > additional info in the logs.
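> >
> > For example, something like this on the node where the daemons run (these
> > are the usual default locations; the exact file names vary with your user
> > and host names):
> >
> >   # list the daemon logs, then look at the JobTracker one
> >   ls $HADOOP_HOME/logs/
> >   tail -n 100 $HADOOP_HOME/logs/*jobtracker*.log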
> >
> >
> >
> > Thank You
> >
> >
> >
> > Abhishek Agrawal
> >
> >
> >
> > SUNY- Buffalo
> >
> > (716-435-7122)
> >
> >
> >
> > On Fri 11/27/09 12:20 AM , "Krishna Kumar" kri[EMAIL PROTECTED] sent:
> > > Dear All,
> > >
> > > Can anybody please help me in getting out from these error messages:
> > >
> > > [ hadoop]# hadoop jar
> > > /usr/lib/hadoop/hadoop-0.18.3-14.cloudera.CH0_3-examples.jar wordcount
> > > test test-op
> > >
> > > 09/11/26 17:15:45 INFO mapred.FileInputFormat: Total input paths to
> > > process : 4
> > > 09/11/26 17:15:45 INFO mapred.FileInputFormat: Total input paths to
> > > process : 4
> > >
> > > org.apache.hadoop.ipc.RemoteException: java.io.IOException: No valid
> > > local directories in property: mapred.local.dir
> > >         at org.apache.hadoop.conf.Configuration.getLocalPath(Configuration.java:730)
> > >         at org.apache.hadoop.mapred.JobConf.getLocalPath(JobConf.java:222)
> > >         at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:194)
> > >         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1557)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >         at java.lang.reflect.Method.invoke(Method.java:585)
> > >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
> > >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:890)
> > >
> > > I am running the hadoop cluster as root user on