Manish Bhoge 2012-03-15, 15:51
Manu S 2012-03-15, 16:01
Manu S 2012-03-15, 16:13
Manish Bhoge 2012-03-15, 16:21
Manu S 2012-03-15, 17:01
Guys, can you please take this up on the CDH-related mailing lists?
On Thu, Mar 15, 2012 at 10:01 AM, Manu S <[EMAIL PROTECTED]> wrote:
> Because for large clusters we have to run the namenode on a single node and
> the datanodes on the other nodes.
> So we can start the namenode and jobtracker on the master node, and the
> datanode and tasktracker on the slave nodes.
> To get more clarity, you can check the service status after starting.
> Verify these (directory, owner:group, permissions):
> dfs.name.dir hdfs:hadoop drwx------
> dfs.data.dir hdfs:hadoop drwx------
> mapred.local.dir mapred:hadoop drwxr-xr-x
> Please follow each step in this link.
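The ownership and permission checks listed above can be sketched as follows. The directory paths here are assumptions for illustration only; substitute the actual values of `dfs.name.dir`, `dfs.data.dir` and `mapred.local.dir` from your `*-site.xml` files.

```shell
#!/bin/sh
# Sketch: print owner:group and the permission string for each Hadoop
# data directory, so they can be compared against the expected values
# (e.g. hdfs:hadoop drwx------ for dfs.name.dir).
# NOTE: these paths are placeholders -- use your configured directories.
for dir in /var/lib/hadoop-0.20/cache/hadoop/dfs/name \
           /var/lib/hadoop-0.20/cache/hadoop/dfs/data \
           /var/lib/hadoop-0.20/cache/mapred/local; do
  if [ -d "$dir" ]; then
    # %U:%G prints owner:group, %A the human-readable permission bits
    stat -c '%U:%G %A %n' "$dir"
  else
    echo "missing: $dir"
  fi
done
```

If the owner or mode differs from the table above, `chown`/`chmod` as the appropriate user before restarting the daemons.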
> On Mar 15, 2012 9:52 PM, "Manish Bhoge" <[EMAIL PROTECTED]>
> > Yes, I understand the order, and I formatted the namenode before starting
> > the services. I suspect there may be an ownership or access issue, but I
> > am not able to nail down the issue exactly. I also have a question: why
> > are there two routes to start the services? When we have the start-all.sh
> > script, why do we need to go to init.d to start the services?
> > Thank you,
> > Manish
> > Sent from my BlackBerry, pls excuse typo
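The two start-up routes being asked about can be sketched as below. This is a sketch assuming a stock CDH3 package layout; the usual difference is that `start-all.sh` launches every daemon as whoever invokes it, while the packaged init scripts switch to the dedicated service users, which is a common source of permission clashes on the data directories.

```shell
# Route 1: the helper script shipped under /usr/lib/hadoop, which starts
# all daemons as the invoking user:
#
#   /usr/lib/hadoop/bin/start-all.sh
#
# Route 2: the packaged init scripts, one per daemon (names assume the
# CDH3 hadoop-0.20 packages; adjust to your install):
#
#   /etc/init.d/hadoop-0.20-namenode start
#   /etc/init.d/hadoop-0.20-datanode start
#   /etc/init.d/hadoop-0.20-jobtracker start
#   /etc/init.d/hadoop-0.20-tasktracker start
```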
> > -----Original Message-----
> > From: Manu S <[EMAIL PROTECTED]>
> > Date: Thu, 15 Mar 2012 21:43:26
> > To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > Reply-To: [EMAIL PROTECTED]
> > Subject: Re: Issue when starting services on CDH3
> > Did you check the service status?
> > Is it like "dead, but pid file exists"?
> > Did you check the ownership and permissions for dfs.name.dir,
> > dfs.data.dir, mapred.local.dir, etc.?
> > The order for starting the daemons is as follows:
> > 1 namenode
> > 2 datanode
> > 3 jobtracker
> > 4 tasktracker
> > Did you format the namenode before starting?
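The sequence above (format the namenode once, then start the daemons in order) can be sketched as a small script. The init-script names are an assumption based on the CDH3 hadoop-0.20 packages; the script defaults to a dry run that only prints the commands, so set DRY_RUN=0 and run it as root to actually start the services.

```shell
#!/bin/sh
# Sketch: start the CDH3 daemons in dependency order via their init scripts.
# DRY_RUN=1 (the default) only echoes the commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
START_ORDER="namenode datanode jobtracker tasktracker"

# Format HDFS once, before the very first start only -- this destroys any
# existing namenode metadata (assumed command, run as the hdfs user):
#   sudo -u hdfs hadoop namenode -format

for daemon in $START_ORDER; do
  cmd="/etc/init.d/hadoop-0.20-$daemon start"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"    # dry run: show what would be executed
  else
    $cmd           # really invoke the init script (requires root)
  fi
done
```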
> > On Mar 15, 2012 9:31 PM, "Manu S" <[EMAIL PROTECTED]> wrote:
> > > Dear Manish,
> > > Which daemons are not starting?
> > >
> > > On Mar 15, 2012 9:21 PM, "Manish Bhoge" <[EMAIL PROTECTED]>
> > > wrote:
> > > >
> > > > I have CDH3 installed in standalone mode, and I have installed all the
> > > > Hadoop components. When I start the services (namenode, secondary
> > > > namenode, jobtracker, tasktracker) from
> > > > /usr/lib/hadoop/bin/start-all.sh, they start gracefully. But when I
> > > > start the same services from /etc/init.d/hadoop-0.20-*, I am unable to
> > > > start them. Why? I also want to start Hue, which is in init.d, and I
> > > > couldn't start that either. I suspect an authentication issue, because
> > > > all the services in init.d are under the root user and root group.
> > > > Please suggest; I am stuck here. I tried Hive and it seems to be
> > > > running fine.
> > > > Thanks
> > > > Manish.
> > > > Sent from my BlackBerry, pls excuse typo
> > > >
> > >
Harsh J 2012-03-15, 19:16
Michael Segel 2012-03-15, 16:43
Manish Bhoge 2012-03-15, 16:05