Re: How to setup Cloudera Hadoop to run everything on a localhost?
I am at a loss. I have added to /etc/hosts the IP address that my node got
by DHCP:
127.0.0.1       localhost
192.168.1.6     node

This has not helped. Cloudera Manager finds this host all right, but still
cannot get a "heartbeat" from it afterwards.
Could the problem be that, at the time of these experiments, I have three
laptops with DHCP-assigned addresses all running at once?
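A note for anyone debugging the same "Failed to receive heartbeat from
agent" error: the agent heartbeats to whichever Cloudera Manager server is
named in its config, so the usual first checks are that the agent is
running and that the hostname resolves to the address you expect. A rough
sketch of those checks on the node itself, assuming the default agent
paths of that era:

    # Is the agent actually running?
    sudo service cloudera-scm-agent status

    # Which server is the agent heartbeating to? 'server_host' here
    # must resolve from this machine.
    grep server_host /etc/cloudera-scm-agent/config.ini

    # Does the hostname resolve to the address set in /etc/hosts?
    hostname -f
    getent hosts node

    # The agent log usually names the exact connection failure.
    tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log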

To make Hadoop work I am now ready to switch from Ubuntu to CentOS, or
should I try something else?
Please let me know which Linux distribution you have managed to run Hadoop
on using only a local host.
On Tue, Mar 5, 2013 at 10:54 PM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> Hi Anton,
>
> Here is what my host is looking like:
> 127.0.0.1       localhost
> 192.168.1.2    myserver
>
> JM
>
> 2013/3/5 anton ashanin <[EMAIL PROTECTED]>:
> > Morgan,
> > Just did exactly as you suggested, my /etc/hosts:
> > 127.0.1.1 node.domain.local node
> >
> > Wiped out, annihilated my previous installation completely and
> > reinstalled everything from scratch.
> > The same problem with CLOUDERA MANAGER (FREE EDITION):
> > "Installation failed.  Failed to receive heartbeat from agent"
> > ((((
> >
> > I will now try the bright idea from Jean-Marc; it looks promising to me.
> >
> >
> >
> > On Tue, Mar 5, 2013 at 10:10 PM, Morgan Reece <[EMAIL PROTECTED]>
> > wrote:
> >>
> >> Don't use 'localhost' as your host name.  For example, if you wanted to
> >> use the name 'node', add another line to your hosts file like:
> >>
> >> 127.0.1.1 node.domain.local node
> >>
> >> Then change all the host references in your configuration files to
> >> 'node' -- also, don't forget to change the master/slave files as well.
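To make that suggestion concrete: the host references usually live in the
site XML files and in the conf/masters and conf/slaves lists. A rough
sketch for a classic Hadoop 1.x layout; the values shown are illustrative
and the exact property names vary by version:

    <!-- core-site.xml: filesystem URI now points at 'node' -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://node:8020</value>
    </property>

    <!-- mapred-site.xml: JobTracker address -->
    <property>
      <name>mapred.job.tracker</name>
      <value>node:8021</value>
    </property>

    # conf/masters and conf/slaves: one hostname per line, here just
    node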
> >>
> >> Now, if you decide to use an external address, it would need to be
> >> static. This is easy to do; just follow this guide
> >> http://www.howtoforge.com/linux-basics-set-a-static-ip-on-ubuntu
> >> and replace '127.0.1.1' with whatever external address you decide on.
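The linked guide boils down to pinning the interface in
/etc/network/interfaces, which on Ubuntu 12.04 looks roughly like the
following; the interface name, address, and gateway are placeholders for
whatever your network actually uses:

    # /etc/network/interfaces -- static address instead of DHCP
    auto eth0
    iface eth0 inet static
        address 192.168.1.6
        netmask 255.255.255.0
        gateway 192.168.1.1

    # apply the change:
    # sudo /etc/init.d/networking restart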
> >>
> >>
> >> On Tue, Mar 5, 2013 at 12:59 PM, Suresh Srinivas <
> >> [EMAIL PROTECTED]> wrote:
> >>>
> >>> Can you please take this to the Cloudera mailing list?
> >>>
> >>>
> >>> On Tue, Mar 5, 2013 at 10:33 AM, anton ashanin <
> >>> [EMAIL PROTECTED]> wrote:
> >>>>
> >>>> I am trying to run all Hadoop servers on a single Ubuntu localhost.
> >>>> All ports are open and my /etc/hosts file is
> >>>>
> >>>> 127.0.0.1   frigate frigate.domain.local    localhost
> >>>> # The following lines are desirable for IPv6 capable hosts
> >>>> ::1     ip6-localhost ip6-loopback
> >>>> fe00::0 ip6-localnet
> >>>> ff00::0 ip6-mcastprefix
> >>>> ff02::1 ip6-allnodes
> >>>> ff02::2 ip6-allrouters
> >>>>
> >>>> When trying to install the cluster, Cloudera Manager fails with the
> >>>> following message:
> >>>>
> >>>> "Installation failed. Failed to receive heartbeat from agent".
> >>>>
> >>>> I run my Ubuntu 12.04 host from home, connected to my provider over a
> >>>> WiFi/dialup modem. What configuration is missing?
> >>>>
> >>>> Thanks!
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> http://hortonworks.com/download/
> >>
> >>
> >
>