Re: How to setup Cloudera Hadoop to run everything on a localhost?

Thread:
  Morgan Reece     2013-03-05, 19:10
  anton ashanin    2013-03-05, 19:47
  anton ashanin    2013-03-05, 21:14
  yibing Shi       2013-03-05, 22:25
  anton ashanin    2013-03-05, 22:56
  Suresh Srinivas  2013-03-05, 23:30
  anton ashanin    2013-03-05, 23:38

I didn't run all the services on a single server, but it doesn't matter,
since the installation is the same no matter how many servers you install
on.

I got the same error as you, and it turned out that CM needs to be able to
resolve the FQDN. I wasn't using DHCP, so it was easier for me to fix. I
guess you might have to set up your DHCP server correctly for CM to find
your FQDN.
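
A quick way to verify what CM will see (a sketch; getent is standard on
Linux, and the names below are only placeholders):

hostname -f                    # should print a fully qualified name, e.g. myhost.my.domain
getent hosts "$(hostname -f)"  # should resolve to the host's real IP, not 127.x.x.x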

On Wed, Mar 6, 2013 at 9:56 AM, anton ashanin <[EMAIL PROTECTED]> wrote:

> Do you run all Hadoop servers on a single host that gets IP by DHCP?
> What do you have in /etc/hosts?
>
> Thanks!
>
>
> On Wed, Mar 6, 2013 at 1:25 AM, yibing Shi <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Anton,
>>
>> Cloudera Manager needs a fully qualified domain name. Run "hostname -f"
>> to check whether you have an FQDN or not.
>>
>> I am not familiar with Ubuntu, but on my CentOS, I just put the FQDN into
>> /etc/sysconfig/network, which then looks like the following:
>> NETWORKING=yes
>> HOSTNAME=myhost.my.domain
>> GATEWAY=10.2.2.254
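>>
>> To apply the change without a reboot, something along these lines should
>> work (a sketch; the service name assumes CentOS 5/6-style networking):
>>
>> hostname myhost.my.domain    # set the name for the running system
>> service network restart      # re-read the network configuration
>> hostname -f                  # confirm the FQDN took effect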
>>
>> On Wed, Mar 6, 2013 at 8:14 AM, anton ashanin <[EMAIL PROTECTED]> wrote:
>>
>>> I am at a loss. I have put into /etc/hosts the IP address that my node got by DHCP:
>>>  127.0.0.1       localhost
>>> 192.168.1.6    node
>>>
>>> This has not helped. Cloudera Manager finds this host all right, but
>>> still cannot get a "heartbeat" from it afterwards.
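>>>
>>> A couple of checks along these lines should show where the heartbeat
>>> dies (a sketch; port 7182 is Cloudera Manager's usual agent port, and
>>> the log path below is the agent's default location):
>>>
>>> hostname -f              # the name the agent reports to the server
>>> telnet 192.168.1.6 7182  # the CM server's agent port should accept this
>>> tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log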
>>> Maybe the problem is that, at the moment of these experiments, I have
>>> three laptops running at once, all with addresses assigned by DHCP?
>>>
>>> To make Hadoop work I am now ready to switch from Ubuntu to CentOS, or
>>> should I try something else?
>>> Please let me know on which Linux version you have managed to run Hadoop
>>> on a localhost only?
>>>
>>>
>>> On Tue, Mar 5, 2013 at 10:54 PM, Jean-Marc Spaggiari <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> Hi Anton,
>>>>
>>>> Here is what my hosts file looks like:
>>>> 127.0.0.1       localhost
>>>> 192.168.1.2    myserver
>>>>
>>>>
>>>> JM
>>>>
>>>> 2013/3/5 anton ashanin <[EMAIL PROTECTED]>:
>>>> > Morgan,
>>>> > Just did exactly as you suggested, my /etc/hosts:
>>>> > 127.0.1.1 node.domain.local node
>>>> >
>>>> > Wiped out, annihilated my previous installation completely and
>>>> > reinstalled everything from scratch.
>>>> > The same problem with CLOUDERA MANAGER (FREE EDITION):
>>>> > "Installation failed.  Failed to receive heartbeat from agent"
>>>> > ((((
>>>> >
>>>> > I will now try the bright idea from Jean; it looks promising to me.
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Mar 5, 2013 at 10:10 PM, Morgan Reece <[EMAIL PROTECTED]>
>>>> > wrote:
>>>> >>
>>>> >> Don't use 'localhost' as your host name.  For example, if you wanted
>>>> >> to use the name 'node', add another line to your hosts file like:
>>>> >>
>>>> >> 127.0.1.1 node.domain.local node
>>>> >>
>>>> >> Then change all the host references in your configuration files to
>>>> >> 'node' -- also, don't forget to change the master/slave files as well.
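>>>> >>
>>>> >> In a CDH4-era setup those references typically live in places like
>>>> >> these (a sketch; the files assume a standard Hadoop configuration
>>>> >> directory, and the ports are the usual defaults):
>>>> >>
>>>> >> core-site.xml:   fs.default.name    = hdfs://node:8020
>>>> >> mapred-site.xml: mapred.job.tracker = node:8021
>>>> >> masters/slaves:  one hostname per line, here just "node"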
>>>> >>
>>>> >> Now, if you decide to use an external address, it would need to be
>>>> >> static. This is easy to do; just follow this guide
>>>> >> http://www.howtoforge.com/linux-basics-set-a-static-ip-on-ubuntu
>>>> >> and replace '127.0.1.1' with whatever external address you decide on.
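>>>> >>
>>>> >> That guide comes down to an /etc/network/interfaces entry roughly like
>>>> >> this (a sketch; the interface name, address, and gateway below are
>>>> >> placeholders for your own network):
>>>> >>
>>>> >> auto eth0
>>>> >> iface eth0 inet static
>>>> >>     address 192.168.1.6
>>>> >>     netmask 255.255.255.0
>>>> >>     gateway 192.168.1.1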
>>>> >>
>>>> >>
>>>> >> On Tue, Mar 5, 2013 at 12:59 PM, Suresh Srinivas <
>>>> >> [EMAIL PROTECTED]> wrote:
>>>> >>>
>>>> >>> Can you please take this to the Cloudera mailing list?
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Mar 5, 2013 at 10:33 AM, anton ashanin <
>>>> >>> [EMAIL PROTECTED]> wrote:
>>>> >>>>
>>>> >>>> I am trying to run all Hadoop servers on a single Ubuntu localhost.
>>>> >>>> All ports are open and my /etc/hosts file is
>>>> >>>>
>>>> >>>> 127.0.0.1   frigate frigate.domain.local    localhost
>>>> >>>> # The following lines are desirable for IPv6 capable hosts