Sqoop >> mail # user >> hadoop namenode problem


Re: hadoop namenode problem
To start HDFS, run:

$ for service in /etc/init.d/hadoop-hdfs-*
> do
> sudo $service start
> done

and to start MapReduce, run:

$ for service in /etc/init.d/hadoop-0.20-mapreduce-*
> do
> sudo $service start
> done
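The loops above rely on shell pathname expansion over the init scripts. A minimal, self-contained sketch of the same pattern (using a scratch directory in place of /etc/init.d, since the exact CDH4 script names installed on a given machine are an assumption; with real scripts you would run `sudo "$service" start` instead of collecting names):

```shell
#!/bin/sh
# Stand-in for /etc/init.d: create fake CDH4-style init script names
# (hypothetical names for illustration) in a temporary directory.
dir=$(mktemp -d)
touch "$dir/hadoop-hdfs-namenode" "$dir/hadoop-hdfs-datanode"

# Same glob-driven loop as above, just collecting the matched names
# instead of invoking each script with "start".
started=""
for service in "$dir"/hadoop-hdfs-*; do
    started="$started $(basename "$service")"
done
echo "would start:$started"

rm -rf "$dir"
```

The glob expands in sorted order, so the datanode script is listed before the namenode script here; on a real host the order of startup generally does not matter for these init scripts, since the daemons retry their connections.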

Regards,
    Mohammad Tariq
On Fri, Jun 15, 2012 at 4:54 PM, soham sardar <[EMAIL PROTECTED]> wrote:
> hey Mohammad,
> I want to know how to start all the Hadoop daemons. In CDH3 there
> was a script, /bin/start-all.sh,
> but in the CDH4 tarballs I don't find any such script.
>
> On Fri, Jun 15, 2012 at 4:39 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>> In both the lines? I mean, your hosts file should look something like this -
>>
>> 127.0.0.1       localhost
>> 127.0.0.1       ubuntu.ubuntu-domain    ubuntu
>>
>> # The following lines are desirable for IPv6 capable hosts
>> ::1     ip6-localhost ip6-loopback
>> fe00::0 ip6-localnet
>> ff00::0 ip6-mcastprefix
>> ff02::1 ip6-allnodes
>> ff02::2 ip6-allrouters
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Fri, Jun 15, 2012 at 4:32 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>> hey Mohammad, but it's already 127.0.0.1, I guess
>>>
>>>
>>> On Fri, Jun 15, 2012 at 4:24 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>> All looks fine to me. Change the line "127.0.1.1" in your hosts file
>>>> to "127.0.0.1" and see if it works for you.
>>>>
>>>> Regards,
>>>>     Mohammad Tariq
>>>>
>>>>
>>>> On Fri, Jun 15, 2012 at 4:14 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>> Configuration in the sense that I have given the following configs:
>>>>>
>>>>> hdfs-site
>>>>>
>>>>> <property>
>>>>>  <name>dfs.replication</name>
>>>>>  <value>1</value>
>>>>>  <description>Default block replication.
>>>>>  The actual number of replications can be specified when the file is created.
>>>>>  The default is used if replication is not specified in create time.
>>>>>  </description>
>>>>> </property>
>>>>>
>>>>> core-site
>>>>>
>>>>> <property>
>>>>>  <name>hadoop.tmp.dir</name>
>>>>>  <value>/app/hadoop/tmp</value>
>>>>>  <description>A base for other temporary directories.</description>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>  <name>fs.default.name</name>
>>>>>  <value>hdfs://localhost:54310</value>
>>>>>  <description>The name of the default file system.  A URI whose
>>>>>  scheme and authority determine the FileSystem implementation.  The
>>>>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>  the FileSystem implementation class.  The uri's authority is used to
>>>>>  determine the host, port, etc. for a filesystem.</description>
>>>>> </property>
>>>>>
>>>>> and yarn-site
>>>>>
>>>>> <property>
>>>>>    <name>yarn.resourcemanager.resource-tracker.address</name>
>>>>>    <value>localhost:8031</value>
>>>>>    <description>host is the hostname of the resource manager and
>>>>>    port is the port on which the NodeManagers contact the Resource Manager.
>>>>>    </description>
>>>>>  </property>
>>>>>
>>>>>  <property>
>>>>>    <name>yarn.resourcemanager.scheduler.address</name>
>>>>>    <value>localhost:8030</value>
>>>>>    <description>host is the hostname of the resourcemanager and port
>>>>> is the port
>>>>>    on which the Applications in the cluster talk to the Resource Manager.
>>>>>    </description>
>>>>>  </property>
>>>>>
>>>>>  <property>
>>>>>    <name>yarn.resourcemanager.scheduler.class</name>
>>>>>    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>>>>>    <description>In case you do not want to use the default
>>>>> scheduler</description>
>>>>>  </property>
>>>>>
>>>>>  <property>
>>>>>    <name>yarn.resourcemanager.address</name>
>>>>>    <value>localhost:8032</value>
>>>>>    <description>the host is the hostname of the ResourceManager and
>>>>> the port is the port on
>>>>>    which the clients can talk to the Resource Manager. </description>
>>>>>  </property>
>>>>>
>>>>>  <property>
>>>>>    <name>yarn.nodemanager.local-dirs</name>
>>>>>    <value></value>