


Re: hadoop namenode problem
Mohammad Tariq 2012-06-18, 07:18
Are you installing CDH4 for the first time, or were you using CDH3
with MRv1? If that is the case, you have to uninstall that first. It
may cause problems.

Regards,
    Mohammad Tariq
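
One quick way to see whether pieces of a packaged CDH3/MRv1 install are still around, assuming a Debian/Ubuntu box (the Ubuntu hosts file further down suggests one), is to ask the package manager:

$ dpkg -l | grep -i hadoop    # hadoop-0.20* entries here would be CDH3/MRv1 leftovers to remove first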
On Mon, Jun 18, 2012 at 11:59 AM, soham sardar
<[EMAIL PROTECTED]> wrote:
> Hey, when I tried that it says "command not found". I want to tell you
> that I have installed via tarball (CDH4), so are there some changes I
> need to apply because of the tarball?
> I badly need to start the nodes...
>
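
The init.d paths in the quoted loop below only exist on a package (apt/yum) install; a tarball unpack starts each daemon from its sbin directory instead. A minimal sketch, assuming a hypothetical unpack location of /usr/local/hadoop with the configs in place:

$ export HADOOP_PREFIX=/usr/local/hadoop                  # hypothetical unpack location, adjust to yours
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode     # HDFS master
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh start datanode     # HDFS storage
$ $HADOOP_PREFIX/sbin/yarn-daemon.sh start resourcemanager
$ $HADOOP_PREFIX/sbin/yarn-daemon.sh start nodemanager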
> On Fri, Jun 15, 2012 at 5:05 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>> to start the hdfs use -
>>
>> $ for service in /etc/init.d/hadoop-hdfs-*
>>> do
>>> sudo $service start
>>> done
>>
>> and to start mapreduce do -
>>
>> $ for service in /etc/init.d/hadoop-0.20-mapreduce-*
>>> do
>>> sudo $service start
>>> done
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
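
Either way, a quick check that the daemons actually came up is jps, which ships with the JDK and lists running JVMs by class name:

$ sudo jps    # expect NameNode, DataNode, SecondaryNameNode plus the MapReduce daemons if startup succeeded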
>> On Fri, Jun 15, 2012 at 4:54 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>> Hey Mohammad,
>>> I want to know how to start all the nodes of Hadoop. In CDH3 there
>>> was a script, /bin/start-all.sh,
>>> but in the CDH4 tarballs I don't find any such script.
>>>
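
The Hadoop 2 tarball that CDH4 is based on does ship combined scripts, just under sbin rather than bin; start-dfs.sh plus start-yarn.sh together cover what start-all.sh used to do. A sketch, again assuming the hypothetical /usr/local/hadoop location and passphraseless ssh to localhost:

$ /usr/local/hadoop/sbin/start-dfs.sh     # namenode, datanodes, secondary namenode
$ /usr/local/hadoop/sbin/start-yarn.sh    # resourcemanager and nodemanagers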
>>> On Fri, Jun 15, 2012 at 4:39 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>> In both the lines? I mean, your hosts file should look something like this -
>>>>
>>>> 127.0.0.1       localhost
>>>> 127.0.0.1       ubuntu.ubuntu-domain    ubuntu
>>>>
>>>> # The following lines are desirable for IPv6 capable hosts
>>>> ::1     ip6-localhost ip6-loopback
>>>> fe00::0 ip6-localnet
>>>> ff00::0 ip6-mcastprefix
>>>> ff02::1 ip6-allnodes
>>>> ff02::2 ip6-allrouters
>>>>
>>>> Regards,
>>>>     Mohammad Tariq
>>>>
>>>>
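
To verify the file resolves the way the daemons will see it, getent queries the system resolver directly:

$ getent hosts localhost ubuntu   # with the entries above, both should print 127.0.0.1
$ hostname -f                     # should print ubuntu.ubuntu-domain, matching the second line above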
>>>> On Fri, Jun 15, 2012 at 4:32 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>> Hey Mohammad, but it's already 127.0.0.1, I guess.
>>>>>
>>>>>
>>>>> On Fri, Jun 15, 2012 at 4:24 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>>>> All looks fine to me. Change the line "127.0.1.1" in your hosts file
>>>>>> to "127.0.0.1" and see if it works for you.
>>>>>>
>>>>>> Regards,
>>>>>>     Mohammad Tariq
>>>>>>
>>>>>>
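
One way to make that change in place, keeping a backup (127.0.1.1 is the entry Ubuntu adds by default for the machine's own hostname):

$ sudo sed -i.bak 's/^127\.0\.1\.1/127.0.0.1/' /etc/hosts   # original saved as /etc/hosts.bak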
>>>>>> On Fri, Jun 15, 2012 at 4:14 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>>>> configuration in the sense i have given the following configs
>>>>>>>
>>>>>>> hdfs-site
>>>>>>>
>>>>>>> <property>
>>>>>>>  <name>dfs.replication</name>
>>>>>>>  <value>1</value>
>>>>>>>  <description>Default block replication.
>>>>>>>  The actual number of replications can be specified when the file is created.
>>>>>>>  The default is used if replication is not specified in create time.
>>>>>>>  </description>
>>>>>>> </property>
>>>>>>>
>>>>>>> core-site
>>>>>>>
>>>>>>> <property>
>>>>>>>  <name>hadoop.tmp.dir</name>
>>>>>>>  <value>/app/hadoop/tmp</value>
>>>>>>>  <description>A base for other temporary directories.</description>
>>>>>>> </property>
>>>>>>>
>>>>>>> <property>
>>>>>>>  <name>fs.default.name</name>
>>>>>>>  <value>hdfs://localhost:54310</value>
>>>>>>>  <description>The name of the default file system.  A URI whose
>>>>>>>  scheme and authority determine the FileSystem implementation.  The
>>>>>>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>>>  the FileSystem implementation class.  The uri's authority is used to
>>>>>>>  determine the host, port, etc. for a filesystem.</description>
>>>>>>> </property>
>>>>>>>
>>>>>>> and yarn-site
>>>>>>>
>>>>>>> <property>
>>>>>>>    <name>yarn.resourcemanager.resource-tracker.address</name>
>>>>>>>    <value>localhost:8031</value>
>>>>>>>    <description>host is the hostname of the resource manager and
>>>>>>>    port is the port on which the NodeManagers contact the Resource Manager.
>>>>>>>    </description>
>>>>>>>  </property>
>>>>>>>
>>>>>>>  <property>
>>>>>>>    <name>yarn.resourcemanager.scheduler.address</name>
>>>>>>>    <value>localhost:8030</value>
>>>>>>>    <description>host is the hostname of the resourcemanager and port
>>>>>>>    is the port on which the Applications in the cluster talk to the Resource Manager.
>>>>>>>    </description>
>>>>>>>  </property>
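
A quick way to confirm a client actually picks up these values, assuming the CDH4 hdfs command is on the PATH:

$ hdfs getconf -confKey fs.default.name   # should print hdfs://localhost:54310
$ hdfs getconf -confKey hadoop.tmp.dir    # should print /app/hadoop/tmp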