Sematext search-lucene.com search-hadoop.com
Sqoop >> mail # user >> hadoop namenode problem


Re: hadoop namenode problem
Soham, it seems to me that your base directories haven't been created
properly. Stop all the Hadoop-related processes and issue these
commands -

$ sudo rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs
$ sudo mkdir -p /var/lib/hadoop-0.20/cache/hadoop/dfs/{name,data}
$ sudo chown hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs/{name,data}
$ sudo -u hdfs hadoop namenode -format

It should work.

Regards,
    Mohammad Tariq
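
The mkdir/chown steps above just lay out the NameNode and DataNode directories side by side. The same layout can be sketched in a scratch directory, with no root or hdfs user needed (the /var/lib path and the hdfs user are specific to a packaged CDH install):

```shell
# Recreate the dfs directory layout under a temporary root instead of
# /var/lib/hadoop-0.20 (sketch only - paths mirror the commands above):
ROOT=$(mktemp -d)
mkdir -p "$ROOT/cache/hadoop/dfs"/{name,data}
# Both the name (NameNode metadata) and data (DataNode blocks) dirs now exist:
ls "$ROOT/cache/hadoop/dfs"
```

On the real box the chown to hdfs:hdfs matters because the daemons run as the hdfs user and will refuse to start if they cannot write these directories.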
On Mon, Jun 18, 2012 at 12:52 PM, soham sardar
<[EMAIL PROTECTED]> wrote:
> Yeah, I was using CDH3, and then I removed all the nodes and everything
> completely, so as to try CDH4, and Hue more specifically.
>
>
> On Mon, Jun 18, 2012 at 12:48 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>> Are you installing CDH4 for the first time, or were you using CDH3
>> with MRv1?? If that is the case, you have to uninstall that first; it
>> may cause problems.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Mon, Jun 18, 2012 at 11:59 AM, soham sardar
>> <[EMAIL PROTECTED]> wrote:
>>> Hey, when I tried that it said "command not found". I should tell you
>>> that I installed via tarball (CDH4), so are there some changes I
>>> need to make because of the tarball???
>>> I badly need to start the nodes...
>>>
>>> On Fri, Jun 15, 2012 at 5:05 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>> To start HDFS, use -
>>>>
>>>> $ for service in /etc/init.d/hadoop-hdfs-*
>>>>> do
>>>>> sudo $service start
>>>>> done
>>>>
>>>> and to start MapReduce, do -
>>>>
>>>> $ for service in /etc/init.d/hadoop-0.20-mapreduce-*
>>>>> do
>>>>> sudo $service start
>>>>> done
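
Those two loops just glob every init script matching the prefix and start it in turn. The pattern can be sketched against dummy script names (the /etc/init.d paths assume a packaged install, not a tarball one):

```shell
# Stand-in for /etc/init.d: dummy files matching the hadoop-hdfs-* glob
DIR=$(mktemp -d)
touch "$DIR/hadoop-hdfs-namenode" "$DIR/hadoop-hdfs-datanode" \
      "$DIR/hadoop-hdfs-secondarynamenode"
for service in "$DIR"/hadoop-hdfs-*; do
  # on a real system this line would be: sudo $service start
  echo "start: $(basename "$service")"
done
```

If the glob matches nothing (e.g. on a tarball install with no init scripts), the unquoted pattern is passed through literally and the `start` fails with "command not found" - which is consistent with the error reported later in this thread.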
>>>>
>>>> Regards,
>>>>     Mohammad Tariq
>>>>
>>>>
>>>> On Fri, Jun 15, 2012 at 4:54 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>> Hey Mohammad,
>>>>> I want to know how to start all the Hadoop nodes. In CDH3 there
>>>>> was a script, /bin/start-all.sh,
>>>>> but in the CDH4 tarballs I don't find any such script??
>>>>>
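
A tarball install indeed has no /etc/init.d scripts; in the Hadoop 2 layout that CDH4 is based on, the per-daemon scripts live under sbin/ in the unpacked distribution. A sketch as a dry run, with a hypothetical HADOOP_HOME (drop the echo to actually start the daemons, and adjust the path to wherever the tarball was unpacked):

```shell
# Dry run: print the commands a tarball install would use to start HDFS
# daemons (HADOOP_HOME below is a hypothetical unpack location):
HADOOP_HOME=/opt/hadoop-2.0.0-cdh4.0.0
for daemon in namenode datanode secondarynamenode; do
  echo "$HADOOP_HOME/sbin/hadoop-daemon.sh start $daemon"
done
```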
>>>>> On Fri, Jun 15, 2012 at 4:39 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>>>> In both the lines??? I mean, your hosts file should look something like this -
>>>>>>
>>>>>> 127.0.0.1       localhost
>>>>>> 127.0.0.1       ubuntu.ubuntu-domain    ubuntu
>>>>>>
>>>>>> # The following lines are desirable for IPv6 capable hosts
>>>>>> ::1     ip6-localhost ip6-loopback
>>>>>> fe00::0 ip6-localnet
>>>>>> ff00::0 ip6-mcastprefix
>>>>>> ff02::1 ip6-allnodes
>>>>>> ff02::2 ip6-allrouters
>>>>>>
>>>>>> Regards,
>>>>>>     Mohammad Tariq
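
The usual Ubuntu pitfall behind hosts-file layouts like the one above is a 127.0.1.1 entry for the machine's hostname, which Hadoop daemons tend to choke on. A small sketch that flags it, run here against a sample file (point HOSTS at the real /etc/hosts on the box):

```shell
# Sample hosts file with the problematic Ubuntu-default 127.0.1.1 entry
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1       localhost
127.0.1.1       ubuntu.ubuntu-domain    ubuntu
EOF
# Flag the line that usually needs changing to 127.0.0.1:
if grep -q '^127\.0\.1\.1' "$HOSTS"; then
  echo "found 127.0.1.1 - change it to 127.0.0.1"
fi
```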
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 15, 2012 at 4:32 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>>>> Hey Mohammad, but it's already 127.0.0.1, I guess.
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 15, 2012 at 4:24 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>>>>>> It all looks fine to me. Change the line "127.0.1.1" in your hosts file
>>>>>>>> to "127.0.0.1" and see if it works for you.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>     Mohammad Tariq
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jun 15, 2012 at 4:14 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>>>>>>>> By configuration I mean that I have given the following configs:
>>>>>>>>>
>>>>>>>>> hdfs-site
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>>  <name>dfs.replication</name>
>>>>>>>>>  <value>1</value>
>>>>>>>>>  <description>Default block replication.
>>>>>>>>>  The actual number of replications can be specified when the file is created.
>>>>>>>>>  The default is used if replication is not specified at create time.
>>>>>>>>>  </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> core-site
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>>  <name>hadoop.tmp.dir</name>
>>>>>>>>>  <value>/app/hadoop/tmp</value>
>>>>>>>>>  <description>A base for other temporary directories.</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>>  <name>fs.default.name</name>
>>>>>>>>>  <value>hdfs://localhost:54310</value>
>>>>>>>>>  <description>The name of the default file system.  A URI whose
>>>>>>>>>  scheme and authority determine the FileSystem implementation.  The