Hadoop >> mail # user >> Re: Hadoop 2.2.0 from source configuration


Re: Hadoop 2.2.0 from source configuration
Adam,

here is the link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

Then, since it didn't work, I tried a number of things, but my configuration
files are really skinny and there isn't much in them.

-----------------
Daniel Savard
2013/12/3 Adam Kawa <[EMAIL PROTECTED]>

> Could you please send me a link to the documentation that you followed to
> setup your single-node cluster?
> I will go through it and do it step by step, so hopefully at the end your
> issue will be solved and the documentation will be improved.
>
> If you have any non-standard settings in core-site.xml, hdfs-site.xml and
> hadoop-env.sh (that were not suggested by the documentation that you
> followed), then please share them.
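For reference, a minimal single-node configuration along the lines of the SingleCluster guide linked above would look like the following. The port and replication values are the guide's defaults, not taken from this thread:

```xml
<!-- core-site.xml : minimal single-node setup (values follow the
     Hadoop 2.2.0 SingleCluster guide, not this thread) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml : a single node, so one replica is enough -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```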
>
>
> 2013/12/3 Daniel Savard <[EMAIL PROTECTED]>
>
>> Adam,
>>
>> that's not the issue; I did substitute the name in the first report. The
>> actual hostname is feynman.cids.ca.
>>
>> -----------------
>> Daniel Savard
>>
>>
>> 2013/12/3 Adam Kawa <[EMAIL PROTECTED]>
>>
>>> Daniel,
>>>
>>> I see that in the previous hdfs report you had hosta.subdom1.tld1, but
>>> now you have feynman.cids.ca. What are the contents of your /etc/hosts
>>> file, and the output of the $ hostname command?
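The check being asked for can be sketched as below; these are standard Linux commands, not quoted from the thread:

```shell
# Print the machine's hostname; this should match what the NameNode
# reports (feynman.cids.ca in this thread).
hostname

# Show how that name resolves. On a single-node setup, /etc/hosts should
# map it to a reachable address; a name bound only to 127.0.0.1 can
# confuse daemons that advertise it to clients.
getent hosts "$(hostname)"
```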
>>>
>>>
>>>
>>>
>>> 2013/12/3 Daniel Savard <[EMAIL PROTECTED]>
>>>
>>>> I did that more than once; I just retried it from the beginning. I zapped
>>>> the directories, recreated them with hdfs namenode -format, restarted
>>>> HDFS, and I am still getting the very same error.
>>>>
>>>> I have posted the report previously. Is there anything in this report
>>>> that indicates I don't have enough free space somewhere? That's the only
>>>> cause I can see for this problem after everything I have read on the
>>>> subject. I am new to Hadoop and I just want to set up a standalone node to
>>>> experiment with for a while before going ahead with a complete
>>>> cluster.
>>>>
>>>> I repost the report for convenience:
>>>>
>>>> Configured Capacity: 2939899904 (2.74 GB)
>>>> Present Capacity: 534421504 (509.66 MB)
>>>> DFS Remaining: 534417408 (509.66 MB)
>>>>
>>>> DFS Used: 4096 (4 KB)
>>>> DFS Used%: 0.00%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 1 (1 total, 0 dead)
>>>>
>>>> Live datanodes:
>>>> Name: 127.0.0.1:50010 (feynman.cids.ca)
>>>> Hostname: feynman.cids.ca
>>>> Decommission Status : Normal
>>>> Configured Capacity: 2939899904 (2.74 GB)
>>>>
>>>> DFS Used: 4096 (4 KB)
>>>> Non DFS Used: 2405478400 (2.24 GB)
>>>> DFS Remaining: 534417408 (509.66 MB)
>>>> DFS Used%: 0.00%
>>>> DFS Remaining%: 18.18%
>>>> Last contact: Tue Dec 03 13:37:02 EST 2013
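The figures in the report are internally consistent, as a quick check confirms (plain arithmetic, not from the thread):

```python
# Capacity figures from the dfsadmin report above, in bytes
configured = 2939899904   # Configured Capacity
non_dfs    = 2405478400   # Non DFS Used
dfs_used   = 4096         # DFS Used

# Present Capacity = Configured Capacity - Non DFS Used
print(configured - non_dfs)             # 534421504 (509.66 MB)

# DFS Remaining = Present Capacity - DFS Used
print(configured - non_dfs - dfs_used)  # 534417408 (509.66 MB)
```

So only about 510 MB is actually available to HDFS on this node, which is what makes the free-space question worth asking.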
>>>>
>>>>
>>>> -----------------
>>>> Daniel Savard
>>>>
>>>>
>>>> 2013/12/3 Adam Kawa <[EMAIL PROTECTED]>
>>>>
>>>>> Daniel,
>>>>>
>>>>> It looks like you can only communicate with the NameNode to do
>>>>> "metadata-only" operations (e.g. listing, creating a directory, or an empty file)...
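The distinction can be reproduced from the command line; a hypothetical session (the paths are illustrative, not from the thread):

```shell
# Metadata-only operations go through the NameNode alone and succeed:
hdfs dfs -mkdir -p /tmp/smoketest
hdfs dfs -touchz /tmp/smoketest/empty   # zero-byte file: no block write needed
hdfs dfs -ls /tmp/smoketest

# Writing actual data requires a DataNode to accept the block;
# this is the step that fails in the situation described:
echo "hello" > localfile.txt
hdfs dfs -put localfile.txt /tmp/smoketest/
```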
>>>>>
>>>>> Did you format the NameNode correctly?
>>>>> A quite similar issue is described here:
>>>>> http://www.manning-sandbox.com/thread.jspa?messageID=126741. The last
>>>>> reply says: "The most common cause is that you have reformatted the
>>>>> namenode, leaving it in an inconsistent state. The most common solution is
>>>>> to stop dfs, remove the contents of the dfs directories on all the
>>>>> machines, run “hadoop namenode -format” on the controller, then restart
>>>>> dfs. That consistently fixes the problem for me. This may be serious
>>>>> overkill, but it works."
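On Hadoop 2.x, the quoted recipe translates roughly to the following. This is a sketch only: it assumes HADOOP_HOME is set and that the dfs directories are in their default location under /tmp; adjust the paths to whatever dfs.namenode.name.dir and dfs.datanode.data.dir point at in your setup.

```shell
# Stop the HDFS daemons
"$HADOOP_HOME/sbin/stop-dfs.sh"

# Remove the NameNode and DataNode storage directories on every machine
# (default single-node location shown; adjust if you configured others)
rm -rf /tmp/hadoop-"$USER"/dfs/name /tmp/hadoop-"$USER"/dfs/data

# Reformat -- "hdfs namenode -format" is the 2.x form of the quoted
# "hadoop namenode -format"
hdfs namenode -format

# Restart HDFS
"$HADOOP_HOME/sbin/start-dfs.sh"
```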
>>>>>
>>>>>
>>>>> 2013/12/3 Daniel Savard <[EMAIL PROTECTED]>
>>>>>
>>>>>> Thanks Arun,
>>>>>>
>>>>>> I already read and did everything recommended at the referenced URL.
>>>>>> There isn't any error message in the logfiles. The only error message
>>>>>> appears when I try to put a non-zero file on HDFS, as posted above.
>>>>>> Beside that, absolutely nothing in the logs is telling me something is