HDFS, mail # user - Why total node just 1


Martinus Martinus 2012-01-02, 03:23
Prashant Sharma 2012-01-02, 03:33
Harsh J 2012-01-02, 03:34
Martinus Martinus 2012-01-02, 06:15
Martinus Martinus 2012-01-02, 06:16
Bharath Mundlapudi 2012-01-02, 18:20
Martinus Martinus 2012-01-04, 08:56
Harsh J 2012-01-04, 09:19
Re: Why total node just 1
Bharath Mundlapudi 2012-01-04, 17:45
Hi Martinus,

As Harsh mentioned, HA is under development.

A couple of things you can do for a hot-cold setup are:

1. Configure multiple directories for ${dfs.name.dir}
2. Place ${dfs.name.dir} on a RAID 1 mirror setup
3. Use an NFS mount as one of the ${dfs.name.dir} directories
-Bharath
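
A hot-cold ${dfs.name.dir} setup like the one Bharath describes could be sketched in hdfs-site.xml roughly as follows; the paths below are placeholders, assuming one local disk, a second disk (or RAID 1 mirror), and an NFS mount:

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- Comma-separated list: the namenode writes its image and edit log
       to every listed directory, so losing any single copy is survivable. -->
  <value>/data/1/dfs/nn,/data/2/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```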

On Wed, Jan 4, 2012 at 1:19 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> Martinus,
>
> The High-Availability NameNode is being worked on, and an initial
> version will be out soon. Check the
> https://issues.apache.org/jira/browse/HDFS-1623 JIRA for its
> state and discussions.
>
> You can also clone the Hadoop repo and switch to branch 'HDFS-1623' to
> give it a whirl, though it is still a work in progress.
>
> For now, we recommend using multiple ${dfs.name.dir} directories
> (across mounts), preferably one of them being a reliable-enough NFS
> point.
>
> On Wed, Jan 4, 2012 at 2:26 PM, Martinus Martinus <[EMAIL PROTECTED]>
> wrote:
> > Hi Bharath,
> >
> > Thanks for your answer. I remember Hadoop has a single point of
> > failure, which is its NameNode. Is there a way to make my Hadoop
> > cluster fault tolerant, even when the master node (NameNode) fails?
> >
> >
> > Thanks and Happy New Year 2012.
> >
> > On Tue, Jan 3, 2012 at 2:20 AM, Bharath Mundlapudi <[EMAIL PROTECTED]
> >
> > wrote:
> >>
> >> You might want to check the datanode logs. Go to the 3 remaining
> >> nodes where the datanode didn't start and restart it.
> >>
> >> -Bharath
> >>
> >>
> >> On Sun, Jan 1, 2012 at 7:23 PM, Martinus Martinus <
> [EMAIL PROTECTED]>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have set up a Hadoop cluster with 4 nodes. I ran start-all.sh and
> >>> checked every node; the tasktracker and datanode are running on
> >>> each, but when I run hadoop dfsadmin -report it says:
> >>>
> >>> Configured Capacity: 30352158720 (28.27 GB)
> >>> Present Capacity: 3756392448 (3.5 GB)
> >>> DFS Remaining: 3756355584 (3.5 GB)
> >>> DFS Used: 36864 (36 KB)
> >>> DFS Used%: 0%
> >>> Under replicated blocks: 1
> >>> Blocks with corrupt replicas: 0
> >>> Missing blocks: 0
> >>>
> >>> -------------------------------------------------
> >>> Datanodes available: 1 (1 total, 0 dead)
> >>>
> >>> Name: 192.168.1.1:50010
> >>> Decommission Status : Normal
> >>> Configured Capacity: 30352158720 (28.27 GB)
> >>> DFS Used: 36864 (36 KB)
> >>> Non DFS Used: 26595766272 (24.77 GB)
> >>> DFS Remaining: 3756355584(3.5 GB)
> >>> DFS Used%: 0%
> >>> DFS Remaining%: 12.38%
> >>> Last contact: Mon Jan 02 11:19:44 CST 2012
> >>>
> >>> Why is only 1 total node available? How can I fix this problem?
> >>>
> >>> Thanks.
> >>
> >>
> >
>
>
>
> --
> Harsh J
>
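
As an aside, the live-node count in a `hadoop dfsadmin -report` dump like the one quoted above can be checked programmatically. A minimal sketch, assuming the 2012-era report format shown in the thread (the function name and sample text are illustrative, not part of any Hadoop API):

```python
import re

def count_datanodes(report):
    """Parse the 'Datanodes available' line of a `hadoop dfsadmin -report`
    dump and return (available, total, dead).

    Assumes the 2012-era format shown in the thread above; returns None
    if no such line is found."""
    m = re.search(
        r"Datanodes available:\s*(\d+)\s*\((\d+)\s*total,\s*(\d+)\s*dead\)",
        report,
    )
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

sample = """
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
"""

# With 4 nodes configured but only 1 reporting, the gap points at
# datanodes that never registered with the namenode (check their logs).
available, total, dead = count_datanodes(sample)
print(available, total, dead)  # prints: 1 1 0
```

Comparing `total` against the number of machines you configured is a quick way to spot datanodes that are running locally but never joined the cluster.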
Martinus Martinus 2012-01-05, 08:06