RE: Total Space Available on Hadoop Cluster Or Hadoop version of "df".
Rahul,

There is a ton of documentation available for Hadoop (including books).

Best place to start is the wiki: http://wiki.apache.org/hadoop/

On your specific issue: you need to configure Hadoop to tell it which directories to use for storing data.

The configuration property is 'dfs.data.dir'; set it to a comma-separated list of the directories where HDFS should store block data.
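For example (a sketch only; the mount points below are hypothetical), in conf/hdfs-site.xml on each datanode:

```
<!-- hdfs-site.xml: list one directory per local disk; paths are examples -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data</value>
</property>
```

HDFS spreads blocks across all listed directories, so each additional mount adds its capacity to the cluster. Restart the datanode after changing this property.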

JG
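On the original question (a Hadoop equivalent of "df"), the commands below are a sketch; exact output format varies by release:

```
# Cluster-wide capacity and per-datanode usage (as Glenn suggested below)
hadoop dfsadmin -report

# In more recent releases, FsShell also offers a df-like summary
hadoop fs -df
```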

> -----Original Message-----
> From: rahul [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, October 02, 2010 9:53 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Total Space Available on Hadoop Cluster Or Hadoop version
> of "df".
>
> Hi Marcos,
>
> Same thing is happening for me as well.
>
> I have multiple disks mounted on my system, but by default when I
> formatted HDFS it only used the disk on which the Hadoop binary is present.
>
> Is there a way I can use all the drives mounted on my
> system?
>
> In other words, can we control which drives or locations are
> formatted for HDFS?
>
> Thanks,
> Rahul
>
> On Oct 2, 2010, at 7:39 AM, Marcos Pinto wrote:
>
> > I got the same problem; I remember it was something related to the
> > user's partition.
> > For example, I created a hadoop user, so HDFS took the partition
> > closest to that user.
> > I don't remember exactly, but it was something like that. I hope it
> > helps you in some way.
> >
> > On Sat, Oct 2, 2010 at 2:13 AM, Glenn Gore
> <[EMAIL PROTECTED]>wrote:
> >
> >> hadoop dfsadmin -report
> >>
> >> Regards
> >>
> >> Glenn
> >>
> >>
> >> -----Original Message-----
> >> From: rahul [mailto:[EMAIL PROTECTED]]
> >> Sent: Sat 10/2/2010 2:27 PM
> >> To: [EMAIL PROTECTED]
> >> Subject: Total Space Available on Hadoop Cluster Or Hadoop version
> of "df".
> >>
> >> Hi,
> >>
> >> I am using Hadoop 0.20.2 for data processing, with a Hadoop
> >> cluster set up on two nodes.
> >>
> >> And I am continuously adding more space to the nodes.
> >>
> >> Can somebody let me know how to get the total space available on
> >> the Hadoop cluster from the command line?
> >>
> >> or
> >>
> >> In other words, a Hadoop version of the Unix "df" command.
> >>
> >> Any input is helpful.
> >>
> >> Thanks
> >> Rahul
> >>
> >>