

Re: I can't see this email ... So to clarify ..
Is /cs/student/mark/ on the *shared* NFS volume you mentioned in your
original post? If so, all the nodes would be trying to use the exact same
directory.

Luca
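
Independent of the NFS question, the "Incompatible namespaceIDs" error quoted
below usually means the DataNode's storage directory was initialised against a
NameNode that has since been re-formatted, so the IDs recorded on disk no
longer match. A rough recovery sketch, assuming the DataNode storage at
/tmp/hadoop-mark/dfs/data holds no blocks worth keeping:

$ bin/hadoop-daemon.sh stop datanode     # on the affected node
$ rm -rf /tmp/hadoop-mark/dfs/data       # discard the stale storage directory
$ bin/hadoop-daemon.sh start datanode    # it re-registers and recreates dfs/data

Alternatively, the namespaceID recorded in
/tmp/hadoop-mark/dfs/data/current/VERSION can be edited by hand to match the
one under the NameNode's dfs.name.dir, which avoids wiping the directory.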

On May 25, 2011 08:22:50 Mark question wrote:
> I do ...
>
>  $ ls -l /cs/student/mark/tmp/hodhod
> total 4
> drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs
>
> and ..
>
> $ ls -l /tmp/hadoop-mark
> total 4
> drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs
>
> $ ls -l /tmp/hadoop-maha/dfs/name/       <<<< only name is created here, no data
>
> Thanks again,
> Mark
>
> On Tue, May 24, 2011 at 9:26 PM, Mapred Learn <[EMAIL PROTECTED]>wrote:
> > Do you have the right permissions on the new dirs?
> > Try stopping and starting the cluster...
> >
> > -JJ
> >
> > On May 24, 2011, at 9:13 PM, Mark question <[EMAIL PROTECTED]> wrote:
> > > Well, you're right ... moving it to hdfs-site.xml had an effect, at least.
> > > But now I'm hitting the incompatible-namespaceIDs error:
> > >
> > > WARN org.apache.hadoop.hdfs.server.common.Util: Path
> > > /tmp/hadoop-mark/dfs/data should be specified as a URI in configuration
> > > files. Please update hdfs configuration.
> > > java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-mark/dfs/data
> > >
> > > My configuration for this part in hdfs-site.xml:
> > > <configuration>
> > >   <property>
> > >     <name>dfs.data.dir</name>
> > >     <value>/tmp/hadoop-mark/dfs/data</value>
> > >   </property>
> > >   <property>
> > >     <name>dfs.name.dir</name>
> > >     <value>/tmp/hadoop-mark/dfs/name</value>
> > >   </property>
> > >   <property>
> > >     <name>hadoop.tmp.dir</name>
> > >     <value>/cs/student/mark/tmp/hodhod</value>
> > >   </property>
> > > </configuration>
> > >
> > > The reason I want to change hadoop.tmp.dir is that the student quota
> > > under /tmp is small, so I wanted to use /cs/student for hadoop.tmp.dir
> > > instead.
> > >
> > > Thanks,
> > > Mark
> > >
> > > On Tue, May 24, 2011 at 7:25 PM, Joey Echeverria <[EMAIL PROTECTED]> wrote:
> > >> Try moving the configuration to hdfs-site.xml.
> > >>
> > >> One word of warning, if you use /tmp to store your HDFS data, you risk
> > >> data loss. On many operating systems, files and directories in /tmp
> > >> are automatically deleted.
> > >>
> > >> -Joey
> > >>
> > >> On Tue, May 24, 2011 at 10:22 PM, Mark question <[EMAIL PROTECTED]> wrote:
> > >>> Hi guys,
> > >>>
> > >>> I'm using an NFS cluster consisting of 30 machines, but only specified 3
> > >>> of the nodes to be my Hadoop cluster. So my problem is this: the DataNode
> > >>> won't start on one of the nodes because of the following error:
> > >>>
> > >>> org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
> > >>> /cs/student/mark/tmp/hodhod/dfs/data. The directory is already locked
> > >>>
> > >>> I think it's because of the NFS property which allows one node to lock it,
> > >>> and then the second node can't lock it. So I had to change the following
> > >>> configuration:
> > >>>
> > >>>     dfs.data.dir to be "/tmp/hadoop-user/dfs/data"
> > >>>
> > >>> But this configuration is overwritten by ${hadoop.tmp.dir}/dfs/data, where
> > >>> my hadoop.tmp.dir = "/cs/student/mark/tmp" as you might guess from above.
> > >>>
> > >>> Where is this configuration over-written? I thought my core-site.xml has
> > >>> the final configuration values.
> > >>>
> > >>> Thanks,
> > >>> Mark
> > >>
> > >> --
> > >> Joseph Echeverria
> > >> Cloudera, Inc.
> > >> 443.305.9434
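
Regarding the "where is this overwritten?" question: it is most likely just the
stock defaults. hdfs-default.xml defines the storage directories relative to
hadoop.tmp.dir, roughly like this (excerpt from memory, 0.20.x era):

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data</value>
</property>

The HDFS daemons load hdfs-default.xml and hdfs-site.xml in addition to
core-site.xml, so a dfs.data.dir set only in core-site.xml can end up shadowed
by that default, which would explain why moving it into hdfs-site.xml finally
took effect. And as long as hadoop.tmp.dir points at the shared NFS mount, any
node still falling back to these defaults will try to lock the very same
dfs/data directory, which is exactly the "directory is already locked" failure
at the start of the thread.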

--
Luca Pireddu
CRS4 - Distributed Computing Group
Loc. Pixina Manna Edificio 1
Pula 09010 (CA), Italy
Tel:  +39 0709250452