Hadoop >> mail # user >> hadoop under cygwin issue


Re: hadoop under cygwin issue
Brian, it looks like you missed a step in the instructions. You need to
format the HDFS filesystem instance before starting the NameNode server:

$ bin/hadoop namenode -format

... then you can run bin/start-dfs.sh
Hope this helps,
- Aaron
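
The sequence Aaron describes can be sketched as a small shell check. The `HADOOP_HOME` default and the name-directory path are illustrative assumptions derived from Brian's `hadoop.tmp.dir` setting, not values the thread confirms:

```shell
# Assumed install location and name directory -- adjust for your setup.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop-0.19.2}
NAME_DIR="/cygwin/tmp/hadoop-$USER/dfs/name"

# The NameNode aborts with InconsistentFSStateException when its storage
# directory has never been created, so format once before the first start.
# Note: formatting is destructive on an already-populated HDFS instance.
if [ ! -d "$NAME_DIR/current" ]; then
    echo "Name directory not formatted yet: $NAME_DIR"
    "$HADOOP_HOME/bin/hadoop" namenode -format
fi

# Only after a successful format:
"$HADOOP_HOME/bin/start-dfs.sh"
```

After the format step succeeds, the `dfs/name/current` directory exists and the NameNode can load its FSImage on startup.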
On Sat, Jan 30, 2010 at 12:27 AM, Brian Wolf <[EMAIL PROTECTED]> wrote:

>
> Hi,
>
> I am trying to run Hadoop 0.19.2 under cygwin as per directions on the
> hadoop "quickstart" web page.
>
> I know sshd is running and I can "ssh localhost" without a password.
>
> This is from my hadoop-site.xml
>
> <configuration>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/cygwin/tmp/hadoop-${user.name}</value>
> </property>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://localhost:9000</value>
> </property>
> <property>
> <name>mapred.job.tracker</name>
> <value>localhost:9001</value>
> </property>
> <property>
> <name>mapred.job.reuse.jvm.num.tasks</name>
> <value>-1</value>
> </property>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> <property>
> <name>dfs.permissions</name>
> <value>false</value>
> </property>
> <property>
> <name>webinterface.private.actions</name>
> <value>true</value>
> </property>
> </configuration>
>
> These are errors from my log files:
>
>
> 2010-01-30 00:03:33,091 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
> 2010-01-30 00:03:33,121 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
> 2010-01-30 00:03:33,161 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2010-01-30 00:03:33,181 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> 2010-01-30 00:03:34,603 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=brian,None,Administrators,Users
> 2010-01-30 00:03:34,603 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2010-01-30 00:03:34,603 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
> 2010-01-30 00:03:34,653 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
> 2010-01-30 00:03:34,653 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
> 2010-01-30 00:03:34,803 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory C:\cygwin\tmp\hadoop-brian\dfs\name does not exist.
> 2010-01-30 00:03:34,813 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\cygwin\tmp\hadoop-brian\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:278)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:309)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
> 2010-01-30 00:03:34,823 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
>
>
>
>
>
> ========================================================>
> 2010-01-29 15:13:30,270 INFO org.apache.hadoop.ipc.Client: Retrying connect