Pig >> mail # user >> ERROR 2999: Unexpected internal error. Failed to create DataStorage


Earlier messages in this thread:
- kiranprasad (2011-09-07, 08:55)
- Marek Miglinski (2011-09-07, 08:59)
- kiranprasad (2011-09-07, 11:23)
- Marek Miglinski (2011-09-07, 11:39)
- kiranprasad (2011-09-07, 14:16)

Re: ERROR 2999: Unexpected internal error. Failed to create DataStorage
Kiran,

I guess your problem is the config file. You have a different value for
fs.default.name in hdfs-site.xml (10.0.0.61:8020) than in mapred-site.xml
and core-site.xml (10.0.0.61:9000). Make them consistent and then try.

Hope it helps,
Ashutosh
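
[Editor's note: for reference, a consistent layout of the properties from this thread might look like the sketch below. It uses only the addresses quoted in the thread; whether 9000 or 8020 is the right NameNode port depends on which one the NameNode actually listens on, and the JobTracker value should likewise be picked once and used everywhere.]

```xml
<!-- core-site.xml: define fs.default.name once, here only -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.0.0.61:9000</value>
  </property>
</configuration>

<!-- mapred-site.xml: the JobTracker address, defined once -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.0.0.62:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: HDFS-specific settings only;
     no fs.default.name or mapred.job.tracker here -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```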

On Wed, Sep 7, 2011 at 07:16, kiranprasad <[EMAIL PROTECTED]> wrote:

> Hi
>
> I've checked; all the files are configured.
>
> For this I am using 4 VMs (10.0.0.61, 10.0.0.62, 10.0.0.63, 10.0.0.64).
> The 1st VM (10.0.0.61) is the namenode, the 2nd (10.0.0.62) is for mapreduce,
> and the 3rd (10.0.0.63) is a datanode, i.e. a slave.
> I've configured the same in the masters and slaves files.
>
> core-site.xml
> ------------------
> <configuration>
>  <property>
>   <name>fs.default.name</name>
>   <value>hdfs://10.0.0.61:9000</value>
>  </property>
> </configuration>
>
> mapred-site.xml
> ------------------
> <configuration>
>  <property>
>  <name>mapred.job.tracker</name>
>  <value>10.0.0.62:9000</value>
>  </property>
> </configuration>
>
> hdfs-site.xml
> --------------------
> <configuration>
> <property>
>  <name>fs.default.name</name>
>  <value>hdfs://10.0.0.61:8020</value>
> </property>
> <property>
>  <name>mapred.job.tracker</name>
>  <value>10.0.0.62:8021</value>
> </property>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
> </property>
> </configuration>
>
> masters
> ---------------
> 10.0.0.61
> 10.0.0.62
>
> slaves
> ---------
> 10.0.0.63
>
> But still getting the same exception.
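
[Editor's note: the mismatch above (fs.default.name is 10.0.0.61:8020 in hdfs-site.xml but 10.0.0.61:9000 in core-site.xml) can be caught mechanically. A sketch, assuming clean *-site.xml files; `check_props` is a helper name of my choosing, not part of Hadoop:]

```shell
# check_props: print every fs.default.name / mapred.job.tracker value
# found in the *-site.xml files under the directory given as $1
# (default: current directory), so mismatched values stand out.
check_props() {
  dir=${1:-.}
  for f in core-site.xml hdfs-site.xml mapred-site.xml; do
    [ -f "$dir/$f" ] || continue
    for prop in fs.default.name mapred.job.tracker; do
      val=$(grep -A1 "<name>$prop</name>" "$dir/$f" \
            | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
      if [ -n "$val" ]; then echo "$f  $prop = $val"; fi
    done
  done
}
check_props   # e.g. run from inside your Hadoop conf directory
```

Run from the conf directory, any property that prints two different values across files is the inconsistency to fix.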
>
>
> Regards
> Kiran.G
>
> IMImobile Plot 770, Rd. 44 Jubilee Hills, Hyderabad - 500033
> M +91 9000170909 T +91 40 2355 5945 - Ext: 229 www.imimobile.com
> -----Original Message-----
> From: Marek Miglinski
> Sent: Wednesday, September 07, 2011 5:09 PM
>
> To: [EMAIL PROTECTED]
> Subject: RE: ERROR 2999: Unexpected internal error. Failed to create
> DataStorage
>
> Check if you have configured /etc/hadoop/conf/* files properly.
>
>
> Marek M.
>
> -----Original Message-----
> From: kiranprasad [mailto:kiranprasad.g@imimobile.com]
> Sent: Wednesday, September 07, 2011 2:23 PM
> To: [EMAIL PROTECTED]
> Subject: Re: ERROR 2999: Unexpected internal error. Failed to create
> DataStorage
>
> Hi
>
> I've started the namenode and datanode but I am still getting the same
> exception.
>
> Kiran.G
>
> IMImobile Plot 770, Rd. 44 Jubilee Hills, Hyderabad - 500033
> M +91 9000170909 T +91 40 2355 5945 - Ext: 229 www.imimobile.com
>
> -----Original Message-----
> From: Marek Miglinski
> Sent: Wednesday, September 07, 2011 2:29 PM
> To: [EMAIL PROTECTED]
> Subject: RE: ERROR 2999: Unexpected internal error. Failed to create
> DataStorage
>
> Check if Hadoop services are running:
>
> ${pathToHadoop service folder}/hadoop-${version}-namenode status
> ${pathToHadoop service folder}/hadoop-${version}-secondarynamenode status
> ${pathToHadoop service folder}/hadoop-${version}-datanode status
> ${pathToHadoop service folder}/hadoop-${version}-jobtracker status (if you use it)
> ${pathToHadoop service folder}/hadoop-${version}-tasktracker status (if you use it)
>
> Example:
> /etc/init.d/hadoop-0.20-namenode status
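
[Editor's note: the init-script checks above assume a package install. On a plain tarball install with no /etc/init.d scripts, a rough fallback (my sketch, not from the thread) is to look for the daemon main classes in the process list:]

```shell
# Look for Hadoop daemon main classes among running Java processes.
# Prints matching processes, or a note if none are up; the trailing
# `grep -v grep` filters out the grep command itself.
ps aux | grep -E 'NameNode|DataNode|JobTracker|TaskTracker' | grep -v grep \
  || echo "no Hadoop daemons found"
```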
>
>
> Marek M.
>
> -----Original Message-----
> From: kiranprasad [mailto:kiranprasad.g@imimobile.com]
> Sent: Wednesday, September 07, 2011 11:55 AM
> To: [EMAIL PROTECTED]
> Subject: ERROR 2999: Unexpected internal error. Failed to create
> DataStorage
>
> Hi
>
> I am new to Pig, trying to set up a Hadoop cluster.
>
> The error I am getting is:
>
> [kiranprasad.g@pig1 pig-0.8.1]$ bin/pig
> 2011-09-07 19:45:50,606 [main] INFO  org.apache.pig.Main - Logging error
> messages to: /home/kiranprasad.g/pig-0.8.1/pig_1315404950603.log
> 2011-09-07 19:45:50,764 [main] INFO
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine -
> Connecting to hadoop file system at: hdfs://10.0.0.61:0
> 2011-09-07 19:45:52,171 [main] INFO  org.apache.hadoop.ipc.Client -
> Retrying connect to server: /10.0.0.61:0. Already tried 0 time(s).
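
[Editor's note: the log shows Pig connecting to hdfs://10.0.0.61:0, i.e. port 0, which often means Pig is not seeing a complete Hadoop configuration. A common remedy, sketched below with assumed paths, is to put the cluster's Hadoop conf directory on Pig's classpath before launching:]

```shell
# Assumption: this is where Hadoop is unpacked on the machine running Pig.
export HADOOP_HOME=/home/kiranprasad.g/hadoop-0.20.2
# Pig reads fs.default.name / mapred.job.tracker from the conf files on
# its classpath; point PIG_CLASSPATH at the cluster's conf directory.
export PIG_CLASSPATH=$HADOOP_HOME/conf
# then launch: bin/pig
```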
Later messages in this thread:
- kiranprasad (2011-09-08, 04:58)
- Ashutosh Chauhan (2011-09-08, 06:11)
- kiranprasad (2011-09-08, 09:07)
- Daniel Dai (2011-09-08, 21:14)
- kiranprasad (2011-09-09, 06:20)
- Daniel Dai (2011-09-11, 01:01)
- Schappet, James C (2012-12-05, 18:43)