Hadoop user mailing list: java.util.NoSuchElementException


jamal sasha 2013-07-31, 18:10
Devaraj k 2013-07-31, 18:20
jamal sasha 2013-07-31, 18:22
RE: java.util.NoSuchElementException
If you want to write a MapReduce job, you need to have basic knowledge of core Java. You can find many resources for that on the internet.

If you face any problems related to Hadoop, you can ask here for help.

Thanks
Devaraj k

From: jamal sasha [mailto:[EMAIL PROTECTED]]
Sent: 31 July 2013 23:52
To: [EMAIL PROTECTED]
Subject: Re: java.util.NoSuchElementException

Hi,
  Thanks for responding.
How do I do that? (I'm very new to Java.)
There are just two words per line:
one is a word, the second is an integer.
Thanks

On Wed, Jul 31, 2013 at 11:20 AM, Devaraj k <[EMAIL PROTECTED]> wrote:
There seems to be some problem in the mapper logic. Either the input must match what the code expects, or the code needs to be updated to handle cases such as a line containing an odd number of words.

Before getting the element a second time, you need to check whether the tokenizer has more elements. If you have only two words in a line, you can modify the code to read them directly instead of iterating multiple times.
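For example, something along these lines (an untested sketch; it assumes each valid line should contain exactly a word followed by an integer, and simply skips malformed lines; nextToken() returns the same tokens as nextElement() here):

while (tokenizer.hasMoreTokens()) {
    String curWord = tokenizer.nextToken();
    // Check before reading the second token, so a line with an odd
    // number of words does not trigger NoSuchElementException.
    if (!tokenizer.hasMoreTokens()) {
        break; // malformed line: word without a value, skip it
    }
    int curValue;
    try {
        curValue = Integer.parseInt(tokenizer.nextToken());
    } catch (NumberFormatException e) {
        continue; // second token was not an integer, skip this pair
    }
    // ... aggregate curWord / curValue as before ...
}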

Thanks
Devaraj k

From: jamal sasha [mailto:[EMAIL PROTECTED]]
Sent: 31 July 2013 23:40
To: [EMAIL PROTECTED]
Subject: java.util.NoSuchElementException

Hi,
  I am getting this error:

13/07/31 09:29:41 INFO mapred.JobClient: Task Id : attempt_201307102216_0270_m_000002_2, Status : FAILED
java.util.NoSuchElementException
            at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
            at java.util.StringTokenizer.nextElement(StringTokenizer.java:390)
            at org.mean.Mean$MeanMapper.map(Mean.java:60)
            at org.mean.Mean$MeanMapper.map(Mean.java:1)
            at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
            at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
            at org.apache.hadoop.mapred.Child.main(Child.java:249)

public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException, NoSuchElementException {
    initialize(context);
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreElements()) {
        String curWord = tokenizer.nextElement().toString();
        // The line which causes this error:
        Integer curValue = Integer.parseInt(tokenizer.nextElement().toString());

        Integer sum = summation.get(curWord);
        Integer count = counter.get(curWord);

        ...
    }
    close(context);
}
What am I doing wrong?

My data looks like:

//word count
foo 20
bar  21
and so on.
The code works fine if I strip the Hadoop part and run it in plain Java.