Re: readFields() throws a NullPointerException
Do you have a no-argument constructor for your custom class, and does
that constructor (or the field declarations) initialize the members?
Otherwise those members of your custom class will be null when
readFields() touches them.
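
For illustration, here is a minimal sketch of such a class. The field
name, types, and serialization format are hypothetical, since the
original OutputAggregator is not shown; the point is only that every
member used in readFields() is initialized before it is read into:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.Writable;

    public class OutputAggregator implements Writable {
      // Hypothetical member; initializing it here (or in the no-arg
      // constructor) is what keeps readFields() from hitting a null.
      private Map<String, Long> counts = new HashMap<String, Long>();

      // Hadoop instantiates Writables reflectively through this
      // no-argument constructor.
      public OutputAggregator() {
      }

      public void write(DataOutput out) throws IOException {
        out.writeInt(counts.size());
        for (Map.Entry<String, Long> e : counts.entrySet()) {
          out.writeUTF(e.getKey());
          out.writeLong(e.getValue());
        }
      }

      public void readFields(DataInput in) throws IOException {
        counts.clear();  // would throw NullPointerException if counts were null
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
          counts.put(in.readUTF(), in.readLong());
        }
      }
    }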
On Mon, Sep 20, 2010 at 1:21 AM,  <[EMAIL PROTECTED]> wrote:
> How are you constructing your WritableDeserializer? The reason I ask
> is that on the line where you are seeing the error:
>
> writable.readFields(dataIn);
>
> the only thing that could throw a null pointer exception is if
> writable was null. writable is constructed as follows:
>
>    public Writable deserialize(Writable w) throws IOException {
>      Writable writable;
>      if (w == null) {
>        writable = (Writable) ReflectionUtils.newInstance(writableClass, getConf());
>      } else {
>        writable = w;
>      }
>      writable.readFields(dataIn);
>      return writable;
>    }
>
> So I suspect that you are passing null as the argument to deserialize()
> and, when constructing your WritableDeserializer, also passing null as
> the second argument (Class<?> c). This would result in writable being
> null, and you'd see that error.
>
> You need to supply your Writable class in at least one of those two
> places (see the configuration sketch after the quoted message below).
>
> Hope this helps,
>
> Chris
>
>
> -----Original Message-----
> From: Rakesh Ramakrishnan [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, September 19, 2010 1:12 PM
> To: [EMAIL PROTECTED]
> Subject: readFields() throws a NullPointerException
>
> I have a simple map-reduce program in which my map and reduce
> primitives look like this:
>
> map(K,V) = (Text, OutputAggregator)
> reduce(Text, OutputAggregator) = (Text, Text)
>
> The important point is that from my map function I emit an object of
> type OutputAggregator, which is a custom class that implements the
> Writable interface. However, my reduce fails with the following
> exception. More specifically, the readFields() function is throwing
> an exception. Any clue why? I am using Hadoop 0.18.3.
>
>
> 10/09/19 04:04:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> processName=JobTracker, sessionId=
> 10/09/19 04:04:59 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to
> process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to
> process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to
> process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to
> process : 1
> 10/09/19 04:04:59 INFO mapred.JobClient: Running job: job_local_0001
> 10/09/19 04:04:59 INFO mapred.MapTask: numReduceTasks: 1
> 10/09/19 04:04:59 INFO mapred.MapTask: io.sort.mb = 100
> 10/09/19 04:04:59 INFO mapred.MapTask: data buffer = 79691776/99614720
> 10/09/19 04:04:59 INFO mapred.MapTask: record buffer = 262144/327680
> Length = 10
> 10
> 10/09/19 04:04:59 INFO mapred.MapTask: Starting flush of map output
> 10/09/19 04:04:59 INFO mapred.MapTask: bufstart = 0; bufend = 231;
> bufvoid = 99614720
> 10/09/19 04:04:59 INFO mapred.MapTask: kvstart = 0; kvend = 10; length = 327680
> gl_books
> 10/09/19 04:04:59 WARN mapred.LocalJobRunner: job_local_0001
> java.lang.NullPointerException
>  at org.myorg.OutputAggregator.readFields(OutputAggregator.java:46)
>  at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>  at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>  at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:751)
>  at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:691)
>  at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:770)
>  at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:117)
>  at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:1)
>  at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask

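For completeness: if you are letting the framework drive the
(de)serialization rather than constructing the deserializer yourself,
the Writable class is normally supplied through the job configuration.
A sketch against the old JobConf API (the driver, mapper, and reducer
class names here are hypothetical, not the poster's actual code):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class XxxParallelizer {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(XxxParallelizer.class);

        // Declare the intermediate (map output) types explicitly, so
        // the framework can construct OutputAggregator reflectively
        // when it deserializes map output values.
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(OutputAggregator.class);

        // Final (reduce) output types.
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        conf.setMapperClass(XxxMapper.class);     // hypothetical mapper
        conf.setReducerClass(XxxReducer.class);   // hypothetical reducer
        conf.setCombinerClass(XxxReducer.class);  // a combiner appears in the stack trace

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
      }
    }

Note that the stack trace goes through CombineValuesIterator, so it is
the combiner pass that first deserializes the values and trips over the
uninitialized member.
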
Best Regards

Jeff Zhang