MapReduce >> mail # user >> Re: Implementing a custom hadoop key and value - need Help


Re: Implementing a custom hadoop key and value - need Help
Yes. By editing some of my code, I am able to get my emitted matrix
VALUE in the reducer :)
context.write(......, new MatrixWritable(Eval));

But I am confused: how do I emit a KEY from the mapper too? What should the
compareTo() method in my MatrixWritable class look like?
Can anyone suggest an approach?
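For reference, here is one possible shape for that ordering, not taken from the thread itself: a minimal sketch assuming the key class will ultimately implement Hadoop's WritableComparable<MatrixWritable> (with matching write()/readFields()). The class name MatrixKey is illustrative; it is shown as plain Comparable so the comparison logic is self-contained. Dimensions are compared first, then elements row by row, which gives a total order over matrices.

```java
import java.util.Arrays;

// Sketch of a matrix key with a total ordering. In a real Hadoop job this
// would implement org.apache.hadoop.io.WritableComparable<MatrixKey> and
// also provide write()/readFields(); only the ordering is shown here.
public class MatrixKey implements Comparable<MatrixKey> {
    private final double[][] m;

    public MatrixKey(double[][] m) { this.m = m; }

    @Override
    public int compareTo(MatrixKey other) {
        // Compare dimensions first so matrices of different shapes
        // never compare as equal.
        if (m.length != other.m.length)
            return Integer.compare(m.length, other.m.length);
        if (m[0].length != other.m[0].length)
            return Integer.compare(m[0].length, other.m[0].length);
        // Then compare element by element, row-major.
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < m[0].length; j++) {
                int c = Double.compare(m[i][j], other.m[i][j]);
                if (c != 0) return c;
            }
        }
        return 0;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MatrixKey && compareTo((MatrixKey) o) == 0;
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals: the default HashPartitioner
        // uses it to route equal keys to the same reducer.
        return Arrays.deepHashCode(m);
    }
}
```

Note that equals() and hashCode() matter as much as compareTo(): equal keys must hash identically or they will land on different reducers.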
On Sun, Nov 3, 2013 at 11:33 PM, unmesha sreeveni <[EMAIL PROTECTED]> wrote:

> Thanks for your reply, Mirko Kämpf. The suggestion was really good for
> beginners.
>
>
> The second one is right :). *But you wrote also: I need to emit a 2D
> double array as key and value from the mapper.*
> *That means you work with a k-v-pair*
>
> *KVP<Matrix,Matrix>*
>
> *where Matrix is a 2-D matrix of double values.*
>
>
> Yes, I need to emit 2 matrices: one is the key and the other is the value.
>
> i.e. key -----> A*Atrans ---------> after multiplication the result is a
> 2D array declared as double (a matrix); let the result be matrix
> "*Ekey*" (double[][] Ekey)
>
> value ------> Atrans*D ---------> after multiplication the result is
> matrix "*Eval*" (double[][] Eval).
>
> After that I need to emit these matrices to the reducer for further calculations.
>
> So in the mapper:
>        context.write(*Ekey*, *Eval*);
>
> and in the reducer
>       I need to do further calculations with Ekey and Eval.
>
>
> So I need to emit context.write(matrix, matrix); for that I created the
> MatrixWritable class.
>
> 1. Is that the correct way, or can I directly use TwoDArrayWritable?
> 2. In the reducer I used Iterable because my key and value are matrices;
> that is why I made them Iterable. Is that not right?
> If it is wrong, what should the reducer signature be?
>
> On Sun, Nov 3, 2013 at 5:44 PM, Mirko Kämpf <[EMAIL PROTECTED]> wrote:
>
>> public class MyReducer extends
>>         Reducer<MatrixWritable, MatrixWritable, IntWritable, Text> {
>>
>>     public void reduce(*Iterable<MatrixWritable>* key,
>>             Iterable<MatrixWritable> values, Context context) {
>>         for (MatrixWritable c : values) {
>>             System.out.println("print value " + c.toString());
>>         }
>>     }
>> }
>>
>> Usually a key is only one object, not an Iterable.
>>
>> To make things more clear:
>>
>> What is the exact k-v-pair you need in the Reducer?
>>
>> One matrix is the key, and a set of two matrices together is used as the
>> value in the Reducer? What I understood from your question is
>> KVP<Matrix,Matrix[2]>
>>
>>
>> *But you wrote also:* I need to emit a 2D double array as key and value
>> from the mapper.
>> That means you work with a k-v-pair
>>
>> KVP<Matrix,Matrix>
>>
>> where Matrix is a 2-D matrix of double values.
>>
>> I suggest:
>>
>> 1.) Define the MR Data Flow.
>> 2.) Build the custom types.
>> 3.) Test the flow (no computation)
>> 4.) Implement logic / computation
>>
>>
>> 2013/11/3 unmesha sreeveni <[EMAIL PROTECTED]>
>>
>>> I tried with TwoDArrayWritable too,
>>>
>>> but I tried it by emitting only one value.
>>>
>>> row = E.length;
>>> col = E[0].length;
>>> TwoDArrayWritable array = new TwoDArrayWritable(DoubleWritable.class);
>>> DoubleWritable[][] myInnerArray = new DoubleWritable[row][col];
>>> // set values in myInnerArray
>>> for (int k1 = 0; k1 < row; k1++) {
>>>     for (int j1 = 0; j1 < col; j1++) {
>>>         myInnerArray[k1][j1] = new DoubleWritable(E[k1][j1]);
>>>     }
>>> }
>>> array.set(myInnerArray);
>>> context.write(clusterNumber, array);
>>>
>>> This is also not working for me;
>>>
>>> it shows a NullPointerException:
>>>
>>> 13/11/01 16:34:07 INFO mapred.LocalJobRunner: Map task executor complete.
>>> 13/11/01 16:34:07 WARN mapred.LocalJobRunner: job_local724758890_0001
>>> java.lang.Exception: java.lang.NullPointerException
>>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:404)
>>> Caused by: java.lang.NullPointerException
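As an aside on the stack trace above: a frequent cause of a NullPointerException when emitting a TwoDArrayWritable is serializing it while some cells of the inner array are still null, for example when the array is handed to write() before every cell has been assigned (a misplaced closing brace around the fill loops can do this). The following plain-Java sketch reproduces that failure mode without the Hadoop classes; the names NullCellDemo and touchAllCells are illustrative, not part of any API.

```java
public class NullCellDemo {
    // Walks every cell the way a serializer would; returns a message
    // describing whether an unset (null) cell was hit.
    static String touchAllCells(Double[][] cells) {
        try {
            for (Double[] row : cells) {
                for (Double cell : row) {
                    cell.doubleValue(); // throws NPE on an unset cell
                }
            }
            return "all cells set";
        } catch (NullPointerException e) {
            return "NPE: unset cell";
        }
    }

    public static void main(String[] args) {
        // Only the first row is filled, mimicking an inner array that was
        // written out before every cell was assigned.
        Double[][] partial = new Double[2][2];
        partial[0][0] = 1.0;
        partial[0][1] = 2.0;
        System.out.println(touchAllCells(partial));
    }
}
```

If this is the cause, making sure every DoubleWritable cell is assigned before array.set(...) and context.write(...) run should make the exception disappear.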

*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*

*Amrita Center For Cyber Security*
*Amritapuri*
www.amrita.edu/cyber/ <http://www.amrita.edu/cyber/>