MapReduce, mail # user - Ant Colony Optimization for Travelling Salesman Problem in Hadoop


RE: Ant Colony Optimization for Travelling Salesman Problem in Hadoop
Steve Lewis 2012-05-08, 14:47
Which API are you using? They changed between 0.18 and 0.20; this is the
more recent version.
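The mismatch in the exception below can be reduced to a plain-Java sketch (no Hadoop required; the two base types are stand-ins for `org.apache.hadoop.mapred.InputFormat` and `org.apache.hadoop.mapreduce.InputFormat`): `Configuration.setClass` checks assignability, and the old- and new-API InputFormat types share no common hierarchy, so an old-API `JobConf.setInputFormat` rejects a format written against the new API.

```java
// Plain-Java illustration of why JobConf.setInputFormat throws. The nested
// types below are stand-ins, not the real Hadoop classes.
public class ApiMismatchDemo {
    interface OldApiInputFormat {}             // stand-in for the mapred (old) base type
    static abstract class NewApiInputFormat {} // stand-in for the mapreduce (new) base type

    // A format written against the new API, as tsphadoop.NShotInputFormat may be:
    static class NewApiFormat extends NewApiInputFormat {}

    // Roughly the check Configuration.setClass performs before accepting a class:
    static boolean acceptedByOldApi(Class<?> c) {
        return OldApiInputFormat.class.isAssignableFrom(c);
    }

    public static void main(String[] args) {
        if (!acceptedByOldApi(NewApiFormat.class)) {
            // Analogous to: "class tsphadoop.NShotInputFormat not
            // org.apache.hadoop.mapred.InputFormat"
            System.out.println("class NewApiFormat not OldApiInputFormat");
        }
    }
}
```

If that is what is happening, the fix is to use the same API generation on both sides: either extend the old `mapred` base class, or drive the job through the new-API `Job` class instead of `JobConf`.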
On May 8, 2012 3:55 AM, "sharat attupurath" <[EMAIL PROTECTED]> wrote:

>  Hi Steve,
>
> I tried using the NShotInputFormat, but after setting it as the
> InputFormat class and running my MapReduce job I get the following error:
>
> Exception in thread "main" java.lang.RuntimeException: class
> tsphadoop.NShotInputFormat not org.apache.hadoop.mapred.InputFormat
> at org.apache.hadoop.conf.Configuration.setClass(Configuration.java:915)
> at org.apache.hadoop.mapred.JobConf.setInputFormat(JobConf.java:590)
> at tsphadoop.TspHadoop.main(TspHadoop.java:40)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> Am I missing something? Is there anything else I should know about using
> NShotInputFormat?
>
> Thanks and regards
>
> Sharat
>
> ------------------------------
> Date: Mon, 7 May 2012 09:24:05 -0700
> Subject: Re: Ant Colony Optimization for Travelling Salesman Problem in
> Hadoop
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> Fair enough - I write a lot of InputFormats, since for most of my problems
> a line of text is not the proper unit. I read FASTA files (read lines until
> you hit a line starting with >) and XML fragments (read until you hit a
> closing tag).
>
>
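The FASTA grouping described above comes down to a small accumulate-until-delimiter loop. A minimal plain-Java sketch (my own illustration over an in-memory list; a real Hadoop RecordReader would do the same over its split's line stream):

```java
import java.util.*;

// Each record is a header line starting with '>' plus all following sequence
// lines, up to the next header line.
public class FastaGrouper {
    public static List<String> group(List<String> lines) {
        List<String> records = new ArrayList<>();
        StringBuilder current = null;
        for (String line : lines) {
            if (line.startsWith(">")) {               // header starts a new record
                if (current != null) records.add(current.toString());
                current = new StringBuilder(line);
            } else if (current != null) {             // sequence line: append to record
                current.append('\n').append(line);
            }
        }
        if (current != null) records.add(current.toString());
        return records;
    }
}
```

The same shape handles the XML-fragment case: swap the "line starts with >" test for "line contains the closing tag" and flush the record there instead.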
> On Mon, May 7, 2012 at 9:03 AM, GUOJUN Zhu <[EMAIL PROTECTED]> wrote:
>
>
> The default FileInputFormat splits the file according to its size.  If you
> use line-oriented text data, TextInputFormat respects the line structure of
> the input.   We got splits as small as a few KBs.  File splitting is a tricky
> business, especially when you want it to respect your logical boundaries. It
> is better to use existing battle-tested code than to reinvent the wheel.
>
> Zhu, Guojun
> Modeling Sr Graduate
> 571-3824370
> [EMAIL PROTECTED]
> Financial Engineering
> Freddie Mac
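The "respect the line structure" point above can be reduced to a plain-Java sketch (my own illustration, not Hadoop's actual code): when a split starts at an arbitrary byte offset, the reader discards the partial first line and starts at the next line boundary, while the previous split reads past its nominal end to finish that line, so every line belongs to exactly one split.

```java
// Sketch of the line-boundary adjustment idea behind a line-respecting reader.
public class LineBoundaryDemo {
    // Returns the offset where a split nominally starting at 'start' should
    // actually begin reading: offset 0 is kept as-is; any other start skips
    // forward past the next '\n' (the previous split finishes that line).
    public static int adjustedStart(byte[] data, int start) {
        if (start == 0) return 0;
        int i = start;
        while (i < data.length && data[i] != '\n') i++;
        return Math.min(i + 1, data.length);
    }
}
```

For example, with the bytes of "ab\ncd\nef", a split starting at offset 4 (mid-"cd") begins reading at offset 6, the start of "ef".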
>
>
>
>     Steve Lewis <[EMAIL PROTECTED]>    05/07/2012 11:17 AM
>     Please respond to: [EMAIL PROTECTED]
>     To: [EMAIL PROTECTED]
>     cc:
>     Subject: Re: Ant Colony Optimization for Travelling Salesman Problem in Hadoop
>
>
> Yes, but it is the job of the InputFormat code to implement that behavior -
> it is not required to do so, and in other cases I choose to create more
> mappers when each mapper has a lot of work.
>
> On Mon, May 7, 2012 at 7:54 AM, GUOJUN Zhu <[EMAIL PROTECTED]> wrote:
>
> We are using the old API of 0.20.  I think when you set "mapred.reduce.tasks"
> to a certain number N and use FileInputFormat, the default behavior is that
> any file will be split into that number, N, of splits, each smaller than the
> default block size. Of course, other restrictions, such as
> "mapred.min.split.size", cannot be set too large (the default is as small as
> possible, I think).
>
> Zhu, Guojun
> Modeling Sr Graduate
> 571-3824370
> [EMAIL PROTECTED]
> Financial Engineering
> Freddie Mac
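The sizing behavior described above can be sketched in plain Java. From memory of the 0.20-era FileInputFormat sources (treat the details as an approximation): the requested number of maps gives a "goal" size of totalSize / numSplits, which is then clamped between the configured minimum split size and the block size.

```java
// Sketch of old-API FileInputFormat split sizing (an approximation, not the
// verbatim Hadoop code): goal size from the requested map count, clamped by
// block size above and mapred.min.split.size below.
public class SplitSizeDemo {
    public static long computeSplitSize(long totalSize, int numSplits,
                                        long minSize, long blockSize) {
        long goalSize = totalSize / Math.max(numSplits, 1);
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        // A 1 MB file asked to be cut into 10 maps yields ~100 KB splits,
        // well under a 64 MB block - consistent with seeing splits of a few KB.
        System.out.println(computeSplitSize(1_000_000, 10, 1, 64L << 20));
    }
}
```

This also shows why a large mapred.min.split.size defeats the mechanism: the min clamp wins over the goal size.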
>
>
>
>
>     sharat attupurath <[EMAIL PROTECTED]>    05/05/2012 11:37 AM
>     Please respond to: [EMAIL PROTECTED]
>     To: [EMAIL PROTECTED]
>     cc:
>     Subject: RE: Ant Colony Optimization for Travelling Salesman Problem in Hadoop
>
>
> Since the input files are very small, the default input formats in Hadoop
> all generate just a single InputSplit, so only a single map task is
> executed, and we won't have much parallelism.
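The replication idea can be sketched in plain Java (names are illustrative stand-ins, not the real Hadoop API): a getSplits that returns the same whole-file split n times, so n map tasks each run the ant-colony search over the full TSP instance in parallel.

```java
import java.util.*;

// Sketch of a split-replicating getSplits. WholeFileSplit is a stand-in for a
// FileSplit covering the entire (small) input file.
public class ReplicatedSplits {
    static final class WholeFileSplit {
        final String path;
        final long length;
        WholeFileSplit(String path, long length) { this.path = path; this.length = length; }
    }

    // Return n identical whole-file splits: one map task per ant colony.
    public static List<WholeFileSplit> getSplits(String path, long length, int n) {
        List<WholeFileSplit> splits = new ArrayList<>();
        for (int i = 0; i < n; i++) splits.add(new WholeFileSplit(path, length));
        return splits;
    }
}
```

In a real implementation the matching RecordReader would read the whole file as a single record; each mapper then sees the complete city list and can run its own colony independently.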
>
> I was thinking of writing an InputFormat that would read the whole file as
> an InputSplit and replicate this input split n times (where n would be the