MapReduce, mail # user - Re: Running Jobs with capacity scheduler on hadoop in eclipse


Re: Running Jobs with capacity scheduler on hadoop in eclipse
Swathi V 2011-09-16, 10:26
The NameNode might be in safemode. Turn safemode off with
*bin/hadoop dfsadmin -safemode leave*
to see the TaskTracker.
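For reference, a short sketch of the relevant `dfsadmin` safemode subcommands (as in Hadoop 0.20; these must be run against your own cluster):

```shell
# Check whether the NameNode is currently in safemode
bin/hadoop dfsadmin -safemode get

# Force the NameNode out of safemode immediately (normally it leaves
# on its own once the reported-block ratio passes the threshold)
bin/hadoop dfsadmin -safemode leave

# Or simply block until safemode turns off by itself
bin/hadoop dfsadmin -safemode wait
```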

On Fri, Sep 16, 2011 at 2:09 PM, arun k <[EMAIL PROTECTED]> wrote:

> Hi !
>
> I have set up Hadoop 0.20.2 on Eclipse Helios and am able to run the example
> wordcount using the ExampleDriver class, as mentioned by Faraz in
> http://lucene.472066.n3.nabble.com/HELP-configuring-hadoop-on-ECLIPSE-td1086829.html#a2241534
>
> Two questions:
> 1. I am unable to see the JobTracker & others in the browser at the HTTP address
> mentioned in mapred-default.xml. I have not edited any *-site.xml files.
>  I have tried to edit the *-site.xml files as per Michael Noll's site, but that
> didn't help.
>
> 2. Capacity Scheduler: I see the capacity-*.jar in the lib folder. I have
> modified mapred-site.xml and capacity-scheduler.xml as required. How do I
> run some application jobs by submitting a job to a queue in this case?
> I have tried to run with
> Program & Args as: wordcount -Dmapred.job.queue.name=myqueue1
> input_file_loc output_file_loc
> But I get the error:
> Exception in thread "main" java.lang.Error: Unresolved compilation
> problems:
>         ProgramDriver cannot be resolved to a type
>         ProgramDriver cannot be resolved to a type
>         DistributedPentomino cannot be resolved to a type
>          .................
>
> Thanks,
> Arun
>
>
> On Fri, Sep 16, 2011 at 12:46 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>
>> Arun,
>>
>> Good to know. Happy Hadoopin'!
>>
>> On Fri, Sep 16, 2011 at 12:34 PM, arun k <[EMAIL PROTECTED]> wrote:
>> > Hi !
>> > Thanks Harsh !
>> > The problem was that I had set up the queue info in mapred-site.xml
>> > instead of capacity-scheduler.xml.
>> > Arun
>> >
>> > On Fri, Sep 16, 2011 at 10:52 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>> >>
>> >> Arun,
>> >>
>> >> Please do not cross-post to multiple lists. Let's continue this on
>> >> mapreduce-user@ alone.
>> >>
>> >> Your problem isn't the job submission here, but your Capacity
>> >> Scheduler configuration. For every queue you configure, you need to
>> >> add in its capacities. Please see the queue properties documentation at
>> >>
>> >>
>> http://hadoop.apache.org/common/docs/current/capacity_scheduler.html#Queue+properties
>> >> for the vital configs required in addition to mapred.queue.names.
>> >> Once done, you should have a fully functional JobTracker!
>> >>
>> >> On Fri, Sep 16, 2011 at 10:17 AM, arun k <[EMAIL PROTECTED]> wrote:
>> >> > Hi all !
>> >> >
>> >> > Harsh! The NameNode appears to be out of safe mode now.
>> >> > In http://nn-host:50070 I see, over time:
>> >> >
>> >> > T1> Safe mode is ON. The ratio of reported blocks 0.0000 has not reached
>> >> > the threshold 0.9990. Safe mode will be turned off automatically.
>> >> > 7 files and directories, 1 blocks = 8 total. Heap Size is 15.06 MB / 966.69 MB (1%)
>> >> >
>> >> > T2> Safe mode is ON. The ratio of reported blocks 1.0000 has reached the
>> >> > threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
>> >> > 7 files and directories, 1 blocks = 8 total. Heap Size is 15.06 MB / 966.69 MB (1%)
>> >> >
>> >> > T3> 9 files and directories, 3 blocks = 12 total. Heap Size is 15.06 MB / 966.69 MB (1%)
>> >> >
>> >> > Added properties :
>> >> >
>> >> >  mapred.jobtracker.taskScheduler    org.apache.hadoop.mapred.CapacityTaskScheduler
>> >> >
>> >> >  mapred.queue.names                          myqueue1,myqueue2
>> >> >  mapred.capacity-scheduler.queue.myqueue1.capacity               25
>> >> >  mapred.capacity-scheduler.queue.myqueue2.capacity               75
>> >> > ${HADOOP_HOME}$ bin/hadoop jar hadoop*examples*.jar wordcount
>> >> > -Dmapred.job.queue.name=myqueue1 /user/hduser/wcinput /user/hduser/wcoutput
>> >> >
>> >> > I get the error:
>> >> > java.io.IOException: Call to localhost/127.0.0.1:54311 failed on
>> local
>> >> > exception: java.io.IOException: Connection reset by peer
>> >> >     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1065)
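For reference, a minimal sketch of where the properties discussed above belong, assuming the two queues from the thread (myqueue1/myqueue2): the scheduler class and queue names go in mapred-site.xml, while the per-queue capacities go in capacity-scheduler.xml and must sum to 100.

```xml
<!-- mapred-site.xml: enable the Capacity Scheduler and declare the queues -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
<property>
  <name>mapred.queue.names</name>
  <value>myqueue1,myqueue2</value>
</property>

<!-- capacity-scheduler.xml: per-queue capacities (percentages summing to 100) -->
<property>
  <name>mapred.capacity-scheduler.queue.myqueue1.capacity</name>
  <value>25</value>
</property>
<property>
  <name>mapred.capacity-scheduler.queue.myqueue2.capacity</name>
  <value>75</value>
</property>
```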
Regards,
Swathi.V.