Hadoop >> mail # user >> Submitting and running hadoop jobs Programmatically


Re: Submitting and running hadoop jobs Programmatically
Thank you Harsh. I am able to run the jobs after ditching the '*'.
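For reference, the fix Harsh describes below amounts to putting the conf directory itself on the classpath rather than its individual files. A sketch of the corrected invocation (paths taken from the thread; the jar and arguments are specific to this user's setup):

```shell
# The key change: .../conf (the directory) on the classpath instead of
# .../conf/* (the files inside it), so Hadoop can locate its config resources.
java -cp \
  Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/hadoop-for-nectar/hadoop-0.21.0/conf:$HADOOP_COMMON_HOME/lib/*:$HADOOP_COMMON_HOME/* \
  com.zinnia.nectar.regression.hadoop.primitive.jobs.SigmaJob \
  input/book.csv kkk11fffrrw 1
```

This works because Hadoop loads core-site.xml, mapred-site.xml, etc. as classpath resources, and resource lookup searches directories on the classpath, not files that happen to be listed there.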

On Wed, Jul 27, 2011 at 11:41 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> Madhu,
>
> Ditch the '*' in the classpath element that has the configuration
> directory. The directory ought to be on the classpath, not the files
> AFAIK.
>
> Try and let us know if it then picks up the proper config (right now,
> it's using the local mode).
>
> On Wed, Jul 27, 2011 at 10:25 AM, madhu phatak <[EMAIL PROTECTED]>
> wrote:
> > Hi
> > I am submitting the job as follows
> >
> > java -cp
> >
>  Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/hadoop-for-nectar/hadoop-0.21.0/conf/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_COMMON_HOME/*
> > com.zinnia.nectar.regression.hadoop.primitive.jobs.SigmaJob
> input/book.csv
> > kkk11fffrrw 1
> >
> > I get the log in CLI as below
> >
> > 11/07/27 10:22:54 INFO security.Groups: Group mapping
> > impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> > cacheTimeout=300000
> > 11/07/27 10:22:54 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> > processName=JobTracker, sessionId=
> > 11/07/27 10:22:54 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with
> > processName=JobTracker, sessionId= - already initialized
> > 11/07/27 10:22:54 WARN mapreduce.JobSubmitter: Use GenericOptionsParser
> for
> > parsing the arguments. Applications should implement Tool for the same.
> > 11/07/27 10:22:54 INFO mapreduce.JobSubmitter: Cleaning up the staging
> area
> >
> file:/tmp/hadoop-hadoop/mapred/staging/hadoop-1331241340/.staging/job_local_0001
> >
> > It doesn't create any job in hadoop.
> >
> > On Tue, Jul 26, 2011 at 5:11 PM, Devaraj K <[EMAIL PROTECTED]> wrote:
> >
> >> Madhu,
> >>
> >>  Can you check the client logs to see whether any error/exception
> >> occurs while submitting the job?
> >>
> >> Devaraj K
> >>
> >> -----Original Message-----
> >> From: Harsh J [mailto:[EMAIL PROTECTED]]
> >> Sent: Tuesday, July 26, 2011 5:01 PM
> >> To: [EMAIL PROTECTED]
> >> Subject: Re: Submitting and running hadoop jobs Programmatically
> >>
> >> Yes. Internally, it calls regular submit APIs.
> >>
> >> On Tue, Jul 26, 2011 at 4:32 PM, madhu phatak <[EMAIL PROTECTED]>
> >> wrote:
> >> > I am using JobControl.add() to add a job, running the JobControl in
> >> > a separate thread, and using JobControl.allFinished() to check whether
> >> > all jobs have completed. Does this work the same way as Job.submit()?
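As an aside, the JobControl pattern described above can be sketched as follows. This is a hedged illustration, not code from the thread: class and package names assume the `org.apache.hadoop.mapreduce.lib.jobcontrol` API of that era, and the one-second poll interval is an arbitrary choice.

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class JobControlSketch {
    // Run one Job through a JobControl driven by a background thread,
    // polling allFinished() until every job in the group completes.
    public static void runControlled(Job job) throws Exception {
        JobControl control = new JobControl("nectar-jobs");
        control.addJob(new ControlledJob(job, null)); // null = no dependencies
        Thread runner = new Thread(control); // JobControl implements Runnable
        runner.setDaemon(true);
        runner.start();
        while (!control.allFinished()) {
            Thread.sleep(1000); // poll until every job completes
        }
        control.stop();
    }
}
```

As Harsh notes above, this ends up calling the same submit APIs internally, so it is subject to the same classpath/configuration requirements as a direct Job.submit().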
> >> >
> >> > On Tue, Jul 26, 2011 at 4:08 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> Madhu,
> >> >>
> >> >> Do you get a specific error message / stack trace? Could you also
> >> >> paste your JT logs?
> >> >>
> >> >> On Tue, Jul 26, 2011 at 4:05 PM, madhu phatak <[EMAIL PROTECTED]>
> >> >> wrote:
> >> >> > Hi
> >> >> >  I am using the same APIs, but I am not able to run the jobs by just
> >> >> > adding the configuration files and jars. It never creates a job in
> >> >> > Hadoop; it just shows cleaning up the staging area and fails.
> >> >> >
> >> >> > On Tue, Jul 26, 2011 at 3:46 PM, Devaraj K <[EMAIL PROTECTED]>
> >> wrote:
> >> >> >
> >> >> >> Hi Madhu,
> >> >> >>
> >> >> >>   You can submit the jobs programmatically from any system using
> >> >> >> the Job APIs. The job submission code can be written this way.
> >> >> >>
> >> >> >>     // Create a new Job
> >> >> >>     Job job = new Job(new Configuration());
> >> >> >>     job.setJarByClass(MyJob.class);
> >> >> >>
> >> >> >>     // Specify various job-specific parameters
> >> >> >>     job.setJobName("myjob");
> >> >> >>
> >> >> >>     FileInputFormat.addInputPath(job, new Path("in"));
> >> >> >>     FileOutputFormat.setOutputPath(job, new Path("out"));
> >> >> >>
> >> >> >>     job.setMapperClass(MyJob.MyMapper.class);
> >> >> >>     job.setReducerClass(MyJob.MyReducer.class);
> >> >> >>
> >> >> >>     // Submit the job
> >> >> >>     job.submit();
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> To submit this, you need to add the Hadoop jar files and
> >> >> >> configuration files to the classpath of the application from which
> >> >> >> you want to submit the job.
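Tying the thread together: the JobSubmitter warning in the log above suggests implementing Tool, which lets ToolRunner's GenericOptionsParser handle -conf/-fs/-jt options so the job picks up the intended cluster configuration instead of falling back to local mode. A hedged sketch of such a driver, with a hypothetical placeholder identity mapper and class name:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver showing the Tool pattern the JobSubmitter warning
// asks for; MyMapper is a placeholder identity mapper.
public class MyJobDriver extends Configured implements Tool {

    public static class MyMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            context.write(key, value); // identity map, for illustration only
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries whatever -conf/-fs/-jt options ToolRunner parsed,
        // so the job reaches the configured cluster rather than local mode.
        Job job = new Job(getConf(), "myjob");
        job.setJarByClass(MyJobDriver.class);
        job.setMapperClass(MyMapper.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}
```

With this structure, cluster settings can be supplied either via the classpath (the conf directory, as discussed above) or explicitly on the command line, e.g. `-conf /path/to/core-site.xml`.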