Hi Abhishek,

I faced a similar problem with cdh4 a few days ago. The problem I found was
the classpath. The user from which I installed the cdh packages had the
right classpath, but the "yarn" and "hdfs" users had an incorrect
classpath. When I tried to run the job as the yarn or hdfs user, it ran in
local mode, and when I ran it as "root" (the root user was used to install
cdh4), it ran in distributed mode.

Can you try running the job with the user you used to install cdh4?
Also, compare the classpaths of the "hdfs" and "yarn" users and of the user
used for installation by running the following:
sudo -u hdfs hadoop classpath
sudo -u yarn hadoop classpath
sudo -u $installation_user hadoop classpath
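For example, a quick way to spot the entries that differ (a sketch; tr just
splits the colon-separated classpath one entry per line):
# requires bash, since it uses process substitution
diff <(sudo -u hdfs hadoop classpath | tr ':' '\n') \
     <(sudo -u $installation_user hadoop classpath | tr ':' '\n')
diff <(sudo -u yarn hadoop classpath | tr ':' '\n') \
     <(sudo -u $installation_user hadoop classpath | tr ':' '\n')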

~Anil

On Sun, Jul 29, 2012 at 12:20 PM, abhiTowson cal
<[EMAIL PROTECTED]> wrote:

> Hi Anil,
> I am using cdh4 with yarn.
>
> On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta <[EMAIL PROTECTED]> wrote:
> > Are you using cdh4? In your cluster, are you using yarn or mr1?
> > Check the classpath of Hadoop with the "hadoop classpath" command.
> >
> > Best Regards,
> > Anil
> >
> > On Jul 29, 2012, at 12:12 PM, abhiTowson cal <[EMAIL PROTECTED]>
> wrote:
> >
> >> Hi Anil,
> >>
> >> I have already tried this, but the issue could not be resolved.
> >>
> >> Regards
> >> Abhishek
> >>
> >> On Sun, Jul 29, 2012 at 3:05 PM, anil gupta <[EMAIL PROTECTED]>
> wrote:
> >>> Hi Abhishek,
> >>>
> >>> Once you have made sure that everything Harsh mentioned in the previous
> >>> email is present in the cluster and the job still runs in local mode,
> >>> then try running the job with the hadoop --config option, as sketched
> >>> after the link below.
> >>> Refer to this discussion for more detail:
> >>>
> https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/4tMGfvJFzrg
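> >>> For example, a minimal sketch (assumes the client configs live in
> >>> /etc/hadoop/conf, the usual CDH4 location; adjust if yours differs):
> >>> # --config points the client at an explicit configuration directory
> >>> sudo -u hdfs hadoop --config /etc/hadoop/conf jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000 /benchmark/teragen/input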
> >>>
> >>> HTH,
> >>> Anil
> >>>
> >>> On Sun, Jul 29, 2012 at 11:43 AM, Harsh J <[EMAIL PROTECTED]> wrote:
> >>>
> >>>> For a job to get submitted to a cluster, you will need proper client
> >>>> configurations. Have you properly configured
> >>>> /etc/hadoop/conf/mapred-site.xml and /etc/hadoop/conf/yarn-site.xml
> >>>> on the client node?
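> >>>> A quick way to check (a sketch, assuming the stock /etc/hadoop/conf
> >>>> location and the standard YARN property names):
> >>>> grep -A1 mapreduce.framework.name /etc/hadoop/conf/mapred-site.xml
> >>>> # should show <value>yarn</value>; if this property is missing or set
> >>>> # to "local", jobs fall back to the LocalJobRunner
> >>>> grep -A1 yarn.resourcemanager.address /etc/hadoop/conf/yarn-site.xml
> >>>> # should point at the ResourceManager's host:port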
> >>>>
> >>>> On Mon, Jul 30, 2012 at 12:00 AM, abhiTowson cal
> >>>> <[EMAIL PROTECTED]> wrote:
> >>>>> Hi All,
> >>>>>
> >>>>> I am having a problem where the job runs in the local runner rather
> >>>>> than on the cluster.
> >>>>> And when I run the job, I cannot see the job id in the
> >>>>> ResourceManager UI.
> >>>>>
> >>>>> Can you please go through the issues and let me know ASAP.
> >>>>>
> >>>>> sudo -u hdfs hadoop jar
> >>>>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen
> >>>>> 1000000 /benchmark/teragen/input
> >>>>> 12/07/29 13:35:59 WARN conf.Configuration: session.id is deprecated.
> >>>>> Instead, use dfs.metrics.session-id
> >>>>> 12/07/29 13:35:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>>>> processName=JobTracker, sessionId=
> >>>>> 12/07/29 13:35:59 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> >>>>> 12/07/29 13:35:59 WARN mapred.JobClient: Use GenericOptionsParser for
> >>>>> parsing the arguments. Applications should implement Tool for the
> >>>>> same.
> >>>>> Generating 1000000 using 1 maps with step of 1000000
> >>>>> 12/07/29 13:35:59 INFO mapred.JobClient: Running job: job_local_0001
> >>>>> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter set in config null
> >>>>> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter is
> >>>>> org.apache.hadoop.mapred.FileOutputCommitter
> >>>>> 12/07/29 13:35:59 WARN mapreduce.Counters: Group
> >>>>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> >>>>> org.apache.hadoop.mapreduce.TaskCounter instead
> >>>>> 12/07/29 13:35:59 INFO util.ProcessTree: setsid exited with exit code 0
> >>>>> 12/07/29 13:35:59 INFO mapred.Task:  Using ResourceCalculatorPlugin :
> >>>>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@47c297a3
> >>>>> 12/07/29 13:36
Thanks & Regards,
Anil Gupta