HDFS >> mail # user >> Maven Cloudera Configuration problem


Re: Maven Cloudera Configuration problem
Here are the log details when I run the jar file:
08:10:29,738  INFO ZooKeeper:438 - Initiating client connection,
connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
08:10:29,777  INFO RecoverableZooKeeper:104 - The identifier of this
process is [EMAIL PROTECTED]-west-
1.compute.internal
08:10:29,784  INFO ClientCnxn:966 - Opening socket connection to
server localhost/127.0.0.1:2181. Will not attempt to authenticate
using SASL (Unable to locate a login configuration)
08:10:29,796  INFO ClientCnxn:849 - Socket connection established to
localhost/127.0.0.1:2181, initiating session
08:10:29,804  INFO ClientCnxn:1207 - Session establishment complete on
server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71b5503,
negotiated timeout = 60000
08:10:29,905  WARN Configuration:824 - hadoop.native.lib is
deprecated. Instead, use io.native.lib.available

Is it utilizing the cluster? Sorry for a noob question.

On Wed, Aug 14, 2013 at 5:24 AM, Suresh Srinivas <[EMAIL PROTECTED]> wrote:
> Folks, can you please take this thread to CDH related mailing list?
>
>
> On Tue, Aug 13, 2013 at 3:07 PM, Brad Cox <[EMAIL PROTECTED]> wrote:
>>
>> That link got my hopes up. But Cloudera Manager (which I'm running, on
>> CDH4) does not offer an "Export Client Config" option. What am I missing?
>>
>> On Aug 13, 2013, at 4:04 PM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
>>
>> You should not use LocalJobRunner. Make sure that the mapred.job.tracker
>> property does not point to 'local' and instead points to your job-tracker
>> host and port.
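For example, a minimal mapred-site.xml on the client might look like this (the JobTracker hostname and port below are placeholders for your cluster's values):

```xml
<!-- mapred-site.xml fragment: replace host/port with your cluster's JobTracker address -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>
</configuration>
```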
>>
>> *But before that*, as Sandy said, your client machine (from where you will
>> be kicking off your jobs and apps) should be using config files that carry
>> your cluster's configuration. This is the alternative to follow if you
>> don't want to bundle your cluster's configs in the application itself
>> (either in Java code or in separate copies of the relevant config files).
>> This is what I was suggesting early on, just to get you started using your
>> cluster instead of local mode.
>>
>> By the way, have you seen the following link? It walks you step by step
>> through generating config files specific to your cluster, and then how to
>> place and use them from any machine you want to designate as your client.
>> Running your jobs from one of the datanodes without proper config would
>> not work.
>>
>> https://ccp.cloudera.com/display/FREE373/Generating+Client+Configuration
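A rough sketch of putting the generated client config to use (the archive and directory names below are assumptions; use whatever the export actually produces):

```shell
# Unpack the client configuration exported from Cloudera Manager
# (archive name below is illustrative) and point Hadoop at it.
unzip hdfs-clientconfig.zip -d ~/client-conf
export HADOOP_CONF_DIR=~/client-conf/hadoop-conf

# Jobs launched from this shell now read the cluster's configs.
hadoop jar myjob.jar com.example.MyJob
```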
>>
>> Regards,
>> Shahab
>>
>>
>> On Tue, Aug 13, 2013 at 1:07 PM, Pavan Sudheendra <[EMAIL PROTECTED]> wrote:
>>
>> Yes Sandy, I'm referring to LocalJobRunner. I'm actually running the
>> job on one datanode..
>>
>> What changes should I make so that my application would take advantage
>> of the cluster as a whole?
>>
>> On Tue, Aug 13, 2013 at 10:33 PM,  <[EMAIL PROTECTED]> wrote:
>>
>> Nothing in your pom.xml should affect the configurations your job runs
>> with.
>>
>> Are you running your job from a node on the cluster? When you say
>> localhost configurations, do you mean it's using the LocalJobRunner?
>>
>> -sandy
>>
>> (iphnoe tpying)
>>
>> On Aug 13, 2013, at 9:07 AM, Pavan Sudheendra <[EMAIL PROTECTED]> wrote:
>>
>>
>> When I actually run the job on the multi-node cluster, the logs show it
>> uses localhost configurations, which I don't want..
>>
>> I just have a pom.xml which lists all the dependencies like standard
>> hadoop, standard hbase, standard zookeeper etc. Should I remove these
>> dependencies?
>>
>> I want the cluster settings to apply in my map-reduce application..
>> So, this is where I'm stuck at..
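Regarding the pom.xml question above: one common approach is to keep the Hadoop/HBase dependencies but mark them provided, so they're available at compile time while the cluster's own jars (and configs) are used at runtime. A sketch (the version string is illustrative; match your CDH version):

```xml
<!-- pom.xml fragment: version is illustrative, match your cluster's jars -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.0.0-cdh4.3.0</version>
  <scope>provided</scope>
</dependency>
```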
>>
>> On Tue, Aug 13, 2013 at 9:30 PM, Pavan Sudheendra <[EMAIL PROTECTED]> wrote:
>>
>> Hi Shahab and Sandy,
>> The thing is, we have a 6-node Cloudera cluster running.. For
>> development purposes, I was building a map-reduce application on a
>> single-node Apache-distribution Hadoop with Maven..

Regards-
Pavan