Hi Shabab and Sandy,
The thing is, we have a 6-node Cloudera cluster running. For
development purposes, I was building a MapReduce application on a
single-node Apache Hadoop distribution with Maven.
To be frank, I don't know how to deploy this application on a multi-node
Cloudera cluster. I am fairly well versed with multi-node Apache
Hadoop distributions. So, how can I go forward?
Thanks for all the help :)
On Tue, Aug 13, 2013 at 9:22 PM, <[EMAIL PROTECTED]> wrote:
> Hi Pavan,
> Configuration properties generally aren't included in the jar itself unless you explicitly set them in your Java code. Rather, they're picked up from the mapred-site.xml file located in the Hadoop configuration directory on the host you're running your job from.
> Is there an issue you're coming up against when trying to run your job on a cluster?
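To illustrate the point above, here is a sketch of what the mapred-site.xml in the Hadoop configuration directory on the submitting host might contain for an MR1/CDH4-era cluster. The host name is hypothetical; substitute the cluster's actual JobTracker address:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <!-- Points jobs at the cluster's JobTracker instead of the local runner.
         "jobtracker.example.com" is a placeholder; 8021 is CDH's default port. -->
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>
</configuration>
```

Because `new Configuration()` in a driver reads this file from the classpath at runtime, the same jar built with Maven runs against whatever cluster the submitting host is configured for, without rebuilding.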
> (iPhone typing)
> On Aug 13, 2013, at 4:19 AM, Pavan Sudheendra <[EMAIL PROTECTED]> wrote:
>> I'm currently using Maven to build the jars necessary for my
>> MapReduce program to run, and it works on a single-node cluster.
>> For a multi-node cluster, how do I point my MapReduce program at
>> the cluster settings instead of the localhost settings?
>> I don't know how to specify this when building my jar with Maven.
>> I'm using the CDH distribution, by the way.
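Putting the advice above together, the workflow would look something like this sketch. The host name, jar name, driver class, and paths are all hypothetical placeholders:

```
# Build the job jar on the development machine with Maven as usual
mvn clean package

# Copy the jar to a cluster node that carries the CDH client configuration
scp target/myapp-1.0.jar user@gateway.example.com:

# On that node, launch the job with `hadoop jar`: the *-site.xml files
# under the host's Hadoop config directory (e.g. /etc/hadoop/conf on CDH)
# supply the cluster settings, so nothing cluster-specific is baked
# into the jar itself
hadoop jar myapp-1.0.jar com.example.MyDriver /input/path /output/path
```

The key design point is that the Maven build stays environment-agnostic; which cluster the job talks to is decided entirely by the configuration on the host you submit from.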