Re: Hadoop and Cuda, JCuda (CPU+GPU architecture)
You could also try creating a lib directory with the dependent jar and
packaging it along with the job's jar file. Please refer to this blog post
for more information:
http://www.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job/
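
A minimal sketch of that layout (the file and class names here are assumptions
for illustration, not taken from this thread): the dependent jar is placed
under a lib/ directory inside the job jar, and the task classloader on the
worker nodes picks it up when the job jar is unpacked. The same result can be
produced with the standard jar tool; the Java version below just makes the
layout explicit.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

// Illustrative: repackage a job jar so a dependent jar ships under lib/.
public class PackageJobJar {
    public static void main(String[] args) throws IOException {
        String jobJar = "wordcount.jar";            // original job jar (assumed name)
        String depJar = "jcuda.jar";                // dependency to ship (assumed name)
        String outJar = "wordcount-with-libs.jar";  // repackaged job jar

        byte[] buf = new byte[8192];
        JarFile in = new JarFile(jobJar);
        JarOutputStream out = new JarOutputStream(new FileOutputStream(outJar));

        // Copy every entry of the original job jar unchanged.
        Enumeration<JarEntry> entries = in.entries();
        while (entries.hasMoreElements()) {
            JarEntry entry = entries.nextElement();
            out.putNextEntry(new JarEntry(entry.getName()));
            copy(in.getInputStream(entry), out, buf);
            out.closeEntry();
        }

        // Add the dependency under lib/, where the MapReduce task runtime
        // looks for extra jars after unpacking the job jar.
        out.putNextEntry(new JarEntry("lib/" + depJar));
        copy(new FileInputStream(depJar), out, buf);
        out.closeEntry();

        out.close();
        in.close();
    }

    private static void copy(InputStream is, OutputStream os, byte[] buf) throws IOException {
        int n;
        while ((n = is.read(buf)) != -1) {
            os.write(buf, 0, n);
        }
        is.close();
    }
}

The repackaged jar is then submitted as usual, e.g. with
"hadoop jar wordcount-with-libs.jar <main-class> <args>".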

On Wed, Sep 26, 2012 at 4:57 PM, sudha sadhasivam <[EMAIL PROTECTED]> wrote:

> Sir,
> We have also tried the option of putting JCUBLAS.jar in the Hadoop jar.
> It is still not recognised.
> We would be thankful if you could provide us with a sample exercise on the
> same, with steps for execution.
> I am herewith attaching the error file.
> Thanking you,
> With warm regards,
> Dr G Sudha Sadasivam
>
>
> --- On Tue, 9/25/12, Chen He <[EMAIL PROTECTED]> wrote:
>
>
> From: Chen He <[EMAIL PROTECTED]>
> Subject: Re: Hadoop and Cuda, JCuda (CPU+GPU architecture)
> To: "sudha sadhasivam" <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Date: Tuesday, September 25, 2012, 9:01 PM
>
>
> Hi Sudha
>
> Good question.
>
> First of all, you need to state clearly what your Hadoop environment is
> (pseudo-distributed or a real cluster).
>
> Secondly, you need to understand clearly how Hadoop distributes a job's jar
> file to the worker nodes: it copies only the job jar itself, and that jar does
> not contain jcuda.jar. The MapReduce program may not know where jcuda.jar is
> even if you specify it in your worker nodes' classpath.
>
> I would prefer that you include Jcuda.jar in your wordcount.jar. Then, when
> Hadoop copies the wordcount.jar file to every worker node's temporary working
> directory, you do not need to worry about this issue.
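>
> For illustration, here is a minimal sketch of the kind of mapper such a jar
> would contain (the class name, input format and constants are assumptions for
> the example, not code from this thread; the JCublas calls follow JCuda's
> published API). Because this class depends on JCuda, jcuda.jar must be
> reachable from inside the job jar on every worker node:
>
> import java.io.IOException;
> import jcuda.Pointer;
> import jcuda.Sizeof;
> import jcuda.jcublas.JCublas;
> import org.apache.hadoop.io.FloatWritable;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Mapper;
>
> // Hypothetical mapper that offloads a SAXPY (y = a*x + y) to the GPU via JCublas.
> public class GpuSaxpyMapper extends Mapper<LongWritable, Text, Text, FloatWritable> {
>
>     @Override
>     protected void setup(Context context) {
>         JCublas.cublasInit();                        // initialise CUBLAS once per task
>     }
>
>     @Override
>     protected void map(LongWritable key, Text value, Context context)
>             throws IOException, InterruptedException {
>         // One comma-separated vector per input line (illustrative input format).
>         String[] parts = value.toString().split(",");
>         int n = parts.length;
>         float[] x = new float[n];
>         float[] y = new float[n];
>         for (int i = 0; i < n; i++) {
>             x[i] = Float.parseFloat(parts[i].trim());
>             y[i] = 1.0f;
>         }
>
>         // Allocate device memory and copy the host vectors to the GPU.
>         Pointer dX = new Pointer();
>         Pointer dY = new Pointer();
>         JCublas.cublasAlloc(n, Sizeof.FLOAT, dX);
>         JCublas.cublasAlloc(n, Sizeof.FLOAT, dY);
>         JCublas.cublasSetVector(n, Sizeof.FLOAT, Pointer.to(x), 1, dX, 1);
>         JCublas.cublasSetVector(n, Sizeof.FLOAT, Pointer.to(y), 1, dY, 1);
>
>         JCublas.cublasSaxpy(n, 2.0f, dX, 1, dY, 1);  // y = 2*x + y on the GPU
>
>         // Copy the result back and release device memory.
>         JCublas.cublasGetVector(n, Sizeof.FLOAT, dY, 1, Pointer.to(y), 1);
>         JCublas.cublasFree(dX);
>         JCublas.cublasFree(dY);
>
>         context.write(new Text("y[0]"), new FloatWritable(y[0]));
>     }
>
>     @Override
>     protected void cleanup(Context context) {
>         JCublas.cublasShutdown();                    // release CUBLAS resources
>     }
> }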
>
> Let me know if you have any further questions.
>
> Chen
>
> On Tue, Sep 25, 2012 at 12:38 AM, sudha sadhasivam <[EMAIL PROTECTED]> wrote:
>
> > Sir,
> > We tried to integrate Hadoop and JCUDA. We tried the code from
> > http://code.google.com/p/mrcl/source/browse/trunk/hama-mrcl/src/mrcl/mrcl/?r=76
> >
> > We were able to compile, but we are not able to execute: it does not
> > recognise JCUBLAS.jar. We tried setting the classpath.
> > We are herewith attaching the procedure for the same, along with the errors.
> > Kindly inform us how to proceed. It is our UG project.
> > Thanking you,
> > Dr G Sudha Sadasivam
> >
> > --- On Mon, 9/24/12, Chen He <[EMAIL PROTECTED]> wrote:
> >
> >
> > From: Chen He <[EMAIL PROTECTED]>
> > Subject: Re: Hadoop and Cuda, JCuda (CPU+GPU architecture)
> > To: [EMAIL PROTECTED]
> > Date: Monday, September 24, 2012, 9:03 PM
> >
> >
> > http://wiki.apache.org/hadoop/CUDA%20On%20Hadoop
> >
> > On Mon, Sep 24, 2012 at 10:30 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > I am going to process video analytics using Hadoop.
> > > I am very interested in a CPU+GPU architecture, especially using CUDA
> > > (http://www.nvidia.com/object/cuda_home_new.html) and JCUDA
> > > (http://jcuda.org/).
> > > Does using Hadoop with a CPU+GPU architecture bring a significant
> > > performance improvement, and has anyone succeeded in implementing it in
> > > production quality?
> > >
> > > I didn't find any projects / examples that use such technology.
> > > If someone could give me a link to best practices and examples using
> > > CUDA/JCUDA + Hadoop, that would be great.
> > > Thanks in advance,
> > > Oleg.
> > >
> >
> >
>
>