Re: Hadoop C++ HDFS test running Exception
I've found in the past that the native code runtime somehow doesn't
support wildcarded classpath entries. If you add the jars explicitly to
the CLASSPATH, your app will work. You could use a simple shell loop,
such as the one in my example at
https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3
to populate it easily instead of doing it by hand.
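
For reference, a minimal sketch of such a loop, assuming the jars live
under $HADOOP_HOME/share/hadoop (adjust the paths for your layout):

# Build CLASSPATH with every jar listed by full path instead of wildcards.
# Assumes $HADOOP_HOME/share/hadoop holds the Hadoop jars; adjust as needed.
export CLASSPATH="$HADOOP_HOME/etc/hadoop"
for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
    export CLASSPATH="$CLASSPATH:$jar"   # append each jar explicitly
done

With the CLASSPATH expanded this way, the embedded JVM should be able to
resolve the Hadoop classes that the wildcard entries were hiding.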

On Mon, Jan 13, 2014 at 3:36 PM, Andrea Barbato <[EMAIL PROTECTED]> wrote:
> I'm working with Hadoop 2.2.0 and trying to run this hdfs_test.cpp
> application:
>
> #include "hdfs.h"
> #include <fcntl.h>   // O_WRONLY, O_CREAT
> #include <stdio.h>   // fprintf
> #include <stdlib.h>  // exit
> #include <string.h>  // strlen
>
> int main(int argc, char **argv) {
>     hdfsFS fs = hdfsConnect("default", 0);
>     const char* writePath = "/tmp/testfile.txt";
>     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
>     if (!writeFile) {
>         fprintf(stderr, "Failed to open %s for writing!\n", writePath);
>         exit(-1);
>     }
>     const char* buffer = "Hello, World!";
>     tSize num_written_bytes = hdfsWrite(fs, writeFile, buffer, strlen(buffer) + 1);
>     if (hdfsFlush(fs, writeFile)) {
>         fprintf(stderr, "Failed to 'flush' %s\n", writePath);
>         exit(-1);
>     }
>     hdfsCloseFile(fs, writeFile);
>     hdfsDisconnect(fs);
>     return 0;
> }
>
> I compiled it, but when I run it with ./hdfs_test I get this:
>
> loadFileSystems error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0,
> kerbTicketCachePath=(NULL), userName=(NULL)) error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> Failed to open /tmp/testfile.txt for writing!
>
> Maybe it is a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop,
> and this is my *CLASSPATH* variable:
>
> echo $CLASSPATH
> /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
>
>
> Any help is appreciated. Thanks!

--
Harsh J