HDFS >> mail # user >> Error while using libhdfs C API


Re: Error while using libhdfs C API


On 03/09/2012 07:34 AM, Amritanshu Shekhar wrote:
>
> Hi Marcos,
>
> Figured out the compilation issue. It was due to error.h header file
> which was not used and not present in the distribution.  There is one
> small issue however I was trying to test hdfs read. I copied an input
> file to /user/inputData(this can be listed using bin/hadoop dfs  -ls
>  /user/inputData).  hdfsExists call  fails for this directory however
> it works when I copy my file  to /tmp.  Is it because hdfs only
> recognizes /tmp as a valid dir?  Thus I was wondering  what directory
> structure does hdfs recognize by default and if we can override it
> through a conf variable what would that variable be and where to set it?
>
> Thanks,
>
> Amritanshu
>
Awesome, Amritanshu. CC'ing [EMAIL PROTECTED].
Please post some logs and the details of how you solved the compilation
issue, so that it ends up in the mailing list archives.

About your other issue:
1- Did you check that $HADOOP_USER has access to /user/inputData?

HDFS:
           It recognizes the directories that you configured in
hdfs-site.xml through the dfs.name.dir (NameNode) and dfs.data.dir
(DataNode) properties, but by default it works under the /tmp directory
(not recommended in production). Look at Eugene Ciurana's Refcard called
"Deploying Hadoop", where he did an amazing job of explaining some
tricky configuration tips in a few pages.
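To isolate whether this is a path or a permission problem, a minimal
sketch like the following can help (untested here, and it needs a running
HDFS plus the libhdfs headers; connecting with "default", 0 picks up
fs.default.name from the Hadoop configuration on the classpath — note
that hdfsExists returns 0 when the path exists):

```c
/* Minimal libhdfs sketch: check whether a path exists in HDFS.
 * Compile and link against libhdfs and libjvm; paths are examples. */
#include <stdio.h>
#include "hdfs.h"

int main(void) {
    /* "default", 0 => use the configured default filesystem */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) {
        fprintf(stderr, "hdfsConnect failed\n");
        return 1;
    }

    const char *path = "/user/inputData";
    /* hdfsExists returns 0 on success (path exists) */
    if (hdfsExists(fs, path) == 0)
        printf("%s exists\n", path);
    else
        printf("%s not visible (check permissions/ownership too)\n", path);

    hdfsDisconnect(fs);
    return 0;
}
```

If this prints "not visible" while bin/hadoop dfs -ls shows the
directory, the difference is usually the user the C program runs as.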

Regards

> *From:*Marcos Ortiz [mailto:[EMAIL PROTECTED]]
> *Sent:* Wednesday, March 07, 2012 7:36 PM
> *To:* Amritanshu Shekhar
> *Subject:* Re: Error while using libhdfs C API
>
>
>
> On 03/07/2012 01:15 AM, Amritanshu Shekhar wrote:
>
> Hi Marcos,
>
> Thanks for the quick reply. Actually I am using a gmake build system
> where the library is being linked as a static library(.a ) rather than
> a shared object.  It seems strange since stderr is a standard symbol
> which should be resolved.  Currently I am using the version that came
> with the distribution($HOME/c++/Linux-amd64-64/lib/libhdfs.a) . I
> tried building the library from the source but there were build
> dependencies that could not be resolved. I tried building
> $HOME/hadoop/hdfs/src/c++/libhdfs by running:
>
> ./configure
>
> ./make
>
> I got a lot of dependency errors so gave up the effort.  If you happen
> to have a working application that makes use of libhdfs, please let me
> know. Any inputs would be welcome as I have hit a roadblock as far as
> libhdfs is concerned.
>
> Thanks,
>
> Amritanshu
>
> No, Amritanshu. I don't have any examples of the use of the libhdfs API,
> but I remembered that some folks were using it. Search on the mailing
> list archives (http://www.search-hadoop.com).
> Can you put the errors that you had in your system when you tried to
> compile the library?
> Regards and best wishes
>
> *From:*Marcos Ortiz [mailto:[EMAIL PROTECTED]]
> *Sent:* Monday, March 05, 2012 6:51 PM
> *To:* [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
> *Cc:* Amritanshu Shekhar
> *Subject:* Re: Error while using libhdfs C API
>
> Which platform are you using?
> Did you update the dynamic linker runtime bindings (ldconfig)?
>
> ldconfig $HOME/hadoop/c++/Linux-amd64/lib
> Regards
>
> On 03/06/2012 02:38 AM, Amritanshu Shekhar wrote:
>
> Hi,
>
> I was trying to link  64 bit libhdfs in my application program but it
> seems there is an issue with this library.  Get the following error:
>
> Undefined                       first referenced
>
> symbol                             in file
>
> stderr                              libhdfs.a(hdfs.o)
>
> __errno_location                   libhdfs.a(hdfs.o)
>
> ld: fatal: Symbol referencing errors. No output written to
> ../../bin/sun86/mapreduce
>
> collect2: ld returned 1 exit status
>
> Now I was wondering  if  this a common error and is there an actual
> issue with the library or am I getting an error because of  an
> incorrect configuration? I am using the following library:
> $HOME/hadoop/c++/Linux-amd64-64/lib/libhdfs.a
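On the unresolved symbols above: `stderr` and `__errno_location` are
glibc-specific symbols, and the "ld: fatal: Symbol referencing errors"
wording plus the `../../bin/sun86` output path suggest a Solaris linker,
so a Linux-built libhdfs.a would likely not link there at all.
Independent of platform, libhdfs calls into the JVM via JNI, so libjvm
has to be on the link line and on the runtime library path. A
hypothetical link command on Linux/amd64 (paths are examples, not taken
from this thread):

```shell
# Hypothetical link line for a libhdfs program on Linux/amd64.
# libhdfs depends on the JVM, so libjvm must be linked in; adjust
# JAVA_HOME and the Hadoop paths to your installation.
gcc -o hdfs_test hdfs_test.o \
    -L"$HOME/hadoop/c++/Linux-amd64-64/lib" -lhdfs \
    -L"$JAVA_HOME/jre/lib/amd64/server" -ljvm -lpthread

# Make libjvm visible to the dynamic linker at run time:
export LD_LIBRARY_PATH="$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH"
```

If the target really is Solaris, libhdfs would need to be rebuilt there
rather than reusing the Linux-amd64-64 archive.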

Marcos Luis Ortíz Valmaseda
  Sr. Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://postgresql.uci.cu/blog/38
End the injustice, FREEDOM NOW FOR OUR FIVE COMPATRIOTS WHO ARE UNJUSTLY HELD IN U.S. PRISONS!
http://www.antiterroristas.cu
http://justiciaparaloscinco.wordpress.com