HDFS >> mail # dev >> problems with fuse-dfs


Re: problems with fuse-dfs

On Mar 2, 2011, at 12:46 AM, Aastha Mehta wrote:

> Thank you very much for replying. And I am sorry for mailing the users and
> dev lists together; I was not sure where the question belonged.
>
> I did try the -d option while running the wrapper script. It runs into an
> infinite loop of connection retries, and I can also see a socket connection
> exception thrown. I have to terminate the process, and at the end it
> shows "Transport endpoint is not connected".
>

Can you post the attempts?  Sounds like it may not be configured correctly.
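If it helps, here is a sketch of one way to capture that output for posting (the fuse_dfs invocation, namenode URI, and mount point are taken from elsewhere in this thread; adjust them to your setup, and note that -d is the standard FUSE debug flag):

```shell
# Run fuse_dfs in the foreground with FUSE debug output enabled (-d),
# and use tee to both display the retry/exception messages and save
# them to a log file that can be attached to a reply.
./fuse_dfs dfs://aastha-desktop:9000 /media/myDrive/newhdfs -d 2>&1 | tee /tmp/fuse_dfs.log
```

The saved log should contain the full retry loop and the socket exception, which is usually enough to tell whether the client is pointed at the wrong host/port or the namenode is refusing the connection.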

> lsof on /media/myDrive/newhdfs returns a warning as:
> lsof: WARNING: can't stat fuse.fuse_dfs file system on
> /media/myDrive/newhdfs
>       Output information maybe incomplete
> lsof: status error on /media/myDrive/newhdfs: Input/output error
>
> $lsof -c fuse_dfs
> COMMAND   PID      USER      FD           TYPE        DEVICE   SIZE/OFF
> NODE NAME
> fuse_dfs        2884     root         cwd         unknown
>                          /proc/2884/cwd (readlink: Permission denied)
> fuse_dfs        2884     root         rtd           unknown
>                            /proc/2884/rtd (readlink: Permission denied)
> fuse_dfs        2884     root         txt           unknown
>                            /proc/2884/txt (readlink: Permission denied)
> fuse_dfs        2884     root         NOFD      unknown
>                        /proc/2884/fd (opendir: Permission denied)
>

This is not useful.  As it says in the output, you don't have permission to perform this operation.

> $ps faux (listing only the relevant processes)
> root       283  0.0  0.2   4104  1192 ?        S    Mar01   0:00 mountall
> --daemon
> hadoop    1263  0.0  0.5  30284  2560 ?        Ssl  Mar01   0:00
> /usr/lib/gvfs//gvfs-fuse-daemon /home/hadoop/.gvfs
> root         2884  0.3  8.3 331128 42756 ?        Ssl  Mar01   3:21
> ./fuse_dfs dfs://aastha-desktop:9000 /media/myDrive/hell
>

This looks strange.  The relevant line from my running systems looks like this:

root     11767  0.1  4.2 6739536 1054776 ?     Ssl  Feb17  23:54 /usr/lib/hadoop-0.20/bin/fuse_dfs /mnt/hadoop -o rw,server=hadoop-name,port=9000,rdbuffer=32768,allow_other

It could be that you are simply invoking fuse_dfs incorrectly.  I've never used fuse_dfs_wrapper.sh myself.  I have a script (see (*) below) and then mount it via fstab:

hdfs /mnt/hadoop fuse server=hadoop-name,port=9000,rdbuffer=32768,allow_other 0 0
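For reference, a hedged breakdown of that fstab entry (field meanings per fstab(5); the fuse-dfs option names are as used in the Hadoop 0.20 era shown in this thread):

```
# <device> <mountpoint> <type> <options>                                               <dump> <pass>
hdfs       /mnt/hadoop  fuse   server=hadoop-name,port=9000,rdbuffer=32768,allow_other 0      0
# server/port - the HDFS namenode host and RPC port (should match fs.default.name)
# rdbuffer    - fuse-dfs read buffer size, in bytes
# allow_other - standard FUSE option allowing users other than the mounter to access the mount
```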

> Regarding libhdfs, I checked where it is located on my system. It is
> present only in the hadoop directories:
> /home/hadoop/hadoop/hadoop-0.20.2/src/c++/libhdfs/
> /usr/local/hadoop/hadoop-0.20.2/src/c++/libhdfs/
>
> Now, I cannot understand why the changes to the libhdfs code are not
> reflected.
>

Are those libraries on the linker's path?
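Brian's question is actionable: if the rebuilt libhdfs is not on the dynamic linker's search path, fuse_dfs will silently load a stale system copy. A sketch of how one might check (the build directory here is hypothetical; point LIBHDFS_DIR at wherever your modified libhdfs.so actually lands):

```shell
# Hypothetical location of the freshly built library - adjust as needed.
LIBHDFS_DIR=/usr/local/hadoop/hadoop-0.20.2/build/libhdfs

# Is any libhdfs known to the dynamic linker cache?
ldconfig -p | grep libhdfs || echo "libhdfs not in linker cache"

# Put the modified copy first on the search path so fuse_dfs resolves
# it ahead of any stale system copy:
export LD_LIBRARY_PATH="${LIBHDFS_DIR}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Confirm which copy the binary would actually load, if it is present here:
[ -x ./fuse_dfs ] && ldd ./fuse_dfs | grep libhdfs || true
```

If `ldd` still shows a path other than your build directory, the changes to libhdfs will never be reflected at runtime, which would explain the missing syslog output.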

> Thanks again for your help.
>
> Regards,
>
> Aastha.
>
>
> On 2 March 2011 05:56, Brian Bockelman <[EMAIL PROTECTED]> wrote:
>
>> Sorry, resending on hdfs-dev; apparently I'm not on -user.
>>
>> Begin forwarded message:
>>
>> *From: *Brian Bockelman <[EMAIL PROTECTED]>
>> *Date: *March 1, 2011 6:24:28 PM CST
>> *Cc: *hdfs-user <[EMAIL PROTECTED]>
>> *Subject: **Re: problems with fuse-dfs*
>>
>>
>> Side note: Do not cross post to multiple lists.  It annoys folks.
>>
>>
>> On Mar 1, 2011, at 11:50 AM, Aastha Mehta wrote:
>>
>> Hello,
>>
>> I am facing problems running fuse-dfs over hdfs. I came across this
>> thread while searching for my problem:
>> http://www.mail-archive.com/[EMAIL PROTECTED]/msg00341.html
>> OR
>> http://search-hadoop.com/m/T1Bjv17q0eF1&subj=Re+Fuse+DFS
>>
>> and it mentions exactly some of the symptoms I am seeing.
>>
>> To quote Eli,
>> "fuse_impls_getattr.c connects via hdfsConnectAsUser so you should see a log
>> (unless it's returning from a case that doesn't print an error). Next
>> step is to determine that you're actually reaching the code you modified by
>> adding a syslog to the top of the function (need to make sure you're

(*) The script mentioned above:

[bbockelm@t3-sl5 ~]$ cat /usr/bin/hdfs
#!/bin/bash

/sbin/modprobe fuse

export HADOOP_HOME=/usr/lib/hadoop-0.20

if [ -f /etc/default/hadoop-0.20-fuse ]
then . /etc/default/hadoop-0.20-fuse
fi

if [ -f $HADOOP_HOME/bin/hadoop-config.sh ]
then . $HADOOP_HOME/bin/hadoop-config.sh  
fi

if [ "$LD_LIBRARY_PATH" = "" ]
then JVM_LIB=`find ${JAVA_HOME}/jre/lib -name libjvm.so |tail -n 1`
        export LD_LIBRARY_PATH=`dirname $JVM_LIB`:/usr/lib/

fi
for i in ${HADOOP_HOME}/*.jar ${HADOOP_HOME}/lib/*.jar
        do CLASSPATH+=$i:
done

export PATH=$PATH:${HADOOP_HOME}/bin/
CLASSPATH=/etc/hadoop-0.20/conf:$CLASSPATH
env CLASSPATH=$CLASSPATH ${HADOOP_HOME}/bin/fuse_dfs "$@"