Hadoop >> mail # general >> Filesystem error


Re: Filesystem error
Thanks. I started the services, moved the corrupted block files, and it worked.
But users are complaining that they have lost data from files I did not move;
unfortunately I cannot explain why those files are missing.

Second, this is the second time the filesystem has become corrupted. Can you
please let me know what could cause the filesystem to corrupt again?

Can anybody answer this?
On Fri, Mar 29, 2013 at 6:56 PM, Daryn Sharp <[EMAIL PROTECTED]> wrote:

> The UGI preface is just reporting who you are when the exception occurred.
>  The issue isn't permissions but rather when you stopped the services it
> can't connect to localhost:8020 because nothing is listening on 8020, hence
> the "connection refused".  I think you need to force the NN into safe mode
> rather than stop the services.
>
> Daryn
>
> On Mar 29, 2013, at 12:54 AM, Mohit Vadhera wrote:
>
> > Hi,
> >
> > I have a filesystem error. When I run fsck to move the corrupted blocks
> > after stopping the services, I get the error below; but if I don't stop
> > the services and run the fsck command, the corrupted blocks don't move.
> > I am now getting this UserGroupInformation error, which looks like a
> > permission error. Can anybody fix it? It is an urgent issue on my Hadoop
> > machine. It is a standalone cluster configured using the link below:
> >
> > https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode
> >
> > Error
> > ==================================================
> > # sudo -u hdfs hadoop fsck / -move
> > DEPRECATED: Use of this script to execute hdfs command is deprecated.
> > Instead use the hdfs command for it.
> >
> > 13/03/29 01:20:20 ERROR security.UserGroupInformation:
> > PriviledgedActionException as:hdfs (auth:SIMPLE)
> > cause:java.net.ConnectException: Call From OPERA-MAST1.ny.os.local/172.20.3.119
> > to localhost:8020 failed on connection exception:
> > java.net.ConnectException: Connection refused; For more details see:
> > http://wiki.apache.org/hadoop/ConnectionRefused
> > Exception in thread "main" java.net.ConnectException: Call From
> > OPERA-MAST1.ny.os.local/172.20.3.119 to localhost:8020 failed on connection
> > exception: java.net.ConnectException: Connection refused; For more details
> > see: http://wiki.apache.org/hadoop/ConnectionRefused
> >        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
> >        at org.apache.hadoop.ipc.Client.call(Client.java:1228)
> >        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >        at $Proxy9.getFileInfo(Unknown Source)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >        at $Proxy9.getFileInfo(Unknown Source)
> >        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:628)
> >        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1507)
> >        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:783)
> >        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1257)
> >        at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:298)
> >        at org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:229)
> >        at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:235)
> >        at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:71)
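
Daryn's suggestion in this thread, applied to a CDH4 pseudo-distributed node like the one described, could be sketched roughly as follows. This is a sketch, not a tested procedure for this cluster; it uses the standard `hdfs dfsadmin -safemode` and `hdfs fsck -move` commands, and assumes the NameNode is left running on localhost:8020 throughout:

```shell
# Sketch, assuming standard CDH4/HDFS tooling: put the running NameNode
# into safe mode (read-only) instead of stopping the services, so fsck
# can still connect to localhost:8020 while no new writes land.
sudo -u hdfs hdfs dfsadmin -safemode enter

# Move files containing corrupt blocks to /lost+found while in safe mode.
sudo -u hdfs hdfs fsck / -move

# Leave safe mode once the check is done.
sudo -u hdfs hdfs dfsadmin -safemode leave
```

Stopping the services makes fsck fail with "Connection refused" because nothing is listening on port 8020; safe mode keeps the NameNode reachable while blocking writes.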