Re: Help with Hadoop runtime error
Do you happen to see something similar to:

10/03/17 15:47:58 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/perserver/data/575Gb/ps_es_mstore_events_fact.txt retries left 4
10/03/17 15:47:58 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt

Other people have seen the above together with the Bad connect ack error.
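
If it is the same xcievers problem discussed below, one quick way to check is to grep each datanode's log for the xceiver-limit error. A minimal sketch, assuming the stock 0.20.x log directory layout and the usual wording of that error (both are assumptions, not details from this thread):

# Run on each datanode; adjust HADOOP_HOME and the log file name to your install.
grep -i "exceeds the limit of concurrent xcievers" \
  ${HADOOP_HOME}/logs/hadoop-*-datanode-*.log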

On Fri, Jul 9, 2010 at 2:06 PM, Raymond Jennings III
<[EMAIL PROTECTED]> wrote:

> Hi Ted, thanks for your reply.  That does not seem to make a difference
> though.  I put that property in the xml file, restarted everything, and
> tried to transfer the file again, but the same thing occurred.
>
> I had my cluster working perfectly for about a year, but I recently had
> some disk failures, scrubbed all of my machines, reinstalled Linux (same
> version), and moved from Hadoop 0.20.1 to 0.20.2.
>
> ----- Original Message ----
> From: Ted Yu <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Sent: Fri, July 9, 2010 4:26:30 PM
> Subject: Re: Help with Hadoop runtime error
>
> Please see the description of xcievers at:
> http://hbase.apache.org/docs/r0.20.5/api/overview-summary.html#requirements
>
> You can confirm that you have an xcievers problem by grepping the
> datanode logs for the error message quoted in the last bullet point
> (a sketch of the corresponding config change follows this quoted thread).
>
> On Fri, Jul 9, 2010 at 1:10 PM, Raymond Jennings III
> <[EMAIL PROTECTED]> wrote:
>
> > Does anyone know what might be causing this error?  I am using Hadoop
> > version 0.20.2, and it happens when I run bin/hadoop dfs -copyFromLocal ...
> >
> > 10/07/09 15:51:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> > java.io.IOException: Bad connect ack with firstBadLink 128.238.55.43:50010
> > 10/07/09 15:51:45 INFO hdfs.DFSClient: Abandoning block
> > blk_2932625575574450984_1002
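
If the datanode logs do show that message, the usual remedy described on the HBase requirements page linked above is to raise the limit in conf/hdfs-site.xml on every datanode and restart them. A minimal sketch; the property name carries Hadoop 0.20's historical misspelling of "xceivers", and the value 4096 is just a commonly suggested starting point, not a number from this thread:

<property>
  <!-- Maximum number of concurrent data-transfer threads per datanode.
       Note the historical misspelling "xcievers" in the property name. -->
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>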