
Hadoop, mail # user - Can not upload local file to HDFS


Re: Can not upload local file to HDFS
Nan Zhu 2010-09-26, 16:43
Have you checked the log files in the log directory?

I always find important information there.

I suggest you recompile Hadoop with ant, since the mapred daemons don't
work either.
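
For example, something like this (assuming the stock layout, where each
daemon writes its log under $HADOOP_HOME/logs; adjust for your install):

  # scan the datanode and namenode logs for recent errors
  grep -iE "error|exception" $HADOOP_HOME/logs/hadoop-*-datanode-*.log | tail -n 20
  grep -iE "error|exception" $HADOOP_HOME/logs/hadoop-*-namenode-*.log | tail -n 20

To rebuild with ant, running "ant" from the top of the source tree should
be enough.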

Nan

On Sun, Sep 26, 2010 at 7:29 PM, He Chen <[EMAIL PROTECTED]> wrote:

> The problem is that every datanode may be listed in the error report. Does
> that mean all my datanodes are bad?
>
> One thing I forgot to mention: I cannot use start-all.sh and stop-all.sh
> to start and stop the dfs and mapred processes on my cluster. But the
> jobtracker and namenode web interfaces still work.
>
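> (For diagnosis, the same state should also be visible from the command
> line, e.g. with the stock hadoop script:)
>
>   # live/dead datanode summary as seen by the namenode
>   hadoop dfsadmin -report
>   # on each node, jps lists the running Hadoop daemons
>   jps
>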
> I think I can solve this by ssh-ing to every node, killing the current
> Hadoop processes, and restarting them. That should also fix the previous
> problem (in my opinion). But I really want to know why HDFS reports these
> errors.
>
>
> On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu <[EMAIL PROTECTED]> wrote:
>
> > Hi Chen,
> >
> > It seems that you have a bad datanode. Maybe you should reformat it?
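> >
> > On 0.20 a full reformat would be something like the following (this
> > destroys everything in HDFS, so treat it as a last resort; the data
> > dir below is the default and may differ in your hdfs-site.xml):
> >
> >   bin/stop-dfs.sh
> >   # on each datanode, clear the storage dir (default dfs.data.dir):
> >   #   rm -rf /tmp/hadoop-$USER/dfs/data
> >   bin/hadoop namenode -format
> >   bin/start-dfs.sh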
> >
> > Nan
> >
> > On Sun, Sep 26, 2010 at 10:42 AM, He Chen <[EMAIL PROTECTED]> wrote:
> >
> > > Hello Neil
> > >
> > > No matter how big the file is, it always reports this error. The file
> > > size ranges from 10KB to 100MB.
> > >
> > > On Sat, Sep 25, 2010 at 6:08 PM, Neil Ghosh <[EMAIL PROTECTED]>
> > wrote:
> > >
> > > > How big is the file? Did you try formatting the namenode and datanodes?
> > > >
> > > > On Sun, Sep 26, 2010 at 2:12 AM, He Chen <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Hello everyone,
> > > > >
> > > > > I cannot upload a local file to HDFS. It gives the following errors:
> > > > >
> > > > > WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-236192853234282209_419415 java.io.EOFException
> > > > >         at java.io.DataInputStream.readFully(DataInputStream.java:197)
> > > > >         at java.io.DataInputStream.readLong(DataInputStream.java:416)
> > > > >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2397)
> > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block blk_-236192853234282209_419415 bad datanode[0] 192.168.0.23:50010
> > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block blk_-236192853234282209_419415 in pipeline 192.168.0.23:50010, 192.168.0.39:50010: bad datanode 192.168.0.23:50010
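> > > > >
> > > > > For reference, the upload command is just this (paths here are
> > > > > placeholders):
> > > > >
> > > > >   # copy a local file into HDFS
> > > > >   hadoop fs -put /tmp/test.file /user/chen/test.file
> > > > >   # block-level health of the target can be checked with fsck
> > > > >   hadoop fsck /user/chen/test.file -files -blocks -locations
> > > > >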
> > > > > Any response will be appreciated!