Hadoop, mail # user - writing to hdfs via java api


Re: writing to hdfs via java api
Jay Vyas 2011-10-29, 02:57
Thanks Tom, that's interesting...

First I tried running the example job, and it complained that the input
directory didn't exist, so I ran
$> hadoop fs -mkdir /user/cloudera/input

Then, I tried to do this :

$> hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar grep input output2 'dfs[a-z.]+'

And it seemed to start working... but then it abruptly printed "Killed"
at the end of the job [scroll down]?

Maybe this is related to why I can't connect...?!
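
For reference, what I'm trying on the hdfs side looks roughly like this
(just a sketch; I'm assuming the default namenode RPC port 8020, and
main2 and the test path are illustrative names):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: talk to the namenode over hdfs:// RPC (default port 8020)
// instead of the read-only hftp interface on 50070.
public static void main2(String[] args) throws Exception
   {
       String uri = "hdfs://155.37.101.76:8020/";
       Configuration conf = new Configuration();

       FileSystem fs = FileSystem.get( URI.create( uri ), conf );

       // Try an actual write, since hdfs:// (unlike hftp) is writable.
       FSDataOutputStream out = fs.create( new Path( "/user/cloudera/test.txt" ) );
       out.writeUTF( "hello hdfs" );
       out.close();
   }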

1) The hadoop jar output:

11/10/14 21:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
11/10/14 21:34:43 WARN snappy.LoadSnappy: Snappy native library not loaded
11/10/14 21:34:43 INFO mapred.FileInputFormat: Total input paths to process : 0
11/10/14 21:34:44 INFO mapred.JobClient: Running job: job_201110142010_0009
11/10/14 21:34:45 INFO mapred.JobClient:  map 0% reduce 0%
11/10/14 21:34:55 INFO mapred.JobClient:  map 0% reduce 100%
11/10/14 21:34:57 INFO mapred.JobClient: Job complete: job_201110142010_0009
11/10/14 21:34:57 INFO mapred.JobClient: Counters: 14
11/10/14 21:34:57 INFO mapred.JobClient:   Job Counters
11/10/14 21:34:57 INFO mapred.JobClient:     Launched reduce tasks=1
11/10/14 21:34:57 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5627
11/10/14 21:34:57 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
11/10/14 21:34:57 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
11/10/14 21:34:57 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=5050
11/10/14 21:34:57 INFO mapred.JobClient:   FileSystemCounters
11/10/14 21:34:57 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=53452
11/10/14 21:34:57 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=86
11/10/14 21:34:57 INFO mapred.JobClient:   Map-Reduce Framework
11/10/14 21:34:57 INFO mapred.JobClient:     Reduce input groups=0
11/10/14 21:34:57 INFO mapred.JobClient:     Combine output records=0
11/10/14 21:34:57 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/10/14 21:34:57 INFO mapred.JobClient:     Reduce output records=0
11/10/14 21:34:57 INFO mapred.JobClient:     Spilled Records=0
11/10/14 21:34:57 INFO mapred.JobClient:     Combine input records=0
11/10/14 21:34:57 INFO mapred.JobClient:     Reduce input records=0
11/10/14 21:34:57 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/10/14 21:34:58 INFO mapred.FileInputFormat: Total input paths to process : 1
11/10/14 21:34:58 INFO mapred.JobClient: Running job: job_201110142010_0010
11/10/14 21:34:59 INFO mapred.JobClient:  map 0% reduce 0%
Killed
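
(Side note: the "Total input paths to process : 0" line for the first
job above suggests the input directory was still empty, so the grep job
had nothing to read. A sketch of populating it via the Java API; the
local path and file name here are just placeholders:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: copy one local file into the empty input directory so the
// grep job has at least one input path to process.
public static void populateInput() throws Exception
   {
       Configuration conf = new Configuration();
       FileSystem fs = FileSystem.get( conf );
       fs.copyFromLocalFile( new Path( "/etc/hadoop/conf/core-site.xml" ),
                             new Path( "/user/cloudera/input/core-site.xml" ) );
   }

(Equivalently: hadoop fs -put <some-local-file> input)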
On Fri, Oct 28, 2011 at 8:24 PM, Tom Melendez <[EMAIL PROTECTED]> wrote:

> Hi Jay,
>
> Some questions for you:
>
> - Does the hadoop client itself work from that same machine?
> - Are you actually able to run the hadoop example jar (in other words,
> your setup is valid otherwise)?
> - Is port 8020 actually available?  (you can telnet or nc to it? see
> the sketch after this list)
> - What does jps show on the namenode?
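>
> A quick Java-side check if telnet/nc aren't handy (just a sketch; the
> host and port are placeholders for whatever your namenode uses):
>
> import java.net.Socket;
>
> // Sketch: this only succeeds if something is listening on the
> // namenode RPC port.
> public static void checkPort() throws Exception
> {
>     Socket s = new Socket("155.37.101.76", 8020);
>     System.out.println("8020 is open");
>     s.close();
> }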
>
> Thanks,
>
> Tom
>
> On Fri, Oct 28, 2011 at 4:04 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
> > Hi guys: Made more progress debugging my hadoop connection, but still
> > haven't got it working... It looks like my VM (Cloudera Hadoop) won't
> > let me in. I find that there is no issue connecting to the name node,
> > that is, using hftp and port 50070...
> >
> > via standard HFTP, as here:
> >
> > // This method works fine - connecting directly to hadoop's namenode
> > // and querying the filesystem. Imports needed:
> >
> > import java.net.URI;
> > import org.apache.hadoop.conf.Configuration;
> > import org.apache.hadoop.fs.FileSystem;
> >
> > public static void main1(String[] args) throws Exception
> >    {
> >        String uri = "hftp://155.37.101.76:50070/";
> >
> >        System.out.println( "uri: " + uri );
> >        Configuration conf = new Configuration();
> >
> >        // Connect over the read-only HTTP interface on port 50070.
> >        FileSystem fs = FileSystem.get( URI.create( uri ), conf );
> >        fs.printStatistics();
> >    }
> >
> >
> > But unfortunately, I can't get into hdfs... Any thoughts on this? I

Jay Vyas
MMSB/UCHC