RE: Accessing HDFS
Thanks!  They are fine; I was just confused after seeing them discussed in forums.
John
-----Original Message-----
From: Harsh J [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 05, 2013 8:01 PM
To: <[EMAIL PROTECTED]>
Subject: Re: Accessing HDFS

These APIs (ClientProtocol, DFSClient) are not for public access.
Please do not use them in production. The only APIs we take care not to change incompatibly are the FileContext and FileSystem APIs. They provide much of what you want; if not, log a JIRA.
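A minimal sketch of the supported FileSystem route, assuming a placeholder cluster URI and file path (in practice the default filesystem from core-site.xml via FileSystem.get(conf) is often enough):

import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder cluster URI; adjust for your setup.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    Path path = new Path("/user/john/data.txt"); // hypothetical file
    FileStatus status = fs.getFileStatus(path);
    // Block locations covering the whole file; pass a narrower
    // (start, length) range if you only need part of it.
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset=%d length=%d hosts=%s%n",
          block.getOffset(), block.getLength(),
          Arrays.toString(block.getHosts()));
    }
  }
}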

On Fri, Jul 5, 2013 at 11:40 PM, John Lilley <[EMAIL PROTECTED]> wrote:
> I've seen mentioned that you can access HDFS via ClientProtocol, as in:
>
> ClientProtocol namenode = DFSClient.createNamenode(conf);
> LocatedBlocks lbs = namenode.getBlockLocations(path, start, length);
>
> But we use:
>
> fs = FileSystem.get(uri, conf);
> filestatus = fs.getFileStatus(path);
> fs.getFileBlockLocations(filestatus, start, length);
>
> As a YARN application and/or DFS client, are there times when I should
> use the ClientProtocol directly?
>
> Thanks
>
> John

--
Harsh J
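
The FileContext API Harsh mentions supports the same lookup; a sketch under the same placeholder URI and path assumptions (note FileContext.getFileBlockLocations takes a Path rather than a FileStatus):

import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class FileContextBlockLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder cluster URI; adjust for your setup.
    FileContext fc =
        FileContext.getFileContext(URI.create("hdfs://namenode:8020"), conf);
    Path path = new Path("/user/john/data.txt"); // hypothetical file
    FileStatus status = fc.getFileStatus(path);
    BlockLocation[] blocks =
        fc.getFileBlockLocations(path, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset=%d hosts=%s%n",
          block.getOffset(), Arrays.toString(block.getHosts()));
    }
  }
}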