

Accessing HDFS
I've seen it mentioned that you can access HDFS via ClientProtocol, as in:
ClientProtocol namenode = DFSClient.createNamenode(conf);
LocatedBlocks lbs = namenode.getBlockLocations(path, start, length);

But we use:
FileSystem fs = FileSystem.get(uri, conf);
FileStatus fileStatus = fs.getFileStatus(path);
BlockLocation[] locations = fs.getFileBlockLocations(fileStatus, start, length);

As a YARN application and/or DFS client, are there times when I should use the ClientProtocol directly?
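For reference, here is a minimal, self-contained sketch of the public FileSystem-based approach we use; the namenode URI (hdfs://namenode:8020) and the file path (/data/example.txt) are placeholders, not values from our setup:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        Path path = new Path("/data/example.txt");
        FileStatus status = fs.getFileStatus(path);

        // Ask for the block locations covering the whole file.
        BlockLocation[] locations =
            fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation location : locations) {
            System.out.println("offset=" + location.getOffset()
                + " length=" + location.getLength()
                + " hosts=" + String.join(",", location.getHosts()));
        }

        fs.close();
    }
}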

Thanks
John
