search-hadoop.com
Search criteria: author Hairong Kuang. Results 1 to 10 of 34 (0.081s).
[HDFS-1578] First step towards data transfer protocol compatibility: support DatanodeProtocol#getDataTransferProtocolVersion - HDFS - [issue]
...HADOOP-6904 allows us to handle RPC changes in a compatible way. However, we have one more protocol to take care of, the data transfer protocol, which a dfs client uses to read data from or ...
http://issues.apache.org/jira/browse/HDFS-1578    Author: Hairong Kuang, 2015-03-10, 01:56
[HDFS-1553] Hftp file read should retry a different datanode if the chosen best datanode fails to connect to NameNode - HDFS - [issue]
...Currently when reading a file through HftpFileSystem interface, namenode deterministically selects the "best" datanode from which the file is read. But this can cause the read to fail if the...
http://issues.apache.org/jira/browse/HDFS-1553    Author: Hairong Kuang, 2015-03-10, 01:49
[HDFS-1577] Fall back to a random datanode when bestNode fails - HDFS - [issue]
...When the NameNode decides to redirect a read request to a datanode, if it cannot find a live node that contains a block of the file, the NameNode should choose a random datanode instead of throwing ...
http://issues.apache.org/jira/browse/HDFS-1577    Author: Hairong Kuang, 2015-03-10, 01:47
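The fallback behavior HDFS-1577 proposes can be sketched roughly as follows. This is an illustrative sketch, not actual NameNode code; the class and method names (DatanodeChooser, pickDatanode, isLive) are hypothetical:

```java
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Hypothetical sketch of the HDFS-1577 idea: prefer a live datanode that
// holds a replica, but fall back to a random live node instead of failing.
public class DatanodeChooser {
    private final Random random = new Random();

    public String pickDatanode(List<String> replicaHolders,
                               List<String> allLiveNodes,
                               Predicate<String> isLive) {
        // First preference: a live node that actually holds a replica.
        for (String node : replicaHolders) {
            if (isLive.test(node)) {
                return node;
            }
        }
        // Fallback: any random live datanode, rather than throwing an
        // error back to the reading client.
        if (allLiveNodes.isEmpty()) {
            throw new IllegalStateException("no live datanodes");
        }
        return allLiveNodes.get(random.nextInt(allLiveNodes.size()));
    }
}
```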
[HDFS-1537] Add a metric for tracking the number of reported corrupt replicas - HDFS - [issue]
...We have a cluster where some datanodes' disks are corrupt, but it took us a few days to become aware of the problem. Adding a metric that keeps track of the number of reported corrupt replic...
http://issues.apache.org/jira/browse/HDFS-1537    Author: Hairong Kuang, 2015-03-10, 01:46
[HDFS-2248] Port recoverLease API from append 0.20 to trunk - HDFS - [issue]
...HDFS-1520 and HDFS-1554 add a new DistributedFileSystem API, recoverLease, in append 0.20 that forces a file's lease to be recovered immediately. I'd like to port it to trunk to support the HB...
http://issues.apache.org/jira/browse/HDFS-2248    Author: Hairong Kuang, 2015-03-10, 01:44
[HDFS-1348] Improve NameNode responsiveness while it is checking if datanode decommissions are complete - HDFS - [issue]
...The NameNode is normally busy all the time; its log is full of activity every second. But once in a while, the NameNode seems to pause for more than 10 seconds without doing anything, leaving a b...
http://issues.apache.org/jira/browse/HDFS-1348    Author: Hairong Kuang, 2015-03-02, 22:39
[HDFS-583] HDFS should enforce a max block size - HDFS - [issue]
...When a DataNode creates a replica, it should enforce a max block size, so clients can't go crazy. One way of enforcing this is to make BlockWritesStreams be filter streams that check the blo...
http://issues.apache.org/jira/browse/HDFS-583    Author: Hairong Kuang, 2014-07-25, 19:02
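The filter-stream enforcement HDFS-583 describes could look roughly like this. It is a minimal sketch built on a plain java.io.FilterOutputStream; the class name is illustrative, not the actual BlockWritesStream code:

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the HDFS-583 idea: wrap the block write stream and
// reject any write that would push the replica past a maximum block size.
public class MaxBlockSizeOutputStream extends FilterOutputStream {
    private final long maxBlockSize;
    private long written;

    public MaxBlockSizeOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    @Override
    public void write(int b) throws IOException {
        checkLimit(1);
        out.write(b);
        written += 1;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        checkLimit(len);
        out.write(b, off, len);
        written += len;
    }

    // Fail fast before the bytes reach the underlying stream.
    private void checkLimit(long delta) throws IOException {
        if (written + delta > maxBlockSize) {
            throw new IOException(
                "write would exceed max block size " + maxBlockSize);
        }
    }
}
```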
[HDFS-275] FSNamesystem should have an InvalidateBlockMap class to manage blocks scheduled to remove - HDFS - [issue]
...This jira intends to move the code that handles recentInvalideSet to a separate class InvalidateBlockMap....
http://issues.apache.org/jira/browse/HDFS-275    Author: Hairong Kuang, 2014-07-21, 19:00
[HDFS-166] NameNode#invalidateBlock's requirement on more than 1 valid replica exists before scheduling a replica to delete is too strict - HDFS - [issue]
...Currently invalidateBlock allows a replica to be deleted only if at least two valid replicas exist before the deletion is scheduled. This is too restrictive if the replica to delete is a corru...
http://issues.apache.org/jira/browse/HDFS-166    Author: Hairong Kuang, 2014-07-21, 18:35
[HDFS-48] NN should check a block's length even if the block is not a new block when processing a blockreport - HDFS - [issue]
...If the block length does not match the one in the blockMap, we should mark the block as corrupted. This could help clearing the polluted replicas caused by HADOOP-4810 and also help detect t...
http://issues.apache.org/jira/browse/HDFS-48    Author: Hairong Kuang, 2014-07-21, 18:13
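The check HDFS-48 asks for can be sketched as a simple length comparison against the NameNode's recorded metadata. This is an illustration only: the Map-based "blockMap" and the method name are simplifications, not the real FSNamesystem structures:

```java
import java.util.Map;

// Hypothetical sketch of the HDFS-48 check: when processing a block report,
// compare each reported block's length with the length recorded in the
// block map, and flag a mismatch as a corrupt replica.
public class BlockReportChecker {
    public static boolean isCorrupt(Map<Long, Long> blockMap,
                                    long blockId, long reportedLength) {
        Long recorded = blockMap.get(blockId);
        // Only blocks already known to the NameNode can mismatch; a block
        // absent from the map is handled elsewhere, not flagged here.
        return recorded != null && recorded.longValue() != reportedLength;
    }
}
```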