Results 1 to 10 of 25 (0.08s).
[HDFS-1578] First step towards data transfer protocol compatibility: support DatanodeProtocol#getDataTransferProtocolVersion - HDFS - [issue]
...HADOOP-6904 allows us to handle RPC changes in a compatible way. However, we have one more protocol to take care of, the data transfer protocol, which a dfs client uses to read data from or ...
http://issues.apache.org/jira/browse/HDFS-1578    Author: Hairong Kuang, 2015-03-10, 01:56
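The compatibility step described above boils down to version negotiation: before speaking the data transfer protocol, the client learns which protocol version the server supports (via a call like DatanodeProtocol#getDataTransferProtocolVersion) and uses the lower of the two. A minimal sketch, with illustrative version numbers rather than real HDFS constants:

```java
// Hedged sketch of the compatibility idea in HDFS-1578: the client asks the
// server which data transfer protocol version it supports and falls back to
// the older wire format when the server is behind. CLIENT_VERSION and the
// negotiate() helper are illustrative, not actual HDFS code.
class DataTransferCompat {
    static final int CLIENT_VERSION = 20;

    // Decide which wire version to speak, given the server's advertised version.
    static int negotiate(int serverVersion) {
        return Math.min(CLIENT_VERSION, serverVersion);
    }
}
```

An older server (version 19) makes the client downgrade; a newer one is capped at the client's own version.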
[HDFS-1553] Hftp file read should retry a different datanode if the chosen best datanode fails to connect to NameNode - HDFS - [issue]
...Currently when reading a file through HftpFileSystem interface, namenode deterministically selects the "best" datanode from which the file is read. But this can cause the read to fail if the...
http://issues.apache.org/jira/browse/HDFS-1553    Author: Hairong Kuang, 2015-03-10, 01:49
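The retry proposed in HDFS-1553 amounts to trying the next candidate datanode when a connect fails, instead of failing the whole read because the single "best" datanode is down. A hedged sketch, where the String node names and the Connector callback are illustrative stand-ins for the Hftp internals:

```java
import java.io.IOException;
import java.util.List;

// Hedged sketch of the retry behavior proposed in HDFS-1553: rather than
// always using the one deterministically chosen "best" datanode, move on to
// a different datanode when the connection fails.
class HftpReadRetry {
    interface Connector { void connect(String node) throws IOException; }

    // Try each candidate datanode in turn; on connect failure, fall through
    // to the next candidate instead of aborting the read.
    static String readFrom(List<String> candidates, Connector connector)
            throws IOException {
        for (String node : candidates) {
            try {
                connector.connect(node);
                return node;            // success: read from this datanode
            } catch (IOException e) {
                // this datanode failed; try a different one
            }
        }
        throw new IOException("all candidate datanodes failed");
    }
}
```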
[HDFS-1577] Fall back to a random datanode when bestNode fails - HDFS - [issue]
...When NameNode decides to redirect a read request to a datanode, if it cannot find a live node that contains a block of the file, NameNode should choose a random datanode instead of throwing ...
http://issues.apache.org/jira/browse/HDFS-1577    Author: Hairong Kuang, 2015-03-10, 01:47
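The fallback in HDFS-1577 can be summarized as: use the best live node when one exists, otherwise redirect to a random datanode rather than throwing. A minimal sketch under that reading; DatanodeChooser and the String node names are illustrative, not NameNode code:

```java
import java.util.List;
import java.util.Random;

// Hedged sketch of the fallback described in HDFS-1577: if no live datanode
// holding the block is found, pick a random datanode from the cluster
// instead of failing the redirect.
class DatanodeChooser {
    private final Random random = new Random();

    String chooseTarget(List<String> liveNodesWithBlock, List<String> allNodes) {
        if (!liveNodesWithBlock.isEmpty()) {
            return liveNodesWithBlock.get(0);   // the normal "best node" path
        }
        if (allNodes.isEmpty()) {
            throw new IllegalStateException("no datanodes available");
        }
        // Fallback: any random datanode, rather than throwing an exception.
        return allNodes.get(random.nextInt(allNodes.size()));
    }
}
```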
[HDFS-1537] Add a metric for tracking the number of reported corrupt replicas - HDFS - [issue]
...We have a cluster where some of the datanodes' disks are corrupt, but it took us a few days to become aware of the problem. Adding a metric that keeps track of the number of reported corrupt replic...
http://issues.apache.org/jira/browse/HDFS-1537    Author: Hairong Kuang, 2015-03-10, 01:46
[HDFS-2248] Port recoverLease API from append 0.20 to trunk - HDFS - [issue]
...HDFS-1520 and HDFS-1554 add a new DistributedFileSystem API, recoverLease, in append 0.20 that forces a file's lease to be recovered immediately. I'd like to port it to trunk to support the HB...
http://issues.apache.org/jira/browse/HDFS-2248    Author: Hairong Kuang, 2015-03-10, 01:44
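A client such as HBase typically polls a recoverLease-style call until it reports that the lease has been recovered and the file is closed. The sketch below assumes the call reports success as a boolean, as DistributedFileSystem#recoverLease does in trunk; the Supplier stand-in keeps the example self-contained rather than pulling in a Hadoop dependency:

```java
import java.util.function.Supplier;

// Hedged sketch of how a client might drive the recoverLease API ported in
// HDFS-2248. The Supplier<Boolean> stands in for a bound call like
// fs.recoverLease(path); the retry counts and sleep are illustrative.
class LeaseRecovery {
    // Poll recoverLease until it reports success or we run out of attempts.
    static boolean recoverWithRetries(Supplier<Boolean> recoverLease,
                                      int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if (recoverLease.get()) {
                return true;            // lease recovered, file closed
            }
            Thread.sleep(sleepMillis);  // give the NameNode time to recover
        }
        return false;
    }
}
```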
[HDFS-1348] Improve NameNode responsiveness while it is checking if datanode decommissions are complete - HDFS - [issue]
...NameNode normally is busy all the time. Its log is full of activities every second. But once in a while, NameNode seems to pause for more than 10 seconds without doing anything, leaving a b...
http://issues.apache.org/jira/browse/HDFS-1348    Author: Hairong Kuang, 2015-03-02, 22:39
[HDFS-583] HDFS should enforce a max block size - HDFS - [issue]
...When DataNode creates a replica, it should enforce a max block size, so clients can't go crazy. One way of enforcing this is to make BlockWritesStreams filter streams that check the blo...
http://issues.apache.org/jira/browse/HDFS-583    Author: Hairong Kuang, 2014-07-25, 19:02
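The filter-stream idea in HDFS-583 can be sketched as a FilterOutputStream that counts bytes written and rejects any write that would push the block past a configured maximum. This is a minimal sketch, not the actual HDFS BlockWritesStreams code; the class name and limit are illustrative:

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hedged sketch of HDFS-583: wrap the block write stream in a filter stream
// that tracks bytes written and fails writes past a maximum block size.
class BoundedBlockOutputStream extends FilterOutputStream {
    private final long maxBlockSize;
    private long written;

    BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    @Override
    public void write(int b) throws IOException {
        checkRoom(1);
        out.write(b);
        written += 1;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        checkRoom(len);
        out.write(b, off, len);
        written += len;
    }

    // Reject the write before any bytes go out, so the block never exceeds the cap.
    private void checkRoom(int len) throws IOException {
        if (written + len > maxBlockSize) {
            throw new IOException("block would exceed max size " + maxBlockSize);
        }
    }
}
```

A write that fits the remaining room succeeds; the first write that would overflow the limit throws before anything is written.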
[HDFS-275] FSNamesystem should have an InvalidateBlockMap class to manage blocks scheduled to remove - HDFS - [issue]
...This jira intends to move the code that handles recentInvalideSet to a separate class InvalidateBlockMap....
http://issues.apache.org/jira/browse/HDFS-275    Author: Hairong Kuang, 2014-07-21, 19:00
[HDFS-166] NameNode#invalidateBlock's requirement on more than 1 valid replica exists before scheduling a replica to delete is too strict - HDFS - [issue]
...Currently invalidateBlock allows a replica to be deleted only if at least two valid replicas exist before the deletion is scheduled. This is too restrictive if the replica to delete is a corru...
http://issues.apache.org/jira/browse/HDFS-166    Author: Hairong Kuang, 2014-07-21, 18:35
[HDFS-48] NN should check a block's length even if the block is not a new block when processing a blockreport - HDFS - [issue]
...If the block length does not match the one in the blockMap, we should mark the block as corrupted. This could help clearing the polluted replicas caused by HADOOP-4810 and also help detect t...
http://issues.apache.org/jira/browse/HDFS-48    Author: Hairong Kuang, 2014-07-21, 18:13
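The check in HDFS-48 amounts to comparing each reported block's length against the length already recorded for that block and marking mismatches as corrupt. A minimal sketch, with a plain Map standing in for the NameNode's blockMap:

```java
import java.util.Map;

// Hedged sketch of the check proposed in HDFS-48: when processing a block
// report, compare each reported block's length with the recorded length and
// flag mismatches as corrupt. The Map<Long, Long> (block id -> length) is an
// illustrative stand-in for the NameNode's blockMap.
class BlockReportChecker {
    static boolean isCorrupt(Map<Long, Long> blockMap, long blockId, long reportedLength) {
        Long expected = blockMap.get(blockId);
        // A known block whose reported length differs is marked corrupt;
        // unknown blocks are handled elsewhere, so they are not flagged here.
        return expected != null && expected != reportedLength;
    }
}
```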