search-hadoop.com — search results 1 to 10 of 17
Re: how to access workers from spark context - Spark - [mail # user]
...actually if you search the spark mail archives you will find many similar topics. At this time, I just want to manage it by myself.On Tuesday, August 12, 2014 8:46 PM, Stanley Shi  wrot...
   Author: S. Zhou, 2014-08-13, 03:55
Re: A wired producer connection timeout issue - Kafka - [mail # user]
...Thanks Guozhang. Any ideas on what could be wrong on that machine? We set up multiple producers in the same way but only one has this issue.On Friday, August 8, 2014 2:41 PM, Guozhang Wang &...
   Author: S. Zhou, 2014-08-08, 21:43
Hadoop exception: DFSInputStream.java: - Error making BlockReader. Closing stale NioInetPeer - Hadoop - [mail # user]
...The exception happens on hadoop 2.2 version. The whole error message is shown below. Notice that the level is DEBUG. Not sure if such exception is serious.=2014-06-05 14:39:31,135 DEBUG [poo...
   Author: S. Zhou, 2014-06-06, 17:08
HBase exception: Failed after retry of OutOfOrderScannerNextException - HBase - [mail # user]
...I saw the following exceptions happened frequently: any hints?Failed to scan rows on table XXXX: start=0000000004, end=0000000005, Failed after retry of OutOfOrderScannerNextException: was t...
   Author: S. Zhou, 2014-05-23, 18:21
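OutOfOrderScannerNextException is commonly triggered when a scanner lease expires between next() calls and the client retries out of order. A minimal sketch of one common mitigation, assuming the HBase 0.9x/1.x client API: fetch fewer rows per round trip so each next() returns well within the scanner timeout (alternatively, raise hbase.client.scanner.timeout.period on both client and server).

```java
import org.apache.hadoop.hbase.client.Scan;

public class ScanTuning {
    static Scan tunedScan(byte[] start, byte[] stop) {
        Scan scan = new Scan(start, stop);
        // A smaller caching value means each next() round trip returns
        // quickly, so the scanner lease is less likely to expire between
        // calls — a common trigger for OutOfOrderScannerNextException.
        scan.setCaching(100);
        return scan;
    }
}
```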
how to get the failed rows when executing a batch PUT request? - HBase - [mail # user]
...I checked the Java doc on "put(List puts)" of HTableInterface and it does not say how to get the failed rows in case exception happened (see below): can I assume the failed rows are containe...
   Author: S. Zhou, 2013-12-31, 19:17
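The failed rows from a batch put are recoverable from the exception itself: the HBase client throws RetriesExhaustedWithDetailsException, which carries one (row, cause, server) triple per failed mutation. A sketch against the HBase 1.x Table interface (the 0.9x-era HTableInterface behaves the same way):

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Table;

public class BatchPutExample {
    // Attempts a batch put and reports each row that could not be written.
    static void putWithFailureReport(Table table, List<Put> puts) throws IOException {
        try {
            table.put(puts);
        } catch (RetriesExhaustedWithDetailsException e) {
            for (int i = 0; i < e.getNumExceptions(); i++) {
                System.err.printf("failed row=%s cause=%s host=%s%n",
                        e.getRow(i), e.getCause(i), e.getHostnamePort(i));
            }
        }
    }
}
```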
Re: How to delete multiple columns in the same row? - HBase - [mail # user]
...Thanks Ted & Lars. For the clarification of Java doc for "Delete",  I would say: adding some statement like "call this method once for each column to be deleted"      On Monda...
   Author: S. Zhou, 2013-12-31, 02:30
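As the thread's clarification suggests, the Delete method is called once per column, with all calls accumulating into the same Delete before it is submitted. A sketch using the HBase 1.x+ naming (addColumns; in 0.94-era clients the equivalent method was deleteColumns); note that addColumns removes all versions of a column, whereas addColumn removes only the latest:

```java
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiColumnDelete {
    static Delete deleteTwoColumns(byte[] row) {
        Delete d = new Delete(row);
        // One addColumns call per column to be deleted, same row, same Delete.
        d.addColumns(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
        d.addColumns(Bytes.toBytes("cf"), Bytes.toBytes("col2"));
        return d;
    }
}
```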
Re: AsyncHBase 1.5.0-rc1 available for download and testing (HBase 0.96 compatibility inside) - HBase - [mail # user]
...I am trying the new version and run into some problem: details here:  https://groups.google.com/forum/#!topic/asynchbase/zsIsLOZgiVc Could u please help? We are trying to migrate to Had...
   Author: S. Zhou, 2013-12-24, 16:34
copy data inter-cluster with different version of Hadoop - HBase - [mail # user]
...I need to copy data from Hadoop cluster A to cluster B. I know I can use "distCp" tool to do that. Now the problem is: cluster A has version 1.2.1 and cluster B has version 0.20.x.  So "dist...
   Author: S. Zhou, 2013-10-28, 19:14
stop generating these "part-XXXX" empty files when using MultipleOutputs in mapreduce job - MapReduce - [mail # user]
...I use MultipleOutputs so the output data are no longer stored in files "part-XXX". But they are still generated (though empty). Is it possible to stop generating these files when running MR ...
   Author: S. Zhou, 2013-10-28, 19:11
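The empty part-XXXXX files come from the job's default OutputFormat, which creates a file per reducer even when all records go through MultipleOutputs. A sketch of the usual remedy, assuming the new (mapreduce) API: LazyOutputFormat defers creating the default output file until the first record is actually written to it.

```java
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class LazyOutputSetup {
    static void configure(Job job) {
        // Named outputs are written through MultipleOutputs as usual
        // ("data" is a hypothetical output name for illustration).
        MultipleOutputs.addNamedOutput(job, "data", TextOutputFormat.class,
                NullWritable.class, Text.class);
        // Wrap the default output format so the part-XXXXX file is only
        // created if something is actually written to it.
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
    }
}
```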
set file permission on mapreduce outputs - MapReduce - [mail # user]
...I have a MR job (which only has mapper) and the file permission of the output files is "rwx------". I want it to be "rwxr-xr-x". How can I set it  up in job config?  Thanks  Senqiang...
   Author: S. Zhou, 2013-10-28, 19:08
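Output file permissions are derived from the filesystem umask: files are created with mode 0777 & ~umask, so rwx------ suggests a cluster umask of 077. A configuration sketch, assuming Hadoop 2.x key names:

```java
import org.apache.hadoop.conf.Configuration;

public class OutputUmask {
    static Configuration withOpenUmask() {
        Configuration conf = new Configuration();
        // An umask of 022 yields rwxr-xr-x (0777 & ~022 = 0755) instead of
        // rwx------ (the result of a 077 umask).
        conf.set("fs.permissions.umask-mode", "022"); // Hadoop 2.x key
        // On Hadoop 1.x the equivalent key is "dfs.umaskmode".
        return conf;
    }
}
```

Alternatively, the permissions can be fixed up after the job with FileSystem.setPermission on the output directory.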