Search results 1 to 10 of 33 (0.085s).
Re: Basic Doubt in Hadoop - Hadoop - [mail # user]
...The data is in HDFS in the case of the WordCount MR sample.  In HDFS, you have the metadata in the NameNode and the actual data as blocks replicated across DataNodes.  In the case of the reducer, if a re...
   Author: bejoy.hadoop@..., 2013-04-17, 05:00
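As an illustration of the block layout described above, fsck can show how a file maps to blocks and which DataNodes hold the replicas (the path is a hypothetical example):

    hadoop fsck /user/hadoop/wordcount/input.txt -files -blocks -locations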
Re: adding space on existing datanode ? - Hadoop - [mail # user]
...Hi Brice  By adding a new storage location to dfs.data.dir you are not incrementing the replication factor.  You are giving one more location for the blocks to be copied for that d...
   Author: bejoy.hadoop@..., 2013-02-25, 08:56
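A minimal sketch of the hdfs-site.xml entry in question, with hypothetical mount points; the DataNode spreads blocks across the comma-separated directories, while the replication factor stays whatever dfs.replication says:

    <property>
      <name>dfs.data.dir</name>
      <value>/data/disk1/dfs/data,/data/disk2/dfs/data</value>
    </property>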
Re: ISSUE :Hadoop with HANA using sqoop - Hadoop - [mail # user]
...Hi Samir  Looks like there is some syntax issue with the SQL query generated internally.  Can you try doing a Sqoop import by specifying the query with the --query option?  Regard...
   Author: bejoy.hadoop@..., 2013-02-21, 11:16
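A sketch of such a free-form query import; the connection details, driver class, query, and paths are all placeholders. Sqoop's --query mode requires the $CONDITIONS token in the WHERE clause and a --split-by column:

    sqoop import \
      --connect jdbc:sap://hana-host:30015 \
      --driver com.sap.db.jdbc.Driver \
      --username user --password '***' \
      --query 'SELECT ID, NAME FROM SALES WHERE $CONDITIONS' \
      --split-by ID \
      --target-dir /user/hadoop/sales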
Re: probably very stupid question - Hadoop - [mail # user]
...Hi Jamal  I believe a reduce-side join is what you are looking for.  You can use MultipleInputs to implement a reduce-side join for this.  http://kickstarthadoop.blogsp...
   Author: bejoy.hadoop@..., 2013-01-15, 02:56
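A minimal sketch of the MultipleInputs wiring for a reduce-side join (new mapreduce API; the paths, mapper classes, and reducer class are hypothetical):

    // Each dataset gets its own mapper; both emit the join key as the map
    // output key so matching records meet in the same reducer.
    MultipleInputs.addInputPath(job, new Path("/data/orders"),
        TextInputFormat.class, OrdersMapper.class);
    MultipleInputs.addInputPath(job, new Path("/data/customers"),
        TextInputFormat.class, CustomersMapper.class);
    job.setReducerClass(JoinReducer.class);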
Re: hadoop namenode recovery - Hadoop - [mail # user]
...Hi Panshul,  Usually, for reliability, there will be multiple dfs.name.dir locations configured, of which one would be a remote location such as an NFS mount, so that even if the NN machine cr...
   Author: bejoy.hadoop@..., 2013-01-15, 02:50
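For illustration, a dfs.name.dir entry along those lines in hdfs-site.xml (paths hypothetical); the NameNode writes its image and edit log to every listed directory, so the copy on the NFS mount survives a loss of the local disk:

    <property>
      <name>dfs.name.dir</name>
      <value>/data/dfs/name,/mnt/nfs/namenode/dfs/name</value>
    </property>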
Re: Writing a sequence file - Hadoop - [mail # user]
...Hi Peter  Did you ensure that you are using SequenceFileOutputFormat from the right package?  Based on the API you are using, mapred or mapreduce, you need to use the OutputFormat from the ...
   Author: bejoy.hadoop@..., 2013-01-04, 16:04
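For reference, the two classes being distinguished; a job written against the new mapreduce API must use the second one, for example in its driver:

    // Old (mapred) API:
    //   org.apache.hadoop.mapred.SequenceFileOutputFormat
    // New (mapreduce) API:
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    job.setOutputFormatClass(SequenceFileOutputFormat.class);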
Re: more reduce tasks - Hadoop - [mail # user]
...Hi Chen,  You do have an option in Hadoop to achieve this if you want the merged file in the LFS.  1) Run your job with n reducers, and you'll have n files in the output dir....
   Author: bejoy.hadoop@..., 2013-01-04, 05:24
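One way to do the merge into the local filesystem described above is getmerge (paths are placeholders), which concatenates all the part files in the HDFS output directory into a single local file:

    hadoop fs -getmerge /user/hadoop/job-output /tmp/merged-output.txt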
Re: Increasing number of Reducers - Hadoop - [mail # user]
...Hi Masoud  One reducer would definitely emit one output file. If you are looking at just one file as your final result in the LFS, then once you have the MR job done us...
   Author: bejoy.hadoop@..., 2012-03-20, 11:05
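For context, the reducer count, and hence the number of output part files, is set per job; a minimal sketch with the new mapreduce API:

    job.setNumReduceTasks(1);   // one reducer -> exactly one part-r-00000 file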
Re: mapred.tasktracker.map.tasks.maximum not working - Hadoop - [mail # user]
...Adding on to Chen's response.  This is a setting meant at the TaskTracker level (an environment setting based on parameters like your CPU cores, memory, etc.) and you need to override the same a...
   Author: bejoy.hadoop@..., 2012-03-10, 06:39
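For illustration, the property goes in mapred-site.xml on each TaskTracker node (the value is hypothetical) and takes effect only after that TaskTracker is restarted:

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>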
Re: mapred.map.tasks vs mapred.tasktracker.map.tasks.maximum - Hadoop - [mail # user]
...Mohit  It is a job-level config parameter. For plain MapReduce jobs you can set the same through the CLI as hadoop jar ... -D mapred.map.tasks=n. You should be able to do it p...
   Author: bejoy.hadoop@..., 2012-03-10, 06:35
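A concrete form of that command line (jar, class, value, and paths are placeholders, and the driver is assumed to use ToolRunner so that the -D option is picked up):

    hadoop jar myjob.jar MyDriver -D mapred.map.tasks=10 /user/hadoop/input /user/hadoop/output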