Re: Why the official Hadoop Documents are so messy?
Glen Mazza 2013-01-08, 13:20
quote: "Obviously in the second there is a vested interest by such an
individual or company to promote the product, therefore things like
documentation tend to be much crisper than their ASF counterparts." --
I'm not so sure about that; in cases where companies provide commercial
wraps of products but pool their resources with other companies in
maintaining the open-source product they're wrapping, their financial
incentive would be to keep their commercial wrap documentation
top-notch to lure people to their wraps, but less so the Apache website.
I think the original poster just needs to help out with the
documentation: check it out from SVN and submit patches to improve it
(or at least file a JIRA, as Mohammad mentioned). I cleaned up much of
the Hadoop Wiki as I was learning from it.
On 01/08/2013 07:13 AM, Oleg Zhurakousky wrote:
> Just a little clarification:
> This is NOT "how open source works" by any means, as there are many
> Open Source projects with well-written and maintained documentation.
> It all comes down to the two Open Source models:
> 1. ASF Open Source - a pure democracy, or maybe even anarchy,
> without any governing body (individual or corporate) other than the
> ASF procedures/guidelines themselves.
> 2. Stewardship-based Open Source - controlled and managed by an
> individual or company.
> Obviously in the second there is a vested interest by such an
> individual or company to promote the product, therefore things like
> documentation tend to be much crisper than their ASF counterparts.
> However, the Stewardship-based Open Source model is much tighter with
> regard to control of what goes in, quality of code, etc., than its ASF
> counterpart, which allows a greater flow of free ideas from the
> community. So both are valid, both are open source, and both need to
> exist, and we developers just need to deal with it. After all, it's
> Open Source, and the code is always a good source of documentation.
> On Jan 8, 2013, at 6:59 AM, Mohammad Tariq <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>> Hello there,
>> Thank you for the comments. But, just to let you know,
>> it's community work, and no one in particular can be held
>> responsible for these kinds of small things. This is how open
>> source works. The folks working on Hadoop have a lot
>> of things to do. In spite of that, they are giving their best, and
>> in the process these kinds of things sometimes happen.
>> I really appreciate your effort, but rather than this you could
>> raise a JIRA if you find something wrong somewhere, and
>> either fix it yourself or let somebody else fix it.
>> Many thanks.
>> P.S.: Don't take it the wrong way.
>> Best Regards,
>> Best Regards,
>> On Tue, Jan 8, 2013 at 5:05 PM, javaLee <[EMAIL PROTECTED]
>> <mailto:[EMAIL PROTECTED]>> wrote:
>> For example, look at the documents about the HDFS shell guide:
>> In 0.17, the prefix of the HDFS shell is "hadoop dfs".
>> In 0.19, the prefix of the HDFS shell is "hadoop fs".
>> In 1.0.4, the prefix of the HDFS shell is "hdfs dfs".
>> Reading the official Hadoop documents is such a suffering.
>> As an end user, I am confused...
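[For readers hitting the same confusion: the three prefixes listed above are, in practice, largely interchangeable ways of invoking the same file-system shell. A sketch of the equivalence, assuming a configured Hadoop installation on the PATH (the `/user` path is just a placeholder for illustration):

```shell
# Three ways to invoke the Hadoop file-system shell. "hadoop dfs" is
# the oldest form and was deprecated; "hadoop fs" is the generic
# FileSystem shell (works on HDFS, local FS, etc.); "hdfs dfs" is the
# HDFS-specific form used in later documentation.
hadoop dfs -ls /user   # 0.17-era form, later deprecated
hadoop fs -ls /user    # generic FileSystem shell
hdfs dfs -ls /user     # HDFS-specific form from later releases
```

So the documentation changed the recommended spelling across releases, but the underlying shell commands (`-ls`, `-put`, `-get`, ...) behave the same under all three prefixes.]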
Talend Community Coders - coders.talend.com