This was because, when HBASE-2312 was integrated, many flavors of Hadoop were
running in production, so the code had to support all of them.
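For context, the reflection trick amounts to probing the class at runtime for a method signature that may or may not exist in the Hadoop version on the classpath, and only calling it if it is there. A minimal sketch of that pattern (using String.trim as a stand-in target, not the actual Hadoop API):

```java
import java.lang.reflect.Method;

public class ReflectiveLookup {

    // Look up a public method with the given name and parameter types;
    // return null if this class/version does not provide it.
    static Method findMethod(Class<?> cls, String name, Class<?>... params) {
        try {
            return cls.getMethod(name, params);
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Probe for a method that exists...
        Method trim = findMethod(String.class, "trim");
        if (trim != null) {
            // ...and invoke it reflectively, the way SequenceFileLogWriter
            // invokes the createWriter variant it found.
            System.out.println(trim.invoke("  hi  "));
        }
        // Probing for a method that does not exist simply yields null,
        // letting the caller fall back to an older signature.
        System.out.println(findMethod(String.class, "noSuchMethod"));
    }
}
```

This is exactly why reflection beats a direct call here: a direct call to a signature that only some Hadoop releases provide would fail at compile or link time, while the probe degrades gracefully.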
On Thu, Oct 24, 2013 at 9:27 AM, Wukang Lin <[EMAIL PROTECTED]> wrote:
> Hi Ted,
> Thank you for your response. For #1, I have tried to understand that
> comment in SequenceFileLogWriter, but I can't figure out why, instead of
> reflection, it doesn't use the version of SF.createWriter below directly:
> SequenceFile.Writer createWriter(FileSystem fs,
>                                  Configuration conf,
>                                  Path name,
>                                  Class keyClass,
>                                  Class valClass,
>                                  int bufferSize,
>                                  short replication,
>                                  long blockSize,
>                                  boolean createParent,
>                                  Metadata metadata)
>     throws java.io.IOException;
> Thank you again.
> 2013/10/24 Ted Yu <[EMAIL PROTECTED]>
> > For #2, see HBASE-5954
> > For #1, see the following comment in SequenceFileLogWriter:
> > + // reflection for a version of SequenceFile.createWriter that doesn't
> > + // automatically create the parent directory (see HBASE-2312)
> > + this.writer = (SequenceFile.Writer) SequenceFile.class
On Thu, Oct 24, 2013 at 8:49 AM, Wukang Lin <[EMAIL PROTECTED]> wrote:
> > > Hi all,
> > > Recently, I read the source of HBase's HLog, and I came across some
> > > things that puzzled me a lot. Here they are:
> > > 1. Why use reflection to init a SequenceFile.Writer
> > > in SequenceFileLogWriter? I read HBASE-2312 but still can't catch the
> > > point.
> > > 2. It seems that HLog uses SequenceFile.Writer's append method to write
> > > the WAL logs to the DataNode, not FSDataOutputStream.hflush(), for each
> > > mutation (or batch of mutations), so may it lose data if HDFS crashes
> > > while the WAL logs were 'sync'ed to the DataNode but not flushed to
> > > disk? Or is there something I misunderstood?
> > >
> > > Thank you.
> > >
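To make the #2 question concrete: the distinction being asked about mirrors flush-versus-force in plain Java I/O. A minimal local-file sketch with java.nio (not HDFS; HDFS's hflush()/hsync() on FSDataOutputStream are the distributed analogues of these two steps):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DurabilityDemo {
    public static void main(String[] args) throws Exception {
        Path log = Files.createTempFile("wal", ".log");
        try (FileChannel ch = FileChannel.open(log, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("edit-1\n".getBytes(StandardCharsets.UTF_8)));
            // At this point the bytes may still sit in OS buffers --
            // comparable to hflush(): replicas hold the data in memory,
            // new readers can see it, but a power loss could drop it.
            ch.force(false);
            // force(false) is the fsync-like step -- comparable to hsync():
            // the data is pushed to the storage device before returning.
        }
        System.out.println(Files.size(log));
    }
}
```

So the question is well founded as far as it goes: a sync that only reaches DataNode memory does not survive a simultaneous crash of all replicas; only the fsync-style step gives on-disk durability, at a real latency cost per call.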