Thread on the Chukwa dev mailing list: "Creating a new adaptor: FileTailingAdaptor that would not cut lines"


Thread:
  Luangsay Sourygna   2013-04-18, 18:33
  Eric Yang           2013-04-19, 06:17
  Luangsay Sourygna   2013-04-19, 19:01
  Eric Yang           2013-04-21, 17:05
  Luangsay Sourygna   2013-04-21, 22:07
  Eric Yang           2013-04-22, 04:25
  Luangsay Sourygna   2013-04-24, 04:49
  Eric Yang           2013-04-25, 04:33
Re: Creating a new adaptor: FileTailingAdaptor that would not cut lines
Here is the Jira I opened:
https://issues.apache.org/jira/browse/CHUKWA-686

Writing the JUnit tests, I discovered a small "error" in the classes
CharFileTailingAdaptorUTF8 and CharFileTailingAdaptorUTF8NewLineEscaped.
When we create the chunk, the whole buffer is passed to the constructor,
meaning that the chunk gets both the useful data and the useless data:
      ChunkImpl event = new ChunkImpl(type, toWatch.getAbsolutePath(),
                buffOffsetInFile + bytesUsed, buf, this);

I think we should pass only the useful part of the data, like this:
      ChunkImpl event = new ChunkImpl(type,
              toWatch.getAbsolutePath(), buffOffsetInFile
              + bytesUsed, Arrays.copyOf(buf, bytesUsed), this);

Although it does not seem to be a real issue, because the hasNext() method of
AbstractProcessor ensures we only process the useful part, I see two
reasons to fix it:
- it makes CharFileTailingAdaptorUTF8 fail some of my unit tests
(TestFileTailingAdaptorPreserveLines.testDontBreakLines() for instance)
that should not fail for this adaptor;
- we send data over the network for nothing. Since the useless part
represents less than a line, it is usually not a big deal: we only transfer
a few bytes for nothing. However, a customer of mine has a log file with
lines as long as 300 kB (I know, quite strange for a "log file"...), so in
that case I think the fix is worth it.
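
For illustration, here is a minimal sketch of the kind of check a test like the
one above can make (it is not the actual test; the class name and literal values
are hypothetical, it assumes ChunkImpl.getData() returns the bytes passed to the
constructor, and the null adaptor reference is only for the sketch):

      import java.util.Arrays;
      import org.apache.hadoop.chukwa.ChunkImpl;

      public class ChunkSizeCheck {
        // Illustrative only: build a chunk the way the fixed adaptor would and
        // verify it carries exactly the useful bytes, not the whole read buffer.
        public static void main(String[] args) {
          byte[] buf = new byte[1024];                     // the adaptor's read buffer
          byte[] line = "one complete line\n".getBytes();  // the useful part
          System.arraycopy(line, 0, buf, 0, line.length);
          int bytesUsed = line.length;                     // offset just past the last separator

          ChunkImpl event = new ChunkImpl("raw", "/var/log/app.log",
              bytesUsed, Arrays.copyOf(buf, bytesUsed), null);

          if (event.getData().length != bytesUsed) {
            throw new AssertionError("chunk should hold only the useful bytes");
          }
        }
      }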

Regards,

Sourygna
On Fri, Apr 19, 2013 at 9:01 PM, Luangsay Sourygna <[EMAIL PROTECTED]> wrote:

> Well, the log4j socket adaptor may be great if you control the software that
> generates the logs.
> That is not usually my case: customers don't really like having to install
> Chukwa agents
> on their production servers, so I don't want to think about telling them to
> change the logging system
> of their software.
>
> As for partial lines when log files rotate, I don't think this is something
> Chukwa should manage (what
> is more: how could Chukwa even be aware there is a problem?).
> In my view, this would be an error of the "logrotate" system. As far as I
> know, the RFA and DRFA log4j
> appenders handle rotation quite well.
>
> Regards,
>
> Sourygna
>
>
> On Fri, Apr 19, 2013 at 8:17 AM, Eric Yang <[EMAIL PROTECTED]> wrote:
>
>> I think the best solution is to use the Log4j socket appender and the Chukwa
>> log4j socket adaptor to get the full log entry without worrying about line
>> feeds.  However, this solution only works with programs written in
>> Java, and it does not keep a copy of the existing log file on disk.
>>
>> I think your proposal is a good idea for tailing a text file so that only
>> line-delimited entries are sent.  How do we handle a partial line when the
>> log file has rotated?
>>
>> regards,
>> Eric
>>
>> On Thu, Apr 18, 2013 at 11:33 AM, Luangsay Sourygna <[EMAIL PROTECTED]>
>> wrote:
>>
>> > Hi all,
>> >
>> > FileTailingAdaptor is great for tailing log files and sending them to Hadoop.
>> > However, the last line of the chunk is usually cut, which leads to some
>> > errors.
>> >
>> > I know that we can use CharFileTailingAdaptorUTF8 to solve this problem.
>> > Nonetheless, this adaptor calls the MapProcessor.process() method for every
>> > line in each chunk, thus slowing down the Demux phase a lot.
>> >
>> > I suggest creating a new adaptor that would combine the benefits of the two
>> > adaptors: the (Demux) speed of FileTailingAdaptor and
>> > the preservation of lines from CharFileTailingAdaptorUTF8.
>> >
>> > The implementation of extractRecords() would be:
>> > - "for loop" over the buffer, starting from the end of the buffer and going
>> > backward;
>> > - if we find a separator, save the offset and exit the loop;
>> > - the rest of the method would be similar to CharFileTailingAdaptorUTF8
>> > (a sketch of this loop follows after the quoted thread).
>> >
>> > Could you guys please tell me what you think about it?
>> > How do you currently manage cut lines with Chukwa?
>> >
>> > Regards,
>> >
>> > Sourygna
>> >
>>
>
>
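
For reference, a minimal standalone sketch of the backward scan described in the
proposal quoted above (the helper name is made up; this is not the actual Chukwa
patch). The adaptor would then pass only that many bytes to the chunk, e.g. via
Arrays.copyOf(buf, bytesUsed) as in the fix above:

      // Walk backward from the end of the filled part of the buffer and return
      // the number of bytes up to and including the last '\n'.  Only that prefix
      // is turned into a chunk; the trailing partial line stays in the file and
      // is picked up again on the next read.
      static int bytesUpToLastSeparator(byte[] buf, int length) {
        for (int i = length - 1; i >= 0; i--) {
          if (buf[i] == '\n') {
            return i + 1;          // include the separator itself
          }
        }
        return 0;                  // no complete line in this buffer yet
      }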