The file tailing adaptors use two file pointers to track the current
offset into the file. The first file pointer is the persistent tracking
offset, and the second file pointer periodically checks the end-of-file
offset. When the second file pointer's offset is smaller than the first's,
a file rotation has been detected. LastModifiedTime should also be compared
between the two file pointers to cover the case where the previous day's
log file is 0 bytes and the next day's log file suddenly grows.
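For illustration, a rough sketch of that two-pointer check (class and
field names here are hypothetical, not the actual Chukwa adaptor code):

import java.io.File;

// Hypothetical sketch of the rotation check described above; names are
// illustrative and not taken from the Chukwa file tailing adaptor.
public class RotationCheck {
    private long trackedOffset;  // first pointer: persisted read offset
    private long trackedMTime;   // last-modified time recorded with that offset

    // Returns true if the file appears to have been rotated.
    public boolean isRotated(File logFile) {
        long eofOffset = logFile.length();   // second pointer: current EOF
        long mtime = logFile.lastModified();
        // An EOF smaller than the tracked offset means the file was truncated
        // or replaced. The LastModifiedTime comparison covers the case where
        // the previous day's file was 0 bytes, so the offset check alone
        // cannot detect the switch to the suddenly growing next-day file.
        return eofOffset < trackedOffset
                || (trackedOffset == 0 && eofOffset > 0 && mtime > trackedMTime);
    }
}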
You should probably do a rename or hard link of file A to **bak00, then
remove file A and let it be recreated from scratch. This will save a lot
of time in log file rotation.
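A rough example of that rename-then-recreate approach (the paths here are
hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: rotate by renaming instead of copying the contents.
public class RenameRotation {
    public static void rotate() throws IOException {
        Path fileA  = Paths.get("/var/log/app/fileA.log");    // hypothetical path
        Path backup = Paths.get("/var/log/app/fileA.bak00");  // hypothetical path
        // A rename on the same filesystem is a metadata-only operation, far
        // cheaper than copying the whole file as in the current scheme.
        Files.move(fileA, backup, StandardCopyOption.REPLACE_EXISTING);
        Files.createFile(fileA);  // let file A start again from scratch
    }
}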
Hope this helps.
On Thu, Jun 28, 2012 at 3:19 AM, scott <[EMAIL PROTECTED]> wrote:
> Hi Eric,
> Could you please give me some suggestions on continuously tailing a file
> when rotation happens?
> In our project, the log mechanism is as follows:
> when a file (e.g. file A) grows over the size limit, all of its data is
> copied to a backup file (whose name always ends with ***bak00), and then
> file A is truncated and reset.
> It seems there is some code in CHUKWA to detect the rotation, and some
> solution for it. Could you please give me some details on that, and any
> advice, given our log mechanism, on making small revisions so that we can
> fully collect the log data?
> Scott Huan
> 2012/6/19 Eric Yang-3 [via Apache Chukwa] <[hidden email]>
>> On Mon, Jun 18, 2012 at 7:37 PM, scott <[hidden email]> wrote:
>> > Thanks, Eric.
>> > I still have some doubts about the process of writing logs to HBase and
>> > HDFS.
>> > 1. In my project, I need to collect and record some metrics for near
>> > real-time monitoring, and also store some logs for later analysis. For
>> > monitoring, HBase can be used, and for later log analysis, logs should be
>> > stored in HDFS. Chukwa provides writing to both HBase and HDFS, which can
>> > be set in chukwa-collector-conf.xml using chukwaCollector.pipeline:
>> > <name>chukwaCollector.pipeline</name>
>> > <value>org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
>> > ....
>> > However, in chukwaCollector.pipeline, we must put hbase.HBaseWriter ahead
>> > of writer.SeqFileWriter, because in the SeqFileWriter source code I found
>> > that chunks will not be passed to the next writer. Please verify that.
>> SeqFileWriter does not pass chunks to the next writer; that is why it has
>> to be the last writer in the pipeline, as a workaround.
>> > 2. For HDFS, I want to store the data categorized by
>> > [dataType]/[yyyyMMdd]/[HH]/[mm]/. In your last letter, you said that it's
>> > an old design to have PostProcessor extract metrics to store in the DB
>> > and that it should not be used now. Then what should we do to aggregate
>> > the data into such a category? Is there any new code to check in to
>> > solve it?
>> There is no new code written for injecting MR data into the DB or HBase;
>> patches are welcome.
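Purely as an illustration of the [dataType]/[yyyyMMdd]/[HH]/[mm]/ layout
mentioned above (this is not existing Chukwa code, just a sketch of how such
a path could be built):

import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical helper that builds a [dataType]/[yyyyMMdd]/[HH]/[mm]/ path.
public class OutputPath {
    public static String forChunk(String dataType, Date timestamp) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd/HH/mm");
        return dataType + "/" + fmt.format(timestamp) + "/";
    }
}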