Typically, Flume components avoid modifying the event data. In this case you
could write a custom serializer for the HDFS sink that reads the timestamp
from the event header and writes it out along with the event body. Do
consider the following two alternatives first, though:
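If you do go the custom-serializer route, the core of it is just: pull the "timestamp" header (epoch millis, as set by Flume's TimestampInterceptor) and prepend it to the body. A minimal sketch of that formatting logic is below; the class and method names are hypothetical, and in a real deployment this would live inside an implementation of Flume's EventSerializer interface rather than a standalone class:

```java
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Map;

// Hypothetical helper showing the core of a custom HDFS-sink serializer:
// read the "timestamp" header and prepend it to the event body. In a real
// serializer this logic would sit inside an implementation of
// org.apache.flume.serialization.EventSerializer.
public class TimestampLineFormatter {

    /** Formats one event as "&lt;ISO-8601 timestamp&gt; &lt;body&gt;\n". */
    public static String formatLine(Map<String, String> headers, byte[] body) {
        // Fall back to the current time if the header is absent,
        // e.g. when no TimestampInterceptor is configured.
        String millis = headers.getOrDefault("timestamp",
                Long.toString(System.currentTimeMillis()));
        Instant ts = Instant.ofEpochMilli(Long.parseLong(millis));
        return ts + " " + new String(body, StandardCharsets.UTF_8) + "\n";
    }

    public static void main(String[] args) {
        String line = formatLine(
                Map.of("timestamp", "1404168060000"),
                "login failed for user bob".getBytes(StandardCharsets.UTF_8));
        // Prints: 2014-06-30T22:41:00Z login failed for user bob
        System.out.print(line);
    }
}
```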

1) Often the time at which the event occurred is more interesting than the
time at which Flume processed it, so it is better to have the application
put the timestamp into the log event when it is generated.
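For example, if the application logs through log4j, a PatternLayout with %d stamps each record with its generation time, so the timestamp travels inside the event body all the way through Flume (appender name and file path below are hypothetical):

```
# Hypothetical log4j 1.x configuration; adjust appender name and path.
# %d{ISO8601} writes the time the record was generated into the message.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/var/log/myapp/app.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```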

2) If you are interested in the time at which Flume processed the event,
then you may not care about minute- or second-level granularity. In that
case you can simply configure the HDFS sink to roll the file on HDFS every
few minutes, and the file's path or rotation time carries the timestamp.
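A sketch of that configuration, using the HDFS sink's rollInterval and path-rounding properties (agent and sink names are hypothetical; the %Y/%m/%d escapes in hdfs.path require a timestamp header, e.g. from a TimestampInterceptor or hdfs.useLocalTimeStamp = true):

```
# Hypothetical agent/sink names; adjust to your topology.
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
# Bucket directories into 5-minute buckets.
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 5
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
# Roll a new file every 300 seconds; disable size/count-based rolling.
agent1.sinks.hdfsSink.hdfs.rollInterval = 300
agent1.sinks.hdfsSink.hdfs.rollSize = 0
agent1.sinks.hdfsSink.hdfs.rollCount = 0
```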

On Mon, Jun 30, 2014 at 11:41 PM, Guillermo Ortiz <[EMAIL PROTECTED]> wrote: