Flume >> mail # user >> Dynamic Key=Value Parsing with an Interceptor?


Re: Dynamic Key=Value Parsing with an Interceptor?
Anyone have any ideas on the best way to do this?

Matt Wise
Sr. Systems Architect
Nextdoor.com
On Sat, Nov 9, 2013 at 5:28 PM, Matt Wise <[EMAIL PROTECTED]> wrote:

> Hey, we'd like to set up a default format for all of our logging systems...
> perhaps looking like this:
>
>   "key1=value1;key2=value2;key3=value3...."
>
> With this pattern, we'd allow developers to define any key/value pairs
> they want to log, and separate them with a common separator.
>
> If we did this, what would we need to do in Flume to parse the key=value
> pairs out into dynamic headers? We pass our data from Flume into both HDFS
> and ElasticSearch sinks, and we'd really like these fields sent to the
> sinks dynamically for much easier parsing and analysis later.
>
> Any thoughts on this? I know that we can define a unique interceptor for
> each service that looks for explicit field names ... but that's a nightmare
> to manage. I really want something truly dynamic.
>
> Matt Wise
> Sr. Systems Architect
> Nextdoor.com
>
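For the archives: Flume's built-in regex_extractor interceptor needs the field names configured up front, which is exactly the per-service setup the question wants to avoid, so a truly dynamic approach generally means a custom interceptor. The parsing core is just a split on the two separators. Below is a minimal sketch of that step; the class and method names are hypothetical, and a real version would implement org.apache.flume.interceptor.Interceptor and copy these pairs into event.getHeaders() inside intercept(Event):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of the parsing step a custom Flume interceptor could perform.
 * A real implementation would implement org.apache.flume.interceptor.Interceptor
 * (plus its Builder) and, in intercept(Event), merge these pairs into the
 * event's header map.
 */
public class KvParser {

    /** Splits "key1=value1;key2=value2;..." into an ordered header map. */
    public static Map<String, String> parse(String body) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String pair : body.split(";")) {
            // Split on the first '=' only, so values may themselves contain '='.
            int eq = pair.indexOf('=');
            if (eq > 0) {
                headers.put(pair.substring(0, eq).trim(),
                            pair.substring(eq + 1).trim());
            }
            // Pairs with no '=' or an empty key are skipped here;
            // a production interceptor would log or count them.
        }
        return headers;
    }

    public static void main(String[] args) {
        System.out.println(parse("key1=value1;key2=value2;note=a=b"));
    }
}
```

Once packaged on the agent's classpath, a custom interceptor of this kind is attached in the agent config via its Builder class in the usual way, e.g. `a1.sources.r1.interceptors.kv.type = com.example.KvInterceptor$Builder` (hypothetical names). The dynamic headers then flow to the HDFS and ElasticSearch sinks without per-service configuration.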