Can I create an interceptor with member attributes?
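For the interceptor question above: yes. A Flume interceptor is instantiated through its nested `Builder` (which implements `Configurable`), so member attributes can be read from the agent configuration in `configure()` and handed to the interceptor's constructor. A minimal sketch — the class name and the `headerName`/`headerValue` properties are invented for illustration:

```java
import java.util.List;
import java.util.Map;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

/**
 * Interceptor whose member attributes come from the agent config, e.g.:
 *   a1.sources.r1.interceptors = i1
 *   a1.sources.r1.interceptors.i1.type = TaggingInterceptor$Builder
 *   a1.sources.r1.interceptors.i1.headerName = datasource
 *   a1.sources.r1.interceptors.i1.headerValue = crawler-42
 * (class and property names are hypothetical)
 */
public class TaggingInterceptor implements Interceptor {

  private final String headerName;   // member attribute set from config
  private final String headerValue;  // member attribute set from config

  private TaggingInterceptor(String headerName, String headerValue) {
    this.headerName = headerName;
    this.headerValue = headerValue;
  }

  @Override public void initialize() { /* no-op */ }

  @Override
  public Event intercept(Event event) {
    // Annotate every event with the configured header.
    Map<String, String> headers = event.getHeaders();
    headers.put(headerName, headerValue);
    return event;
  }

  @Override
  public List<Event> intercept(List<Event> events) {
    for (Event e : events) {
      intercept(e);
    }
    return events;
  }

  @Override public void close() { /* no-op */ }

  /** Flume instantiates the interceptor through this Builder. */
  public static class Builder implements Interceptor.Builder {
    private String headerName;
    private String headerValue;

    @Override
    public void configure(Context context) {
      // Read member attributes from the agent's properties file.
      headerName = context.getString("headerName", "datasource");
      headerValue = context.getString("headerValue", "unknown");
    }

    @Override
    public Interceptor build() {
      return new TaggingInterceptor(headerName, headerValue);
    }
  }
}
```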
On 19/07/13 19:22, "Wolfgang Hoschek" <[EMAIL PROTECTED]> wrote:
>Perhaps an MR job that writes directly into HBase (without going through
>Flume) might be more efficient. For examples see
>On Jul 19, 2013, at 1:13 AM, Flavio Pompermaier wrote:
>> Thank you for the reply Wolfgang, I was just looking at the great use
>>case presented by Ari Flink of Cisco, and in fact those technologies sound
>> The problem is that in my use case there will be an initial MapReduce
>>job that parses some text, performs some analysis, and sends the
>>results of those analyses to my HBaseSink.
>> Only once that job has finished (not streaming!), I have to start
>>processing the data stored in that HBase table that is "newer than some
>>date contained in this end-message" (so I need a way to trigger the start
>>of such processing), which requires invoking an external REST service and
>>storing data in another output table. Here too, only once that step has
>>finished, I have to reduce all that information and put it into Solr.
>> So I think the main problem is to avoid streaming and to trigger
>>MapReduce jobs instead. Is there a way to do that with Flume?
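Flume itself has no built-in mechanism to trigger a downstream MapReduce job, so a common pattern is a small driver that is kicked off when the end-message arrives and then scans only the rows written after the cutoff. A sketch using HBase's `Scan.setTimeRange` and `TableMapReduceUtil` — the table name and the way the cutoff timestamp is obtained are assumptions, and the mapper body is omitted:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ProcessNewRows {

  /** Mapper that would call the external REST service per row (body omitted). */
  static class RestCallMapper extends TableMapper<Text, Text> {
    // map(ImmutableBytesWritable row, Result value, Context context) { ... }
  }

  public static void main(String[] args) throws Exception {
    // cutoffMillis would come from the "end-message" that marks the
    // ingest as finished (hypothetical; passed as a CLI argument here).
    long cutoffMillis = Long.parseLong(args[0]);

    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "process-rows-newer-than-cutoff");
    job.setJarByClass(ProcessNewRows.class);

    // Restrict the scan to cells written after the cutoff timestamp.
    Scan scan = new Scan();
    scan.setTimeRange(cutoffMillis, Long.MAX_VALUE);
    scan.setCaching(500);        // larger scan batches for MR
    scan.setCacheBlocks(false);  // don't pollute the region server block cache

    TableMapReduceUtil.initTableMapperJob(
        "my_ingest_table",       // hypothetical source table
        scan, RestCallMapper.class,
        Text.class, Text.class, job);
    // ... configure the reducer / output table here, then:
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```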
>> On Fri, Jul 19, 2013 at 12:51 AM, Wolfgang Hoschek
>><[EMAIL PROTECTED]> wrote:
>> Take a look at these options:
>> - HBase Sinks (send data into HBase):
>> - Apache Flume Morphline Solr Sink (for heavy duty ETL processing and
>>ingestion into Solr):
>> - Apache Flume MorphlineInterceptor (for light-weight event annotations):
>> - For MapReduce jobs it is typically more straightforward and efficient
>>to send data directly to destinations, i.e. without going through Flume.
>>For example, use the MapReduceIndexerTool when going from HDFS into Solr.
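The fan-out implied by the options above (one copy of each event to HBase, one to Solr) can be sketched as an agent configuration; all names, ports, and paths below are placeholders:

```properties
# Hypothetical agent that fans out each event to HBase and to Solr.
a1.sources  = avroSrc
a1.channels = hbaseCh solrCh
a1.sinks    = hbaseSink solrSink

a1.sources.avroSrc.type = avro
a1.sources.avroSrc.bind = 0.0.0.0
a1.sources.avroSrc.port = 41414
# A replicating selector copies every event into both channels.
a1.sources.avroSrc.selector.type = replicating
a1.sources.avroSrc.channels = hbaseCh solrCh

a1.channels.hbaseCh.type = memory
a1.channels.solrCh.type  = memory

a1.sinks.hbaseSink.type = hbase
a1.sinks.hbaseSink.table = my_ingest_table
a1.sinks.hbaseSink.columnFamily = cf
a1.sinks.hbaseSink.channel = hbaseCh

a1.sinks.solrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
a1.sinks.solrSink.morphlineFile = /etc/flume-ng/conf/morphline.conf
a1.sinks.solrSink.channel = solrCh
```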
>> On Jul 18, 2013, at 3:37 PM, Flavio Pompermaier wrote:
>> > Hi to all,
>> > I'm new to Flume but I'm very excited about it!
>> > I'd like to use it to gather some data, process the received messages,
>>and then index them into Solr.
>> > Any suggestion about how to do that with Flume?
>> > I've already tested an Avro source that sends data to HBase,
>> > but my use case requires those messages to be saved in HBase but also
>>processed and then indexed in Solr (obviously I also need to transform the
>>object structure along the way).
>> > I think the first part is quite simple (I just use 2 sinks: one that
>>stores in HBase and another that forwards to another Avro instance),
>> > If messages are sent during a map/reduce job, is the Avro source the
>>best option for sending the documents to be indexed to my sink (i.e. the
>>first part of the flow, which up to now I have simulated with an Avro source)?
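If the map/reduce tasks are the producers, nothing needs to be simulated: the Flume client SDK's `RpcClient` can send events straight to the agent's Avro source. A sketch, with hypothetical host and port:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

/** Minimal client a map/reduce task could use to push events to an Avro source. */
public class FlumeEmitter implements AutoCloseable {

  private final RpcClient client;

  public FlumeEmitter(String host, int port) {
    // Connects to the agent's Avro source (host/port are illustrative,
    // e.g. "flume-host" and 41414).
    this.client = RpcClientFactory.getDefaultInstance(host, port);
  }

  public void send(String body) throws EventDeliveryException {
    Event event = EventBuilder.withBody(body, StandardCharsets.UTF_8);
    client.append(event);   // blocks until the source acks the event
  }

  @Override
  public void close() {
    client.close();
  }
}
```

Each mapper would typically open one `FlumeEmitter` in `setup()`, call `send()` per record, and `close()` it in `cleanup()`, rather than reconnecting per event.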
>> > Best,
>> > Flavio