Hi, Paul/Hari:

Yes, that was it. A fresh pair of eyes does help. Thanks.

BTW, the following is my current topology. I am extending this to 2 more channels and sinks, but all using the same serializer.

                            -> channel 1 -> sink 1 -> mySerializer -> HbaseTable1
src -> multiplexer                                      
                            -> channel 2 -> sink 2 -> mySerializer -> HbaseTable2
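
A sketch of that extension, assuming hypothetical names (C3, hbase-sink3, table3, fam3, and a third DataSrc header value EFG) patterned on the sample config quoted later in this thread:

```properties
# Hypothetical third channel/sink reusing the same serializer class;
# the names here (C3, hbase-sink3, table3, fam3, EFG) are illustrative.
server-agent.channels.C3.type = memory
server-agent.channels.C3.capacity = 1000
server-agent.channels.C3.transactionCapacity = 100

server-agent.sinks.hbase-sink3.type = asynchbase
server-agent.sinks.hbase-sink3.table = table3
server-agent.sinks.hbase-sink3.columnFamily = fam3
server-agent.sinks.hbase-sink3.batchSize = 1000
server-agent.sinks.hbase-sink3.serializer = com.test.flume.server.HBaseSinkSerializer
server-agent.sinks.hbase-sink3.channel = C3
server-agent.sinks.hbase-sink3.serializer.columns = table3Col

# The source then lists the extra channel and maps a third header value:
server-agent.sources.mySrc.channels = C1 C2 C3
server-agent.sources.mySrc.selector.mapping.EFG = C3
```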

In mySerializer, depending on the header information, I break down the event bytes and send the data to different column qualifiers. Basically, the columns for each table are different and read from the configuration file.
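
As a standalone sketch of the branching described above (the actual class would implement Flume's AsyncHbaseEventSerializer and read its column list from the serializer.columns property; the class name, delimiter, and header values here are illustrative only):

```java
import java.util.*;

// Standalone sketch of the header-driven column mapping described above.
// It pairs each delimited field of the event body with the column
// qualifier configured for that header value.
class HeaderColumnMapper {
    // e.g. DataSrc=ABC -> col1,col2,col3 ; DataSrc=BCD -> table2Col
    private final Map<String, List<String>> columnsByHeader = new HashMap<>();

    void addMapping(String headerValue, String columnsProperty) {
        // columnsProperty mirrors serializer.columns, e.g. "col1,col2,col3"
        columnsByHeader.put(headerValue, Arrays.asList(columnsProperty.split(",")));
    }

    // Break the event body into fields and pair each with its qualifier.
    Map<String, byte[]> qualify(String headerValue, byte[] body) {
        List<String> cols =
            columnsByHeader.getOrDefault(headerValue, Collections.<String>emptyList());
        String[] fields = new String(body).split(",");
        Map<String, byte[]> out = new LinkedHashMap<>();
        for (int i = 0; i < Math.min(cols.size(), fields.length); i++) {
            out.put(cols.get(i), fields[i].getBytes());
        }
        return out;
    }
}
```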

Does that seem like the correct way of doing this?

On Fri, Aug 8, 2014 at 6:41 PM, Paul Chavez <[EMAIL PROTECTED]> wrote:
There is a configuration error in your multiplexing channel selector section. You are referencing ‘server-agent.sources.avor-Src.’ and it should be ‘server-agent.sources.mySrc.’. Otherwise, the configuration looks good and should satisfy your requirements.
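
With that fix applied, the selector section would read:

```properties
server-agent.sources.mySrc.selector.type = multiplexing
server-agent.sources.mySrc.selector.header = DataSrc
server-agent.sources.mySrc.selector.mapping.ABC = C1
server-agent.sources.mySrc.selector.mapping.BCD = C2
```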


From: terrey shih [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 08, 2014 1:20 PM
Subject: Re: question using multiplexing and the same serializer for multiple sinks to multiple hbase tables



One more thing. I would like to know if one serializer can be used for two tables.



server-agent.sources = mySrc
server-agent.sinks = hbase-sink1 hbase-sink2
server-agent.channels = C1 C2

# Describe/configure the source
server-agent.sources.mySrc.type = avro
server-agent.sources.mySrc.bind =
server-agent.sources.mySrc.port = 5000

# Use a channel which buffers events in memory
server-agent.channels.C1.type = memory
server-agent.channels.C1.capacity = 1000
server-agent.channels.C1.transactionCapacity = 100
server-agent.sinks.hbase-sink1.type = asynchbase
server-agent.sinks.hbase-sink1.table = table1
server-agent.sinks.hbase-sink1.columnFamily = fam1
server-agent.sinks.hbase-sink1.batchSize = 1000
server-agent.sinks.hbase-sink1.serializer = com.test.flume.server.HBaseSinkSerializer
server-agent.sinks.hbase-sink1.channel = C1
server-agent.sinks.hbase-sink1.serializer.columns = col1,col2,col3

server-agent.channels.C2.type = memory
server-agent.channels.C2.capacity = 1000
server-agent.channels.C2.transactionCapacity = 100
server-agent.sinks.hbase-sink2.type = asynchbase
server-agent.sinks.hbase-sink2.table = table2
server-agent.sinks.hbase-sink2.columnFamily = fam2
server-agent.sinks.hbase-sink2.batchSize = 1000
server-agent.sinks.hbase-sink2.serializer = com.test.flume.server.HBaseSinkSerializer
server-agent.sinks.hbase-sink2.channel = C2
server-agent.sinks.hbase-sink2.serializer.columns = table2Col

# Bind the source and sink to the channel
server-agent.sources.mySrc.channels = C1 C2
server-agent.sources.avor-Src.selector.type = multiplexing
server-agent.sources.avor-Src.selector.header = DataSrc
server-agent.sources.avor-Src.selector.mapping.ABC = C1
server-agent.sources.avor-Src.selector.mapping.BCD = C2


On Fri, Aug 8, 2014 at 11:19 AM, terrey shih <[EMAIL PROTECTED]> wrote:



Here is my sample config.




On Fri, Aug 8, 2014 at 10:52 AM, Hari Shreedharan <[EMAIL PROTECTED]> wrote:

Can you please send your config? That would make it easier to understand.

terrey shih wrote:

I have a fanning-out operation where I have one source and, based
on the event headers (headers are added from the source input), I
would like to channel the event to different HBase tables.  I am using
the same serializer for the HBase tables.


