Flume user mailing list - Null pointer trying to use multiplexing


Steve Knott 2013-05-08, 22:32
Paul Chavez 2013-05-08, 22:55

Re: Null pointer trying to use multiplexing
That was the issue, it works now. Thanks.
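
For reference, the working selector block presumably ends up as below once the correction quoted underneath is applied. This is only a sketch; the agent, source, channel, and header names are taken from the original configuration further down in the thread:

    ais_agent.sources.ais-source1.selector.type = multiplexing
    ais_agent.sources.ais-source1.selector.header = datatype
    ais_agent.sources.ais-source1.selector.mapping.Position = ais-ch1
    ais_agent.sources.ais-source1.selector.mapping.Ship = ais-ch2
    ais_agent.sources.ais-source1.selector.default = ais-ch1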

On 5/8/2013 6:55 PM, Paul Chavez wrote:
> Not sure if this is the issue, but I believe this configuration property is wrong:
>
> ais_agent.sources.ais-source1.selector.mapping.default = ais-ch1
>
> It should be:
>
> ais_agent.sources.ais-source1.selector.default = ais-ch1
>
> Hope that helps,
> Paul Chavez
>
> -----Original Message-----
> From: Steve Knott [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 08, 2013 3:33 PM
> To: [EMAIL PROTECTED]
> Subject: Null pointer trying to use multiplexing
>
> Hi,
>
> I am trying to use multiplexing. Basically, I have one source that sets a header field 'datatype' to either 'Position' or 'Ship'. Depending on that value, each event should go to one of two channels, so the data flows to two different files/sinks. I believe I have configured it correctly, but I keep getting a NullPointerException when starting Flume:
>
>> java.lang.NullPointerException
>>          at org.apache.flume.channel.MultiplexingChannelSelector.getChannelListFromNames(MultiplexingChannelSelector.java:160)
>>          at org.apache.flume.channel.MultiplexingChannelSelector.configure(MultiplexingChannelSelector.java:94)
>>          at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
> Has anyone seen this error?  My setup file is below for reference.
>
> Thanks for any help,
> Steve
>
> ----
>
> # tell ais_agent which ones we want to activate.
> ais_agent.channels = ais-ch1 ais-ch2
> ais_agent.sources = ais-source1
> ais_agent.sinks = ais-sink1 ais-sink2
>
> # Define a memory channel called ais-ch1 on ais_agent
> ais_agent.channels.ais-ch1.type = memory
> ais_agent.channels.ais-ch1.capacity = 100000
> ais_agent.channels.ais-ch1.transactionCapacity = 1000
>
> ais_agent.channels.ais-ch2.type = memory
> ais_agent.channels.ais-ch2.capacity = 100000
> ais_agent.channels.ais-ch2.transactionCapacity = 1000
>
> # Define an AIS source called ais-source1 on ais_agent and tell it
> # to connect to dog:8657. Connect it to channels ais-ch1 and ais-ch2.
> ais_agent.sources.ais-source1.channels = ais-ch1 ais-ch2
> ais_agent.sources.ais-source1.type = ais_flume.AISPortSource
> ais_agent.sources.ais-source1.host = dog
> ais_agent.sources.ais-source1.port = 8657
> ais_agent.sources.ais-source1.selector.type = multiplexing
> ais_agent.sources.ais-source1.selector.header = datatype
> ais_agent.sources.ais-source1.selector.mapping.Position = ais-ch1
> ais_agent.sources.ais-source1.selector.mapping.Ship = ais-ch2
> ais_agent.sources.ais-source1.selector.mapping.default = ais-ch1
>
> # Describe the sinks
> ais_agent.sinks.ais-sink1.channel = ais-ch1
> ais_agent.sinks.ais-sink1.type = hdfs
> ais_agent.sinks.ais-sink1.hdfs.path = /csv6/
> ais_agent.sinks.ais-sink1.serializer = TEXT
> ais_agent.sinks.ais-sink1.hdfs.filePrefix = position-%Y-%m-%d
> ais_agent.sinks.ais-sink1.hdfs.fileType = DataStream
> ais_agent.sinks.ais-sink1.hdfs.rollCount = 10000000
> ais_agent.sinks.ais-sink1.hdfs.rollSize = 0
> ais_agent.sinks.ais-sink1.hdfs.rollInterval = 0
> ais_agent.sinks.ais-sink1.hdfs.batchSize = 100
> ais_agent.sinks.ais-sink1.hdfs.maxOpenFiles = 5
> ais_agent.sinks.ais-sink1.hdfs.writeFormat = Text
>
> ais_agent.sinks.ais-sink2.channel = ais-ch2
> ais_agent.sinks.ais-sink2.type = hdfs
> ais_agent.sinks.ais-sink2.hdfs.path = /csv/csv5/
> ais_agent.sinks.ais-sink2.serializer = TEXT
> ais_agent.sinks.ais-sink2.hdfs.filePrefix = ship-%Y-%m-%d
> ais_agent.sinks.ais-sink2.hdfs.fileType = DataStream
> ais_agent.sinks.ais-sink2.hdfs.rollCount = 10000000
> ais_agent.sinks.ais-sink2.hdfs.rollSize = 0
> ais_agent.sinks.ais-sink2.hdfs.rollInterval = 0
> ais_agent.sinks.ais-sink2.hdfs.batchSize = 100
> ais_agent.sinks.ais-sink2.hdfs.maxOpenFiles = 5
> ais_agent.sinks.ais-sink2.hdfs.writeFormat = Text
>
>
>
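
The routing above only works if every event actually carries the 'datatype' header the selector keys on; in this thread that is the job of the custom source ais_flume.AISPortSource, whose code is not shown. Below is a minimal, purely illustrative sketch of how a Flume NG source can attach such a header to an event; the class name, method names, and the isPosition flag are invented for the example and are not the real AISPortSource:

    import java.nio.charset.Charset;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.flume.Event;
    import org.apache.flume.channel.ChannelProcessor;
    import org.apache.flume.event.EventBuilder;

    public class DatatypeHeaderSketch {

        // Wrap one decoded AIS record in an Event whose 'datatype' header the
        // multiplexing selector will match against mapping.Position / mapping.Ship.
        static Event toEvent(String record, boolean isPosition) {
            Map<String, String> headers = new HashMap<String, String>();
            headers.put("datatype", isPosition ? "Position" : "Ship");
            return EventBuilder.withBody(record, Charset.forName("UTF-8"), headers);
        }

        // Inside the source's receive loop, each record is handed to the agent's
        // channel processor, which consults the configured channel selector.
        static void emit(ChannelProcessor processor, String record, boolean isPosition) {
            processor.processEvent(toEvent(record, isPosition));
        }
    }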
Brock Noland 2013-05-09, 14:38