Flume >> mail # user >> Log4JAppender Ignoring the Logging Pattern. Getting only description.


Re: Log4JAppender Ignoring the Logging Pattern. Getting only description.
Khadar,

I am not sure if your reply was meant as a response to my comment or just as additional information. The Log4j 2 FlumeAppender is not part of Flume; see http://logging.apache.org/log4j/2.x/manual/appenders.html#FlumeAvroAppender. However, Log4j 2 is still awaiting its first release, so you will have to build it yourself if you want to try it.
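For reference, a Log4j 2 FlumeAppender configuration looks roughly like the sketch below, following the general shape described on the manual page linked above. The host, port, and pattern here are placeholders, not values from this thread:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- Avro client pointing at the Flume agent's Avro source;
         host/port are placeholders for your own agent. -->
    <Flume name="flumeAppender" compress="false">
      <Agent host="localhost" port="41414"/>
      <!-- The formatted layout output becomes the Flume event body,
           which is the behavior Ralph describes above. -->
      <PatternLayout pattern="%d [%c] (%t) %X{user} %m"/>
    </Flume>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="flumeAppender"/>
    </Root>
  </Loggers>
</Configuration>
```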

Ralph

On Jul 27, 2012, at 6:58 AM, khadar basha wrote:

> FlumeLog4jAvroAppender was available as part of 0.94, but it is not available in 1.2. The Log4jAppender is from flume-ng-log4jappender-1.2.0.jar.
>
> I think it was replaced as part of 1.2.
>
> On Fri, Jul 27, 2012 at 6:31 PM, Ralph Goers <[EMAIL PROTECTED]> wrote:
> You might consider looking at Log4j 2. It has a Flume Appender that records the whole formatted log message in the body. In addition, it will record the MDC fields as well.
>
> Sent from my iPad
>
> On Jul 27, 2012, at 5:49 AM, khadar basha <[EMAIL PROTECTED]> wrote:
>
>> Hi
>>
>> I am using Flume 1.2 with an Avro source and an HDFS sink, sending log messages from the application server. For that I am using org.apache.flume.clients.log4jappender.Log4jAppender in MyApp.
>>
>> But I am getting only the body of the message (the description), and losing the time, thread, and level information.
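Since the Flume 1.2 Log4jAppender appears to ship only the raw message string as the event body (the layout metadata ends up dropped, as described above), one workaround is to bake the timestamp, level, and thread into the message text yourself before logging. Below is a minimal JDK-only sketch of that formatting step; the class and method names are illustrative, not part of Flume or log4j:

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class BodyFormatter {
    // Build a log line that carries its own timestamp/level/thread,
    // so the metadata survives even if the appender ships only the body.
    static String formatBody(String level, String logger, String msg) {
        String ts = ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
        return String.format("%s [%s] (%s) %s: %s",
                ts, logger, Thread.currentThread().getName(), level, msg);
    }

    public static void main(String[] args) {
        // You would pass the result of formatBody(...) to log.info(...)
        // instead of the bare message.
        System.out.println(formatBody("INFO", "com.test.Main", "hello flume"));
    }
}
```

This loses nothing on the Flume side because the whole formatted line is the body; the trade-off is that the metadata is no longer available as separate Avro event headers.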
>>
>> flume-conf.properties file
>> ==========================
>> agent2Test1.sources = seqGenSrc
>> agent2Test1.channels = memoryChannel
>> agent2Test1.sinks = loggerSink
>>
>> # For each one of the sources, the type is defined
>> agent2Test1.sources.seqGenSrc.type = avro
>> agent2Test1.sources.seqGenSrc.bind=localhost
>> agent2Test1.sources.seqGenSrc.port=41414
>>
>> # interceptors for host and date
>> agent2Test1.sources.seqGenSrc.interceptors = time hostInterceptor
>> agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.type = org.apache.flume.interceptor.HostInterceptor$Builder
>> agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.hostHeader = host
>> agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.useIP = false
>> agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.preserveExisting = false
>> agent2Test1.sources.seqGenSrc.interceptors.time.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
>>
>> # The channel can be defined as follows.
>> agent2Test1.sources.seqGenSrc.channels = memoryChannel
>>
>> # Each sink's type must be defined
>> agent2Test1.sinks.loggerSink.type = hdfs
>> agent2Test1.sinks.loggerSink.hdfs.path = hdfs://hadoopHost:8020/data/%Y/%m/%d/%{host}/Logs
>>
>> agent2Test1.sinks.loggerSink.hdfs.fileType = DataStream
>>
>> #Specify the channel the sink should use
>> agent2Test1.sinks.loggerSink.channel = memoryChannel
>>
>> # Each channel's type is defined.
>> agent2Test1.channels.memoryChannel.type = memory
>>
>> # Other config values specific to each type of channel(sink or source)
>> # can be defined as well
>> # In this case, it specifies the capacity of the memory channel
>> agent2Test1.channels.memoryChannel.capacity = 1000
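With a configuration like the one above saved to a properties file, the agent is typically started with the stock flume-ng script. The paths below are assumptions about your install layout; the agent name must match the prefix used in the config (agent2Test1 here):

```shell
# Start the named agent; -Dflume.root.logger is optional and just
# makes the agent log to the console, which helps when debugging
# why events arrive without the expected headers.
bin/flume-ng agent --conf conf \
  --conf-file conf/flume-conf.properties \
  --name agent2Test1 \
  -Dflume.root.logger=INFO,console
```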
>>
>>
>>
>> Sample Java program to generate the log message:
>> ====================================
>>
>> package com.test;
>>
>>
>> import org.apache.flume.clients.log4jappender.Log4jAppender;
>> import org.apache.log4j.Logger;
>> import org.apache.log4j.MDC;
>> import org.apache.log4j.PatternLayout;
>>
>> import java.util.UUID;
>>
>>
>> public class Main {
>>     static Logger log = Logger.getLogger(Main.class);
>>
>>     public static void main(String[] args) {
>>         try {
>>         Log4jAppender appender = new Log4jAppender();
>>             appender.setHostname("localhost");
>>             appender.setPort(41414);
>>             appender.setLayout(new PatternLayout("%d [%c] (%t) <%X{user} %X{field}> %m"));
>>          // appender.setReconnectAttempts(100);
>>
>>             appender.activateOptions();
>>
>>             log.addAppender(appender);
>>
>>             MDC.put("user", "chris");
>>           //  while (true) {
>>                 MDC.put("field", UUID.randomUUID().toString());
>>                 log.info("Test log message");
>>           //  }
>>         } catch (Exception e) {
>>             e.printStackTrace();
>>         }
>>     }
>> }