Flume >> mail # user >> Error in Upload the log file into hdfs


Re: Error in Upload the log file into hdfs
Alex is right, and our error message there needs much improvement. I have
created a JIRA here: https://issues.apache.org/jira/browse/FLUME-1744
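For anyone scanning this thread later: the constraint Alex describes below (a channel's transactionCapacity must not exceed its capacity) can be checked mechanically before starting an agent. The sketch below is a hypothetical helper, not part of Flume, that flags offending channels in a flume.conf-style properties file:

```python
# Hypothetical lint helper (not part of Flume): flags channels whose
# transactionCapacity exceeds their configured capacity.
def find_capacity_errors(conf_text):
    # Parse simple "key = value" properties, ignoring blanks and comments.
    props = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    errors = []
    for key, value in props.items():
        if key.endswith(".transactionCapacity"):
            chan = key[: -len(".transactionCapacity")]
            # Flume's memory channel defaults capacity to 100 when unset.
            capacity = int(props.get(chan + ".capacity", 100))
            if int(value) > capacity:
                errors.append(chan)
    return errors

conf = """
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.transactionCapacity=1000
agent.channels.memoryChannel.capacity = 100
"""
print(find_capacity_errors(conf))  # prints ['agent.channels.memoryChannel']
```

Run against the config from the original post, it flags exactly the channel Alex points at.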

On Thu, Nov 29, 2012 at 9:39 AM, Alexander Alten-Lorenz <[EMAIL PROTECTED]> wrote:

> Hi,
>
> agent.channels.memoryChannel.transactionCapacity=1000
>
> is wrong: a channel's transactionCapacity cannot be greater than its
> configured capacity. Swap the two values if that is what you intended.
> From our Guide:
>
> capacity             NUM   The max number of events stored in the channel
> transactionCapacity  NUM   The max number of events stored in the channel per transaction
>
> Try this:
>
> agent.channels.memoryChannel.capacity = 1000
> agent.channels.memoryChannel.transactionCapacity = 10
>
> cheers
> - Alex
>
>
> On Nov 29, 2012, at 1:03 PM, kashif khan <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > I am just learning Flume and doing some testing. I am running two
> > agents (agent and agent1): agent uploads log data into HDFS, and agent1
> > is used as a logger. The configuration of the two agents is:
> >
> > agent.sources = tail
> > agent.channels = memoryChannel
> > agent.sinks = hdfs-clusterSink
> >
> > agent.sources.tail.type = exec
> > agent.sources.tail.command = tail -f /var/log/flume-ng/flume.log
> > agent.sources.tail.channels = memoryChannel
> >
> > agent.sinks.hdfs-clusterSink.channel = memoryChannel
> > agent.sinks.hdfs-clusterSink.type = hdfs
> > agent.sinks.hdfs-clusterSink.hdfs.path = hdfs://hadoop1.example.com/user/root/Test/
> >
> >
> > agent.channels.memoryChannel.type = memory
> > agent.channels.memoryChannel.transactionCapacity=1000
> > agent.channels.memoryChannel.capacity = 100
> >
> >
> >
> >
> > agent1.sources = source1
> > agent1.sinks = sink1
> > agent1.channels = channel1
> >
> > # Describe/configure source1
> > agent1.sources.source1.type = netcat
> > agent1.sources.source1.bind = localhost
> > agent1.sources.source1.port = 44444
> >
> > # Describe sink1
> > agent1.sinks.sink1.type = logger
> >
> > # Use a channel which buffers events in memory
> > agent1.channels.channel1.type = memory
> > agent1.channels.channel1.capacity = 1000
> > agent1.channels.channel1.transactionCapactiy = 100
> >
> > # Bind the source and sink to the channel
> > agent1.sources.source1.channels = channel1
> > agent1.sinks.sink1.channel = channel1
> >
> >
> > I don't know why it does not upload the log file into HDFS, or where
> > I am making a mistake. If anyone has a solution, please let me know.
> >
> >
> > The log file as:
> >
> >
> > 29 Nov 2012 11:49:13,046 INFO  [main] (org.apache.flume.lifecycle.LifecycleSupervisor.start:67)  - Starting lifecycle supervisor 1
> > 29 Nov 2012 11:49:13,050 INFO  [main] (org.apache.flume.node.FlumeNode.start:54)  - Flume node starting - agent
> > 29 Nov 2012 11:49:13,051 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start:203)  - Node manager starting
> > 29 Nov 2012 11:49:13,053 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.lifecycle.LifecycleSupervisor.start:67)  - Starting lifecycle supervisor 10
> > 29 Nov 2012 11:49:13,052 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.conf.file.AbstractFileConfigurationProvider.start:67)  - Configuration provider starting
> > 29 Nov 2012 11:49:13,054 INFO  [conf-file-poller-0] (org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run:195) - Reloading configuration file:/etc/flume-ng/conf/flume.conf
> > 29 Nov 2012 11:49:13,057 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:912) - Added sinks: hdfs-clusterSink Agent: agent
> > 29 Nov 2012 11:49:13,057 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:998) - Processing:hdfs-clusterSink
> > 29 Nov 2012 11:49:13,057 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:998)
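For readers landing here with the same symptom: applying Alex's fix to the first agent's channel section gives a fragment like the one below. The values are illustrative; the point is that transactionCapacity stays at or below capacity. (Note also that agent1's config above spells the property "transactionCapactiy"; Flume will not recognize that key, so the setting likely has no effect and the channel falls back to its default.)

```
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100
```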