Flume user mailing list: How to upload the SEQ data into hdfs


Will McQueen 2012-07-24, 02:26
Re: How to upload the SEQ data into hdfs
Or, as Brock said, you can refer to the link he posted and use the example
from the user guide instead; in that case you'll need to include this:

agent.sources = avro-AppSrv-source
agent.sinks = hdfs-Cluster1-sink
agent.channels = mem-channel-1

... but that example uses an Avro source so you'll likely need to start an
avro-client to test (or use Flume SDK). Or just change the source type to
SEQ.
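
Putting the Avro version together, a minimal sketch of the complete properties
file might look like this (the channel-binding lines use the standard Flume
keys, and the HDFS path is just an example):

agent.sources = avro-AppSrv-source
agent.channels = mem-channel-1
agent.sinks = hdfs-Cluster1-sink

# Avro source listening on localhost:10000
agent.sources.avro-AppSrv-source.type = avro
agent.sources.avro-AppSrv-source.bind = localhost
agent.sources.avro-AppSrv-source.port = 10000
agent.sources.avro-AppSrv-source.channels = mem-channel-1

# in-memory channel
agent.channels.mem-channel-1.type = memory
agent.channels.mem-channel-1.capacity = 1000
agent.channels.mem-channel-1.transactionCapacity = 100

# HDFS sink
agent.sinks.hdfs-Cluster1-sink.type = hdfs
agent.sinks.hdfs-Cluster1-sink.channel = mem-channel-1
agent.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://134.83.35.24/user/mardan/flume/

To push a test event into that Avro source you could use the bundled
avro-client, for example (the file path is a placeholder):

$ /usr/bin/flume-ng avro-client -H localhost -p 10000 -F /path/to/some/file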

Cheers,
Will

On Mon, Jul 23, 2012 at 6:07 PM, mardan Khan <[EMAIL PROTECTED]> wrote:

>
>
>
> Thanks Brock,
>
> I have just gone through the posted link, copy-pasted one of the
> configuration files, and changed the HDFS path as below:
>
>
>
> # properties of avro-AppSrv-source
> agent.sources.avro-AppSrv-source.type = avro
> agent.sources.avro-AppSrv-source.bind = localhost
> agent.sources.avro-AppSrv-source.port = 10000
>
> # properties of mem-channel-1
> agent.channels.mem-channel-1.type = memory
> agent.channels.mem-channel-1.capacity = 1000
> agent.channels.mem-channel-1.transactionCapacity = 100
>
> # properties of hdfs-Cluster1-sink
> agent.sinks.hdfs-Cluster1-sink.type = hdfs
> agent.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://134.83.35.24/user/mardan/flume/
>
>
> I then ran the following command:
>
> $ /usr/bin/flume-ng agent -n agent -c conf -f /usr/lib/flume-ng/conf/flume.conf
>
>
> and got the following error (I get this error most of the time):
>
> 12/07/24 01:54:43 ERROR properties.PropertiesFileConfigurationProvider: Failed to load configuration data. Exception follows.
> java.lang.NullPointerException
>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSources(PropertiesFileConfigurationProvider.java:324)
>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:222)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>     at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
>
> I think something is wrong in the configuration file. I am using the Flume 1.x
> version, installed in /usr/lib/flume-ng/.
>
> Could you please check the command and the configuration file?
>
> Thanks
>
>
> On Tue, Jul 24, 2012 at 1:33 AM, Brock Noland <[EMAIL PROTECTED]> wrote:
>
>> Yes, you can do that. In fact that is the most common case. The
>> documents which should help you do so are here:
>>
>> https://cwiki.apache.org/confluence/display/FLUME/Flume+1.x+Documentation
>>
>> Brock
>>
>> On Mon, Jul 23, 2012 at 7:26 PM, mardan Khan <[EMAIL PROTECTED]>
>> wrote:
>> > Hi,
>> >
>> > I am just doing some testing: I am generating a sequence and want to upload
>> > it into HDFS. My configuration file is:
>> >
>> > agent2.channels = c1
>> > agent2.sources = r1
>> > agent2.sinks = k1
>> >
>> > agent2.channels.c1.type = MEMORY
>> >
>> > agent2.sources.r1.channels = c1
>> > agent2.sources.r1.type = SEQ
>> >
>> > agent2.sinks.k1.channel = c1
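
The agent2 config quoted just above declares a seq source and a memory channel
but never gives the k1 sink a type, and the flume.conf later copied from the
user guide is missing the top-level agent.sources / agent.channels /
agent.sinks lines altogether, which is most likely what triggers the
NullPointerException in loadSources shown earlier (the agent name passed with
-n finds no matching declarations). A minimal sketch of a complete agent2
file, with a placeholder HDFS path, could look like:

agent2.sources = r1
agent2.channels = c1
agent2.sinks = k1

# seq source for testing, attached to channel c1
agent2.sources.r1.type = SEQ
agent2.sources.r1.channels = c1

# in-memory channel
agent2.channels.c1.type = MEMORY

# HDFS sink (host and path below are placeholders, not from the thread)
agent2.sinks.k1.type = hdfs
agent2.sinks.k1.channel = c1
agent2.sinks.k1.hdfs.path = hdfs://namenode-host:8020/user/flume/seq-test/

With that file, the agent would be started with a matching name:

$ /usr/bin/flume-ng agent -n agent2 -c conf -f /usr/lib/flume-ng/conf/flume.conf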