About data collection with flume problem
Hi SÜLEYMAN CEBESOY,

I am redirecting your question to the Apache Flume users list
([EMAIL PROTECTED]), which is a better place to send this type of
question:

====
Hi,
I have been interested in Hadoop lately. I installed Hadoop as well as
Flume, ZooKeeper, HBase, etc. I have read the tutorials for Hadoop,
Flume, HBase, Pig, and HDFS, and I have worked through some examples
from them.
I have some questions I would like to ask, because I want to learn
these technologies.
I want to collect data from a web server log (or any data in files),
for example /var/log/apache2/access.log.1, and I want to see these
logs as files in HDFS.

I run this:

flume node_nowatch -n "collector"
flume node_nowatch -n "agent"

(See flume_master_page.png.)
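
For completeness, I also have the Flume master running, since the config
page comes from it. A rough sketch of how I start it, assuming the
standard Flume 0.9.x (Flume OG) launcher command and its default web UI
port (both are my assumptions, not something verified here):

# start the Flume master; its config web page should then be at
# http://localhost:35871 (default master HTTP port, as far as I know)
flume master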
Then I create the configuration on the Flume master config page:
agent: tail("/var/log/apache2/access.log.1") | agentSink("localhost", 35853);
collector: collectorSource(35853) | collectorSink("hdfs://localhost:9000/user/oracle/flume/", "access.log.1");
After that, I did not see the log files in HDFS.
(See hdfs.png.)
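
For reference, this is roughly how I check that directory (the path is
just the one from my collectorSink configuration above):

hadoop fs -ls hdfs://localhost:9000/user/oracle/flume/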
Can you show me the correct way to do data collection with the Flume architecture?
Thank you for everything.

--
Harsh J