Flume >> mail # user >> File Channel performance and fsync


Re: File Channel performance and fsync
Which version? 1.2 or trunk?

--
Brock Noland
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday, October 22, 2012 at 8:18 AM, Jagadish Bihani wrote:

> Hi
>
> This is the simple configuration with which I am getting lower performance.
> Even with a 2-tier architecture (cat source - avro sink - avro source - HDFS sink)
> I get similar performance with the file channel.
>
> Configuration:
> ========
> adServerAgent.sources = avro-collection-source
> adServerAgent.channels = fileChannel
> adServerAgent.sinks = hdfsSink fileSink
>
> # For each one of the sources, the type is defined
> adServerAgent.sources.avro-collection-source.type=exec
> adServerAgent.sources.avro-collection-source.command= cat /home/hadoop/file.tsf
>
> # The channel can be defined as follows.
> adServerAgent.sources.avro-collection-source.channels = fileChannel
>
> #Define file sink
> adServerAgent.sinks.fileSink.type = file_roll
> adServerAgent.sinks.fileSink.sink.directory = /home/hadoop/flume_sink
>
> adServerAgent.sinks.fileSink.channel = fileChannel
> adServerAgent.channels.fileChannel.type=file
> adServerAgent.channels.fileChannel.dataDirs=/home/hadoop/flume/channel/dataDir5
> adServerAgent.channels.fileChannel.checkpointDir=/home/hadoop/flume/channel/checkpointDir5
> adServerAgent.channels.fileChannel.maxFileSize=4000000000
>
> And it is run with :
> JAVA_OPTS = -Xms500m -Xmx700m -Dcom.sun.management.jmxremote -XX:MaxDirectMemorySize=2g
>
> Regards,
> Jagadish
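[Editor's note: the configuration above declares an hdfsSink but never configures it. For anyone reproducing this setup, a minimal HDFS sink definition would be needed along these lines — a sketch only; the path and values here are my assumptions, not from the thread:]

```properties
# Hypothetical HDFS sink settings (not in the original post)
adServerAgent.sinks.hdfsSink.type = hdfs
adServerAgent.sinks.hdfsSink.channel = fileChannel
adServerAgent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
adServerAgent.sinks.hdfsSink.hdfs.fileType = DataStream
adServerAgent.sinks.hdfsSink.hdfs.batchSize = 1000
adServerAgent.sinks.hdfsSink.hdfs.rollInterval = 30
```

The `hdfs.batchSize` property matters here: it controls how many events the sink takes per transaction, and with a file channel each transaction commit costs an fsync.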
>
> On 10/22/2012 05:42 PM, Brock Noland wrote:
> > Hi,
> >
> > I'll respond in more depth later, but it would help if you posted your configuration file and the version of flume you are using.
> >
> > Brock
> >
> > On Mon, Oct 22, 2012 at 6:48 AM, Jagadish Bihani <[EMAIL PROTECTED]> wrote:
> > > Hi
> > >
> > > I am writing this on top of another thread where there was a discussion of "fsync lies" and
> > > whether only the file channel uses fsync, and not the file sink:
> > >
> > > -- I tested fsync performance on 2 machines (on one machine I was getting very good throughput
> > > using the file channel, and on the other it was almost 100 times slower with almost the same hardware configuration)
> > > using the following code:
> > >
> > >
> > > #include <fcntl.h>
> > > #include <stdio.h>
> > > #include <sys/time.h>
> > > #include <unistd.h>
> > >
> > > #define PAGESIZE 4096
> > >
> > > int main(int argc, char *argv[])
> > > {
> > >         char my_read_str[PAGESIZE];
> > >         char *read_filename = argv[1];
> > >         int readfd, writefd;
> > >
> > >         readfd = open(read_filename, O_RDONLY);
> > >         /* mode is 0644 (octal), not decimal 777 */
> > >         writefd = open("written_file", O_WRONLY|O_CREAT, 0644);
> > >         int len = lseek(readfd, 0, SEEK_END);
> > >         lseek(readfd, 0, SEEK_SET);
> > >         int iterations = len/PAGESIZE;
> > >         int i;
> > >         struct timeval t0, t1;
> > >
> > >         for (i = 0; i < iterations; i++)
> > >         {
> > >                 read(readfd, my_read_str, PAGESIZE);
> > >                 write(writefd, my_read_str, PAGESIZE);
> > >                 gettimeofday(&t0, 0);
> > >                 fsync(writefd);
> > >                 gettimeofday(&t1, 0);
> > >                 long elapsed = (t1.tv_sec - t0.tv_sec)*1000000 + t1.tv_usec - t0.tv_usec;
> > >                 printf("Elapsed time is = %ld\n", elapsed);
> > >         }
> > >         close(readfd);
> > >         close(writefd);
> > >         return 0;
> > > }
> > >
> > >
> > > -- As expected, fsync typically took around 50000 microseconds to complete on one machine, while on the
> > > other machine it took 200-290 microseconds on average. So is the machine with higher
> > > performance doing an 'fsync lie'?
> > > -- If I have understood it correctly, an "fsync lie" means the data is not actually written to disk but sits in
> > > some disk/controller buffer.  I) Now if the disk loses power due to a shutdown or some other disaster, data will
> > > be lost. II) Can data be lost even without that? (e.g. if it is keeping data in some disk buffer and if fsync is being
> > > > ...be much more predictive of performance than CPU or RAM. Note that consumer-level drives/controllers will give you much "better" performance because they lie to you about when your data is actually written to the drive. If you search for "fsync lies" you'll find more information on this. You probably want to increase the batch size to get better performance.
> > > >
> > > > Brock
> > > >
> > > > On Tue, Oct 9, 2012 at 2:46 AM, Jagadish Bihani <[EMAIL PROTECTED]> wrote:
> > > > > Hi
> > > > >
> > > > > My flume setup is:
> > > > > Source Agent: cat source - File Channel - Avro Sink
> > > > > Dest Agent: avro source - File Channel - HDFS Sink
> > > > > There is only 1 source agent and 1 destination agent.
> > > > >
> > > > > I measure throughput as the amount of data written to HDFS per second. (I have a rolling interval of 30 sec; so if a 60 MB file is generated in 30 sec, the throughput is 2 MB/sec.)
> > > > >
> > > > > I have run the source agent on various machines with different hardware configurations. (In all cases I run the flume agent with JAVA_OPTS="-Xms500m -Xmx1g -Dcom.sun.management.jmxremote -XX:MaxDirectMemorySize=2g".) JDK is 32 bit.
> > > > >
> > > > > Experiment 1:
> > > > > =============
> > > > > RAM: 16 GB
> > > > > Processor: Intel Xeon E5620 @ 2.40 GHz (16 cores). 64 bit processor with 64 bit kernel.
> > > > > Throughput: 2 MB/sec
> > > > >
> > > > > Experiment 2:
> > > > > =============
> > > > > RAM: 4 GB
> > > > > Processor: Intel Xeon E5504 @ 2.00 GHz (4 cores). 64 bit processor with 32 bit kernel.
> > > > > Throughput: 30 KB/sec
> > > > >
> > > > > Experiment 3:
> > > > > =============
> > > > > RAM: 8 GB
> > > > > Processor: Intel Xeon E5520 @ 2.27 GHz (16 cores). 64 bit processor with 32 bit kernel.
> > > > > Throughput: 80 KB/sec
> > > > >
> > > > > So, as can be seen, there is a huge difference in throughput with the same configuration but different hardware.
> > > > >
> > > > > In the first case, where throughput is higher, RES is around 160 MB; in the other cases it is in the range of 40 MB - 50 MB.
> > > > >
> > > > > Can anybody please give insights into why there is this huge difference in throughput? What is the correlation between RAM and file channel/HDFS sink performance, and also with the 32-bit/64-bit kernel?
> > > > >
> > > > > Regards,