Flume >> mail # user >> Lock contention in FileChannel


+ Pankaj Gupta 2013-08-13, 23:13
+ Hari Shreedharan 2013-08-13, 23:39
+ Pankaj Gupta 2013-08-14, 00:01
+ Hari Shreedharan 2013-08-14, 00:14
+ Brock Noland 2013-08-14, 00:51
+ Pankaj Gupta 2013-08-14, 02:06
+ Hari Shreedharan 2013-08-14, 02:18
+ Brock Noland 2013-08-14, 02:22
+ Pankaj Gupta 2013-08-14, 02:33
+ Brock Noland 2013-08-14, 02:41
+ Pankaj Gupta 2013-08-14, 02:46
+ Brock Noland 2013-08-14, 02:54
+ Pankaj Gupta 2013-08-14, 02:57
+ Brock Noland 2013-08-14, 03:06
+ Pankaj Gupta 2013-08-14, 03:16
+ Brock Noland 2013-08-14, 03:30
+ Pankaj Gupta 2013-08-14, 18:57
+ Pankaj Gupta 2013-08-14, 19:12
+ Pankaj Gupta 2013-08-14, 19:34
+ Hari Shreedharan 2013-08-14, 19:43
+ Pankaj Gupta 2013-08-14, 19:59
Re: Lock contention in FileChannel
I am getting very good performance by removing the groups altogether and
keeping a high total number of sinks (64). If I organize the sinks into groups
of 4, I lose parallelism and don't get performance as good as with 64
sinks. The reason I want to organize sinks into groups is failover. Do
I get any failover if I don't use any groups? E.g. if I have 64 sinks and
one of them fails to send events, would some other sink be able to send
those events, or would the failed sink keep trying forever?
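
For reference, the failover behavior being asked about is usually expressed as a failover sink group rather than ungrouped sinks. A minimal sketch (the agent and sink names a1/s1/s2 are placeholders, not from this thread):

```properties
# Failover sink group sketch: the processor sends to the highest-priority
# live sink and fails over to the next one when it errors out, instead of
# round-robining like the load_balance processor.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = s1 s2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.s1 = 10
a1.sinkgroups.g1.processor.priority.s2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
```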

The reason I am not able to increase the number of sinks beyond 64 is that
I start getting failed-connection exceptions. It seems like the avro source
on the destination is not allowing connections beyond a certain number, and
that the avro sink creates a thread per persistent connection. I had set the
number of threads to 16 earlier, but looking at the code it seemed that by not
supplying the threads parameter for the avro source it uses a flexible thread
pool, so I removed that setting to let the flexible thread pool be used.
Still, beyond a certain number of connections I start getting failed
connections. It would also in general be better if I could work with a low
number of connections to avoid creating too many threads. If I could work
without sink groups that would be best; I'm curious about the downsides of
doing that.
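
The threads setting being discussed, as a sketch (agent/source names a1/r1 are placeholders):

```properties
# Avro source sketch: when 'threads' is unset, the source sizes its worker
# pool on demand; setting it caps the number of threads handling connections.
a1.sources = r1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
# a1.sources.r1.threads = 16   # cap; leave unset for a flexible pool
```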

Thanks in advance
On Wed, Aug 14, 2013 at 12:59 PM, Pankaj Gupta <[EMAIL PROTECTED]>wrote:

> @Hari
>
> You're right, removing the groups seems to have improved performance. I
> didn't realize that putting sinks in a load-balanced group would have this
> effect on performance. I will monitor it more and let you know how that
> goes. Thanks a lot for your help.
>
>
> On Wed, Aug 14, 2013 at 12:34 PM, Pankaj Gupta <[EMAIL PROTECTED]>wrote:
>
>> Null sink is actually holding up. I hadn't specified the batchSize so the
>> default of 100 was being picked up. When I set batchSize to 4000 it started
>> consuming all events and the channels are not filling up. So it seems that
>> the problem is a combination of AvroSink with FileChannel. Memory Channel
>> with Avro Sink works fine and FileChannel with null sink works fine.
>>
>>
>> On Wed, Aug 14, 2013 at 12:12 PM, Pankaj Gupta <[EMAIL PROTECTED]>wrote:
>>
>>> With Null Sink, in the call stack I see a lot of these:
>>>
>>> "SinkRunner-PollingRunner-LoadBalancingSinkProcessor" prio=10 tid=0x00007feb18001000 nid=0x356b runnable [0x00007feb9e897000]
>>>     java.lang.Thread.State: RUNNABLE
>>>         at sun.nio.ch.FileChannelImpl.force0(Native Method)
>>>         at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:348)
>>>         at org.apache.flume.channel.file.LogFile$Writer.sync(LogFile.java:258)
>>>         at org.apache.flume.channel.file.LogFile$Writer.commit(LogFile.java:225)
>>>         - locked <0x0000000518c97650> (a org.apache.flume.channel.file.LogFileV3$Writer)
>>>         at org.apache.flume.channel.file.Log.commit(Log.java:758)
>>>         at org.apache.flume.channel.file.Log.commitTake(Log.java:640)
>>>         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doCommit(FileChannel.java:557)
>>>         at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
>>>         at org.apache.flume.sink.NullSink.process(NullSink.java:100)
>>>         at org.apache.flume.sink.LoadBalancingSinkProcessor.process(LoadBalancingSinkProcessor.java:154)
>>>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>>>         at java.lang.Thread.run(Thread.java:662)
>>>
>>>
>>>
>>>
>>> On Wed, Aug 14, 2013 at 11:57 AM, Pankaj Gupta <[EMAIL PROTECTED]>wrote:
>>>
>>>> I tried increasing the dataDirs to 2 and 4 per disk, but it doesn't seem
>>>> to help much. I then replaced the avro sink with null sinks and events are
>>>> still filling up in the channel. I tried with both 2 and 4 dataDirs per
>>>> disk and a null sink, and still don't get throughput higher than 1.5 MBps.
>>
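
The tuning steps described in the quoted messages (raising the null sink's batchSize from the default of 100, and spreading the FileChannel across multiple data directories) would look roughly like this as config; all component names and paths here are placeholders:

```properties
# Sketch of the quoted tuning (agent/channel/sink names and paths are placeholders).
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /flume/checkpoint
a1.channels.c1.dataDirs = /disk1/flume/data,/disk2/flume/data

a1.sinks.k1.type = null
a1.sinks.k1.channel = c1
a1.sinks.k1.batchSize = 4000   # default is 100; a larger batch amortizes the fsync done per commit
```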
*P* | (415) 677-9222 ext. 205 *F* | (415) 677-0895 | [EMAIL PROTECTED]

Pankaj Gupta | Software Engineer

*BrightRoll, Inc.* | Smart Video Advertising | www.brightroll.com
United States | Canada | United Kingdom | Germany
We're hiring! <http://newton.newtonsoftware.com/career/CareerHome.action?clientId=8a42a12b3580e2060135837631485aa7>
+ Pankaj Gupta 2013-08-18, 04:43
+ Hari Shreedharan 2013-08-14, 19:04
+ Pankaj Gupta 2013-08-14, 02:16