Are you using the HDFS sink? If so, please share your configuration.
I have seen this when the sink keeps many files open, each file holding on
to a native compression buffer.
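[Editor's note: if the HDFS sink does turn out to be holding many files open, the number of simultaneously open files can be bounded in the sink configuration. A minimal sketch, assuming a hypothetical agent `a1` and sink `k1`; `hdfs.maxOpenFiles` and `hdfs.idleTimeout` are standard Flume HDFS sink properties:]

```properties
# Cap the number of files the HDFS sink keeps open at once;
# the oldest open file is closed when the limit is exceeded.
a1.sinks.k1.hdfs.maxOpenFiles = 50
# Close files that receive no events for 60 seconds,
# releasing their native compression buffers.
a1.sinks.k1.hdfs.idleTimeout = 60
```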
On Jan 22, 2014 1:37 AM, "Shangan Chen" <[EMAIL PROTECTED]> wrote:
> Thanks a lot for your reply. I deployed Flume on Ubuntu, and what confuses
> me is not the virtual address space but the NIO direct memory growing
> without bound. It eventually causes the machine to report that swap is low.
> On Wed, Jan 22, 2014 at 1:08 AM, Brock Noland <[EMAIL PROTECTED]> wrote:
>> Looks like it's arena allocation. Basically it's nothing to worry about,
>> since virtual address space on 64-bit machines is not in short supply. You
>> could limit it with a parameter like so:
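[Editor's note: the parameter itself was not preserved in this archive. For glibc arena allocation, the usual knob is the `MALLOC_ARENA_MAX` environment variable; this is an assumption about what was meant, not a quote from the original message. It would typically go in `flume-env.sh` before starting the agent:]

```shell
# Assumption: cap the number of glibc malloc arenas to limit
# virtual-memory growth from per-thread arena allocation
# (the glibc default on 64-bit is 8 * number of cores).
export MALLOC_ARENA_MAX=4
```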
>> On Tue, Jan 21, 2014 at 3:38 AM, Shangan Chen <[EMAIL PROTECTED]>wrote:
>>> I have quite a lot of flume-agents streaming logs to several
>>> flume-collectors. The problem I face is that the memory consumed by a
>>> flume-collector increases slowly but steadily, even though I limit the max
>>> heap. I know Flume uses NIO, but I don't know why it causes memory to grow
>>> without bound. I have pasted the information I could gather below; I would
>>> really appreciate any help.
>>> *my jvm conf:*
>>> JAVA_OPTS="-Xms8192m -Xmx8192m -Dcom.sun.management.jmxremote
>>> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/sankuai/logs"
>>> *top prints of flume instance:*
>>> top - 17:28:59 up 103 days, 34 min, 4 users, load average: 0.82, 0.86,
>>> Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>> Cpu(s): 5.4%us, 4.7%sy, 0.0%ni, 89.3%id, 0.0%wa, 0.0%hi, 0.1%si,
>>> Mem: 16435540k total, 16194572k used, 240968k free, 25896k buffers
>>> Swap: 8385892k total, 205592k used, 8180300k free, 165968k cached
>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>> 16580 sankuai 20 0 23.1g 14g 5464 S 92 93.7 7522:53 java
>>> *jmap prints :*
>>> jmap -heap 16580
>>> Attaching to process ID 16580, please wait...
>>> Debugger attached successfully.
>>> Server compiler detected.
>>> JVM version is 23.21-b01
>>> using thread-local object allocation.
>>> Parallel GC with 8 thread(s)
>>> Heap Configuration:
>>> MinHeapFreeRatio = 40
>>> MaxHeapFreeRatio = 70
>>> MaxHeapSize = 8589934592 (8192.0MB)
>>> NewSize = 1310720 (1.25MB)
>>> MaxNewSize = 17592186044415 MB
>>> OldSize = 5439488 (5.1875MB)
>>> NewRatio = 2
>>> SurvivorRatio = 8
>>> PermSize = 21757952 (20.75MB)
>>> MaxPermSize = 85983232 (82.0MB)
>>> G1HeapRegionSize = 0 (0.0MB)
>>> Heap Usage:
>>> PS Young Generation
>>> Eden Space:
>>> capacity = 2105147392 (2007.625MB)
>>> used = 1048958560 (1000.3648376464844MB)
>>> free = 1056188832 (1007.2601623535156MB)
>>> 49.828271596861185% used
>>> From Space:
>>> capacity = 369229824 (352.125MB)
>>> used = 369168400 (352.06642150878906MB)
>>> free = 61424 (0.0585784912109375MB)
>>> 99.98336429074591% used
>>> To Space:
>>> capacity = 388890624 (370.875MB)
>>> used = 0 (0.0MB)
>>> free = 388890624 (370.875MB)
>>> 0.0% used
>>> PS Old Generation
>>> capacity = 5726666752 (5461.375MB)
>>> used = 4282458544 (4084.0707244873047MB)
>>> free = 1444208208 (1377.3042755126953MB)
>>> 74.78099790780352% used
>>> PS Perm Generation
>>> capacity = 37879808 (36.125MB)
>>> used = 37833736 (36.08106231689453MB)
>>> free = 46072 (0.04393768310546875MB)
>>> 99.8783731955558% used
>>> 11441 interned Strings occupying 1025920 bytes.
>>> *DirectMemory prints (see https://gist.github.com/rednaxelafx/1593521):*
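[Editor's note: the gist linked above inspects direct-memory counters via JVM internals. Since Java 7, the same numbers are exposed through the standard `BufferPoolMXBean`, which can be queried from inside the JVM or over JMX. A minimal sketch (the class name is mine):]

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectMemoryCheck {
    public static void main(String[] args) {
        // Allocate one direct buffer so the "direct" pool is non-empty.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);

        // "direct" covers NIO direct buffers; "mapped" covers memory-mapped files.
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```

If the "direct" pool's total capacity keeps growing while heap usage stays flat, that points at direct buffers rather than the Java heap; `-XX:MaxDirectMemorySize` puts a hard cap on how much the JVM may allocate there.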