Flume user mailing list: OutOfMemory


http://blogs.opcodesolutions.com/roller/java/entry/solve_java_lang_outofmemoryerror_java

https://blogs.oracle.com/alanb/entry/heap_dumps_are_back_with
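
Those posts cover capturing a heap dump at the moment the OOM happens. A minimal
sketch of the JVM options involved, assuming the agent picks them up from
JAVA_OPTS in conf/flume-env.sh (the dump path below is only an example):

    # conf/flume-env.sh (dump path is only an example)
    JAVA_OPTS="-Xms2g -Xmx2g \
      -XX:+HeapDumpOnOutOfMemoryError \
      -XX:HeapDumpPath=/var/tmp/flume-agent.hprof"

The resulting .hprof file can then be loaded into a memory analyzer to see which
objects are holding on to the heap.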

Or use a profiler such as VisualVM. There are tons of tools you can use to
debug memory problems.
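
For reference, the settings asked about below look roughly like this in the
agent's properties file. This is only a sketch: the agent, channel, and sink
names are placeholders, and the values are the ones mentioned in this thread.

    # agent.properties (agent1, ch1, hdfs1 are placeholder names)
    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 10000
    agent1.channels.ch1.transactionCapacity = 500
    agent1.sinks.hdfs1.type = hdfs
    agent1.sinks.hdfs1.channel = ch1
    # hdfs.batchSize defaults to 100 events per flush when not set
    agent1.sinks.hdfs1.hdfs.batchSize = 100

The HDFS sink takes up to hdfs.batchSize events from the channel inside a single
transaction, so the batch size has to fit within the channel's transactionCapacity.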

On Wed, Jan 16, 2013 at 5:15 PM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:

> The channel transaction capacity is 500 and I've not set any batchSize parameter.
>
>
> On Wed, Jan 16, 2013 at 1:49 PM, Bhaskar V. Karambelkar <
> [EMAIL PROTECTED]> wrote:
>
>> What are the channel transaction capacity and the HDFS batch size?
>>
>>
>> On Wed, Jan 16, 2013 at 1:52 PM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
>>
>>> I often get OutOfMemoryErrors even when there is no load on the system. I'm
>>> wondering what's the best way to debug this. I have the heap size set to 2G
>>> and the memory channel capacity is 10000.
>>>
>>>
>>> 13/01/16 09:09:38 ERROR hdfs.HDFSEventSink: process failed
>>> java.lang.OutOfMemoryError: Java heap space
>>>     at java.util.Arrays.copyOf(Arrays.java:2786)
>>>     at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>>>     at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>>     at org.apache.hadoop.io.Text.write(Text.java:282)
>>>     ... 11 lines omitted ...
>>>     at java.lang.Thread.run(Thread.java:662)
>>>
>>> Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: Java heap space
>>>     at java.util.Arrays.copyOf(Arrays.java:2786)
>>>     at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>>>
>>>
>>
>