Memory leak or mis-configured grid/Ignite cluster?

Dabbo

Memory leak or mis-configured grid/Ignite cluster?

GridGain v6.5.5
Ignite v1.0.6

I have observed a potential memory leak problem in GridGain and later in Ignite (Migrated over to Ignite in the hopes to resolve the issue, but failed).

Our web server application creates an IgniteCompute task that, once complete, returns a byte[] representing zipped documents of around 430 MB. The future listener attached to the task processes the result successfully and everything appears normal.

After running this same task 2-3 times, the worker node (cluster) runs out of heap space. I did my best to hunt down where this memory leak occurs and to ensure all objects are dereferenced within my code. However, it appears there is a byte[] object inside Ignite/GridGain that grows with each task processed.

YourKit profiler hinted:
org.apache.ignite.marshaller.optimized.OptimizedObjectStreamRegistry$StreamHolder ->
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream ->
org.apache.ignite.internal.util.io.GridUnsafeDataOutput -> byte[]

This was discovered by profiling the worker node/cluster that performs the task and returns the result. I guess it's some sort of cached result? I can't work out how to flush it, or whether it's a problem with the way I have integrated Ignite into my application.

I have a memory snapshot available if required.

I'm not too sure what information you would need to start looking into this. If you let me know, I'll do my utmost to gather the details you require to determine the cause of the heap usage.

Kind regards,

Darren.
vkulichenko

Re: Memory leak or mis-configured grid/Ignite cluster?

Darren,

This is not a memory leak. Ignite's OptimizedMarshaller internally maintains a set of thread-local byte arrays which are used to serialize objects. This improves performance because we don't have to allocate these arrays on each serialization, but it does consume memory.

You ran out of memory because the task result is very large. Most likely each execution happened in a different thread, and each of these threads allocated its own byte array (each at least 430 MB).
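The mechanism can be illustrated with a plain-Java sketch (a simplification for clarity, not Ignite's actual implementation): each thread lazily gets its own buffer, which grows to fit the largest payload that thread has ever serialized and is then retained for the thread's lifetime.

```java
// Simplified illustration of per-thread serialization buffers
// (NOT Ignite's actual code): each thread keeps its own byte[]
// that grows to fit the largest payload it has ever written.
public class ThreadLocalBuffers {
    private static final ThreadLocal<byte[]> BUF =
        ThreadLocal.withInitial(() -> new byte[1024]);

    // Returns this thread's buffer, growing it if the payload is larger.
    static byte[] bufferFor(int payloadSize) {
        byte[] buf = BUF.get();
        if (buf.length < payloadSize) {
            buf = new byte[payloadSize];   // grown buffer is retained...
            BUF.set(buf);                  // ...for the thread's lifetime
        }
        return buf;
    }

    public static void main(String[] args) throws Exception {
        // Two threads each serialize one large result (scaled down here):
        // each thread ends up holding its own large buffer, so memory
        // use is roughly (number of threads) x (largest result).
        Runnable task = () -> {
            byte[] buf = bufferFor(430 * 1024); // 430 KB stand-in for 430 MB
            System.out.println(Thread.currentThread().getName()
                + " buffer length: " + buf.length);
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

With a 430 MB result handled by 2-3 different pool threads, this pattern alone accounts for the observed heap exhaustion.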

This can be resolved by configuring stream pooling in the marshaller:

<property name="marshaller">
    <bean class="org.apache.ignite.marshaller.optimized.OptimizedMarshaller">
        <property name="poolSize" value="2"/>
    </bean>
</property>

This will limit the number of allocated buffers to the provided value. Note, however, that if you execute tasks in parallel, this can cause performance degradation because different threads will contend for the same buffers when serializing responses.
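The tradeoff behind pooling can be sketched in plain Java (again an illustration, not Ignite's internals): a bounded pool caps how many buffers exist at once, and a thread that cannot get one blocks until another thread releases a buffer.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustration of bounded buffer pooling (simplified, NOT Ignite's
// code): at most poolSize buffers exist; additional threads block
// until one is released, trading memory for possible contention.
public class BufferPool {
    private final BlockingQueue<byte[]> pool;

    BufferPool(int poolSize, int bufSize) {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++)
            pool.add(new byte[bufSize]);
    }

    // Blocks when all buffers are checked out by other threads.
    byte[] acquire() {
        try {
            return pool.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    void release(byte[] buf) {
        pool.offer(buf);   // return the buffer for reuse
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(2, 16);
        byte[] a = pool.acquire();
        byte[] b = pool.acquire(); // pool empty; a 3rd acquire would block
        pool.release(a);
        System.out.println(pool.acquire() == a); // true: buffer is reused
    }
}
```

This is why poolSize=2 bounds memory at roughly two large buffers, but serializing more than two large responses concurrently will serialize behind the pool.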

-Val
Dabbo

Re: Memory leak or mis-configured grid/Ignite cluster?

Hi Val,

Thanks for the explanation, it helps me out a lot. I'll take this into consideration and see if I can allow my application to manage the tasks with this in mind.

The 430 MB result was from worst-case testing. Generally I would expect 1-30 MB results in a typical setting. Would the buffer be reused per thread and not grow if the results are smaller (at least not grow per thread), or will it continue to grow regardless of result size?

Many thanks for your help,

Darren.

vkulichenko

Re: Memory leak or mis-configured grid/Ignite cluster?

Darren,

The buffer size will be approximately the same as that of the largest object serialized in the current thread. If the thread serialized a very large object once and afterwards serializes smaller objects, the buffer will eventually be shrunk to the smaller size.
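A shrink policy like the one described might look as follows (this is a hedged illustration with an assumed heuristic, halving after several consecutive small writes; Ignite's actual shrink strategy may differ):

```java
// Illustrative shrink policy (an ASSUMED heuristic, not Ignite's
// actual one): if several consecutive writes use less than half the
// buffer, halve it so one huge result doesn't pin memory forever.
public class ShrinkingBuffer {
    private byte[] buf = new byte[1024];
    private int underused;              // consecutive small writes

    byte[] bufferFor(int payloadSize) {
        if (buf.length < payloadSize) {
            buf = new byte[payloadSize];       // grow to fit
            underused = 0;
        } else if (payloadSize < buf.length / 2) {
            if (++underused >= 3) {            // 3 small writes in a row
                buf = new byte[Math.max(1024, buf.length / 2)];
                underused = 0;
            }
        } else {
            underused = 0;
        }
        return buf;
    }

    public static void main(String[] args) {
        ShrinkingBuffer b = new ShrinkingBuffer();
        System.out.println(b.bufferFor(100_000).length); // grows to 100000
        b.bufferFor(100);
        b.bufferFor(100);
        System.out.println(b.bufferFor(100).length);     // shrunk to 50000
    }
}
```

The net effect matches Val's description: a one-off 430 MB result inflates the buffer temporarily, and a steady stream of 1-30 MB results lets it settle back down.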

-Val
Dabbo

Re: Memory leak or mis-configured grid/Ignite cluster?

Hi Val,

Perfect, I think I can work with that.

Thanks for all your help!

Darren.