10X decrease in performance with Ignite 2.0.0

Chris Berry
10X decrease in performance with Ignite 2.0.0

Hello,

We are currently migrating a high volume/low latency system to use Ignite.
And we were excited by the claims of Ignite 2.0 to have great performance improvements.
So we did a performance test of it last night.

Unfortunately, we saw a 10X DECREASE in performance over 1.9.
This is using the exact same code. And running the 2 tests (1.9 vs 2.0) back to back (in AWS).
 

Our test system is relatively simple. It is a 10 Node Compute Grid. Hit from 5 load generators running in the same AWS Region.
We rely heavily on cache affinity  -- wherein we use 4 Partitioned caches (each w/ 2 backups) – all using the same cache Key (a UUID).

We use a simple ComputeTask – mapping jobs (UUIDs) out to the grid – and then collecting them after.

The ComputeJob then does all of its lookups using localPeek (to ensure we stay on-box).
The system is almost all Reads.
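For reference, that kind of affinity-colocated lookup can be sketched as follows (the cache name and value type are illustrative, not our actual domain code):

```java
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

public class LocalLookupSketch {
    // Inside a ComputeJob that was mapped to the node owning this key,
    // localPeek reads the local primary copy without any network hop.
    static Object lookup(Ignite ignite, UUID key) {
        IgniteCache<UUID, Object> cache = ignite.cache("ACache"); // illustrative name
        return cache.localPeek(key, CachePeekMode.PRIMARY);
    }
}
```
This is an API sketch only; it needs a running Ignite node to execute.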

This system – under high load – computing in batches of 200 UUIDs – was responding to our tests (in 1.9.0) at 53ms Mean with 1370 batches/sec
In 2.0.0 – we are getting a 574ms Mean with 134 batches/sec

Clearly we are missing a tuning parameter w/ 2.0.0??

BTW: On the positive side, I do see significantly less Heap usage with 2.0.0. 

I realize that I am being a bit vague on code specifics.

But a lot of that needs to be “expunged” before I can post it to the public internet.

Although I can provide whatever is necessary, I hope…

Thanks,
-- Chris


yakov

Re: 10X decrease in performance with Ignite 2.0.0

Chris, that's very surprising. Let's get to the bottom of it.

1. You have 10 nodes with 4 partitioned caches each configured with 2 backups. There are no updates to the caches. Correct?
2. Are caches atomic or transactional?
3. 5 nodes send compute jobs to the topology of 10 nodes. Correct?
4. How many jobs does each task produce?
5. How many lookups does each job do?
6. Will there be any difference between 1.9 and 2.0 if you send the same number of empty jobs?
7. If job does not do any local processing and just returns result of cache.localPeek() then it would be fine to replace tasks with cache.getAll().
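A hedged sketch of that replacement (cache name and value type are illustrative):

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import org.apache.ignite.Ignite;

public class GetAllSketch {
    // If the job only returns localPeek() results and does no local
    // processing, a single batched read does the same work without
    // the compute machinery:
    static Map<UUID, Object> fetch(Ignite ignite, Set<UUID> keys) {
        return ignite.<UUID, Object>cache("ACache").getAll(keys);
    }
}
```
This is an API sketch only; it needs a running Ignite node to execute.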

Thanks!

--Yakov

2017-05-11 17:45 GMT+03:00 Chris Berry <[hidden email]>:



Chris Berry

Re: 10X decrease in performance with Ignite 2.0.0


Thank you so much for responding

1. You have 10 nodes with 4 partitioned caches each configured with 2 backups. There are no updates to the caches. Correct?
Not really, no.

2. Are caches atomic or transactional?
Atomic

3. 5 nodes send compute jobs to the topology of 10 nodes. Correct?
No -- the 5 nodes generate load by hitting a URL on the Ignite Nodes thru a SLB.
This request contains 200 UUIDs to compute against.
This is the exact same code used in 1.9

4. How many jobs does each task produce?
On average 10 -- it depends on how the 200 UUIDs map to the Grid. But the spread is pretty uniform.

5. How many lookups does each job do?
Several -- it depends on the code path. But at a minimum 5, and these should all be local to that Node -- so in-memory lookups.

6. Will there be any difference between 1.9 and 2.0 if you send the same number of empty jobs?
None. Exact same code.

7. If job does not do any local processing and just returns result of cache.localPeek() then it would be fine to replace tasks with cache.getAll().
It is all local processing. And a lot of it. CPU sits around 80-90%.
The compute data is not small, so we colocate the compute w/ the data.
NOTE: the results of the compute are small -- a few hundred bytes.

yakov

Re: 10X decrease in performance with Ignite 2.0.0

How many values are in your cache?

SLB? What is it?

>Will there be any difference between 1.9 and 2.0 if you send the same
>number of empty jobs?
>None. Exact same code.
I want you to check if compute engine became slower in your deployment. If you comment out all lookups inside the job and run the code against 1.9 and 2.0 clusters would they show the same results? 

>It all local processing. And a lot of it. CPU sits around 80-90%.
>The compute data is not small, so we colocate the compute w/ the data.
>NOTE: the Results of the compute is small. a few 100 bytes.

Agree, colocated jobs are the only choice here.

Btw, what instance types do you use?

--Yakov

yakov

Re: 10X decrease in performance with Ignite 2.0.0

Sergi Vladykin, do you think the getAll() operation on a B-Tree will help here? Now getAll() is a sequence of constant-time lookups. The situation is different for tree structures. So, can we traverse the tree only once, given we have keys belonging to a single partition?

--Yakov

2017-05-12 0:48 GMT+03:00 Yakov Zhdanov <[hidden email]>:


yakov

Re: 10X decrease in performance with Ignite 2.0.0

Cross-posting to devlist.

--Yakov

Sergi Vladykin

Re: 10X decrease in performance with Ignite 2.0.0

According to our benchmarks Ignite 2.0 is not slower for get operation. I think we need some minimal reproducer that shows the performance degradation before making any conclusions.

Sergi

2017-05-12 1:10 GMT+03:00 Yakov Zhdanov <[hidden email]>:

Yakov Zhdanov

Re: 10X decrease in performance with Ignite 2.0.0

Absolutely agree here. I think we can add a getAll() benchmark and run it with batch sizes of 5 and 10.

Thanks!
--
Yakov Zhdanov, Director R&D
GridGain Systems

2017-05-12 10:48 GMT+03:00 Sergi Vladykin <[hidden email]>:


alexey.goncharuk

Re: 10X decrease in performance with Ignite 2.0.0

In reply to this post by Chris Berry
Hi Chris,

One of the most significant changes made in 2.0 was moving to an off-heap storage by default. This means that each time you do a get(), your value gets deserialized, which might be an overhead (though, I would be a bit surprised if this causes the 10x drop).

Can you try setting CacheConfiguration#setOnheapCacheEnabled(true) and check if performance gets back? 
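A sketch of that configuration change (cache name and value type are illustrative):

```java
import java.util.UUID;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapConfigSketch {
    static CacheConfiguration<UUID, Object> cacheConfig() {
        CacheConfiguration<UUID, Object> ccfg = new CacheConfiguration<>("ACache"); // illustrative name
        ccfg.setBackups(2); // matches the 2-backup setup described above
        ccfg.setOnheapCacheEnabled(true); // keep deserialized values on heap in front of page memory
        return ccfg;
    }
}
```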

2017-05-11 17:45 GMT+03:00 Chris Berry <[hidden email]>:




yakov

Re: 10X decrease in performance with Ignite 2.0.0

I think it will also be useful to switch to offheap tiered (cacheConfig.setMemoryMode()) in 1.9 and compare results again.
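A sketch of that 1.9-side change, so both versions use comparable off-heap storage (note CacheMemoryMode was removed in 2.0, so this applies to 1.9 only):

```java
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffheapTiered19Sketch {
    // Ignite 1.9 only: store entries off-heap so the 1.9 run is
    // an apples-to-apples comparison with 2.0's off-heap page memory.
    static void configure(CacheConfiguration<?, ?> ccfg) {
        ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
    }
}
```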

--Yakov

2017-05-12 11:30 GMT+03:00 Alexey Goncharuk <[hidden email]>:




Chris Berry

Re: 10X decrease in performance with Ignite 2.0.0

Yakov,
The entire reason we use the Compute Grid is because the data employed to do the compute is large (~0.25MB) and we compute 200 at once (spread across the grid -- so ~20 per Node w/ 10 Nodes).
So I do not doubt that the cost of moving the data out of off-heap memory could be large.
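The spread described above (200 keys landing roughly 20 per node across 10 nodes) can be illustrated with a hypothetical hash-based grouping -- this is a stand-in for Ignite's actual affinity function, for illustration only:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class SpreadDemo {
    // Hypothetical stand-in for affinity(...).mapKeysToNodes():
    // group keys by a hash-based "node" assignment.
    static Map<Integer, List<UUID>> groupByNode(Collection<UUID> keys, int nodes) {
        Map<Integer, List<UUID>> map = new HashMap<>();
        for (UUID k : keys)
            map.computeIfAbsent(Math.floorMod(k.hashCode(), nodes),
                                n -> new ArrayList<>()).add(k);
        return map;
    }

    public static void main(String[] args) {
        List<UUID> batch = new ArrayList<>();
        for (int i = 0; i < 200; i++)
            batch.add(UUID.randomUUID());
        Map<Integer, List<UUID>> byNode = groupByNode(batch, 10);
        // Every key maps to exactly one node, so the sizes sum to 200.
        System.out.println(byNode.values().stream().mapToInt(List::size).sum()); // prints 200
    }
}
```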

Is there a way to try 2.0.0 using all in-memory??

I will try 1.9.0 w/ off-heap, but it will have to wait until next week. I am away-from-keyboard this weekend.

Thanks,
-- Chris

Chris Berry

Re: 10X decrease in performance with Ignite 2.0.0

Hi,

I hope this helps.

This is the flow. It is very simple.
Although the code inside the ComputeJob (executor.compute(request, algType, correlationId)) is relatively complex application code.

I hope this code makes sense.
I had to take the actual code and expunge all of the actual Domain bits from it…

But as far as Ignite is concerned, it is mostly boilerplate.

Thanks,
-- Chris

=====================================
Invoke:

    private List<AResponse> executeTaskOnGrid(AComputeTask<ARequest, AResponse> computeTask,  List<UUID> uuids) {
             return managedIgnite.getCompute().withTimeout(timeout).execute(computeTask, uuids);
    }

=======================================
ComputeTask:

public class AComputeTask<TRequest extends ARequest, TResponse>
        extends ComputeTaskAdapter<Collection<UUID>, List<TResponse>> {

    private final AExecutorType type;
    private final TRequest rootARequest;
    private final AlgorithmType algType;
    private final String correlationId;
    private IgniteCacheName cacheName;

    @IgniteInstanceResource
    private Ignite ignite;

    public AComputeTask(AExecutorType type, TRequest request,  AlgorithmType algType,  String correlationId) {
        this.cacheName = IgniteCacheName.ACache;
        this.type = type;
        this.rootARequest = request;
        this.algType = algType;
        this.correlationId = correlationId;
    }

    @Nullable
    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, @Nullable Collection<UUID> cacheKeys)
            throws IgniteException {
        Map<ClusterNode, Collection<UUID>> nodeToKeysMap = ignite.<UUID>affinity(cacheName.name()).mapKeysToNodes(cacheKeys);
        Map<ComputeJob, ClusterNode> jobMap = new HashMap<>();
        for (Map.Entry<ClusterNode, Collection<UUID>> mapping : nodeToKeysMap.entrySet()) {
            ClusterNode node = mapping.getKey();
            final Collection<UUID> mappedKeys = mapping.getValue();

            if (node != null) {
                ComputeBatchContext context = new ComputeBatchContext(node.id(), node.consistentId(), correlationId);
                Map<AlgorithmType, UUID[]> nodeRequestUUIDMap = Collections.singletonMap(algType, convertToArray(mapping.getValue()));
                ARequest nodeARequest = new ARequest(rootARequest, nodeRequestUUIDMap);
                AComputeJob job = new AComputeJob(type, nodeARequest, algType, context);
                jobMap.put(job, node);
            }
        }
        return jobMap;
    }

    private UUID[] convertToArray(Collection<UUID> cacheKeys) {
        return cacheKeys.toArray(new UUID[cacheKeys.size()]);
    }

    @Nullable
    @Override
    public List<TResponse> reduce(List<ComputeJobResult> results) throws IgniteException {
        List<TResponse> responses = new ArrayList<>();
        for (ComputeJobResult res : results) {
            if (res.getException() != null) {
                ARequest  request = ((AComputeJob) res.getJob()).getARequest();

                // The entire result failed. So return all as errors
                AExecutor<TRequest, TResponse> executor = AExecutorFactory.getAExecutor(type);
                List<UUID> unitUuids = Lists.newArrayList(request.getMappedUUIDs().get(algType));
                List<TResponse> errorResponses = executor.createErrorResponses(unitUuids.stream(), ErrorCode.UnhandledException);
                responses.addAll(errorResponses);
            } else {
                List<TResponse> perNode = res.getData();
                responses.addAll(perNode);
            }
        }
        return responses;
    }
}

==================================
ComputeJob

public class AComputeJob<TRequest extends ARequest, TResponse> extends ComputeJobAdapter {
    @Getter
    private final ExecutorType executorType;
    @Getter
    private final TRequest request;
    @Getter
    private final AlgorithmType algType;
    @Getter
    private final String correlationId;
    @Getter
    private final ComputeBatchContext context;

    @IgniteInstanceResource
    private Ignite ignite;
    @JobContextResource
    private ComputeJobContext jobContext;

    public AComputeJob(ExecutorType executorType, TRequest request, AlgorithmType algType, ComputeBatchContext context) {
        this.executorType = executorType;
        this.request = request;
        this.algType = algType;
        this.correlationId = context.getCorrelationId();
        this.context = context;
    }
   
    @Override
    public Object execute() throws IgniteException {
        Executor<TRequest, TResponse> executor = ExecutorFactory.getExecutor(executorType);
        return executor.compute(request, algType, correlationId);
    }

    @Override
    public void cancel() {
        //Indicates that the cluster wants us to cooperatively cancel the job
        //Since we expect these to run quickly, not going to actually do anything with this right now
    }
}








dsetrakyan

Re: 10X decrease in performance with Ignite 2.0.0

Chris, 

After looking at your code, the only slow down that may have occurred between 1.9 and 2.0 is the actual cache "get(...)" operation. As you may already know, Ignite 2.0 has moved data off-heap completely, so we do not cache data in the deserialized form any more, by default. However, you can still enable on-heap cache, in which case the data will be cached the same way as in 1.9.

What is the average size of the object you store in cache? If it is large, then you have 2 options:
 
1. Do not deserialize your objects into classes and work directly with BinaryObject interface.
2. Turn on on-heap cache.
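Option 1 might look like the sketch below (cache and field names are illustrative); withKeepBinary() returns a cache view whose values stay in binary form, and BinaryObject.field() reads a single field without deserializing the whole ~0.25MB object:

```java
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CachePeekMode;

public class KeepBinarySketch {
    static Object readOneField(Ignite ignite, UUID key) {
        // Values come back as BinaryObject instead of deserialized classes.
        IgniteCache<UUID, BinaryObject> bin = ignite.cache("ACache").withKeepBinary();
        BinaryObject bo = bin.localPeek(key, CachePeekMode.PRIMARY);
        return bo == null ? null : bo.field("someField"); // illustrative field name
    }
}
```
This is an API sketch only; it needs a running Ignite node to execute.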

Will this work for you?

D.

On Fri, May 12, 2017 at 6:53 AM, Chris Berry <[hidden email]> wrote:

Chris Berry

Re: 10X decrease in performance with Ignite 2.0.0

Thank you.
I will try on Monday (-ish)

Yes, the objects are large. (average ~0.25MB)

Could you please tell me the magic config I will need to try both these options?
If not, I will do my homework.

Thank you again,
-- Chris

Denis Magda

Re: 10X decrease in performance with Ignite 2.0.0

Chris,

These are some links for reference:

1. BinaryObject and BinaryObjectBuilder interfaces usage:
        https://apacheignite.readme.io/docs/binary-marshaller#section-binaryobject-cache-api
        https://apacheignite.readme.io/docs/binary-marshaller#section-modifying-binary-objects-using-binaryobjectbuilder

2. Page memory on-heap caching: https://apacheignite.readme.io/docs/page-memory#on-heap-caching


Denis


yakov

Re: 10X decrease in performance with Ignite 2.0.0

Chris, any news?

--Yakov

2017-05-13 1:05 GMT+03:00 Denis Magda <[hidden email]>:

Chris Berry

Re: 10X decrease in performance with Ignite 2.0.0

Hi Yakov,
I was able to try these suggestions yesterday.
2.0.0 is now only a 19% decrease in performance -- versus the original 1000+%

This is it in more detail:


I do not truly understand the ramifications of using the BinaryMarshaller.
In fact, we tried the BinaryMarshaller before (in 1.8) and got a lot of OOMs
Is there any doc that explains more succinctly what it means to use the BinaryMarshaller?

In the end, I made only 3 substantive changes:
* cacheConfig.setOnheapCacheEnabled(true)
* cache.withKeepBinary();
* igniteConfig.setMarshaller(new BinaryMarshaller());

Thank you for your assistance.
Please, if you can see room for further improvement, let me know.

Thanks,
-- Chris