Hi, we are moving away from a legacy (in-house) distributed cache solution to
Ignite, and are doing some profiling of the new app vs. the old app before we go live. One thing we have noticed is that the object allocation rate is higher in the version of the app running with the Ignite client (about 6 KB/sec). We'd like to understand what might be causing this, and whether there are any features in our client app we can turn off if they are not needed. Some basic poking around has shown we get a lot of TcpDiscoveryMetricsUpdateMessage instances being created, as well as what look like other discovery SPI related objects. Are these needed on the client side? Is there documentation somewhere on what sorts of things are created on the client side (a basic app that has issued a single continuous query, running in client mode), what they are needed for, and whether there are any parameters to control them?

Thanks,
Sham

-- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,

There is no documentation per se stating whether or how many objects are allocated in memory, as this varies between builds/versions. That said, you can control the load/throughput/memory/enabled feature set, and therefore manage object allocation indirectly. For example, you have the option of turning metrics on or off for specific caches or data regions. See:
https://apacheignite.readme.io/docs/cache-metrics#enabling-cache-metrics
and:
https://apacheignite.readme.io/docs/memory-metrics
This would change the number/frequency of the metrics-related discovery messages you mentioned.

There are also a variety of optimizations you could make so that Ignite performs faster. See:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
https://apacheignite.readme.io/docs/durable-memory-tuning
https://apacheignite.readme.io/docs/performance-tips
https://apacheignite.readme.io/docs/preparing-for-production
Here you are able to set the various memory- and persistence-related parameters used by Ignite, and thereby tune your app to your use case.

If you are concerned with object allocation, the best advice is to look into the high-performance garbage collectors available, like Shenandoah (https://wiki.openjdk.java.net/display/shenandoah/Main) and Azul. See this blog post describing how Ignite and the Azul Zing JVM are used together to power low-latency use cases: https://www.azul.com/igniting-in-memory-performance-with-gridgain-and-zing/

Thanks,
Alex
In reply to this post by akorensh
This doesn't seem to help unfortunately.
Re-examining the allocation stats, it seems the app is actually allocating around 1.5 MB per second with Ignite (vs. only 0.15 MB per second without Ignite in the app). I've read about past issues with IGNITE_EXCHANGE_HISTORY_SIZE causing a lot of allocations, but thought this had been fixed prior to 2.8 (we are on 2.8.1). Is there anything else we can tweak around this? Cache metrics etc. are off in both the server and client config. What other kinds of objects might be being created at this rate? We can change the GC settings, which we have done appropriately for our app, but we'd like to understand what is being created and why, rather than change our GC settings to work around this. Thanks
ilya.kasnacheev
Hello! Can you please run your app with JFR configured to record object allocation, to see where it actually happens, and share some results? Thanks,

-- Ilya Kasnacheev

Fri, 23 Oct 2020 at 17:40, ssansoy <[hidden email]>:
> This doesn't seem to help unfortunately.
Hi, here's an example (using YourKit rather than JFR).
Apologies, I had to obfuscate some of the company-specific information. This shows a window of about 10 seconds of allocations: <http://apache-ignite-users.70518.x6.nabble.com/file/t2797/MetricsUpdated.png>

These look like they come from GridDiscoveryManager, creating a new string every time. This happens several times per second, it seems. Some of these mention other client nodes, so some other production app in our firm that uses the cluster has an impact on a different production app. Is there any way to turn this off? Each of our clients needs to be isolated such that other client apps do not interfere in any way.

Also: <http://apache-ignite-users.70518.x6.nabble.com/file/t2797/TcpDiscoveryClientMetricsUpdateMessage.png>
These update messages seem to come in even though metricsEnabled is turned off on the client (not specified).
ilya.kasnacheev
Hello! I guess that you have the EVT_NODE_METRICS_UPDATED event enabled on client nodes (but maybe not on server nodes). It will indeed produce a lot of garbage, so I recommend disabling the recording of this event by calling ignite.events().disableLocal(EVT_NODE_METRICS_UPDATED);

+ dev@ Why do we record EVT_NODE_METRICS_UPDATED by default? Sounds like a bad idea, yet we enable recording of all internal events in GridEventStorageManager.

-- Ilya Kasnacheev

Mon, 26 Oct 2020 at 19:37, ssansoy <[hidden email]>:
> Hi, here's an example (using YourKit rather than JFR).
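A minimal sketch of the suggested call in context (the config file name is hypothetical; substitute your own client config):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import static org.apache.ignite.events.EventType.EVT_NODE_METRICS_UPDATED;

public class DisableMetricsEvent {
    public static void main(String[] args) {
        // Start (or obtain) the client node as usual.
        try (Ignite ignite = Ignition.start("client-config.xml")) { // hypothetical path
            // Stop recording metrics-update events on this node only.
            ignite.events().disableLocal(EVT_NODE_METRICS_UPDATED);

            // ... issue continuous queries etc. as before ...
        }
    }
}
```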
That's great! It seems to eliminate a lot of the traffic. Are there any other
optimizations like this you know of that we can make, to reduce this any further? We also have the issue documented in this thread: http://apache-ignite-users.70518.x6.nabble.com/Large-Heap-with-lots-of-BinaryMetaDataHolders-td34359.html I will dig out the config at some point, but is there anything obvious you can think of that would cause this sort of behaviour (e.g. retaining large binary metadata on the client node for all caches; we want to turn this off if it isn't required for the particular client)?

Also, another undesirable observation:
1. Client app A issues a continuous query with a remote filter and transformer to the cluster.
2. When client app B starts, it seems to deserialize and log client app A's remote filter and transformer within its own VM.
Why would client app A's filter and transformer end up on client app B? Can we turn this off so client app B is completely isolated from client app A?
Apologies, I may have spoken too soon (I was looking at the wrong process).
It looks like we can't turn EVT_NODE_METRICS_UPDATED off, as it is designated an internal event. GridEventStorageManager.disableEvents, line 441 (Ignite 2.8.1), checks whether the event being disabled is part of EVTS_DISCOVERY_ALL, which it is, so it isn't set to false... Are there any workarounds for this?
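For illustration, the guard described above amounts to something like the following. This is a paraphrased sketch, not the actual Ignite source; the event type constants are illustrative placeholders apart from 13, which is EVT_NODE_METRICS_UPDATED in org.apache.ignite.events.EventType:

```java
public class DisableEventsSketch {
    // Paraphrased sketch of the check in GridEventStorageManager.disableEvents
    // (Ignite 2.8.1): events in EVTS_DISCOVERY_ALL are treated as internal
    // and cannot be switched off via disableLocal().
    static boolean canDisable(int type, int[] discoveryEvents) {
        for (int internal : discoveryEvents)
            if (internal == type)
                return false; // internal discovery event: recording stays on
        return true;
    }

    public static void main(String[] args) {
        // Illustrative stand-in for EVTS_DISCOVERY_ALL, which includes
        // EVT_NODE_METRICS_UPDATED (13), so the disable request is ignored.
        int[] evtsDiscoveryAll = {10, 11, 12, 13, 14};
        System.out.println(canDisable(13, evtsDiscoveryAll)); // prints false
    }
}
```

This is why the disableLocal() call appears to be a silent no-op for this particular event.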
ilya.kasnacheev
Hello! Okay, that's not very cool. I hope to get some response from the development side at this point. Failing that, I will file a ticket. Regards,

-- Ilya Kasnacheev

Mon, 2 Nov 2020 at 15:03, ssansoy <[hidden email]>:
> Apologies, I may have spoken too soon (I was looking at the wrong process).
Thanks, please do keep us posted, perhaps with a ticket number or something
we can track, as this is currently a blocker for putting Ignite into production.
Hi, was there any update on this? Thanks!
ilya.kasnacheev
Hello! I'm actually not convinced that 6 KB/sec is a lot. Metrics update messages are passed between nodes to calculate cluster-wide cache metrics. Have you tried turning them off by setting IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE=true, in the form of a system property or environment variable? Regards,

-- Ilya Kasnacheev

Thu, 5 Nov 2020 at 16:03, ssansoy <[hidden email]>:
> Hi, was there any update on this? Thanks!
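A minimal sketch of setting that flag as a system property. The Ignition.start call is left commented out (the config file name would be your own); the key point is that the property must be set before any Ignite node starts in the JVM:

```java
public class DisableDiscoveryMetrics {
    public static void main(String[] args) {
        // Must be set before Ignition.start() runs anywhere in this JVM.
        // Equivalently, pass -DIGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE=true
        // on the java command line, or export it as an environment variable.
        System.setProperty("IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE", "true");

        // Ignite ignite = Ignition.start("client-config.xml"); // start as usual

        System.out.println(
            System.getProperty("IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE"));
    }
}
```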
Hi, 6 KB/sec was a misreading on my part: it's more like 1 MB+/sec!
Also, according to the heap dump they aren't cache statistics messages, but
rather TcpDiscoveryClientMetricsUpdateMessage instances.
ilya.kasnacheev
Hello! Yes, it's not cache statistics but node statistics (org.apache.ignite.cluster.ClusterMetrics). Regards,

-- Ilya Kasnacheev

Mon, 9 Nov 2020 at 21:09, ssansoy <[hidden email]>:
> Also, according to the heap dump they aren't cache statistics messages, but rather TcpDiscoveryClientMetricsUpdateMessage instances.
Yep, they're the ones we'd like to turn off... is that possible with
IGNITE_DISCOVERY_DISABLE_CACHE_METRICS_UPDATE=true? It doesn't seem to have an effect.
ilya.kasnacheev
Hello! Good question. Did anything at all change after you set it? I'm not sure why the message is so large in your case; it's tens of KB. Regards,

-- Ilya Kasnacheev

Tue, 10 Nov 2020 at 20:27, ssansoy <[hidden email]>:
> Yep, they're the ones we'd like to turn off...
In reply to this post by ilya.kasnacheev
Hi, did you get a response for this, out of interest? Also, is there a ticket we
can follow? We really need to understand this, and ideally turn it off.