Hi, we are doing concurrent updates to Ignite.

We have 2 nodes on the same machine. Our use case is order entry, and it should be as fast as possible. We rechecked and saw the following hotspots when we analyzed with Flight Recorder:

- GridCacheMapEntry.obsolete, which results in lots of contention;
- GridShmemCommunicationClient.sendMessage, which is called by the CommunicationSpi;
- GridUnsafeMapSegment, which also has some extremely hot methods like advance.

Is there any way to reduce this? Or is it necessary to allocate off-heap memory? We have PRIMARY_SYNC enabled, and all our transactional caches are partitioned.
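For reference, a minimal sketch of the kind of cache configuration being described, assuming the Ignite 1.x API that was current when this thread was written; the cache name, backup count, and off-heap limit are illustrative assumptions, not values from the thread:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OrderCacheConfig {
    public static void main(String[] args) {
        // "orders" is a hypothetical cache name used for illustration.
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("orders");
        cfg.setCacheMode(CacheMode.PARTITIONED);                // partitioned, as in the thread
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // transactional caches
        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
        cfg.setBackups(1);                                      // illustrative backup count
        // Ignite 1.x off-heap mode: keeps entries off the Java heap to cut GC pressure.
        cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
        cfg.setOffHeapMaxMemory(2L * 1024 * 1024 * 1024);       // 2 GB, illustrative

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cfg);
        }
    }
}
```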
We are getting 4000 TPS with 10 caches on a c4.4xlarge AWS instance, with two nodes on the same machine. Each transaction updates around 8 partitioned caches.

We have given it a large heap, but we still see cache updates get slower as static data accumulates. Is there any workaround for this? Also, are there any ways to solve the hotspots or lessen their impact?
Hello,

Can you please clarify your transaction attributes? If I remember correctly it was PESSIMISTIC, REPEATABLE_READ [1]. Why do you use exactly this type of transaction? When transactions use the same key, they can block each other. You should also check network and CPU load while your application is running.

Why do you think 4000 TPS is bad performance? Transactions may be queuing up on concurrent access to the same keys. For further investigation we would need to see test cases that demonstrate the issue.

Vladislav Pyatkov
My transaction attributes now are OPTIMISTIC, READ_COMMITTED.

I have avoided transactions on the same key. Also, when there is a lot of data, Ignite seems to slow down due to cache put and get delays, and I see the CPU load increase to 90%. Is this due to resizing the cache? How can I optimize that?

Lastly, my CPU load is less than 50% while the transactions take place (4000 TPS). My access point can take in 1 million per second, so it is not a bottleneck. I am expecting more than 5000 TPS, and for transactions to scale as I keep adding nodes, since my transaction sizes themselves are pretty small.
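For clarity, this is how those attributes are passed when starting an explicit Ignite transaction. A minimal sketch; the cache name and key are hypothetical, and the cache must be TRANSACTIONAL for the transaction to apply:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class OptimisticReadCommitted {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Integer> cfg = new CacheConfiguration<>("orders"); // hypothetical name
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<Long, Integer> orders = ignite.getOrCreateCache(cfg);

            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.OPTIMISTIC, TransactionIsolation.READ_COMMITTED)) {
                Integer qty = orders.get(1L);
                orders.put(1L, qty == null ? 1 : qty + 1);
                // READ_COMMITTED: no conflict check at commit, so no optimistic exception is raised.
                tx.commit();
            }
        }
    }
}
```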
Hello,

The growth of CPU load to 90% may be caused by GC work. To determine this, you can collect and analyze the GC log [1]. You can also use Unix utilities such as dstat or top to detect the cause.

Furthermore, 50% CPU usage may be normal for your case (OPTIMISTIC, READ_COMMITTED transactions). You can try using OPTIMISTIC, SERIALIZABLE transactions instead, and retry if you catch TransactionOptimisticException [2].

Vladislav Pyatkov
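For example, on the Java 8 JVMs current at the time of this thread, GC logging can be enabled with flags along these lines (the log path is an arbitrary choice):

```
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/ignite-gc.log
```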
Denis Magda
Also, the more nodes you have, the better the performance should be, because you'll be scaling horizontally.

— Denis
In reply to this post by vdpyatkov
Does OPTIMISTIC SERIALIZABLE give better performance?
Denis Magda
Optimistic transactions should show better performance than pessimistic ones.

The main point about OPTIMISTIC READ_COMMITTED is that during a commit there is no check of whether an entry value has been modified since the first read or write access, so an optimistic exception is never raised. However, OPTIMISTIC SERIALIZABLE will detect such a situation and throw the optimistic exception, allowing you to run the transaction one more time. You can read more on optimistic modes here:

— Denis
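A minimal sketch of the retry pattern Denis describes, assuming a TRANSACTIONAL cache; the cache name, key, and retry limit are illustrative:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class SerializableRetry {
    static void incrementWithRetry(Ignite ignite, IgniteCache<Long, Integer> orders, long key) {
        for (int attempt = 0; attempt < 10; attempt++) { // illustrative retry limit
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                Integer qty = orders.get(key);
                orders.put(key, qty == null ? 1 : qty + 1);
                tx.commit(); // the conflict check happens here
                return;
            }
            catch (TransactionOptimisticException e) {
                // Another transaction changed the entry since our read; run it again from scratch.
            }
        }
        throw new IllegalStateException("Gave up after repeated optimistic conflicts");
    }
}
```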
Yes, I understand that.

In production we plan to use that. However, one question: in this case, if I use the same key in different transactions, would it impact performance, or would it have no impact?
Denis Magda
If the transactions are PESSIMISTIC/REPEATABLE_READ and both are executed in parallel, then the chances are high that one transaction will be blocked by the other, because one of them will hold a lock on the key.

In the case of OPTIMISTIC/SERIALIZABLE transactions, a TransactionOptimisticException may be generated for one of the transactions, and you will need to re-execute it from scratch. Hope this makes things clearer for you.

— Denis
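To illustrate the pessimistic case above, a hedged sketch; the cache name, key, and update logic are hypothetical, and the cache is assumed to be TRANSACTIONAL:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class PessimisticLocking {
    static void updateOrder(Ignite ignite, IgniteCache<Long, Integer> orders, long orderId) {
        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
            // In PESSIMISTIC/REPEATABLE_READ the first access to the key acquires a lock
            // that is held until commit or rollback; a parallel transaction touching the
            // same key blocks on its own first access until this lock is released.
            Integer qty = orders.get(orderId);
            orders.put(orderId, qty == null ? 1 : qty + 1);
            tx.commit();
        }
    }
}
```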
Sorry for the late reply. I understand OPTIMISTIC and PESSIMISTIC modes.

I understand OPTIMISTIC SERIALIZABLE is safer, and we do NEED to use it. With those two things out of the way, I think no mode should be faster than OPTIMISTIC READ_COMMITTED. That's what I tried to convey; sorry for the confusion.