Tuning parameters for performance of ignite Persistence store
I am trying to use a persistent store for my application. Without the persistent store I get a caching rate of about 280K msgs/sec (1 msg = 512 bytes), whereas with the persistent store enabled in Ignite 2.1.0 the throughput drops to 20K msgs/sec. So,
1. What are the tuning parameters I can try to improve persistent store efficiency? (I have tried increasing the checkpoint threads and checkpoint frequency, but performance stays the same.)
2. Also, in my setup I have 3 disks connected to the machine, but the persistent store only writes to one of them. How can I improve performance in this case by utilizing all 3 disks?
As shown above, only disk sda is used while sdb stays idle the whole time.
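For reference, the checkpoint settings I tried look roughly like this (a sketch against the Ignite 2.1 PersistentStoreConfiguration API; the values are illustrative, not my exact ones):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

// Sketch: checkpoint tuning knobs available in Ignite 2.1 (values are illustrative).
IgniteConfiguration cfg = new IgniteConfiguration();

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();
psCfg.setCheckpointingThreads(4);         // more threads writing dirty pages to disk
psCfg.setCheckpointingFrequency(60_000);  // checkpoint every 60 s instead of the 3 min default

cfg.setPersistentStoreConfiguration(psCfg);
```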
Re: Tuning parameters for performance of ignite Persistence store
You could try tuning the persistent store's reliability-vs-performance trade-off by setting walMode to one of:
- DEFAULT: every update is flushed to disk (sync). Least performant, but survives power loss.
- LOG_ONLY: OS-managed buffered output (write). Survives a process crash but not an OS crash.
- BACKGROUND: updates are queued in memory and flushed to disk every 2 seconds.
For example, to queue WAL updates in memory and flush them to disk every 5 seconds:
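A minimal sketch of that configuration, assuming the Ignite 2.1 PersistentStoreConfiguration API (in later versions these settings moved to DataStorageConfiguration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;
import org.apache.ignite.configuration.WALMode;

IgniteConfiguration cfg = new IgniteConfiguration();

// Queue WAL updates in memory (BACKGROUND mode) and flush them every 5 seconds.
PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();
psCfg.setWalMode(WALMode.BACKGROUND);
psCfg.setWalFlushFrequency(5_000); // flush interval in milliseconds (default is 2000)

cfg.setPersistentStoreConfiguration(psCfg);

Ignite ignite = Ignition.start(cfg);
ignite.active(true); // a cluster with persistence must be activated explicitly
```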
When you add persistence, the latency of each individual update gets bigger, because you update the disk as well. If you test with the same number of parallel threads, throughput will obviously go down. However, if you increase the load, i.e. execute more operations in parallel, throughput will go back up as long as you are not running out of resources.
I have tried the WAL settings you mentioned, and they improve WAL write performance. But when the checkpointing process starts (by default, after 3 minutes), the caching process slows down (almost stops). Is there any way to write checkpoints to disk in the background so that caching throughput is not affected?
Also, although I have connected 3 disks to my machine, only one is used for writing. Is there any way to use all 3 of them? The single disk "sda" is becoming the bottleneck in this case, as shown below.
Right now I am not sure why you see that caching almost stops: as I said, checkpointing is asynchronous. If you put data into a memory page that is currently being checkpointed, Ignite will copy the page on write and continue both caching and checkpointing in parallel. Thus, the only delay is copying the page, and since the page size is 2K by default (you can customize it), I do not think copying 2K takes noticeable time.
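If you want to experiment with the page size mentioned above, it is set on the memory configuration (a sketch for Ignite 2.1; the 4 KB value is just an example):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// Page size affects how much data is copied when a page is
// modified while it is being checkpointed (copy-on-write).
MemoryConfiguration memCfg = new MemoryConfiguration();
memCfg.setPageSize(4096); // bytes; the default is 2 KB in Ignite 2.1

cfg.setMemoryConfiguration(memCfg);
```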
As for utilizing multiple disks: Ignite is about memory, so to increase disk IO performance via partitioning you need to look at RAID implementations. Just FYI, you can customize where Ignite stores the persistence database (data files, active WAL segments, and the WAL archive). For example:
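For instance, a sketch that spreads the store across three mounts (the paths are hypothetical; the setters are from the Ignite 2.1 PersistentStoreConfiguration API):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// Hypothetical mount points, one per physical disk.
PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();
psCfg.setPersistentStorePath("/mnt/disk1/ignite/db");       // data files (partitions)
psCfg.setWalStorePath("/mnt/disk2/ignite/wal");             // active WAL segments
psCfg.setWalArchivePath("/mnt/disk3/ignite/wal-archive");   // WAL archive

cfg.setPersistentStoreConfiguration(psCfg);
```

Putting the WAL on a different disk than the data files is the usual first step, since WAL writes are sequential and checkpoint writes are mostly random.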