I run a count of the number of records in a cache with automatic persistence to a database. I always get the same number even though I have read-through and write-through switched on. Is this expected? How do I keep the cache data refreshed efficiently?
Read-through means that a cache.get() operation will attempt to load the value from the persistence store if it doesn't exist in the cache.
Write-through means that any cache update (put, remove, ...) will be delegated to the store.
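The two behaviors described above can be sketched in a few lines. This is not Ignite's API, just a toy illustration of the read-through/write-through contract, with plain HashMaps standing in for the cache and the persistence store:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of read-through / write-through semantics (NOT Ignite's API).
public class ReadWriteThroughDemo {
    static Map<String, String> store = new HashMap<>(); // stands in for the DB
    static Map<String, String> cache = new HashMap<>();

    // Read-through: on a cache miss, fall back to the store and populate the cache.
    static String get(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = store.get(key);               // load from persistence store
            if (v != null) cache.put(key, v); // keep it cached for next time
        }
        return v;
    }

    // Write-through: every cache update is delegated to the store.
    static void put(String key, String value) {
        cache.put(key, value);
        store.put(key, value); // delegated to the store
    }

    public static void main(String[] args) {
        store.put("k1", "from-db"); // pre-existing DB row
        get("k1");                  // miss: value is loaded from the store
        put("k2", "v2");            // written to the cache AND the store
    }
}
```

Note what this contract does not include: nothing here pushes changes made directly in the store back into the cache, which is exactly the gap discussed below.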
Can you please clarify what cache operations you execute and what behavior you expect?
The DB is being constantly updated. I'm looking for a way to keep my cache updated as well, preferably automatically. My cache is a subset of the DB tables. Hoping to run super-fast SQL against the cache.
The best way to solve this is to use the cache API to update the data and never update the DB directly. If you have write-through enabled, updates will be atomically delegated to the persistence store, so the data in the cache and in the DB will always be consistent. And you will always have the latest data in memory, which will allow you to execute in-memory queries on it.
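In Ignite this setup amounts to two flags on the cache configuration plus a store factory. A configuration sketch (the `Person` value type and `MyJdbcStore`, a `CacheStore` implementation, are hypothetical placeholders):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: with both flags enabled, every get() falls back to the store on a
// miss, and every put()/remove() is delegated to it.
// MyJdbcStore is a hypothetical CacheStore implementation, not part of Ignite.
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setReadThrough(true);
cfg.setWriteThrough(true);
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class));
```

With this in place, application code only ever talks to the cache, and the store stays consistent as a side effect.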
If this is not possible and you have to update the DB directly for some reason, you will need to update the cache manually, e.g., implement a DB trigger that will do this properly.
Thank you for your response. I have used the get and put commands against the cache, and it works well in automatically persisting the changes. Is there something in the works that would sync data in both directions?
I'm trying to alleviate the work on the database due to its heavy usage and hence the attempts to query the cache instead of the database directly.
Ignite is designed to be the primary data storage and treats the DB as a backing persistent store that is never accessed directly. It is most effective with this approach.
Syncing data in the other direction is also possible, but it has to be done by the store, so the implementation will be different for different stores. That said, I'm not sure how we can support it on the Ignite side (at least in the general case). Do you have any suggestions in mind?
I realize this is not a trivial task but here are a couple of ideas.
1. Triggers that run an offline task outside of the database processes (perhaps an Ignite instance) that syncs the cache. Depending on the database, we could auto-generate the DDL for defining the trigger, making it easier for the user.
2. Every database has a log file that includes all table changes and can be parsed. I know Ignite has a powerful engine that can parse files and update caches.
The main problem here is that Ignite is completely abstracted from the persistence store by the CacheStore interface. It can be any storage: a relational database, a disk-based key-value store, MongoDB, HDFS, etc. The implementation of such a trigger will be different for different stores, and even for different relational databases, so it needs to be somehow abstracted as well. If you have any ideas on how to properly design this, feel free to create a Jira ticket and share your thoughts.
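To make the abstraction point concrete, here is a simplified sketch of the kind of interface Ignite sees (this is a toy, not the real `CacheStore` signature). Any change-capture mechanism would have to live inside a concrete implementation, which is why a generic "sync back" feature is hard:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the store abstraction: Ignite only sees these
// operations, so it cannot know how a given backend could report changes.
interface SimpleStore<K, V> {
    V load(K key);              // read-through path
    void write(K key, V value); // write-through path
    void delete(K key);
}

// A JDBC-backed implementation might capture external changes via a DB
// trigger or the transaction log; a store like this one would need a
// completely different mechanism, if one exists at all.
class InMemoryStore<K, V> implements SimpleStore<K, V> {
    private final Map<K, V> data = new HashMap<>();
    public V load(K key) { return data.get(key); }
    public void write(K key, V value) { data.put(key, value); }
    public void delete(K key) { data.remove(key); }
}
```

This is the design tension in the thread: the trigger and log-parsing ideas both work per backend, but neither fits behind a single backend-agnostic interface without extra abstraction.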