I have a problem with a cache loader service that loads data into a cache at process start-up.
If I run a single node, everything works fine: my log message reports exactly the same number of items added to the cache as is returned by both ignite.jcache(cacheName).localSize(CachePeekMode.PRIMARY) and ignite.jcache(cacheName).size(CachePeekMode.PRIMARY).
If I then start additional nodes, everything works as I'd expect: the cache is rebalanced correctly across the cluster, the overall cache size stays the same, and the local sizes on each node show that the data has been distributed between them correctly.
Problems appear, though, when I start multiple nodes at the same time. My log message confirms that the same number of records was added to the cache, but ignite.jcache(cacheName).localSize(CachePeekMode.PRIMARY) and ignite.jcache(cacheName).size(CachePeekMode.PRIMARY) both return a significantly lower number. On the last run my logs show that 1,163,076 records were added to the cache, yet both of the above calls report only 376,879 entries. Running ignite.jcache(cacheName).localSize(CachePeekMode.PRIMARY) on the other nodes in the cluster consistently returns 0.
Have you seen problems like this, and can you suggest a fix? This looks to me like a cache-rebalancing problem during the discovery phase. I don't want to have to start the primary node before the others, as this will complicate the deployment and mean that one node has to hold all the records until the other nodes are running.
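For reference, one thing worth checking in a situation like this is the cache's rebalance mode. The sketch below shows a partitioned cache configured for synchronous rebalancing in Spring XML; the cache name is an assumption, and in some older Ignite releases the property was called "preloadMode" rather than "rebalanceMode":

```xml
<!-- Sketch of a partitioned cache configured to rebalance synchronously.
     "myCache" is a placeholder name; verify the property name against
     your Ignite version ("preloadMode" in some older releases). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- SYNC blocks cache operations on a joining node until rebalancing
         for this cache has completed, rather than serving requests while
         partitions are still moving. -->
    <property name="rebalanceMode" value="SYNC"/>
</bean>
```

With SYNC rebalancing, size measurements taken immediately after several nodes join are less likely to observe partitions mid-migration, which can help distinguish a measurement artifact from genuine data loss.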
Hi Dmitriy, Yes, and I'm still seeing the same thing. When all nodes are started at the same time, some data goes missing. If I start a single node, everything loads correctly; when other nodes are then started, the data is rebalanced correctly.
Can you provide a small reproducible example? We are specifically interested in how you configure Ignite and how you measure whether the data was loaded. I know you have already described it in this thread, but a reproducible example would really help us resolve this issue.