putAll stopping at 600 entries.

javadevmtl

Using Ignite 1.3.0.

I have my cache configured as follows...

private static Ignite ignite = null;
private static IgniteCache<String, HashSet<String>> cache = null;

IgniteConfiguration igniteCfg = new IgniteConfiguration();
igniteCfg.setMarshaller(new OptimizedMarshaller(true));

CacheConfiguration<String, HashSet<String>> myCfg = new CacheConfiguration<>("cache");
myCfg.setCacheMode(CacheMode.PARTITIONED);
myCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
myCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
myCfg.setOffHeapMaxMemory(64 * 1024L * 1024L * 1024L); // 64 GB off-heap limit.
myCfg.setBackups(0);

ignite = Ignition.start(igniteCfg);
cache = ignite.getOrCreateCache(myCfg).withAsync(); // Async cache proxy.

// Then in my "web" handler for each request

final JsonArray jsonKeys = request.getJsonArray("keys");
Map<String, HashSet<String>> keysValues = new HashMap<>();

// Build one map entry per incoming key.
for (int i = 0; i < jsonKeys.size(); i++) {
    String keyPrefix = jsonKeys.getJsonObject(i).getString("keyPrefix");
    String key = keyPrefix + jsonKeys.getJsonObject(i).getString("key");

    HashSet<String> value = new HashSet<>();
    value.add(jsonKeys.getJsonObject(i).getString("value"));

    keysValues.put(key, value);
}

// Asynchronous bulk put; reply to the request once the future completes.
cache.putAll(keysValues);
IgniteFuture<Void> putFut = cache.future();

putFut.listen(f -> {
    myWebHandler.reply(new JsonObject().put("result", "written"));
});


For whatever reason it seems to hang after about 600 entries... Currently I'm putting 18 keys per request.
vkulichenko
Re: putAll stopping at 600 entries.

Hi,

Most likely you're hitting a deadlock: the same keys are being updated by parallel putAll operations, but the order of those keys in the provided maps differs. I would recommend trying a TreeMap instead of a HashMap for 'keysValues'. It guarantees consistent ordering and should fix your issue.
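
For example, the change in your handler would be roughly this (a sketch based on the loop you posted; only the map type changes):

// Sorted map, so concurrent putAlls always iterate (and lock) intersecting keys in the same order.
Map<String, HashSet<String>> keysValues = new TreeMap<>();

for (int i = 0; i < jsonKeys.size(); i++) {
    String keyPrefix = jsonKeys.getJsonObject(i).getString("keyPrefix");
    String key = keyPrefix + jsonKeys.getJsonObject(i).getString("key");

    HashSet<String> value = new HashSet<>();
    value.add(jsonKeys.getJsonObject(i).getString("value"));

    keysValues.put(key, value);
}

cache.putAll(keysValues); // Keys are now submitted in natural (sorted) order.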

Let us know if it helps.

-Val
javadevmtl
Re: putAll stopping at 600 entries.

Yes, I do have the possibility of putting more than one key at the same time. So I suppose I have to go back to single puts?
vkulichenko
Re: putAll stopping at 600 entries.

No, you don't have to do that. The only requirement is that the keys in the putAll map are always ordered in the same way. In other words, intersecting keys in two concurrent putAlls should not be ordered differently.

For example, these two operations can lead to deadlock if executed concurrently:

// Only keys are listed for simplicity.
putAll(1, 2, 3);
putAll(4, 3, 2);

Here keys 2 and 3 appear in different orders, so it's possible that the two operations will wait for each other forever.

To avoid this, you should use an ordered map (e.g., TreeMap) instead of a HashMap. It will guarantee that the order of the keys is always the same.
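
To illustrate with the keys from the example above (a small standalone snippet, not taken from your code):

// Insertion order doesn't matter for a TreeMap; iteration is always sorted.
Map<Integer, String> first = new TreeMap<>();
first.put(1, "a"); first.put(2, "b"); first.put(3, "c");

Map<Integer, String> second = new TreeMap<>();
second.put(4, "d"); second.put(3, "c"); second.put(2, "b");

// 'first' iterates 1, 2, 3 and 'second' iterates 2, 3, 4, so the
// intersecting keys 2 and 3 are locked in the same order by both putAlls.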

Makes sense?

-Val
javadevmtl
Re: putAll stopping at 600 entries.

Ah ok cool!