Read through not working as expected in case of Replicated cache

akash shinde
Read through not working as expected in case of Replicated cache

I am using Ignite 2.6 version.

I am starting 3 server nodes with a replicated cache and 1 client node. The cache configuration is given below: read-through is enabled but write-through is disabled. Loading data by key is implemented in the cache loader as shown below.

Steps to reproduce issue:
1) Delete an entry from the cache using IgniteCache.remove(). (The entry is removed only from the cache; it is still present in the DB because write-through is false.)
2) Invoke IgniteCache.get() for the same key from step 1.
3) Now query the cache from the client node. Every invocation returns different results: sometimes it returns the reloaded entry, sometimes it returns results without it.

It looks like read-through is not replicating the reloaded entry to all nodes in the case of a REPLICATED cache.

To investigate further, I changed the cache mode to PARTITIONED and set the backup count to 3, i.e. the total number of nodes in the cluster (to mimic REPLICATED behavior).
This time it worked as expected: every invocation returned the same result with the reloaded entry.

  private CacheConfiguration networkCacheCfg() {
    CacheConfiguration networkCacheCfg = new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
    networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    networkCacheCfg.setWriteThrough(false);
    networkCacheCfg.setReadThrough(true);
    networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
    networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    //networkCacheCfg.setBackups(3);
    networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
    Factory<NetworkDataCacheLoader> storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
    networkCacheCfg.setCacheStoreFactory(storeFactory);
    networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
    networkCacheCfg.setSqlIndexMaxInlineSize(65);
    RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
    affinityFunction.setExcludeNeighbors(false);
    networkCacheCfg.setAffinity(affinityFunction);
    networkCacheCfg.setStatisticsEnabled(true);
   // networkCacheCfg.setNearConfiguration(nearCacheConfiguration());

    return networkCacheCfg;
  }


  @Override
  public V load(K k) throws CacheLoaderException {
    V value = null;
    DataSource dataSource = springCtx.getBean(DataSource.class);
    try (Connection connection = dataSource.getConnection();
         PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
      //statement.setObject(1, k.getId());
      setPreparedStatement(statement, k);
      try (ResultSet rs = statement.executeQuery()) {
        if (rs.next()) {
          value = rowMapper.mapRow(rs, 0);
        }
      }
    } catch (SQLException e) {
      throw new CacheLoaderException(e.getMessage(), e);
    }
    return value;
  }
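For reference, the reproduction steps above could be driven by a small client-side sketch like the following. It assumes an already-started Ignite client instance named `ignite` and a key `key` for a row that exists in the DB; it is only an illustration of the steps, not a complete reproducer project.

```java
// Sketch of the reproduction steps, run against the running 3-node cluster.
// Assumptions: "ignite" is a started client node, "key" maps to a DB row.
IgniteCache<DefaultDataAffinityKey, NetworkData> cache =
    ignite.cache(CacheName.NETWORK_CACHE.name());

// Step 1: remove the entry from the cache only. Write-through is false,
// so the row remains in the database.
cache.remove(key);

// Step 2: get() triggers read-through; the cache loader reloads the row.
NetworkData reloaded = cache.get(key);

// Step 3: repeated reads from the client. With REPLICATED mode the reloaded
// entry appears on only some nodes, so results differ between invocations.
for (int i = 0; i < 5; i++)
    System.out.println(cache.get(key));
```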

Thanks,
Akash
ilya.kasnacheev
Re: Read through not working as expected in case of Replicated cache

Hello!

I remember that we had this issue. Have you tried 2.7.6 yet?

Regards,
--
Ilya Kasnacheev


On Tue, Oct 29, 2019 at 18:18, Akash Shinde <[hidden email]> wrote:
[quoted message trimmed]
akash shinde
Re: Read through not working as expected in case of Replicated cache

Hi,
I tried this scenario with version 2.7.6 and the issue is still there.
I cannot move to 2.7.6 in any case due to IGNITE-10884. That issue is fixed but not yet released.
Could you please let me know what the workaround is for the replicated-cache issue?

Thanks,
Akash


On Tue, Oct 29, 2019 at 8:53 PM Ilya Kasnacheev <[hidden email]> wrote:
[quoted messages trimmed]
ilya.kasnacheev
Re: Read through not working as expected in case of Replicated cache

Hello!

Can you provide a reproducer project for this problematic behavior? We could check it and file an issue (or you can file a JIRA issue yourself).

Regards,
--
Ilya Kasnacheev


On Tue, Oct 29, 2019 at 21:01, Akash Shinde <[hidden email]> wrote:
[quoted messages trimmed]
ilya.kasnacheev ilya.kasnacheev
Reply | Threaded
Open this post in threaded view
|

Re: Read through not working as expected in case of Replicated cache

In reply to this post by akash shinde
Hello!

I have discussed this with fellow Ignite developers, and they say read-through for a replicated cache works correctly only when either:

- write-through is enabled and all changes go through it, or
- the database contents do not change for keys that have already been read.

Neither condition is met in your case, so the behavior you are seeing is expected.
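To illustrate the first condition, here is a minimal sketch of the cache configuration with write-through turned on. It assumes the loader is replaced by a hypothetical NetworkDataCacheStore that implements the full CacheStore contract (write and delete as well as load) against the same database; this is one possible shape of the workaround, not the thread author's code.

```java
// Sketch only: first workaround condition (write-through enabled).
// NetworkDataCacheStore is a hypothetical full CacheStore implementation
// (e.g. extending CacheStoreAdapter) backed by the same database.
private CacheConfiguration<DefaultDataAffinityKey, NetworkData> networkCacheCfg() {
  CacheConfiguration<DefaultDataAffinityKey, NetworkData> cfg =
      new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
  cfg.setCacheMode(CacheMode.REPLICATED);
  cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
  cfg.setReadThrough(true);
  cfg.setWriteThrough(true); // removals and updates now propagate to the DB
  cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(NetworkDataCacheStore.class));
  return cfg;
}
```

With write-through enabled, IgniteCache.remove() would also delete the row from the database, so a later read-through reload cannot diverge from the store's contents.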

Regards,
--
Ilya Kasnacheev


On Tue, Oct 29, 2019 at 18:18, Akash Shinde <[hidden email]> wrote:
[quoted message trimmed]