Partitions distribution across nodes

akash shinde
Partitions distribution across nodes

Hi,
I am loading a cache in partition-aware mode and have started four nodes. Out of these four nodes, two are loading only backup partitions and the other two are loading only primary partitions.

As per my understanding, each node should hold both primary and backup partitions.

But in my cluster the distribution of partitions looks like this:

Node      Primary partitions   Backup partitions
NODE 1    518                  0
NODE 2    0                    498
NODE 3    506                  0
NODE 4    0                    503


Cache configuration:

CacheConfiguration ipv4AssetGroupDetailCacheCfg =
        new CacheConfiguration<>(CacheName.IPV4_ASSET_GROUP_DETAIL_CACHE.name());
ipv4AssetGroupDetailCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ipv4AssetGroupDetailCacheCfg.setWriteThrough(true);
ipv4AssetGroupDetailCacheCfg.setReadThrough(false);
ipv4AssetGroupDetailCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ipv4AssetGroupDetailCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
// One backup copy is configured for every partition.
ipv4AssetGroupDetailCacheCfg.setBackups(1);
ipv4AssetGroupDetailCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, IpV4AssetGroupData.class);

Factory<IpV4AssetGroupCacheStore> storeFactory = FactoryBuilder.factoryOf(IpV4AssetGroupCacheStore.class);
ipv4AssetGroupDetailCacheCfg.setCacheStoreFactory(storeFactory);
ipv4AssetGroupDetailCacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());

// Default rendezvous affinity; nodes on the same host are not excluded from backups.
RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
affinityFunction.setExcludeNeighbors(false);
ipv4AssetGroupDetailCacheCfg.setAffinity(affinityFunction);


Could someone please advise why the primary and backup partitions are not distributed evenly across the nodes?

Thanks,
Akash
dkarachentsev
Re: Partitions distribution across nodes

Hi Akash,

How do you measure the partition distribution? Can you provide the code for that test? My guess is that you read the partitions before the exchange process has finished. Try adding a delay of about 5 seconds after all nodes have started and check again.

Thanks!
-Dmitry



akash shinde
Re: Partitions distribution across nodes

Hi,

I introduced a delay of 5 seconds and it worked.

1) What is the exchange process, and how can I tell whether it has finished?

2) I am doing partition-key-aware data loading and want to start the load process from a server node only, not from a client node, and only after all the configured nodes are up and running. For that I am using a distributed count-down latch: each node decrements the count on the LifecycleEventType.AFTER_NODE_START event, and when the latch reaches zero, cache.loadCache() is invoked, so the load always runs on the node that joined the cluster last (a rough sketch of this gating is shown after these questions).
Is there a better way to achieve this?

3) I also want to make sure that if another node joins the cluster after the data loading is complete, cache.loadCache() is not invoked again and the data is made available to that node through rebalancing. I am thinking of using some variable that indicates loading is complete. Does Ignite have a built-in feature for this?
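For reference, a minimal sketch of the latch-based gating described in (2); the class name, latch name, expected node count and cache name are placeholders, not my actual code:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCountDownLatch;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;
import org.apache.ignite.resources.IgniteInstanceResource;

// Registered via IgniteConfiguration.setLifecycleBeans(new LoadGateLifecycleBean()).
public class LoadGateLifecycleBean implements LifecycleBean {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            // Cluster-wide latch sized to the number of expected nodes (4 here).
            IgniteCountDownLatch latch = ignite.countDownLatch("loadGate", 4, false, true);

            // countDown() returns the remaining count; only the last node to start sees 0.
            if (latch.countDown() == 0)
                ignite.cache("IPV4_ASSET_GROUP_DETAIL_CACHE").loadCache(null);
        }
    }
}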


The code I use to get the Ignite partitions is shown below.

private List<Integer> getPrimaryPartitionIdsLocalToNode() {
    Affinity affinity = igniteSpringBean.affinity(cacheName);
    ClusterNode locNode = igniteSpringBean.cluster().localNode();
    List<Integer> primaryPartitionIds = Arrays.stream(affinity.primaryPartitions(locNode)).boxed()
            .collect(Collectors.toList());
    LOGGER.info("Primary Partition Ids for Node {} are {}", locNode.id(), primaryPartitionIds);
    LOGGER.info("Number of Primary Partition Ids for Node {} are {}", locNode.id(), primaryPartitionIds.size());
    return primaryPartitionIds;
}

private List<Integer> getBackupPartitionIdsLocalToNode() {
    Affinity affinity = igniteSpringBean.affinity(cacheName);
    ClusterNode locNode = igniteSpringBean.cluster().localNode();
    List<Integer> backupPartitionIds = Arrays.stream(affinity.backupPartitions(locNode)).boxed()
            .collect(Collectors.toList());
    LOGGER.info("Backup Partition Ids for Node {} are {}", locNode.id(), backupPartitionIds);
    LOGGER.info("Number of Backup Partition Ids for Node {} are {}", locNode.id(), backupPartitionIds.size());
    return backupPartitionIds;
}

Thanks,
Akash


dkarachentsev
Re: Partitions distribution across nodes

Hi Akash,

1) The exchange is a short process during which nodes remap partitions. Ignite also uses late affinity assignment, which means the new affinity distribution is applied only after rebalancing has completed; in other words, after rebalancing the partition distribution is switched atomically. You don't have to wait for rebalancing to finish, though, because it runs asynchronously (if you do want a signal when it stops, see the event-listener sketch at the end of this reply).

2) I think it would be simpler to use IgniteCluster to determine the number of server nodes [1]:

Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

// Trigger the load only once all expected server nodes have joined.
if (ignite.cluster().forServers().nodes().size() == 4) {
    // ... loadCache
}
3) No, but you can store a custom marker value in a cache with putIfAbsent() to atomically check whether the action has already been performed (a sketch follows below).
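A minimal sketch of that putIfAbsent() guard; the control cache name and marker key are placeholders:

IgniteCache<String, Boolean> ctrl = ignite.getOrCreateCache("loadControlCache");

// putIfAbsent() returns true only on the node that actually created the marker,
// so loadCache() is triggered at most once across the cluster.
if (ctrl.putIfAbsent("IPV4_LOAD_DONE", Boolean.TRUE))
    ignite.cache("IPV4_ASSET_GROUP_DETAIL_CACHE").loadCache(null);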

[1] https://apacheignite.readme.io/docs/cluster-groups
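Coming back to (1): you normally don't need to wait, but if you do want a notification when rebalancing stops, one possible approach is to listen for rebalance events. A sketch, assuming EVT_CACHE_REBALANCE_STOPPED is enabled via IgniteConfiguration.setIncludeEventTypes():

// Listen locally for the event that fires when rebalancing of a cache stops on this node.
ignite.events().localListen(evt -> {
    CacheRebalancingEvent rebEvt = (CacheRebalancingEvent) evt;

    if ("IPV4_ASSET_GROUP_DETAIL_CACHE".equals(rebEvt.cacheName()))
        System.out.println("Rebalancing stopped for cache " + rebEvt.cacheName());

    return true; // keep listening
}, EventType.EVT_CACHE_REBALANCE_STOPPED);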

Thanks!
-Dmitry


