Equal Distribution of data among Ignite instances

rishi007bansod
Equal Distribution of data among Ignite instances

I have 9 Ignite server instances I0, I1, ..., I8 hosting a cache in PARTITIONED mode, into which I am loading data in parallel from Kafka partitions P0, P1, ..., P8. Each partition P0, P1, ..., P8 contains a number of entries that can be uniquely identified by the field seq_no, and I am using part_ID to collocate all entries from one partition on one instance only. I have defined the key as:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

class Key {
    /** Uniquely identifies an entry within its Kafka partition. */
    int seq_no;

    /** Kafka partition ID; collocates all entries from one partition on one instance. */
    @AffinityKeyMapped
    int part_ID;
}
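
For example, every entry read from Kafka partition 3 is put with part_ID = 3, so the @AffinityKeyMapped field sends all of them to the same node (a minimal sketch; the cache name "seqCache" and the String value type are just for illustration):

Ignite ignite = Ignition.start();
IgniteCache<Key, String> cache = ignite.getOrCreateCache("seqCache");

Key k = new Key();
k.seq_no = 42;    // unique within the partition
k.part_ID = 3;    // same value for every entry coming from Kafka partition P3
cache.put(k, "payload for seq_no 42");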

So, I am trying to achieve a one-to-one mapping between Ignite instances and Kafka partitions, e.g. I0->P0, I1->P1, ..., I8->P8. But the mapping I am actually getting is:

I0 -> NULL (no entries)
I1 -> P5
I2 -> NULL (no entries)
I3 -> P7
I4 -> P2, P6
I5 -> P1
I6 -> P8
I7 -> P0, P4
I8 -> P3

The affinity collocation part works, i.e. entries with the same partition ID get cached on the same Ignite instance. But the data is not equally distributed among the instances: I4 and I7 each hold two partitions' worth of data, whereas I0 and I2 hold none. How can I achieve an equal distribution so that each Ignite instance gets exactly one partition's data?
vkulichenko

Re: Equal Distribution of data among Ignite instances

Rishi,

The problem is that you created a partitioning strategy in which the number of partitions is equal to the number of nodes. Affinity functions are designed to work when the number of partitions is much bigger than the number of nodes, so you will have to either implement a new affinity function or change the partitioning strategy (I would recommend the latter). It's always better to separate the data logically, i.e. if there is no reason for some pieces of data to be on the same node, do not force them onto one. This is much more scalable.
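
For example, if you stop forcing collocation by Kafka partition and key the entries by seq_no alone, the default rendezvous affinity (1024 partitions out of the box, far more than 9 nodes) will spread them roughly evenly. A rough sketch of such a configuration (the cache name and value type are just illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("seqCache");
cfg.setCacheMode(CacheMode.PARTITIONED);

// Keep many more partitions than nodes (1024 is the default anyway).
cfg.setAffinity(new RendezvousAffinityFunction(false, 1024));

Ignite ignite = Ignition.start();

// Key by seq_no only (no @AffinityKeyMapped field), so entries are hashed
// individually and distributed evenly across the 9 server nodes.
IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
cache.put(42, "payload for seq_no 42");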

-Val