Same Affinity For Same Key On All Caches

Alper Tekinalp

Same Affinity For Same Key On All Caches

Hi all.

Is it possible to configure affinities in a way that the partition for the same key will be on the same node? So calling ignite.affinity(CACHE).mapKeyToNode(KEY).id() with the same key for any cache will return the same node ID. Is that possible with a configuration etc.?

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
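
A minimal sketch of the check being asked about, assuming two partitioned caches created with default settings on a running node (the cache names and the key are illustrative, not from the original post):

import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class SameNodeCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Two caches created with identical (default) affinity settings.
            ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("cache1"));
            ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("cache2"));

            int key = 42;

            // Primary node ID for the same key, resolved through each cache's affinity.
            UUID n1 = ignite.affinity("cache1").mapKeyToNode(key).id();
            UUID n2 = ignite.affinity("cache2").mapKeyToNode(key).id();

            System.out.println("Same primary node for key " + key + ": " + n1.equals(n2));
        }
    }
}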
dsetrakyan

Re: Same Affinity For Same Key On All Caches

If you use the same (or default) configuration for the affinity, then the same key in different caches will always end up on the same node. This is guaranteed.

D.
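
A sketch of what "the same configuration for the affinity" can look like in code, assuming the default RendezvousAffinityFunction with an explicit partition count (the names and numbers are illustrative):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinityConfig {
    // Both caches get an affinity function with identical settings,
    // so the key -> partition -> node mapping is computed the same way for each.
    static CacheConfiguration<Integer, String> cacheCfg(String name) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>(name);

        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setBackups(1);
        cfg.setAffinity(new RendezvousAffinityFunction(false, 1024)); // excludeNeighbors=false, 1024 partitions

        return cfg;
    }
}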

On Thu, Feb 23, 2017 at 8:09 AM, Andrey Mashenkov <[hidden email]> wrote:
Val,

Yes, with the same affinity function, entries with the same key should be saved on the same nodes.
As far as I know, the primary node is assigned automatically by Ignite, and I'm not sure there is a guarantee that two entries from different caches with the same key will have the same primary and backup nodes.
So, a get() for the first key can be local while a get() for the second key will be remote.


On Thu, Feb 23, 2017 at 6:49 PM, Valentin Kulichenko <[hidden email]> wrote:

> Actually, this should work this way out of the box, as long as the same
> affinity function is configured for all caches (that's true for default
> settings).
>
> Andrey, am I missing something?
>
> -Val
>
> On Thu, Feb 23, 2017 at 7:02 AM, Andrey Mashenkov <[hidden email]> wrote:
>
> > Hi Alper,
> >
> > You can implement your own AffinityFunction to achieve this.
> > In the AF you need to implement two mappings: key to partition and partition to node.
> >
> > The first mapping looks trivial, but the second doesn't.
> > Even if you manage to do it, there is no way to choose which node will be
> > primary and which will be backup for a partition, and that can be an issue.
> >
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
> >
>



--
Best regards,
Andrey V. Mashenkov

Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

Hi.

Thanks for your comments. Let me investigate the issue deeper.

Regards.

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

Hi.

As I investigated, the issue occurs when different nodes create the caches.

Say I have 2 nodes, node1 and node2, and 2 caches, cache1 and cache2. If I create cache1 on node1 and create cache2 on node2 with the same FairAffinityFunction with the same partition count, keys can map to different nodes on different caches.

You can find my test code and results as attachments.

So is that a bug? Is there a way to force the same mappings although the caches are created on different nodes?
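
The attached Main.java is the authoritative repro; below is only a stripped-down sketch of the scenario described above, assuming an Ignite 1.x version that still ships FairAffinityFunction and starting both nodes in one JVM for brevity:

import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.fair.FairAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ReproSketch {
    public static void main(String[] args) {
        // First node starts alone and creates cache1.
        Ignite node1 = Ignition.start(new IgniteConfiguration().setGridName("node1"));
        node1.getOrCreateCache(new CacheConfiguration<Integer, String>("cache1")
            .setAffinity(new FairAffinityFunction(128)));

        // node2 joins (cache1's partitions get reassigned), then cache2 is created from node2
        // with identical affinity settings.
        Ignite node2 = Ignition.start(new IgniteConfiguration().setGridName("node2"));
        node2.getOrCreateCache(new CacheConfiguration<Integer, String>("cache2")
            .setAffinity(new FairAffinityFunction(128)));

        for (int key = 0; key < 100; key++) {
            UUID n1 = node1.affinity("cache1").mapKeyToNode(key).id();
            UUID n2 = node1.affinity("cache2").mapKeyToNode(key).id();

            if (!n1.equals(n2))
                System.out.println("Key " + key + " has different primaries: " + n1 + " vs " + n2);
        }
    }
}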


--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr

Attachments: node1_output.txt (4K), node0_output.txt (4K), Main.java (6K)
Andrey Mashenkov

Re: Same Affinity For Same Key On All Caches

Hi Alper,

This is what I meant about primary/backup nodes for the same key. It looks like there is no guarantee which node will be primary for the same key in different caches.

Would you please check the mapKeyToPrimaryAndBackups() method? You should get the same result for the same key on different caches with the same affinity function.
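
A small sketch of the suggested check, with illustrative cache names: it compares the full primary-and-backups owner set for the same key across two caches, separately from the primary alone.

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class OwnersCheck {
    // IDs of the primary and backup nodes that own the key in the given cache.
    static Set<UUID> owners(Ignite ignite, String cacheName, Object key) {
        Set<UUID> ids = new HashSet<>();

        for (ClusterNode n : ignite.affinity(cacheName).mapKeyToPrimaryAndBackups(key))
            ids.add(n.id());

        return ids;
    }

    static void check(Ignite ignite, Object key) {
        // With the same affinity function the owner sets should match,
        // even if which of those nodes is primary differs between caches.
        boolean sameOwners = owners(ignite, "cache1", key).equals(owners(ignite, "cache2", key));

        boolean samePrimary = ignite.affinity("cache1").mapKeyToNode(key).id()
            .equals(ignite.affinity("cache2").mapKeyToNode(key).id());

        System.out.println("key=" + key + ", sameOwners=" + sameOwners + ", samePrimary=" + samePrimary);
    }
}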



vkulichenko

Re: Same Affinity For Same Key On All Caches

Andrey,

Is there an explanation for this? If all this is true, it sounds like a bug to me, and a pretty serious one.

Alper, what is the reason for using the fair affinity function? Do you have the same behavior with rendezvous (the default one)?

-Val
Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

Hi Val,

We are using the fair affinity function because we want to keep data more balanced among nodes. When I replace "new FairAffinityFunction(128)" with "new RendezvousAffinityFunction(false, 128)" I cannot reproduce the problem.
 

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
Andrew Mashenkov

Re: Same Affinity For Same Key On All Caches

Hi Val,

Assume we have an A-B-A topology and two caches, X and Y, each with a single partition (just for simplicity), 1 backup and the same affinity function.
Obviously, every node contains all partitions of both caches.
Now, we can have on node A: X as primary and Y as backup.
On node B: X as backup and Y as primary.

So, for the same key the data of both caches is collocated, but the primary node differs.

As far as I know, Rendezvous moves partitions only to a newly added node, while Fair can move partitions among all nodes.

I may be wrong, but IMHO it looks like partition primaries and backups are assigned in order and are not reassigned after an exchange.
E.g. for topology A-B-C-D-A:
A newly created cache will have partition 'x' primary on A with backups on B,C; partition 'y' primary on B with backups on C,D; and so on.
After adding node E, some primaries will be moved to it and the order will be broken, e.g. 'y' can become primary on E with backups on C,D.
Now if we add a new cache, we will have for 'y': C as primary and D,E as backups.

Possibly we just have the wrong node order in the partition->node mapping.

Can somebody clarify how the primary partition is assigned?
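
One way to observe this from client code, as a hedged sketch with illustrative cache names X and Y: for the same key a node can be primary for one cache and only a backup for the other, even though both caches keep the key on the same set of nodes.

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class PrimaryVsBackup {
    // For every server node, print whether it is primary or backup
    // for the given key in caches X and Y.
    static void dump(Ignite ignite, Object key) {
        for (ClusterNode node : ignite.cluster().forServers().nodes()) {
            String roleX = ignite.affinity("X").isPrimary(node, key) ? "primary"
                : ignite.affinity("X").isBackup(node, key) ? "backup" : "-";
            String roleY = ignite.affinity("Y").isPrimary(node, key) ? "primary"
                : ignite.affinity("Y").isBackup(node, key) ? "backup" : "-";

            System.out.println(node.id() + "  X: " + roleX + "  Y: " + roleY);
        }
    }
}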


--
Best regards,
Andrey V. Mashenkov
Regards, Andrew.
Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

In reply to this post by Alper Tekinalp
Hi.

I guess I was wrong about the problem. The issue does not occur when different nodes create the caches but when partitions are reassigned.

Say I created cache1 on node1, then added node2. Partitions for cache1 will be reassigned. Then I create cache2 (regardless of the node). Partition assignments for cache1 and cache2 are not the same.

When partitions are reassigned, ctx.previousAssignment(part) refers to the node that created the cache:

previousAssignment:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

assignment:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

backups: 1

tiers: 2
partition set for tier:0
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=16, parts=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
partition set for tier:1
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]

Full mapping for partitions:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

There are no pendings for tier 0, then it tries to rebalance partitions and the mapping becomes:

Full mapping for partitions:
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

After going through tier 1 for pendings, which is all of them, the mapping becomes:


Full mapping for partitions:
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]

But if I destroy and recreate the cache, the previous assignments are all null:

previousAssignment:
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null

assignment:
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]

backups: 1

tiers: 2
partition set for tier:0
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
partition set for tier:1
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]

Full mapping for partitions:
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]


And after that it assigns partitions in round-robin order:

Full mapping for partitions:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]

And after tier 1 assignments:

Full mapping for partitions:
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]


That is what I found while debugging. Sorry for the verbose mail.


--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

Hi.

So do you think that kind of behaviour is a bug, or is it as it has to be? Will there be a ticket or should I handle it on my own?

Regards.

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
Andrew Mashenkov

Re: Same Affinity For Same Key On All Caches

In reply to this post by Alper Tekinalp
Crossposting to the dev list.

I've made a test.
It looks OK for Rendezvous AF: the partition distribution for caches with similar settings and the same Rendezvous AF stays the same.
But the Fair AF partition distribution can differ for two caches where one was created before rebalancing and the second after it.

So, collocation is not guaranteed for the same key and similar caches with the same Fair AF.

PFA repro.

Is it a bug?

--
Best regards,
Andrey V. Mashenkov

Attachment: PartitionDsitributionTest.java (6K)
Regards, Andrew.
vkulichenko

Re: Same Affinity For Same Key On All Caches

Andrew,

Yes, I believe it's a bug, let's create a ticket.

Do you have any idea why this happens? The function doesn't have any state, so I don't see any difference between two of its instances on the same node for different caches, and two instances on different nodes for the same cache. This makes me think that the mapping inconsistency can occur in the latter case as well, and if so, it's a very critical issue.

-Val
Alper Tekinalp

Re: Same Affinity For Same Key On All Caches

Hi.

I created a bug ticket for that: https://issues.apache.org/jira/browse/IGNITE-4765

Val, the problem here is that the fair affinity function calculates partition mappings based on previous assignments. When partitions are rebalanced, the previous assignments for a cache are known and the new assignment is calculated based on them. But when you create a new cache there are no previous assignments and the calculation is different.
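
A sketch that separates the two mappings involved, with illustrative cache names: the key -> partition step depends only on the key and the partition count, so it agrees across caches, while the partition -> nodes step is where the history dependence of the fair affinity function shows up.

import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class MappingCheck {
    static void compare(Ignite ignite, Object key) {
        // Key -> partition: computed from the key and partition count alone,
        // so both caches agree as long as they use the same settings.
        int part1 = ignite.affinity("cache1").partition(key);
        int part2 = ignite.affinity("cache2").partition(key);

        // Partition -> nodes: for the fair affinity function this depends on the
        // assignment history, so it may differ between a cache created before a
        // rebalance and one created after it.
        Collection<ClusterNode> owners1 = ignite.affinity("cache1").mapPartitionToPrimaryAndBackups(part1);
        Collection<ClusterNode> owners2 = ignite.affinity("cache2").mapPartitionToPrimaryAndBackups(part2);

        System.out.println("same partition: " + (part1 == part2));
        System.out.println("cache1 owners: " + owners1);
        System.out.println("cache2 owners: " + owners2);
    }
}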


--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv. 
Gardenya 5 Plaza K:6 Ataşehir 
34758 İSTANBUL

Tel: +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
alexey.goncharuk

Re: Same Affinity For Same Key On All Caches

In reply to this post by vkulichenko
This does not look like a bug to me. Rendezvous affinity function is stateless, while FairAffinityFunction relies on the previous partition distribution among nodes, thus it IS stateful. The partition distribution would be the same if caches were created on the same cluster topology and then a sequence of topology changes was applied.


vkulichenko

Re: Same Affinity For Same Key On All Caches

Hi Alex,

I see your point. Can you please outline its advantages vs the rendezvous function?

In my view, the issue discussed here makes it pretty much useless in the vast majority of use cases, and very error-prone in all others.

-Val
vkulichenko

Re: Same Affinity For Same Key On All Caches

Adding back the dev list.

Folks,

Are there any opinions on the problem discussed here? Do we really need FairAffinityFunction if it can't guarantee cross-cache collocation?

-Val


Denis Magda-2

Re: Same Affinity For Same Key On All Caches

What??? Unbelievable. It sounds like a design flaw to me. Any ideas how to fix?

Denis


Taras Ledkov

Re: Same Affinity For Same Key On All Caches

Folks,

I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018, which is related to the performance of the Rendezvous AF.

But the Wang/Jenkins integer hash distribution is worse than MD5, so I tried to use a simple partition balancer, close to the Fair AF, for the Rendezvous AF.

Take a look at the heatmaps of the distributions attached to the issue, e.g.:
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on the Wang/Jenkins hash: https://issues.apache.org/jira/secure/attachment/12858701/004.png
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on the Wang/Jenkins hash with the partition balancer: https://issues.apache.org/jira/secure/attachment/12858690/balanced.004.png

When the balancer is enabled, the distribution of partitions over nodes looks close to an even distribution, but in this case there is no guarantee that a partition doesn't move from one node to another when a node leaves the topology. It is not guaranteed, but we try to minimize it because a sorted array of nodes is used (as in the pure Rendezvous AF).

I think we can use the new fast Rendezvous AF with a 'useBalancer' flag instead of the Fair AF.

-- 
Taras Ledkov
Mail-To: [hidden email]
Andrew Mashenkov

Re: Same Affinity For Same Key On All Caches

Good catch, Taras!

+1 for balanced Rendezvous AF instead of Fair AF.


--
Best regards,
Andrey V. Mashenkov
Regards, Andrew.