about single service redistribution after node restart

lee110001

My cluster has two nodes. When one of the nodes (uuid: 22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe) goes down and is restarted, the singleton services are not redistributed: all instances remain on the node that stayed up (uuid: d44abb38-b870-42c7-8c37-ca5e9cee3232).

I printed the service topologySnapshot.
Before the node (uuid: 22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe) was rebooted:
=====================
scheduleQ_1
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=0
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=1
=====================
scheduleQ_2
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0
=====================
scheduleQ_3
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=0
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=1
=====================
scheduleQ_4
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0

After rebooting that node:
=====================
scheduleQ_1
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0
=====================
scheduleQ_2
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0
=====================
scheduleQ_3
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0
=====================
scheduleQ_4
uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
integer=1
uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
integer=0  
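
For context, the snapshots above can be printed with Ignite's public ServiceDescriptor API. This is a minimal sketch of such a printer, not the asker's actual code; the class name is my own:

```java
import java.util.Map;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.ServiceDescriptor;

public class PrintServiceTopology {
    public static void main(String[] args) {
        // Start a node that joins the existing cluster.
        try (Ignite ignite = Ignition.start()) {
            for (ServiceDescriptor desc : ignite.services().serviceDescriptors()) {
                System.out.println("=====================");
                System.out.println(desc.name());

                // topologySnapshot() maps each node's UUID to the number of
                // service instances currently deployed on that node.
                for (Map.Entry<UUID, Integer> e : desc.topologySnapshot().entrySet()) {
                    System.out.println("uuid=" + e.getKey());
                    System.out.println("integer=" + e.getValue());
                }
            }
        }
    }
}
```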


I would like the singleton services to be redistributed across the two nodes.
How can I do that?

Regards,    
Denis Mekhanikov
Re: about single service redistribution after node restart

Hi!

Currently, services are not redistributed when new nodes join the topology.
So if all of your services ended up on one node, they won't be moved to newly joined nodes.
This is a known issue; it is mentioned in the following ticket: https://issues.apache.org/jira/browse/IGNITE-7667
For now, an even distribution can be achieved by redeploying the services.
Alternatively, if you add a third node and then kill the first one (the one holding all the services), the two nodes remaining in the cluster will share the services evenly.
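
A minimal sketch of the redeployment workaround, using the service names from your snapshot. `ScheduleService` is a hypothetical placeholder for whatever class implements the `scheduleQ_*` services:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteServices;
import org.apache.ignite.Ignition;

public class RedeployServices {
    public static void main(String[] args) {
        // Join the existing cluster; both nodes must already be up.
        try (Ignite ignite = Ignition.start()) {
            IgniteServices svcs = ignite.services();

            for (int i = 1; i <= 4; i++) {
                String name = "scheduleQ_" + i;

                // Cancel the current deployment, then deploy again:
                // the new assignment is computed against the current
                // topology, which now includes the rejoined node.
                svcs.cancel(name);

                // ScheduleService is a hypothetical placeholder that
                // implements org.apache.ignite.services.Service.
                svcs.deployClusterSingleton(name, new ScheduleService());
            }
        }
    }
}
```

Note that canceling a service briefly interrupts it, so this is best done during a maintenance window.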

Denis

On Mon, Jul 1, 2019 at 19:21, 李奉先 <[hidden email]> wrote: