Ignite Persistence: Baseline Topology

ashishb888
Ignite Persistence: Baseline Topology

I have a few queries:

1. There are 4 nodes: 2 with persistence enabled and the rest with
persistence disabled. I now want to activate the cluster. Is this a normal
setup? Will it work?

2. There are 2 nodes with persistence enabled. I have activated the cluster,
and I can see both nodes in the baseline topology. I have added 2 more nodes
with persistence enabled and want to add them to the baseline topology as
well. I am able to add them via ./bin/control.sh --baseline add. Is there a
way to add those nodes to the BLT from Java code? Why does
ignite.cluster().setBaselineTopology() have no effect?
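
For illustration, a minimal sketch of the programmatic route, assuming
Ignite 2.7-era APIs (the configuration path is a placeholder). Note that
setBaselineTopology() is rejected on an inactive cluster, which is one
possible explanation for the call appearing to have no effect:

import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

public class BaselineUpdate {
    public static void main(String[] args) {
        // Start or connect a node; "config/node.xml" is a placeholder path.
        Ignite ignite = Ignition.start("config/node.xml");

        // Changing the baseline is not allowed on an inactive cluster,
        // so make sure the cluster is active first.
        if (!ignite.cluster().active())
            ignite.cluster().active(true);

        // Option A: set the baseline to the current major topology version,
        // which captures every server node that has joined so far.
        ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());

        // Option B: pass an explicit collection of server nodes.
        Collection<ClusterNode> servers = ignite.cluster().forServers().nodes();
        ignite.cluster().setBaselineTopology(servers);
    }
}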




djm132

Re: Ignite Persistence: Baseline Topology

You can also look at this topic, probably related to yours, with a code sample:
http://apache-ignite-users.70518.x6.nabble.com/Embedded-ignite-and-baseline-upgrade-questions-td30822.html

aealexsandrov
Re: Ignite Persistence: Baseline Topology

Hi,

I believe every data node should have the same data regions configured. I
verified that if you have, for example, 2 nodes with a persistent region in
the BLT and then start a new node (not part of the BLT) with a new region
and a cache in that new region, it produces the following exception:

[17:53:30,446][SEVERE][exchange-worker-#48][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (rebalancing will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=3,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1,
10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113,
192.168.56.1],
sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502,
/192.168.56.1:47502, host.docker.internal/172.25.4.231:47502,
LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502,
/10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3,
lastExchangeTime=1578322410223, loc=false,
ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], topVer=3,
nodeId8=f581f039, msg=Node joined: TcpDiscoveryNode
[id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1,
10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113,
192.168.56.1],
sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502,
/192.168.56.1:47502, host.docker.internal/172.25.4.231:47502,
LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502,
/10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3,
lastExchangeTime=1578322410223, loc=false,
ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], type=NODE_JOINED,
tstamp=1578322410400], nodeId=44c8ba83, evt=NODE_JOINED]
class org.apache.ignite.IgniteCheckedException: Requested DataRegion is
not configured: 1GB_Region_Eviction
     at
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.dataRegion(IgniteCacheDatabaseSharedManager.java:729)
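
For comparison, a minimal sketch of a node configuration that declares the
region from the exception above (the region name is taken from the log; the
1 GB size and persistence flag are assumptions). Every server node would need
an equivalent declaration to avoid this error:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerNode {
    public static void main(String[] args) {
        // Region name matches the one reported in the exception; the size
        // and persistence settings are assumptions for this sketch.
        DataRegionConfiguration evictionRegion = new DataRegionConfiguration()
            .setName("1GB_Region_Eviction")
            .setMaxSize(1024L * 1024L * 1024L) // 1 GB
            .setPersistenceEnabled(true);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(evictionRegion);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}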

BR,
Andrei

On 1/6/2020 2:52 PM, djm132 wrote:
> You can also look at this topic, probably related to yours, with a code sample:
> http://apache-ignite-users.70518.x6.nabble.com/Embedded-ignite-and-baseline-upgrade-questions-td30822.html
dmagda
Re: Ignite Persistence: Baseline Topology

Andrey,

Are you saying we are required to have regions of the same size preconfigured across all the nodes? I hope I misunderstood you.

-
Denis


On Mon, Jan 6, 2020 at 7:18 AM Andrei Aleksandrov <[hidden email]> wrote:
> Hi,
>
> I believe every data node should have the same data regions configured. [...]
ezhuravlev
Re: Ignite Persistence: Baseline Topology

Denis, it's not about the size; the sizes can differ. It's about having differently configured DataRegions and creating caches without a NodeFilter. If a newly added node brings a new DataRegion and a cache is created in that region, it will lead to a cluster failure.
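
A minimal sketch of the NodeFilter approach mentioned above, reusing the
region name from the exception (the cache name and the "has.eviction.region"
user attribute are made up for this example). Nodes that declare the region
would advertise it via IgniteConfiguration.setUserAttributes(), and the cache
then deploys only on those nodes:

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class CacheFactory {
    public static CacheConfiguration<Integer, String> evictionCacheCfg() {
        CacheConfiguration<Integer, String> ccfg =
            new CacheConfiguration<>("evictionCache"); // hypothetical name

        // Bind the cache to the custom region from the exception above.
        ccfg.setDataRegionName("1GB_Region_Eviction");

        // Deploy the cache only on nodes that advertise the region through
        // a user attribute, so nodes without the region are never asked to
        // initialize partitions for it.
        ccfg.setNodeFilter(new IgnitePredicate<ClusterNode>() {
            @Override public boolean apply(ClusterNode node) {
                return Boolean.TRUE.equals(node.attribute("has.eviction.region"));
            }
        });

        return ccfg;
    }
}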

Evgenii

On Wed, Jan 8, 2020 at 13:15, Denis Magda <[hidden email]> wrote:
> Andrey,
>
> Are you saying we are required to have regions of the same size preconfigured
> across all the nodes? I hope I misunderstood you. [...]