Connecting to visor shell permanently stops clean-up of unwanted WAL records
I have noticed strange behaviour when I connect to the visor shell after
ingesting a large amount of data into an Ignite cluster.
Below is the scenario:
I have deployed a 5-node Ignite cluster on K8s with persistence enabled
(version 2.9.0 on Java 11).
I started ingesting data into 3 tables using JDBC batch insertion (around
20 million records per table, with backups set to 1). After ingesting that
large amount of data, I connected to the visor shell (from a pod I deployed
solely to run the visor shell), using the same Ignite config file that the
Ignite servers use. Once the visor shell connects to the cluster, clean-up
of unwanted WAL records (which should run after each checkpoint) stops, and
the WAL starts growing linearly because data ingestion is continuous. This
makes the WAL disk run out of space and the pods crash.
I have attached the config file that I used both to deploy Ignite and to
connect the Ignite visor shell.
Please let me know if I am doing something wrong.
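For reference, the persistence-related part of the config follows the standard Ignite data-storage setup. A simplified sketch is below (the paths are illustrative placeholders, not the actual values; the real file is attached):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Enable native persistence for the default data region. -->
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
            <!-- WAL and WAL archive locations (illustrative paths). -->
            <property name="walPath" value="/ignite/wal"/>
            <property name="walArchivePath" value="/ignite/walarchive"/>
        </bean>
    </property>
</bean>
```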
Command used to connect to the visor shell:
After the visor connects, there are frequent log entries with the following
message:
Could not clear historyMap due to WAL reservation on cp: CheckpointEntry
ptr=FileWALPointer [idx=26090, fileOff=24982860, len=9572]], history map
size is 38
Could you please let me know if I am doing anything wrong, or if this is a
known issue.
Re: Connecting to visor shell permanently stops clean-up of unwanted WAL records
With 100 GB of data in the persistence store this is easily reproducible;
I am not sure what is causing it.
Could anyone please explain how visor gets the number of records from a
cache, and what could be blocking the WAL clean-up? I am adding a WAL usage
graph: without connecting the visor, WAL usage peaked at about 6 GB, but
once I connect the visor it keeps growing until I stop data ingestion, and
it never comes back down from the maximum.