Lag before records are visible after transaction commit

ssansoy

Lag before records are visible after transaction commit

Hi, I am performing the following operation on node 1 of my 3 node cluster

(All caches use CacheRebalanceMode.SYNC,
CacheWriteSynchronizationMode.FULL_SYNC, CacheAtomicityMode.TRANSACTIONAL):


    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC,
            TransactionIsolation.READ_COMMITTED,
            transactionTimeout, igniteTransactionBatchSize)) {

        // write 1 record to cache A
        // write 11 records to cache B

        tx.commit();
    }


How should I expect the updated A and B records to appear on some other node, e.g. node 2?
I was expecting them both to become visible at exactly the same time. I am using
CacheMode.REPLICATED. However, I am not seeing this - there seems to be a delay between
the A and B records becoming available.

On node 2, I am running a continuous query on A, and in the local listener for A I am
fetching the 11 B records related to A (using a SqlFieldsQuery) that were updated in the
same transaction. After the tx commit on node 1, my local listener for A is called and I
try to fetch the B's. However, there seems to be a delay in seeing these B records - they
are not always returned by my query. If I put a sleep in there and run the SqlFieldsQuery
again, I do get all the B's.

2020-08-21 16:25:05,484 [callback-#192] DEBUG x.TableDataSelector [] - Executing SQL query SqlFieldsQuery [sql=SELECT * FROM B WHERE A_FK = 'TEST4', args=null, collocated=false, timeout=-1, enforceJoinOrder=false, distributedJoins=false, replicatedOnly=false, lazy=false, schema=null, updateBatchSize=1]
2020-08-21 16:25:05,486 [callback-#192] DEBUG x.TableDataSelector [] - Received 3 results
2020-08-21 16:25:05,486 [callback-#192] DEBUG x.TableDataSelector [] - Trying again in 5 seconds
2020-08-21 16:25:10,486 [callback-#192] DEBUG x.TableDataSelector [] - Received 11 results

My local listener for A is annotated with @IgniteAsyncCallback, in case that matters.
Anything obviously wrong here? My requirement is that node 2 has access to A and all the
associated updated B's that were committed in the same transaction.

Thanks!
Sham




ezhuravlev

Re: Lag before records are visible after transaction commit

Hi,

It looks like you need to use CacheWriteSynchronizationMode (https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheWriteSynchronizationMode.html): just set it to FULL_SYNC for the cache using CacheConfiguration.setWriteSynchronizationMode.
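
For illustration, a minimal sketch of that setting (the cache name and value type here are just examples):

    // Example only: "A" is an illustrative cache name; FULL_SYNC is the relevant setting.
    CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("A");
    cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    IgniteCache<String, Object> cache = ignite.getOrCreateCache(cfg);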

Evgenii

ssansoy

Re: Lag before records are visible after transaction commit

Thanks for the reply, but we are already using that setting, which is the strange thing.
All caches use CacheRebalanceMode.SYNC, CacheWriteSynchronizationMode.FULL_SYNC and
CacheAtomicityMode.TRANSACTIONAL.

As an update, the same behaviour is observed if B is retrieved using a ScanQuery rather
than a SqlFieldsQuery.

Any ideas? Thanks



ssansoy

Re: Lag before records are visible after transaction commit

Here is a reproducer for this, by the way.

Run the main class with program argument READER and again with argument WRITER.
In the console for WRITER press Enter (this will generate an A and 100 associated B's).
READER subscribes to A and fetches the associated B's with a scan query.
However, it takes some number of retries before all 100 arrive.

package com.testproject.server;

import java.util.Arrays;
import java.util.List;
import java.util.Scanner;
import javax.cache.Cache.Entry;
import javax.cache.CacheException;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteAsyncCallback;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TransactionProblem {

    private static final Logger LOGGER = LoggerFactory.getLogger(TransactionProblem.class);

    private static class TestIgniteConfiguration extends IgniteConfiguration {

        public TestIgniteConfiguration(String name) {
            setWorkDirectory("c:\\data\\testproject\\" + name);
            TcpDiscoveryVmIpFinder tcpPortConfig = new TcpDiscoveryVmIpFinder();
            tcpPortConfig.setAddresses(Arrays.asList("localhost:47500", "localhost:47501"));
            TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
            discoverySpi.setIpFinder(tcpPortConfig);
            setDiscoverySpi(discoverySpi);
            setPeerClassLoadingEnabled(true);
        }
    }

    private static class TestCacheConfiguration extends CacheConfiguration {
        public TestCacheConfiguration(String name) {
            super(name);
            setRebalanceMode(CacheRebalanceMode.SYNC);
            setCacheMode(CacheMode.REPLICATED);
            setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
            setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        }
    }

    @IgniteAsyncCallback
    private static class ACallback implements CacheEntryUpdatedListener<BinaryObject, BinaryObject> {

        private final Ignite ignite;

        public ACallback(Ignite ignite) {
            this.ignite = ignite;
        }

        @Override
        public void onUpdated(
            Iterable<CacheEntryEvent<? extends BinaryObject, ? extends BinaryObject>> cacheEntryEvents)
            throws CacheEntryListenerException {

            cacheEntryEvents.forEach(e -> {
                LOGGER.info("Continuous update: {}", e);
                BinaryObject b = e.getValue();
                long id = b.field("ID");
                LOGGER.info("ID is {}", id);
                // find the B's for this A
                // keep retrying until 100 are seen
                int count = 0;
                long start = System.currentTimeMillis();
                while (count < 100) {
                    count = printBs(id);
                }
                long end = System.currentTimeMillis();
                LOGGER.info("Took {} ms to receive all B's", (end - start));
            });
        }

        private int printBs(long id) {
            IgniteCache cacheB = ignite.cache("B").withKeepBinary();

            ScanQuery<String, BinaryObject> scanQuery = new ScanQuery<>(
                (IgniteBiPredicate<String, BinaryObject>) (key, value) -> value
                    .field("PARENT_ID").equals(id));

            List<?> scanResults = cacheB.query(scanQuery).getAll();
            LOGGER.debug("Received {} scan results", scanResults.size());
            return scanResults.size();
        }
    }

    public static void main(String[] args) {
        String type = args.length > 0 ? args[0] : "BLANK";
        if (!"READER".equals(type) && !"WRITER".equals(type)) {
            throw new UnsupportedOperationException("Unknown option " + type + ". Choose one of READER or WRITER");
        }

        Ignite ignite = Ignition.start(new TestIgniteConfiguration(type));

        LOGGER.info("Node was successfully started");

        IgniteCache<String, BinaryObject> cacheA =
            ignite.getOrCreateCache(new TestCacheConfiguration("A")).withKeepBinary();
        IgniteCache<String, BinaryObject> cacheB =
            ignite.getOrCreateCache(new TestCacheConfiguration("B")).withKeepBinary();

        if ("WRITER".equals(type)) {
            // generate A and 100 associated B's. Write them all in one transaction
            Scanner scanner = new Scanner(System.in);
            while (true) {
                long id = System.currentTimeMillis();
                LOGGER.info("Press Enter to generate an A and 100 B's with join ID " + id);
                scanner.nextLine();

                BinaryObjectBuilder aBuilder = ignite.binary().builder("A");
                aBuilder.setField("ID", id);

                // begin a transaction
                try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC,
                    TransactionIsolation.READ_COMMITTED, 30000,
                    101)) {

                    try {
                        // insert an A record with ID
                        cacheA.put("ID_" + id, aBuilder.build());
                        // insert 100 B records with this PARENT_ID
                        BinaryObjectBuilder bBuilder = ignite.binary().builder("B");
                        bBuilder.setField("PARENT_ID", id);
                        for (int i = 0; i < 100; i++) {
                            bBuilder.setField("B_ID", i);
                            cacheB.put("ID_" + id + "_B_" + i, bBuilder.build());
                        }
                        tx.commit();
                        LOGGER.info("COMMITTED");
                    } catch (CacheException e) {
                        tx.rollback();
                    }
                }
                // end transaction
            }
        }
        else {
            // subscribe to A's and print associated B's
            ContinuousQuery<BinaryObject, BinaryObject> query = new ContinuousQuery<>();
            query.setInitialQuery(new ScanQuery());
            query.setLocalListener(new ACallback(ignite));

            QueryCursor<Entry<BinaryObject, BinaryObject>> cur = cacheA.query(query);
            cur.forEach(entry -> {
                LOGGER.info("Initial record: {}", entry);
            });
        }
    }
}




ssansoy

Re: Lag before records are visible after transaction commit

As an update, if I update the printBs method to also try a
cache.getAll(keys), it still exhibits the same problem (missing records):

        // Also requires: import java.util.HashSet; import java.util.Map; import java.util.Set;
        private int printBs(long id) {
            IgniteCache cacheB = ignite.cache("B").withKeepBinary();

            ScanQuery<String, BinaryObject> scanQuery = new ScanQuery<>(
                (IgniteBiPredicate<String, BinaryObject>) (key, value) -> value
                    .field("PARENT_ID").equals(id));

            Set<String> keys = new HashSet<>();
            for (int i = 0; i < 100; i++) {
                keys.add("ID_" + id + "_B_" + i);
            }
            Map<String, BinaryObject> getResults = cacheB.getAll(keys);
            List<?> scanResults = cacheB.query(scanQuery).getAll();

            int scanResultsSize = scanResults.size();
            int getResultsSize = getResults.size();

            LOGGER.debug("Received {} scan results, {} getAll results", scanResultsSize, getResultsSize);
            return Math.max(scanResultsSize, getResultsSize);
        }
    }



ssansoy

Re: Lag before records are visible after transaction commit

Hi, is anyone able to help look into/reproduce this with the example code given? Thanks!



ezhuravlev

Re: Lag before records are visible after transaction commit

Hi,

I checked this reproducer. Continuous Query itself is not transactional, and it looks like it can't be used for this at the moment. So it can get the notification before the other entries from the transaction have been committed.

Best Regards,
Evgenii

ezhuravlev

Re: Lag before records are visible after transaction commit

Hi,

To make this work, you can change the transaction isolation from READ_COMMITTED to SERIALIZABLE and replace the ScanQuery with getAll. In this case, the getAll operation will wait for the locked keys. Note that running cache operations in the CQ listener thread may cause a deadlock, so it's better to use another thread for that.
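
For illustration, a rough sketch of how the reader side of the reproducer might apply that. The key format and the fixed count of 100 come from the reproducer; wrapping the getAll in a pessimistic SERIALIZABLE transaction on the reader side is one interpretation of the suggestion, not a confirmed recipe:

    // Sketch only - run this off the CQ callback thread (e.g. submit it to an ExecutorService).
    // Extra imports needed: java.util.HashSet, java.util.Map, java.util.Set.
    private void fetchBs(Ignite ignite, long id) {
        IgniteCache<String, BinaryObject> cacheB = ignite.<String, BinaryObject>cache("B").withKeepBinary();

        // Keys as written by the reproducer's WRITER side, with a count known up front.
        Set<String> keys = new HashSet<>();
        for (int i = 0; i < 100; i++)
            keys.add("ID_" + id + "_B_" + i);

        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.SERIALIZABLE)) {
            // getAll should block on keys still locked by the writer's transaction
            // rather than returning a partial view.
            Map<String, BinaryObject> bs = cacheB.getAll(keys);
            tx.commit();
            LOGGER.info("Fetched {} B's for parent {}", bs.size(), id);
        }
    }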

Best Regards,
Evgenii
ssansoy

Re: Lag before records are visible after transaction commit

Thanks for looking into it. Is this expected? Just wondering how another node can ever be
transactionally notified of an update in an event-driven way if continuous queries don't
support transactions?

Using getAll isn't a practical workaround unfortunately, as we want to get the B records
based on the values of some of its fields, e.g. a scan query with a filter on the child,
or an SQL fields query. Getting all the records will pull everything back onto the calling
node and we would have to filter locally, if I am understanding correctly?



ezhuravlev

Re: Lag before records are visible after transaction commit

Yes, it is expected that ScanQuery and ContinuousQuery are not transactional.

> Getting all the records will pull everything back onto the calling node and we would have to filter locally, if I am understanding correctly?

There is no need to get all entries from the cache; you can get the entries with certain keys. In your example, you can get all the entries based on the generated keys if you know the number of inserted entries. This number, for example, can be stored as part of the first object.
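
Sketched against the reproducer, that could look roughly like this (the "B_COUNT" field name is hypothetical, not something from the original code):

    // Writer side (sketch): store how many B's belong to this A on the A object itself.
    BinaryObjectBuilder aBuilder = ignite.binary().builder("A");
    aBuilder.setField("ID", id);
    aBuilder.setField("B_COUNT", 100);   // hypothetical field carrying the child count
    cacheA.put("ID_" + id, aBuilder.build());

    // Reader side (sketch): in the listener, rebuild the B keys from the count carried by A.
    // 'aObject' is the BinaryObject delivered by the continuous query event.
    int bCount = aObject.field("B_COUNT");
    long parentId = aObject.field("ID");
    Set<String> keys = new HashSet<>();
    for (int i = 0; i < bCount; i++)
        keys.add("ID_" + parentId + "_B_" + i);
    Map<String, BinaryObject> bs = cacheB.getAll(keys);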

Evgenii

ssansoy

Re: Lag before records are visible after transaction commit

Thanks Evgenii,

Could you please elaborate a bit on how the get would work here?

E.g. parent object A has properties p, q, r.
Child object B has properties q, r, s, t.

{q, r, s} is the primary key of B (as defined in the backing SQL table DDL, which is how
the cache was created).

When an A update comes in with values p1, q1, r1, we were doing a "select * from B where
q=q1 and r=r1", which would return multiple records.

Is there an equivalent using igniteCacheForB.get(key)? What would the key be here?

Thanks



ezhuravlev

Re: Lag before records are visible after transaction commit

You can put the number of entries in cache B related to this object A right into object A itself. After that, you can use this number to build the keys of all the objects from cache B, since you already know q and r. But it depends on the use case.

Evgenii

ssansoy

Re: Lag before records are visible after transaction commit

Thanks Evgenii,

Sorry to keep revisiting this - maybe I am misunderstanding, but don't we also need 's' to
be able to query B by key? E.g. the key of B consists of {q, r, s}, and we only have q and
r from the parent A.



ezhuravlev

Re: Lag before records are visible after transaction commit

Yes, but if you know the number of B entries for this object A, then you can get all the objects using s, which will be 0..n.

Evgenii

ssansoy

Re: Lag before records are visible after transaction commit

Unfortunately the 's' on B here can't be derived from a number 0..n - e.g. it isn't a
numeric ID.

E.g. in practice, let's say:

A is a "Location"; it has properties "city", "street", etc.

B is a "Person" with key:
p = city
q = street
r = social security number

E.g. an A and its associated B's are updated in a transaction, and we want our client app
to see the updated A and the B's where the Person lives at that Location.

E.g. A is updated and our continuous query on A picks up:
city = London
street = Downing Street

We would like to say:
Select * from B where city="London" and street="Downing Street"

Is there any way at all in Ignite to do this transactionally, so that if an A and its
associated B's are updated in one transaction (e.g. a street is renamed from "Downing
Street" to "Regent Street"), our client app can see them consistently?







ezhuravlev

Re: Lag before records are visible after transaction commit

No, I don't see any other way to do this transactionally, as the CQ itself is not transactional.

Evgenii

ilya.kasnacheev

Re: Lag before records are visible after transaction commit

Hello!

Maybe the keys could be queued from the CQ to be revisited later with a transaction-per-key approach.

Regards,
--
Ilya Kasnacheev


ezhuravlev

Re: Lag before records are visible after transaction commit

Ilya,

This won't help, since the problem here is that the CQ doesn't return all the needed keys.

Evgenii

ilya.kasnacheev

Re: Lag before records are visible after transaction commit

Hello!

I think this suggests a data model adjustment.

Regards,
--
Ilya Kasnacheev


ssansoy

Re: Lag before records are visible after transaction commit

Are there any other ways we can model this to make the problem easier to solve with Ignite?
(95% of our other caches don't have this requirement, but we need to solve the remaining 5%
to migrate to Ignite from our legacy solution.)

The original thread on this is here, by the way:

http://apache-ignite-users.70518.x6.nabble.com/Parent-Child-relationships-td31605.html

E.g. Parent A, Child B.
A contains a list of B's.
If A's fields, or any of its B's fields, are updated, then the client's callback is
triggered with A and all the B's so the entire structure can be processed.

A and B need to be editable via SQL by human users, e.g. be represented as table A and
table B (one of the main benefits Ignite provides over our current legacy system).


