Insert into select OOM exception on java heap

yblazart
Insert into select OOM exception on java heap

Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap.

We store lots of data in some partitioned tables, then we execute some "INSERT INTO ... SELECT ... JOIN ..." statements using SqlQueryField (or SqlQueryFieldEx).

With tables of 5,000,000 rows, we ran into an OOM error, even with lazy set to true and skipOnReduceTable.

How can we handle this, please?

Regards.

ilya.kasnacheev

Re: Insert into select OOM exception on java heap

Hello!

I'm afraid you will have to split this query into smaller ones. Ignite doesn't really have lazy insert ... select, so the result set will have to be held in heap for some time.

Regards,
--
Ilya Kasnacheev
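
For illustration, a minimal sketch of splitting the statement into bounded key-range chunks so that each result set stays small. It assumes hypothetical SRC and DST tables with a numeric ID column and a cache named SRC_CACHE; adapt the names to your schema.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ChunkedInsertSelect {
    // Copies SRC into DST one key range of 'chunk' ids at a time.
    public static void copyInChunks(Ignite ignite, long maxId, long chunk) {
        IgniteCache<?, ?> cache = ignite.cache("SRC_CACHE"); // any cache that can run SQL
        for (long lo = 0; lo <= maxId; lo += chunk) {
            SqlFieldsQuery q = new SqlFieldsQuery(
                "INSERT INTO DST (ID, VAL) SELECT ID, VAL FROM SRC WHERE ID >= ? AND ID < ?")
                .setArgs(lo, lo + chunk);
            cache.query(q).getAll(); // DML returns a single row with the update count
        }
    }
}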


yblazart

Re: Insert into select OOM exception on java heap

Hmm. Can I use LIMIT and OFFSET?

For example, doing LIMIT 10000 and continuing as long as the insert count is 10000?
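
For example, a minimal sketch of that LIMIT/OFFSET loop, with hypothetical SRC and DST tables and an ID column; in Ignite, a DML statement run through SqlFieldsQuery returns a single row holding the update count:

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PagedInsertSelect {
    // Copies SRC into DST page by page until a short (last) page is hit.
    public static void copyPaged(IgniteCache<?, ?> cache, int pageSize) {
        for (int offset = 0; ; offset += pageSize) {
            SqlFieldsQuery q = new SqlFieldsQuery(
                "INSERT INTO DST (ID, VAL) " +
                "SELECT ID, VAL FROM SRC ORDER BY ID LIMIT ? OFFSET ?")
                .setArgs(pageSize, offset);
            long inserted = (Long) cache.query(q).getAll().get(0).get(0);
            if (inserted < pageSize)
                break; // last page
        }
    }
}

Note that each page re-runs the SELECT with a growing OFFSET, which is why the next reply suggests marking rows instead.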



ilya.kasnacheev

Re: Insert into select OOM exception on java heap

Hello!

I think it would make more sense to mark already updated entries and update in batches until no unmarked entries are left.

Regards,
--
Ilya Kasnacheev
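
As a rough sketch of that marking approach, assuming a hypothetical boolean COPIED column on the source table (defaulting to FALSE) and illustrative SRC/DST/ID names:

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class MarkAndCopy {
    // Repeatedly copies one batch of unmarked rows and marks them, until none are left.
    public static void copyAll(Ignite ignite, int batchSize) {
        IgniteCache<?, ?> cache = ignite.cache("SRC_CACHE");
        while (true) {
            List<List<?>> keys = cache.query(new SqlFieldsQuery(
                "SELECT ID FROM SRC WHERE COPIED = FALSE LIMIT ?").setArgs(batchSize)).getAll();
            if (keys.isEmpty())
                break; // no unmarked entries left
            for (List<?> row : keys) {
                Object id = row.get(0);
                cache.query(new SqlFieldsQuery(
                    "INSERT INTO DST (ID, VAL) SELECT ID, VAL FROM SRC WHERE ID = ?")
                    .setArgs(id)).getAll();
                cache.query(new SqlFieldsQuery(
                    "UPDATE SRC SET COPIED = TRUE WHERE ID = ?").setArgs(id)).getAll();
            }
        }
    }
}

Row-by-row statements keep the sketch simple; in practice the insert and the marking update could be grouped per key range.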


yblazart

Re: Insert into select OOM exception on java heap

It's an INSERT INTO ... SELECT. We made "meta" tables to allow running other selects.

Or do you mean I could do a lazy select and then batch inserts?

yann.blazart@externe.bnpparibas.com

Re: Insert into select OOM exception on java heap

Hello, in the end I used the following trick.

I broadcast my select request to each node, executing the select with lazy and collocated enabled; then I use the QueryCursor and create BinaryObjects that I insert into the cache in batches of 5000.

Which seems to be good enough.

Thanks for your help.
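
For reference, a rough sketch of that approach; the cache names (SRC_CACHE, DST_CACHE), the SELECT text, the binary type name DstType, the use of the first column as the destination key, and the local flag on the query are all assumptions to adapt to your own model:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.lang.IgniteRunnable;

public class BroadcastCopy {
    public static void run(Ignite ignite) {
        // Broadcast the job to every server node.
        ignite.compute(ignite.cluster().forServers()).broadcast((IgniteRunnable) () -> {
            Ignite local = Ignition.localIgnite();
            IgniteCache<Object, BinaryObject> dst = local.cache("DST_CACHE").withKeepBinary();

            SqlFieldsQuery q = new SqlFieldsQuery("SELECT ID, VAL FROM SRC");
            q.setLazy(true);       // stream rows instead of materializing the whole result set
            q.setCollocated(true);
            q.setLocal(true);      // each node reads only its own data

            Map<Object, BinaryObject> batch = new HashMap<>();
            try (QueryCursor<List<?>> cur = local.cache("SRC_CACHE").query(q)) {
                for (List<?> row : cur) {
                    BinaryObject val = local.binary().builder("DstType")
                        .setField("VAL", row.get(1))
                        .build();
                    batch.put(row.get(0), val);
                    if (batch.size() == 5_000) { // insert in packets of 5000
                        dst.putAll(batch);
                        batch.clear();
                    }
                }
            }
            if (!batch.isEmpty())
                dst.putAll(batch);
        });
    }
}

Building values as BinaryObjects and writing through withKeepBinary() also avoids having to deploy the destination value class on every node.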


dmagda

Re: Insert into select OOM exception on java heap

Have a look at the Ignite data streamer API; it is better suited for data streaming and loading big chunks of data.

Or switch the SQL engine to streaming mode for the duration of that INSERT.

Let me know if any of the above works. We can document this.

-
Denis
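
For reference, two rough sketches of these options; the cache name, table, key/value types and the JDBC URL are assumptions. The data streamer loads key/value pairs directly into a cache, while SET STREAMING ON enables streaming mode on the thin JDBC driver for plain INSERT ... VALUES statements:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamingOptions {
    // 1) Data streamer API: bulk-load key/value pairs into the destination cache.
    static void viaDataStreamer(Ignite ignite) {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("DST_CACHE")) {
            streamer.allowOverwrite(true);           // optional: overwrite existing keys
            for (long id = 0; id < 1_000_000; id++)
                streamer.addData(id, "value-" + id); // batched and routed per node automatically
        }                                            // close() flushes the remaining data
    }

    // 2) SQL streaming mode over the thin JDBC driver, enabled only for the load.
    static void viaSqlStreaming() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
            try (Statement s = conn.createStatement()) {
                s.execute("SET STREAMING ON");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO DST (ID, VAL) VALUES (?, ?)")) {
                for (long id = 0; id < 1_000_000; id++) {
                    ps.setLong(1, id);
                    ps.setString(2, "value-" + id);
                    ps.executeUpdate();
                }
            }
            try (Statement s = conn.createStatement()) {
                s.execute("SET STREAMING OFF"); // flush everything to the cluster
            }
        }
    }
}

In streaming mode the driver batches updates, so the inserted data only becomes fully visible once streaming is turned off and the batches are flushed.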

