scala.pickling and sql in Ignite

Ognen Duzlevski

scala.pickling and sql in Ignite

Hello all,

I have been thinking about the problem of having to perform rolling restarts of an Ignite cluster, and how to avoid it. One of the solutions someone proposed appeals to me: serializing objects from their native Java/Scala representations into JSON stored in the cache, and deserializing them on the way back. If everything you store is JSON/text, then Ignite never needs to know about your fat jars, and hence there is no need to restart the cluster every time an application gets updated.

I have chosen scala.pickling for the above task (https://github.com/scala/pickling).
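A minimal sketch of the round trip being described, with a plain mutable Map standing in for the IgniteCache and hand-rolled JSON for one case class standing in for scala.pickling's generated (de)serialization; both substitutions are assumptions for illustration only:

```scala
// Sketch of the cache-as-JSON idea: only Strings ever reach the "cluster",
// so no model classes need to be on the server classpath.
case class User(name: String, age: Int)

object JsonCacheSketch {
  // "Pickle" a User to a JSON string; scala.pickling would generate this
  // for arbitrary types instead of it being written by hand.
  def pickle(u: User): String =
    s"""{"name":"${u.name}","age":${u.age}}"""

  // "Unpickle" with a naive full-string regex match; a real library
  // does this robustly for nested structures.
  private val Pattern = """\{"name":"(.*)","age":(\d+)\}""".r
  def unpickle(json: String): User = json match {
    case Pattern(name, age) => User(name, age.toInt)
  }

  def main(args: Array[String]): Unit = {
    // Stand-in for IgniteCache[String, String].
    val cache = scala.collection.mutable.Map.empty[String, String]
    val u = User("Ognen", 42)
    cache.put("user:1", pickle(u)) // only a String crosses into the cache
    val roundTripped = unpickle(cache("user:1"))
    assert(roundTripped == u)
  }
}
```

The trade-off the thread goes on to discuss is exactly this: once values are opaque Strings, Ignite's reflection-based SQL indexing can no longer see the fields inside them.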

I am struggling to foresee how this would work with indexing and SQL in Ignite. Does anyone have ideas, experience, or input (before I spend the weekend trying)? :)

Thanks!
Ognen
alexey.goncharuk

Re: scala.pickling and sql in Ignite

Ognen,

The current SQL implementation uses reflection to inspect stored objects and extract field values for indexing. Neither JSON nor scala.pickling output exposes fields visible to reflection, which makes it impossible to use them for indexing. There is, however, a pluggable indexing SPI, so you could implement your own SQL queries over JSON/pickled objects.

There is a work-in-progress ticket, IGNITE-950, which may partially fulfill your requirements. The idea is to make it possible to extract a particular field value without deserializing the whole object. Once this ticket is implemented, there will be no need to deploy cache model jars to server nodes as long as you work with data only from clients. It will still be impossible to change the model structure, but in many cases this will already allow you to update an application without restarting the cluster.
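The extract-one-field-without-deserializing idea can be sketched without any Ignite APIs. The regex-based extractor below is a naive stand-in (a binary format would locate fields by offset, not by text matching), and `FieldExtractor` is a hypothetical name, not part of Ignite:

```scala
// Sketch: pull a single field's value out of a serialized (JSON) string
// without materializing the whole object, so a server node could index a
// field with no model classes on its classpath.
object FieldExtractor {
  // Returns the raw value of `field` in a flat JSON object, or None.
  // Handles "field":"string" and "field":number forms only -- a deliberate
  // simplification for illustration.
  def extractField(json: String, field: String): Option[String] = {
    val p = ("\"" + java.util.regex.Pattern.quote(field) +
      "\"\\s*:\\s*(\"([^\"]*)\"|[-0-9.]+)").r
    p.findFirstMatchIn(json).map { m =>
      // group(2) is the unquoted string value; null if the value was a number.
      Option(m.group(2)).getOrElse(m.group(1))
    }
  }
}
```

For example, `FieldExtractor.extractField("""{"name":"Ognen","age":42}""", "age")` yields `Some("42")` without ever constructing the original object, which is the shape of what an index over opaque serialized values needs.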

2015-06-12 15:24 GMT-07:00 Ognen Duzlevski <[hidden email]>: