Issue about ignite-sql limit of table quantity

fvyaba

Issue about ignite-sql limit of table quantity

Hi, I have a question about the ignite-sql limit on the number of tables:
1. We are designing a system that might have a huge number of tables. As we understand the ignite-sql table mechanism, all tables can only be created within the so-called 'PUBLIC' schema, which means all table metadata is stored in a global memory space (table space, roughly?) and that space is propagated to every node of the cluster. Am I right?
2. Does Ignite have a limit on table creation? It seems undocumented...
3. Any further suggestions about Ignite table creation? Any related impact on system design?

Hoping for a reply, many thanks!


 

fvyaba

Re: Issue about ignite-sql limit of table quantity

any help?

aealexsandrov

Re: Issue about ignite-sql limit of table quantity

Hi Fvyaba,

There is no information about this in the documentation, but judging by several
places in the code, the table count is serialized as an int32, so it cannot exceed the int32 range.

            void ReadTableMetaVector(ignite::impl::binary::BinaryReaderImpl& reader, TableMetaVector& meta)
            {
                // The number of table metadata entries is read as a 32-bit integer.
                int32_t metaNum = reader.ReadInt32();

                meta.clear();
                meta.reserve(static_cast<size_t>(metaNum));

                for (int32_t i = 0; i < metaNum; ++i)
                {
                    meta.push_back(TableMeta());

                    meta.back().Read(reader);
                }
            }

So in practice, any restriction you hit will be related to the memory available on your nodes.
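
By the way, each table created through CREATE TABLE is backed by its own cache (as far as I can see, in Ignite 2.x the cache is named "SQL_<SCHEMA>_<TABLE>", e.g. SQL_PUBLIC_TBL_1). Here is a small sketch for counting those caches on a node, if you want to check how many SQL tables exist; this is just an illustration, not an official table-count API:

    import java.util.Collection;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class CountSqlTables {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
                // Every CREATE TABLE is backed by a cache, so counting the
                // "SQL_"-prefixed cache names gives a rough number of SQL tables.
                Collection<String> names = ignite.cacheNames();

                long sqlTables = names.stream().filter(n -> n.startsWith("SQL_")).count();

                System.out.println("Caches total: " + names.size() + ", SQL tables: " + sqlTables);
            }
        }
    }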

Thank you,
Andrei
           



fvyaba

Re: Issue about ignite-sql limit of table quantity

Hi Andrei,
Thanks for your answer!
My laptop runs macOS with 16 GB of RAM. I ran a simple test, and it seems table creation costs far more memory and time than data insertion: we got 'IgniteOutOfMemoryException: Out of memory in data region [name=default, initSize=256.0 MiB, maxSize=3.2 GiB, persistenceEnabled=false]' within a few seconds. See below:

* table count < 400 (time cost: 74 s, java process memory: 6.32 GB):

    Ignite ignite = Ignition.start("config/example-ignite.xml");

    try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cfg)) {
      long start = System.currentTimeMillis();

      for (int i = 0; i < 400; i++) {
        cache.query(new SqlFieldsQuery(String.format(
            "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
      }

      System.out.println(System.currentTimeMillis() - start);
    }

* table count 400~500 or more: got the OOME within a few seconds

So my guess is: SQL tables are special in Ignite. Both their memory cost and their creation path are very expensive, and they are not treated as first-class citizens the way plain caches are.

Is this a problem? Think of a multi-tenant scenario where a system separates tenants at the table-name level.



aealexsandrov

Re: Issue about ignite-sql limit of table quantity

Hi Fvyaba,

I investigated your example. In your code a new cache is created every time you
create a new table, and every new cache has some memory overhead. The following
code can help you measure the average allocated memory:

            try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
                for (int i = 1; i < 100; i++) {
                    cache.query(new SqlFieldsQuery(String.format(
                        "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));

                    System.out.println("Count " + i + " ----------------------------------------");

                    for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
                        System.out.println(">>> Memory Region Name: " + metrics.getName());
                        System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
                        System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
                        System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
                        System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
                    }
                }
            }

On my machine with default settings I got the following:

>>> Memory Region Name: Default_Region
>>> Allocation Rate: 3419.9666
>>> Allocated Size Full: 840491008
>>> Allocated Size avg: 8489808
>>> Physical Memory Size: 840491008
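
As a rough back-of-the-envelope check based on the numbers above (the average is taken over roughly 100 caches, and the real overhead will vary with page size and configuration):

    8489808 bytes per cache          ~= 8.1 MiB
    3.2 GiB = 3.2 * 1024^3 bytes     ~= 3435973837 bytes
    3435973837 / 8489808             ~= 404 caches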

So it's about 8 MB per cache (so with 3.2 GB you can create about 400 caches).
I am not sure whether that is acceptable for you, but you can do the following to avoid
org.apache.ignite.IgniteCheckedException: Out of memory in data region:

1) Increase the maximum size of the available off-heap memory region:

       
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="Default_Region"/>
                        <!-- HERE: set the maximum size of the default data region -->
                        <property name="maxSize" value="#{1L * 1024 * 1024 * 1024}"/>
                        <property name="metricsEnabled" value="true"/>
                    </bean>
                </property>
            </bean>
        </property>

2) Use persistence (or swap space):

        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="Default_Region"/>
                        <property name="maxSize" value="#{1L * 1024 * 1024 * 1024}"/>
                        <property name="metricsEnabled" value="true"/>
                        <!-- THIS ONE: enable native persistence for the region -->
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </property>
            </bean>
        </property>

You can read more about it here:

https://apacheignite.readme.io/docs/distributed-persistent-store
https://apacheignite.readme.io/v1.0/docs/off-heap-memory

Please try testing the following:

1) Add the persistence-enabled data storage configuration from option 2 above to your config.

2) Run the following:

public class Example {
    public static void main(String[] args) throws IgniteException {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            ignite.cluster().active(true);

            CacheConfiguration<?, ?> defaultCacheCfg =
                new CacheConfiguration<>("Default_cache").setSqlSchema("PUBLIC");

            defaultCacheCfg.setDataRegionName("Default_Region");

            try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
                for (int i = 1; i < 1000; i++) {
                    // Remove the old table (and its underlying cache) in case it is left over from a previous run.
                    cache.query(new SqlFieldsQuery(String.format("DROP TABLE IF EXISTS TBL_%s", i)));

                    // Create the new table.
                    cache.query(new SqlFieldsQuery(String.format(
                        "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));

                    System.out.println("Count " + i + " ----------------------------------------");

                    for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
                        System.out.println(">>> Memory Region Name: " + metrics.getName());
                        System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
                        System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
                        System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
                        System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
                    }
                }
            }

            ignite.cluster().active(false);
        }
    }
}

dsetrakyan

Re: Issue about ignite-sql limit of table quantity

Hi Fvyaba,

In order to avoid memory overhead per table, you should create all tables as part of the same cache group:
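
For example, a minimal (untested) sketch of the idea, assuming the CACHE_GROUP parameter of CREATE TABLE ... WITH and an illustrative group name "app_tables"; tables that share a cache group share partition and metadata structures, so the per-table overhead is much smaller:

    Ignite ignite = Ignition.start("config/example-ignite.xml");

    CacheConfiguration<?, ?> cfg = new CacheConfiguration<>("Default_cache").setSqlSchema("PUBLIC");

    try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cfg)) {
        for (int i = 0; i < 400; i++) {
            // Every table lands in the same cache group "app_tables".
            cache.query(new SqlFieldsQuery(String.format(
                "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id)) " +
                "WITH \"CACHE_GROUP=app_tables\"", i)));
        }
    }

Caches created through the Java API can join the same group via CacheConfiguration.setGroupName("app_tables").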

D.

fvyaba

Re: Issue about ignite-sql limit of table quantity

Thanks D.




