Near cache configuration for partitioned cache

Dominik Przybysz wrote:

Hi,
I am using Ignite 2.7.6, and I have 2 server nodes with one partitioned cache and the following configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="cache1"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="statisticsEnabled" value="true"/>
                <property name="backups" value="1"/>
            </bean>
        </property>

        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="47500"/>
            </bean>
        </property>

        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="47100"/>
                <property name="localPortRange" value="100"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>ignite1:47100..47200</value>
                                <value>ignite2:47100..47200</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>

        <property name="clientConnectorConfiguration">
            <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
                <property name="port" value="10800"/>
            </bean>
        </property>

        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="persistenceEnabled" value="true"/>
                        <property name="metricsEnabled" value="true"/>
                    </bean>
                </property>
                <property name="metricsEnabled" value="true"/>
            </bean>
        </property>

        <property name="consistentId" value="{{hostname}}"/>

        <property name="systemThreadPoolSize" value="{{ignite_system_thread_pool_size}}"/>
        <property name="dataStreamerThreadPoolSize" value="{{ignite_cluster_data_streamer_thread_pool_size}}"/>
    </bean>
</beans>

I loaded 1.5 million entries into the cluster via the data streamer.
I tested this topology without a near cache and everything was fine, but when I tried to add a near cache to my client nodes, the server nodes started to keep data on heap and read throughput fell dramatically (from 150k rps to 10k rps).
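
That load was roughly the following (a minimal sketch, not my exact code; the Integer/String types and the config file path are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class Load {
    public static void main(String[] args) {
        // Start a client node and stream entries into "cache1".
        try (Ignite ignite = Ignition.start("client-config.xml");
             IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("cache1")) {
            for (int i = 0; i < 1_500_000; i++)
                streamer.addData(i, "value-" + i);
        } // closing the streamer flushes any remaining buffered entries
    }
}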

My clients' configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>ignite1:47100..47200</value>
                                <value>ignite2:47100..47200</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="dataStreamerThreadPoolSize" value="8"/>
        <property name="systemThreadPoolSize" value="8"/>

        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Cache configuration has to be the same as in server config -->
                <property name="name" value="cache1"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="statisticsEnabled" value="true"/>
                <property name="backups" value="1"/>

                <property name="nearConfiguration">
                    <bean class="org.apache.ignite.configuration.NearCacheConfiguration">
                        <property name="nearEvictionPolicyFactory">
                            <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
                                <property name="maxSize" value="100000"/>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>


In Visor I see:

Nodes for: cache1(@c0)
+=================================================================================================================================+
|       Node ID8(@), IP       | CPUs | Heap Used | CPU Load |   Up Time    |        Size (Primary / Backup)        | Hi/Mi/Rd/Wr  |
+=================================================================================================================================+
| BCA8F378(@n2), 10.100.0.239 | 4    | 32.32 %   | 2.17 %   | 00:38:33.071 | Total: 55204 (55204 / 0)              | Hi: 1671212  |
|                             |      |           |          |              |   Heap: 55204 (55204 / <n/a>)         | Mi: 35034768 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                 | Rd: 36705980 |
|                             |      |           |          |              |   Off-Heap Memory: 0                  | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 905F83EE(@n3), 10.100.0.230 | 4    | 52.56 %   | 6.67 %   | 00:38:33.401 | Total: 54051 (54051 / 0)              | Hi: 1766495  |
|                             |      |           |          |              |   Heap: 54051 (54051 / <n/a>)         | Mi: 34283753 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                 | Rd: 36050248 |
|                             |      |           |          |              |   Off-Heap Memory: 0                  | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 793E1BC9(@n1), 10.100.0.206 | 4    | 99.33 %   | 38.43 %  | 00:51:11.877 | Total: 2999836 (2230060 / 769776)     | Hi: 17323596 |
|                             |      |           |          |              |   Heap: 1499836 (1499836 / <n/a>)     | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 1500000 (730224 / 769776) | Rd: 17323596 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>              | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 0147FB02(@n0), 10.100.0.205 | 4    | 96.48 %   | 40.33 %  | 00:51:11.820 | Total: 2999814 (2269590 / 730224)     | Hi: 17335702 |
|                             |      |           |          |              |   Heap: 1499814 (1499814 / <n/a>)     | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 1500000 (769776 / 730224) | Rd: 17335702 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>              | Wr: 0        |
+---------------------------------------------------------------------------------------------------------------------------------+


The 1st and 2nd entries are client nodes; the 3rd and 4th are server nodes.

What is wrong with my near cache configuration?
Do I have to mirror the entire cache configuration from the server nodes in the client nodes' configuration? (For example, when I omitted the backups parameter, I received the exception "Affinity key backups mismatch".)

--
Pozdrawiam / Regards,
Dominik Przybysz
Evgenii Zhuravlev (ezhuravlev) wrote:

Hi,

A near cache configured in XML creates near caches on all nodes, including the server nodes. As far as I understand, you want to have them on the client side only, right? If so, I'd recommend creating them dynamically: https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
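
In code, that looks roughly like this (a minimal sketch of the approach from the linked page; the key/value types, the 100000 eviction size copied from your XML, and the config file path are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class ClientWithNearCache {
    public static void main(String[] args) {
        // Client config WITHOUT the cacheConfiguration/nearConfiguration blocks.
        Ignite ignite = Ignition.start("client-config.xml");

        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
        nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

        // Creates the near cache on this client only; the server-side
        // "cache1" keeps its plain partitioned configuration.
        IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("cache1", nearCfg);
    }
}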

What kind of operations are you running? Are you trying to access data on a server from another server node? In any case, that many entries on heap on the server nodes looks strange.

Evgenii

Dominik Przybysz wrote:

Hi,
Exactly, I want to have the near cache only on the client nodes. I will check your advice about creating the near cache dynamically.
I have two server nodes which keep the data, and I want to read it from them via my client nodes.
I am also curious about what happened to the heap on the server nodes.

--
Pozdrawiam / Regards,
Dominik Przybysz
Dominik Przybysz wrote:

Hi,
I configured the client node as you described in your email, but heap usage on the server nodes still does not look as expected:

+===================================================================================================================================+
|       Node ID8(@), IP       | CPUs | Heap Used | CPU Load |   Up Time    |         Size (Primary / Backup)         | Hi/Mi/Rd/Wr  |
+===================================================================================================================================+
| 112132F5(@n3), 10.100.0.230 | 4    | 36.49 %   | 76.50 %  | 00:13:10.385 | Total: 75069 (75069 / 0)                | Hi: 19636833 |
|                             |      |           |          |              |   Heap: 75069 (75069 / <n/a>)           | Mi: 39403166 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                   | Rd: 59039999 |
|                             |      |           |          |              |   Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 74786280(@n2), 10.100.0.239 | 4    | 33.94 %   | 81.07 %  | 01:06:23.896 | Total: 74817 (74817 / 0)                | Hi: 22447160 |
|                             |      |           |          |              |   Heap: 74817 (74817 / <n/a>)           | Mi: 44987105 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                   | Rd: 67434265 |
|                             |      |           |          |              |   Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 5AB7B5FD(@n0), 10.100.0.205 | 4    | 69.39 %   | 15.50 %  | 00:52:54.529 | Total: 2706142 (1460736 / 1245406)      | Hi: 43629857 |
|                             |      |           |          |              |   Heap: 150000 (150000 / <n/a>)         | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 2556142 (1310736 / 1245406) | Rd: 43629857 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>                | Wr: 52347667 |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 0608CF95(@n1), 10.100.0.206 | 4    | 42.24 %   | 17.07 %  | 00:52:39.093 | Total: 2706142 (1395406 / 1310736)      | Hi: 43644401 |
|                             |      |           |          |              |   Heap: 150000 (150000 / <n/a>)         | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 2556142 (1245406 / 1310736) | Rd: 43644401 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>                | Wr: 52347791 |
+-----------------------------------------------------------------------------------------------------------------------------------+

The 1st and 2nd entries are clients, and the 3rd and 4th are server nodes.
My client nodes have an LRU near cache with maxSize 100000, and I am querying the cache with 150000 random keys.
But why are there heap entries on the server nodes?
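
(As a side note, the per-node counts that Visor shows can also be read programmatically; a minimal sketch using the public CacheMetrics API, assuming statistics are enabled as in the configuration above and that ignite is an already-started node:)

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMetrics;

public class EntryCounts {
    // Print this node's local entry counts for "cache1".
    static void print(Ignite ignite) {
        CacheMetrics m = ignite.cache("cache1").localMetrics();
        System.out.printf("heap=%d, off-heap=%d (primary=%d / backup=%d)%n",
                m.getHeapEntriesCount(),
                m.getOffHeapEntriesCount(),
                m.getOffHeapPrimaryEntriesCount(),
                m.getOffHeapBackupEntriesCount());
    }
}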

--
Pozdrawiam / Regards,
Dominik Przybysz
Evgenii Zhuravlev (ezhuravlev) wrote:

Hi,

I see that you have persistence enabled. Did you clean the persistence directory before changing the configuration?

Evgenii

Dominik Przybysz wrote:

Hi,
Between the first test, with the near cache configured in XML, and configuring the near cache via Java code, I cleaned the data.
But why should I have to clean the data on the server nodes when I am adding a near cache only on the client nodes to gain performance?
I can do this now while I am testing Ignite, but it won't be acceptable in production.

On Tue, 24 Mar 2020 at 16:47, Evgenii Zhuravlev <[hidden email]> wrote:
Hi,

I see that you have persistence enabled. Did you clean the persistence directory before changing the configuration?

Evgenii

On Tue, 24 Mar 2020 at 02:33, Dominik Przybysz <[hidden email]> wrote:
Hi,
I configured the client nodes as you described in your email, and the heap usage on the server nodes does not look as expected:

+===================================================================================================================================+
|       Node ID8(@), IP       | CPUs | Heap Used | CPU Load |   Up Time    |         Size (Primary / Backup)         | Hi/Mi/Rd/Wr  |
+===================================================================================================================================+
| 112132F5(@n3), 10.100.0.230 | 4    | 36.49 %   | 76.50 %  | 00:13:10.385 | Total: 75069 (75069 / 0)                | Hi: 19636833 |
|                             |      |           |          |              |   Heap: 75069 (75069 / <n/a>)           | Mi: 39403166 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                   | Rd: 59039999 |
|                             |      |           |          |              |   Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 74786280(@n2), 10.100.0.239 | 4    | 33.94 %   | 81.07 %  | 01:06:23.896 | Total: 74817 (74817 / 0)                | Hi: 22447160 |
|                             |      |           |          |              |   Heap: 74817 (74817 / <n/a>)           | Mi: 44987105 |
|                             |      |           |          |              |   Off-Heap: 0 (0 / 0)                   | Rd: 67434265 |
|                             |      |           |          |              |   Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 5AB7B5FD(@n0), 10.100.0.205 | 4    | 69.39 %   | 15.50 %  | 00:52:54.529 | Total: 2706142 (1460736 / 1245406)      | Hi: 43629857 |
|                             |      |           |          |              |   Heap: 150000 (150000 / <n/a>)         | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 2556142 (1310736 / 1245406) | Rd: 43629857 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>                | Wr: 52347667 |
+-----------------------------+------+-----------+----------+--------------+-----------------------------------------+--------------+
| 0608CF95(@n1), 10.100.0.206 | 4    | 42.24 %   | 17.07 %  | 00:52:39.093 | Total: 2706142 (1395406 / 1310736)      | Hi: 43644401 |
|                             |      |           |          |              |   Heap: 150000 (150000 / <n/a>)         | Mi: 0        |
|                             |      |           |          |              |   Off-Heap: 2556142 (1245406 / 1310736) | Rd: 43644401 |
|                             |      |           |          |              |   Off-Heap Memory: <n/a>                | Wr: 52347791 |
+-----------------------------------------------------------------------------------------------------------------------------------+

The 1st and 2nd entries are clients; the 3rd and 4th are server nodes.
My client nodes have an LRU near cache with size 100000, and I am querying the cache with 150000 random keys.
But why are there heap entries on the server nodes?

On Tue, 24 Mar 2020 at 08:40, Dominik Przybysz <[hidden email]> wrote:
Hi,
exactly, I want to have the near cache only on the client nodes. I will try your advice about creating the near cache dynamically.
I have two server nodes which keep the data, and I want to read that data through my client nodes.
I am also curious what happened to the heap on the server nodes.

On Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev <[hidden email]> wrote:
Hi,

A near cache configured in the XML creates near caches on all nodes, including the server nodes. As far as I understand, you want to have them on the client side only, right? If so, I'd recommend creating them dynamically: https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
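
For reference, a minimal sketch of the dynamic approach (the class name, the "client-config.xml" path, and the Integer/String key/value types are placeholders; the LRU settings mirror the nearEvictionPolicyFactory from the client XML earlier in this thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheClient {
    public static void main(String[] args) {
        // Start this JVM as a client node; it stores no primary/backup data.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("client-config.xml")) {
            // Same LRU bound as in the client XML's nearEvictionPolicyFactory.
            NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
            nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<Integer, String>(100000));

            // Creates the near cache on this client only; the cluster-wide
            // configuration of "cache1" on the server nodes is left untouched.
            IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("cache1", nearCfg);

            // The first read of a key fetches it from a server node; repeated
            // reads of the same key are then served from the client-side near cache.
            cache.get(1);
        }
    }
}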

What kind of operations are you running? Are you trying to access data on a server from another server node? In any case, that many heap entries on the server nodes looks strange.

Evgenii

--
Pozdrawiam / Regards,
Dominik Przybysz
ezhuravlev

Re: Near cache configuration for partitioned cache

Dominik,

This cache was already created in the cluster with a near cache on all nodes. Once a cache is created, its configuration can't be changed, so you need to either clean the persistence data or simply destroy the cache and recreate it without a near cache in its configuration. After that, you can dynamically add near caches for the clients.
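
A sketch of the destroy-and-recreate path (the class name, config path, and key/value types are placeholders; the cache settings are copied from the server XML in this thread, and the cluster is assumed to be active):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class RecreateCacheWithoutNear {
    public static void main(String[] args) {
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("client-config.xml")) {
            // Destroy the cache that was created with a cluster-wide near cache.
            // Note: this also drops its data, so it has to be reloaded afterwards.
            IgniteCache<Integer, String> old = ignite.cache("cache1");
            if (old != null)
                old.destroy();

            // Recreate it with the same settings, but without a NearCacheConfiguration.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("cache1");
            cfg.setCacheMode(CacheMode.PARTITIONED);
            cfg.setBackups(1);
            cfg.setStatisticsEnabled(true);

            ignite.createCache(cfg);
        }
    }
}

After the cache is recreated and the data reloaded, the clients can call getOrCreateNearCache as in the earlier sketch.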

Evgenii
