Large Heap with lots of BinaryMetaDataHolders

ssansoy

Large Heap with lots of BinaryMetaDataHolders

Hi, could anyone please help us understand why the heap of a client app holds such
large amounts of data pertaining to binary metadata?

Here it takes up 30 MB, but in our UAT environment we have approximately 50 caches.
The binary metadata that gets added to the client's heap equates to around
220 MB (even for a very simple app that doesn't do any subscriptions; it
just calls Ignition.start() to connect to the cluster).

It seems metadata is kept on the client for every cache, whether the client app
needs it or not. Is there any way to tune this at all, e.g. knowing that a
particular client is only interested in a particular cache?
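
To quantify this, something like the following sketch can dump the binary types a
node holds locally, via the public IgniteBinary API (the config file path is
illustrative, not our real one):

    import java.util.Collection;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.binary.BinaryType;

    public class BinaryMetaDump {
        public static void main(String[] args) {
            // "client-config.xml" is an illustrative path to a client config.
            try (Ignite ignite = Ignition.start("client-config.xml")) {
                Collection<BinaryType> types = ignite.binary().types();

                System.out.println("Known binary types: " + types.size());

                // One BinaryType per registered key/value class; each carries
                // field name/type maps, which is what shows up in the heap dump.
                for (BinaryType type : types)
                    System.out.println(type.typeName() + ": " + type.fieldNames().size() + " fields");
            }
        }
    }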

Screenshot:

<http://apache-ignite-users.70518.x6.nabble.com/file/t2797/binaryMetadata.png>

Thanks



aealexsandrov

Re: Large Heap with lots of BinaryMetaDataHolders

Hello,

Let's start from the very beginning.

1) Could you please share the server and client configs?
2) Could you also share the Java code of your client launcher application?

I will try to investigate your case.

BR,
Andrew

ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

Hi,

Here is the client config:

    <bean id="clientConfig"
class="org.apache.ignite.configuration.IgniteConfiguration">
       <property name="failureHandler">
            <bean
class="org.apache.ignite.failure.StopNodeOrHaltFailureHandler">
                <constructor-arg value="true"/>
                <constructor-arg value="1000"/>

            </bean>
        </property>
        <property name="pluginProviders">
            <array>
                <bean class=&quot;&lt;MY_SecurityPluginProvider>">
                    <constructor-arg ref="nodeSecurityCredential"/>
                    <constructor-arg ref="securityPluginConfiguration"/>
                </bean>
                <bean class=&quot;&lt;MY_SegmentationPluginProvider>">
                </bean>
            </array>
        </property>
        <property name="eventStorageSpi">
            <bean
                    class=&quot;&lt;MY_AUDIT_STORAGE_SPI>"
                    scope="prototype"/>
        </property>

        <property name="discoverySpi" ref="tcpDiscSpiSpecific"/>
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="deploymentMode" value="SHARED"/>
        <property name="gridLogger" ref="igniteLogger"/>
        <property name="metricsLogFrequency" value="0"/>

        <property name="includeEventTypes">
            <list>

                <util:constant
static-field="org.apache.ignite.events.EventType.EVT_TX_STARTED"/>
                <util:constant
                       
static-field="org.apache.ignite.events.EventType.EVT_NODE_SEGMENTED"/>
                <util:constant
                       
static-field="org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED"/>
                <util:constant
static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
                <util:constant
static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
            </list>
        </property>

        <property name="dataStreamerThreadPoolSize" value="4"/>
        <property name="igfsThreadPoolSize" value="4"/>
        <property name="peerClassLoadingThreadPoolSize" value="8"/>

        <property name="connectorConfiguration">
            <bean
class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="threadPoolSize" value="4"/>
            </bean>
        </property>

        <property name="workDirectory" value="${ignite.work.directory}"/>

        <property name="clientMode" value="true"/>
    </bean>

Here is the server config (and associated beans):

    <bean id="serverConfig" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="failureHandler">
            <bean class="org.apache.ignite.failure.StopNodeOrHaltFailureHandler">
                <constructor-arg value="true"/>
                <constructor-arg value="1000"/>
            </bean>
        </property>
        <property name="pluginProviders">
            <array>
                <bean class="<MY_SecurityPluginProvider>">
                    <constructor-arg ref="nodeSecurityCredential"/>
                    <constructor-arg ref="securityPluginConfiguration"/>
                </bean>
                <bean class="<MY_SegmentationPluginProvider>"/>
            </array>
        </property>
        <property name="eventStorageSpi">
            <bean class="<MY_AUDIT_STORAGE_SPI>" scope="prototype"/>
        </property>

        <property name="discoverySpi" ref="tcpDiscSpiSpecific"/>
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="deploymentMode" value="SHARED"/>
        <property name="gridLogger" ref="igniteLogger"/>
        <property name="metricsLogFrequency" value="0"/>

        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TX_STARTED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_NODE_SEGMENTED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CLIENT_NODE_RECONNECTED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
            </list>
        </property>

        <property name="dataStreamerThreadPoolSize" value="4"/>
        <property name="igfsThreadPoolSize" value="4"/>
        <property name="peerClassLoadingThreadPoolSize" value="8"/>

        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="threadPoolSize" value="4"/>
            </bean>
        </property>

        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi" scope="prototype">
                <property name="localPort" value="${ignite.communicationSpiPort:47100}"/>
                <property name="localPortRange" value="20"/>
            </bean>
        </property>
        <property name="segmentationResolvers">
            <array>
                <ref bean="quorumCheckResolver"/>
            </array>
        </property>
        <property name="segmentationPolicy" value="NOOP"/>
        <property name="segmentCheckFrequency" value="5000"/>
        <property name="segmentationResolveAttempts" value="5"/>
        <property name="clientMode" value="false"/>
        <property name="dataStorageConfiguration" ref="persistConf"/>
        <property name="workDirectory" value="${ignite.work.directory}"/>
    </bean>

    <bean id="tcpDiscSpiSpecific" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"
          parent="tcpDiscSpi" scope="prototype">
        <property name="localPort" value="${ignite.discoverySpiPort:47500}"/>
        <property name="localPortRange" value="20"/>
    </bean>

    <bean id="persistConf" class="org.apache.ignite.configuration.DataStorageConfiguration"
          scope="prototype">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration" scope="prototype">
                <property name="metricsEnabled" value="false"/>
                <property name="persistenceEnabled" value="true"/>
                <property name="name" value="Default_Region"/>
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>

    <bean id="tcpPortConfig" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder"
          scope="prototype">
        <property name="addresses" value="${ignite.nodes}"/>
    </bean>

The client app just does this:

    IgniteConfiguration igniteConfiguration = ...; // load the serverConfig Spring bean
    Ignition.start(igniteConfiguration);
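
For completeness, a fully spelled-out version of that launcher might look like the
sketch below (the file name "ignite-client.xml" and bean id "clientConfig" are
assumptions, not our real names):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class ClientLauncher {
        public static void main(String[] args) {
            // Load the Spring context and fetch the IgniteConfiguration bean.
            // Both the file name and the bean id here are assumed.
            try (ClassPathXmlApplicationContext ctx =
                     new ClassPathXmlApplicationContext("ignite-client.xml")) {
                IgniteConfiguration cfg = ctx.getBean("clientConfig", IgniteConfiguration.class);

                // Joining the cluster is all it takes for binary metadata to be
                // exchanged to this node; no cache operations are performed.
                Ignite ignite = Ignition.start(cfg);

                System.out.println("Joined cluster of " + ignite.cluster().nodes().size() + " nodes");
            }
        }
    }

(Ignition.start("ignite-client.xml") also works if the file contains a single
IgniteConfiguration bean.)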



Our caches are created via SQL tables with:

    CREATE TABLE TABLEA (
        MODIFICATIONDATE TIMESTAMP,
        ISACTIVE BOOLEAN,
        VERSIONKEY VARCHAR,
        KEYNAME VARCHAR,
        NAME VARCHAR,
        VALUE VARCHAR,
        VALUETYPE VARCHAR,
        PRIMARY KEY (NAME, VALUETYPE)
    ) WITH "TEMPLATE=MY_TEMPLATE,value_type=TABLEA,key_type=TABLEAKEY";
    CREATE INDEX TABLEA_IDX ON PUBLIC.TABLEA (VALUETYPE, NAME);

Where MY_TEMPLATE is used for all our caches, registered in our server node
startup code:

    CacheConfiguration<BinaryObject, BinaryObject> cacheConfiguration =
        new CacheConfiguration<>("MY_TEMPLATE");
    cacheConfiguration.setRebalanceMode(CacheRebalanceMode.SYNC);
    cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    cacheConfiguration.setCacheMode(CacheMode.REPLICATED);

    ignite.addCacheConfiguration(cacheConfiguration);

The client app prints out lots of this type of thing, in case it is relevant:

2020-11-02 14:16:43,904 [exchange-worker-#70] DEBUG org.apache.ignite.internal.processors.query.h2.SchemaManager [] - Creating DB table with SQL: CREATE TABLE "PUBLIC"."TABLEA" ( ... all the fields etc...

2020-11-02 14:16:43,909 [exchange-worker-#70] DEBUG org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing [] - Creating cache index [cacheId=-1559558230, idxName=_key_PK]
2020-11-02 14:16:43,912 [exchange-worker-#70] DEBUG org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing [] - Creating cache index [cacheId=-1559558230, idxName=TABLEA_IDX]
2020-11-02 14:16:43,915 [exchange-worker-#70] DEBUG org.apache.ignite.internal.processors.resource.GridResourceProcessor [] - Injecting resources [obj=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@706efa6b]

2020-11-02 14:16:43,924 [exchange-worker-#70] DEBUG org.apache.ignite.internal.processors.query.h2.SchemaManager [] - Creating DB table with SQL: CREATE TABLE "PUBLIC"."TABLEB" ( ... all the fields etc...

...and so on, for every cache/table defined on the cluster.





ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

P.S. here's a screenshot of where this is coming from:

Note that some of the binary metadata seems to relate to remote filters and
transformers issued by other client apps connected to the cluster (we do not
want this propagated to this client app either). The rest of the binary
metadata describes the caches and their keys/fields etc.
All of this adds up to hundreds of MB locally.

<http://apache-ignite-users.70518.x6.nabble.com/file/t2797/stack.png>



ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

Hi Andrew, any thoughts on this? Thanks!



Ilya Kazakov

Re: Large Heap with lots of BinaryMetaDataHolders

Hello!

Well, as I understand it, you have a cluster with 50 caches, and each client ends up in this situation with 220 MB of metadata? What operations do your other clients perform; maybe some continuous queries? Also, please tell me how many nodes your cluster has.

------------------------------
Ilya Kazakov

ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

Hi, there are approximately 6 server nodes and only around 10 client nodes at the
moment (there will eventually be several hundred client nodes once this is
enabled in production, if we can overcome these issues, as well as the cluster
metric messages that are causing over 1 MB of allocations per second).

Each client node issues at least one continuous query with a remote filter and
transformer, sometimes as many as four or five continuous queries.

There are currently 50 caches, but this will grow to over 100 eventually.



Ilya Kazakov

Re: Large Heap with lots of BinaryMetaDataHolders

Hmmm. Well, each node shares all metadata (continuous query filters and so on) with every other node, clients included. Did you try checking this situation with a thin client?

I think this is the normal amount of heap for a cluster like this.
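
If you want to try it, connecting with the thin client is only a few lines. A
sketch, where the address is an assumption and 10800 is the default thin-client
port (tables created via CREATE TABLE get caches named SQL_<SCHEMA>_<TABLE> by
default):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    public class ThinClientCheck {
        public static void main(String[] args) throws Exception {
            // "server-host:10800" is an assumed address.
            ClientConfiguration cfg = new ClientConfiguration().setAddresses("server-host:10800");

            try (IgniteClient client = Ignition.startClient(cfg)) {
                // Default cache name for a table created via CREATE TABLE.
                ClientCache<Object, Object> cache = client.cache("SQL_PUBLIC_TABLEA");

                // The thin client requests binary metadata on demand, only for
                // the types it actually touches, rather than receiving the whole
                // cluster's metadata on join.
                System.out.println("TABLEA size: " + cache.size());
            }
        }
    }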

------------
Ilya Kazakov

ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

Hi, unfortunately we can't use the thin client, because we rely heavily on
continuous queries to notify clients without delay when an update has occurred.

Based on the heap dump above, is there anything we can do to get around this
large memory footprint on the client? It seems odd that the client needs
metadata for all 50 caches stored locally, even though the client is only ever
interested in one of them, for example.



ssansoy

Re: Large Heap with lots of BinaryMetaDataHolders

Hi, does anyone have any suggestions for this? Thanks!


