High CPU (70-80%) utilization with a single Ignite node

alokyadav12
High CPU (70-80%) utilization with a single Ignite node

Hi,
  We are evaluating Apache.Ignite as a cache layer for our application and
have implemented it in .NET.
Our application has a service to which multiple devices connect, and that
service saves byte-array data to the Ignite cache. Each device has its own
key, and its data is saved under that key.
When a device disconnects, we remove its key; when a new device comes up, we
create a key in Ignite and start using it to save and get data.

We need to store data for a certain duration only, so we read from the
cache, remove the old values from the returned data, and save it back using
Put. The sample flow is like below (a minimal sketch follows the steps):

Step 1 - Check if the key exists in the cache
Step 2 - If it exists, get all the values for that key; otherwise create a new key
Step 3 - If the data does not exist, add it to the cache using Put
OR
Step 3 - If the data exists, remove the older data from the returned result,
append the new data, and save it to the cache using Put
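
For illustration, here is a minimal sketch of this flow against the Ignite
.NET thin client. This is not our actual code: the endpoint, the cache name
"deviceData", the DeviceRecord type, and the five-minute retention window
are all illustrative assumptions.

using System;
using System.Collections.Generic;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;
using Apache.Ignite.Core.Client.Cache;

public class DeviceRecord
{
    public long Ticks;     // when the sample was recorded (UTC ticks)
    public byte[] Payload; // raw device data
}

public static class DeviceCacheFlow
{
    private static readonly TimeSpan Retention = TimeSpan.FromMinutes(5);

    public static void SaveSample(ICacheClient<string, List<DeviceRecord>> cache,
                                  string deviceKey, byte[] payload)
    {
        // Step 1/2: check for the key and read its current records, or start fresh.
        // Note: ContainsKey + Get + Put is three network round trips per sample.
        List<DeviceRecord> records = cache.ContainsKey(deviceKey)
            ? cache.Get(deviceKey)
            : new List<DeviceRecord>();

        // Step 3: drop records older than the retention window, append the new one.
        long cutoff = DateTime.UtcNow.Subtract(Retention).Ticks;
        records.RemoveAll(r => r.Ticks < cutoff);
        records.Add(new DeviceRecord { Ticks = DateTime.UtcNow.Ticks, Payload = payload });

        // The whole value is re-serialized and rewritten on every sample,
        // so the per-device cost grows with the amount of retained data.
        cache.Put(deviceKey, records);
    }

    public static void Main()
    {
        var cfg = new IgniteClientConfiguration { Endpoints = new[] { "127.0.0.1:10800" } };
        using (IIgniteClient client = Ignition.StartClient(cfg))
        {
            ICacheClient<string, List<DeviceRecord>> cache =
                client.GetOrCreateCache<string, List<DeviceRecord>>("deviceData");

            SaveSample(cache, "device-1", new byte[] { 1, 2, 3 });
        }
    }
}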

When we connect one or two devices it works fine and we see low CPU usage,
but when we connect more devices, say 10+, CPU utilization slowly climbs
and sometimes reaches 70-80%. We are not doing anything other than saving
and retrieving cache data.

Below is the Ignite configuration:

 <igniteConfiguration publicThreadPoolSize="16" systemThreadPoolSize="8">
   <atomicConfiguration atomicSequenceReserveSize="10"/>
   <discoverySpi type="TcpDiscoverySpi">
     <ipFinder type="TcpDiscoveryMulticastIpFinder">
       <endpoints>
         <string>127.0.0.1:47500</string>
       </endpoints>
     </ipFinder>
   </discoverySpi>
   <Assemblies>
     <string>Path for assemblies</string>
   </Assemblies>
 </igniteConfiguration>
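
For reference, we start the node from this app.config section roughly like
below (assuming the section is registered under the name
igniteConfiguration; this is a sketch, not our exact startup code):

using Apache.Ignite.Core;

public static class Startup
{
    public static void Main()
    {
        // Reads the <igniteConfiguration> section from app.config
        // and starts an Ignite node with it.
        IIgnite ignite = Ignition.StartFromApplicationConfiguration("igniteConfiguration");
    }
}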
 
 
 We removed the thread pool sizes and the behavior is still the same.
 
<http://apache-ignite-users.70518.x6.nabble.com/file/t2641/Capture.png>


 We created a sample application that writes data to the cache at a faster
rate (~100 entries per second) and noticed that CPU spikes to 20-30% just
writing one int value.
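
Roughly, the test loop looks like this (an illustrative sketch, not our
exact test code; the cache name "intTest", the key range, and the pacing
are assumptions):

using System;
using System.Threading;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

public static class PutRateTest
{
    public static void Main()
    {
        var cfg = new IgniteClientConfiguration { Endpoints = new[] { "127.0.0.1:10800" } };
        using (var client = Ignition.StartClient(cfg))
        {
            var cache = client.GetOrCreateCache<int, int>("intTest");

            for (int i = 0; i < 100_000; i++)
            {
                cache.Put(i % 1000, i); // one small int value per operation
                Thread.Sleep(10);       // ~100 puts per second
            }
        }
    }
}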
 
 Are we misconfiguring something that causes this high CPU utilization?
Because of it, other services are not getting enough CPU, and as per our
understanding it should not take that much CPU since we are just saving
and retrieving.
 
 
<http://apache-ignite-users.70518.x6.nabble.com/file/t2641/ignitePerformanceIssue.png>

We are also seeing the following exception in the Ignite window:

[13:02:22,403][SEVERE][grid-nio-worker-client-listener-2-#31][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=2, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-2, igniteInstanceName=null,
finished=false, heartbeatTs=1572548540778, hashCode=282194889,
interrupted=false, runner=grid-nio-worker-client-listener-2-#31]]],
writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
super=GridNioSessionImpl [locAddr=/fe80:0:0:0:41d5:fd90:e62c:56e4%9:10800,
rmtAddr=/fe80:0:0:0:41d5:fd90:e62c:56e4%9:49827, createTime=1572548529138,
closeTime=0, bytesSent=5, bytesRcvd=12, bytesSent0=0, bytesRcvd0=0,
sndSchedTime=1572548538778, lastSndTime=1572548538778,
lastRcvTime=1572548529138, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true, markedForClose=false]]]
java.io.IOException: An existing connection was forcibly closed by the remote host
        at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method)
        at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
        at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:245)
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
        at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358)
        at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1120)
        at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2386)
        at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2153)
        at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
        at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
        at java.base/java.lang.Thread.run(Thread.java:835)

We are not sure why this error shows up.



dmagda

Re: High CPU (70-80%) utilization with a single Ignite node

Hello,

There might be many reasons for CPU utilization spikes. I would debug it
level by level: check the application, then the virtual machine, then the
OS. Use Flight Recorder or similar tools to capture bottlenecks; an example
recording command is below.
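
For example, if the node runs on JDK 11 or newer, you can capture a
one-minute recording with jcmd (the PID and file name below are
placeholders):

jcmd <ignite-node-pid> JFR.start duration=60s filename=ignite-cpu.jfr

Then open the resulting .jfr file in Java Mission Control and look at the
hottest methods and thread activity.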

-
Denis

