.NET thin client multithreaded?

e.llull

.NET thin client multithreaded?

Hello everyone,

We have just developed a gRPC service in .NET Core that performs a bunch of cache gets for every RPC. We had been using the Apache.Ignite NuGet package, starting the Ignite node in client mode (thick client), but we just changed it to use the thin client and we see much, much worse response times: from a 4ms average and 15ms 95th percentile to a 21ms average and 86ms 95th percentile, and the times get even worse under load: peaks of a 115ms average and a 1s 95th percentile.

We were expecting some level of degradation in response times when changing from the thick to the thin client, but not this much. In fact, trying to reduce the impact, we've deployed an Ignite node in client mode on every host where our gRPC service is deployed, and the gRPC service connects to the local Ignite node.

The gRPC service receives several tens of concurrent requests when under load, but we instantiate one single IIgniteClient (via Ignition.StartClient()) shared by all the threads serving the RPC requests. I've seen the following in the Java Thin Client documentation (https://apacheignite.readme.io/docs/java-thin-client-initialization-and-configuration#section-multithreading):

Thin client is single-threaded and thread-safe. The only shared resource is the underlying communication channel, and only one thread reads/writes to the channel while other threads are waiting for the operation to complete.

Use multiple threads with thin client connection pool to improve performance

Presently thin client has no feature to create multiple threads to improve throughput. You can create multiple threads by getting thin client connection from a pool in your application to improve throughput.

But there is no such warning in the .NET thin client documentation (https://apacheignite-net.readme.io/docs/thin-client).

Is it possible that the huge increase in the response times comes from contention when multiple gRPC threads are using the same thin client (and thus the same ClientSocket) to communicate with the cluster?

In the meantime we will use a thin client pool, as recommended in the Java documentation, to see if it improves performance.
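Concretely, what we have in mind is just a fixed set of independently created clients handed out round-robin, so concurrent requests spread over several sockets instead of serializing on one. A minimal sketch, with the client type left generic (in our case each slot would hold the result of a separate Ignition.StartClient() call; the names and pool size are illustrative, not a tested implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

/** Fixed-size round-robin pool: each caller grabs the next client, so
 *  concurrent requests spread over N connections instead of one socket. */
final class ClientPool<T> {
    private final List<T> clients = new ArrayList<>();
    private final AtomicLong next = new AtomicLong();

    ClientPool(int size, Supplier<T> factory) {
        for (int i = 0; i < size; i++)
            clients.add(factory.get()); // e.g. () -> Ignition.startClient(cfg)
    }

    T get() {
        // floorMod keeps the index non-negative even if the counter wraps.
        long n = next.getAndIncrement();
        return clients.get((int) Math.floorMod(n, (long) clients.size()));
    }
}
```

Each call site would then do something like pool.get().cache(...).get(key); the pool never shrinks, which seems to match the thin client's cheap, long-lived connections.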


Thank you very much.
Alexandr Shapkin

RE: .NET thin client multithreaded?

Hello,

Is it possible that the huge increase in the response times comes from contention when multiple gRPC threads are using the same thin client (and thus the same ClientSocket) to communicate with the cluster?

Yes, that's correct. Threads will share the same TCP connection by default.

But there is no such warning in the .NET thin client documentation (https://apacheignite-net.readme.io/docs/thin-client).

I think we need to update the docs to include that warning.

In the meantime we will use a thin client pool as recommended in the Java documentation to see if it improves the performance.

Well, in general yes, it should help improve performance.

Also, it is worth asking: how many server nodes do you have in the cluster? Is your data well collocated?

A thin client uses a single connection to a single node, but the requested data could be located on a different one, which can cause additional network overhead.

Please refer to the affinity awareness wiki page [1].

[1] - https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients

Alex Shapkin
dmagda

Re: .NET thin client multithreaded?

In reply to this post by e.llull
Please continue using the Ignite.NET thick client until we release partition awareness for the thin one. That feature has already been developed and is to be released in Ignite 2.8.

Presently, the thin client sends all requests via a proxy, which is one of the server nodes it is connected to, while the thick client always goes directly to the node that keeps a given key. The proxy is a bottleneck, and that's why you see such a performance drop.
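To illustrate with a toy model (illustrative only — Ignite's actual RendezvousAffinityFunction is more involved than this): a partition-aware client knows the key-to-partition-to-node mapping and can send each request straight to the owner, while a proxy-based client funnels everything through one node and pays an extra hop for every key that does not live there:

```java
/** Toy model of partition-aware routing: key -> partition -> owning node.
 *  Illustrative only; Ignite's real affinity function is more involved. */
final class ToyAffinity {
    private final int partitions;
    private final String[] partitionOwner; // partition -> node id

    ToyAffinity(int partitions, String[] nodes) {
        this.partitions = partitions;
        this.partitionOwner = new String[partitions];
        // Toy assignment: partitions dealt out to nodes round-robin.
        for (int p = 0; p < partitions; p++)
            partitionOwner[p] = nodes[p % nodes.length];
    }

    int partitionFor(Object key) {
        // Mask the hash to keep it non-negative before taking the modulo.
        return (key.hashCode() & 0x7fffffff) % partitions;
    }

    /** A partition-aware client sends the request to this node directly;
     *  a proxy-based client always talks to one fixed node and pays an
     *  extra network hop whenever the owner is a different node. */
    String ownerOf(Object key) {
        return partitionOwner[partitionFor(key)];
    }
}
```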

-
Denis


e.llull

Re: .NET thin client multithreaded?

Hi Denis,

We already know that using the thin client introduces an extra network hop, and to minimize its impact we've deployed a thick-client node collocated with our application; every application instance connects to the local Ignite node.

We'd love to continue using the Ignite.NET thick client, but our .NET Core application was segfaulting every few minutes, and as soon as we moved to the thin client the segfaults stopped. We tracked the segfaulting code down to the monitoring system we use: Prometheus. When Prometheus gets the number of open file handles (https://github.com/prometheus-net/prometheus-net/blob/2f0d409bf89ba66af2e31caf6d605b4824c797d7/Prometheus.NetStandard/DotNetStats.cs#L89), sometimes the application exits with a SIGSEGV signal, sometimes with a SIGABRT signal, and sometimes it works; since we gather stats from the application every minute, it is just a matter of time before the application fails. On the other hand, we use the same Prometheus client in several other .NET Core applications without this problem.

We suspect that the .NET Core CLR and the embedded JVM that Ignite starts don't get along very well (maybe they mess with each other's signal handlers), so we tried separating the two runtimes by using the thin client. The result: no more segfaults or aborts, but worse response times.

If you, or anybody, have any idea about the origin of the segfaults, it would be very welcome.


ptupitsyn

Re: .NET thin client multithreaded?

Eduard,

I tried the following to reproduce segfaults according to your description:
* Start Ignite server node
* In an infinite loop, perform Cache.Put operations
* In the same loop, access the Process.HandleCount property

On Ubuntu 16.04, .NET Core 2.2.103 I see no crashes.
Can you please provide more details about your environment? Maybe a minimal reproducer?
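In spirit, the loop is the following (a generic sketch, not the actual .NET repro — written in Java here only to keep one example language for the thread; on Linux the handle count boils down to the number of entries in /proc/self/fd, which is also what prometheus-net reads):

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

/** Stress loop in the spirit of the repro above: interleave cache puts
 *  with reading the process's open-file-handle count. Linux-specific:
 *  the count is the number of entries under /proc/self/fd. */
final class HandleCountStress {
    static int openHandles() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length; // -1 when /proc is unavailable
    }

    static int run(int iterations) {
        Map<Integer, String> cache = new HashMap<>(); // stand-in for Cache.Put
        int last = 0;
        for (int i = 0; i < iterations; i++) {
            cache.put(i % 100, "value-" + i);
            last = openHandles();
        }
        return last;
    }
}
```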
