Threadpools and .WithExecute() for C# clients

Raymond Wilson

Some time ago I ran into an issue with thread pool exhaustion and deadlocking in AI 2.2. This is the original thread: http://apache-ignite-users.70518.x6.nabble.com/Possible-dead-lock-when-number-of-jobs-exceeds-thread-pool-td17262.html

At the time .WithExecutor() was not implemented in the C# client, so there was little option but to expand the size of the public thread pool sufficiently to prevent the deadlocking.
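For reference, a minimal sketch of that workaround in Ignite.NET (not code from this thread): it simply enlarges the public pool via IgniteConfiguration, and the pool size shown is illustrative only.

    using Apache.Ignite.Core;

    // Workaround sketch: enlarge the public thread pool so that compute and
    // messaging callbacks are less likely to be starved. 128 is illustrative.
    var cfg = new IgniteConfiguration
    {
        PublicThreadPoolSize = 128
    };

    using (var ignite = Ignition.Start(cfg))
    {
        // ... start caches, compute and messaging as usual ...
    }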

We have been revisiting this issue and see that .WithExecutor() is not supported in the AI 2.7.5 client.

Can this be supported in the C# client, or is there a workaround in the .NET environment that does not require this capability?

Thanks,
Raymond.

Alexandr Shapkin

Hi, Raymond!

As far as I can see, there are no plans for porting the custom executor configuration to the .NET client right now [1].

Could you remind me why you need a separate pool instead of the default public pool?

[1] https://issues.apache.org/jira/browse/IGNITE-6566

Raymond Wilson

Hi Alexandr,

To summarise from the original thread: say I have server A that accepts requests. It contacts server B to help process those requests. Server B sends in-progress results to server A using the Ignite messaging fabric. If the thread pool in server A is saturated with inbound requests, then there are no threads available to service the messaging traffic from server B to server A, resulting in a deadlock.

In the original discussion it was suggested that creating a custom thread pool to handle the Server B to Server A traffic would resolve this.
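For concreteness, here is a minimal sketch of that messaging pattern using the Ignite.NET messaging API; the topic name, payload type and listener body are illustrative, not code from this system.

    using System;
    using Apache.Ignite.Core;
    using Apache.Ignite.Core.Messaging;

    // Server A side: listens for partial results sent by Server B. These callbacks
    // are dispatched on an Ignite thread pool, so if that pool is fully occupied by
    // inbound requests the partial results never get processed - the deadlock above.
    public class PartialResultListener : IMessageListener<byte[]>
    {
        public bool Invoke(Guid sourceNodeId, byte[] partialResult)
        {
            // ... apply the request's business rules to this part of the response ...
            return true; // keep the subscription alive
        }
    }

    // Server A subscribes (topic name is illustrative):
    //   ignite.GetMessaging().LocalListen(new PartialResultListener(), "b-to-a-results");
    // Server B publishes each partial result as it becomes available:
    //   ignite.GetMessaging().Send(partialBytes, "b-to-a-results");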

Thanks,
Raymond.

Alexandr Shapkin

Hi,

Can you share a more detailed use case, please?

Right now it's not clear why you need the messaging fabric. If you are interested in progress tracking, you could try the cache API or continuous queries, for example.

What are the sources of the inbound requests? Are they client requests?

What is your cluster config? How many nodes do you have for your distributed computations?

Raymond Wilson

The source of inbound requests into Server A is client applications.

Server B is really a cluster of servers that are performing clustered transformations and computations across a data set.

I originally used IComputeJob and similar functions, which work very well but have the restriction that they return the entire result set from a Server B node in a single response. These result sets can be large (hundreds of megabytes and more), which makes life pretty hard for Server A if it has to field multiple incoming responses of that size. So these requests now progressively send partial responses back to Server A using the Ignite messaging fabric, and as Server A receives each part of the overall response it processes it according to the business rules relevant to the request.
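For comparison, a minimal sketch of that single-response compute style; the function name and result type are illustrative placeholders, not code from this system.

    using System;
    using Apache.Ignite.Core.Compute;

    // Single-response style: each Server B node returns its entire (potentially
    // very large) result in one piece, which Server A must buffer and process.
    [Serializable]
    public class TransformFunc : IComputeFunc<byte[]>
    {
        public byte[] Invoke()
        {
            // ... run the clustered transformation and build the full result ...
            return Array.Empty<byte>();
        }
    }

    // Server A: every node's complete result arrives at once, e.g.
    //   ICollection<byte[]> results = ignite.GetCompute().Broadcast(new TransformFunc());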

The cluster config and numbers of nodes are not really material to this.

Raymond.

Raymond Wilson

Alexandr,

If .WithExecutor() is not planned to be made available in the C# client, what is the plan to support custom thread pools from the C# side of things?
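To make the request concrete, this is roughly what a port of Java's withExecutor(...) could look like on the .NET side; everything below is hypothetical and is not an existing Ignite.NET API (see IGNITE-6566).

    // Hypothetical .NET counterpart of Java's ExecutorConfiguration + withExecutor().
    //
    // 1. Declare a named custom pool in the node configuration:
    //    var cfg = new IgniteConfiguration
    //    {
    //        ExecutorConfiguration = new[]
    //        {
    //            new ExecutorConfiguration { Name = "b-to-a-pool", Size = 16 }
    //        }
    //    };
    //
    // 2. Route the Server B -> Server A work onto that pool instead of the public one:
    //    var compute = ignite.GetCompute().WithExecutor("b-to-a-pool");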

Thanks,
Raymond.


Denis Magda

Looping in the dev list.

Pavel, Igor and other C# maintainers, this looks like a valuable extension of our C# APIs. Shouldn't this be a quick addition to Ignite?

-
Denis


Pavel Tupitsyn

Denis, yes, looks like a simple thing to add.

Denis Magda

Pavel,

Do we already have a ticket or do you want me to create one?

-
Denis


Pavel Tupitsyn

Denis, I've just created one: https://issues.apache.org/jira/browse/IGNITE-12012

Thanks

Raymond Wilson

Thanks Pavel!

Does the priority on the Jira ticket suggest this will target AI 2.8?

Thanks,
Raymond.

Pavel Tupitsyn

Most probably - yes

Ilya Kasnacheev

Hello!

It is possible that you need to write a tiny Java service to do that and call it from your C# code (whether via Ignite or not).

This is definitely easier than trying to roll out .NET support for WithExecutor().
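A minimal sketch of how that Java-service route might be consumed from Ignite.NET, assuming a Java service deployed under an agreed name with a matching method; the interface, service name and payload are illustrative.

    using Apache.Ignite.Core;

    // .NET-side view of a hypothetical Java service that submits the work to a
    // custom executor on the Java side. All names here are illustrative only.
    public interface IExecutorBridgeService
    {
        void SubmitToCustomPool(byte[] request);
    }

    // Obtain a proxy to the deployed Java service and call it:
    //   var bridge = ignite.GetServices()
    //       .GetServiceProxy<IExecutorBridgeService>("executor-bridge");
    //   bridge.SubmitToCustomPool(payload);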

Regards,
--
Ilya Kasnacheev

