Kubernetes - Access Ignite Cluster Externally

Ryan Samo-2

Kubernetes - Access Ignite Cluster Externally

CONTENTS DELETED
The author has deleted this message.
ilya.kasnacheev

Re: Kubernetes - Access Ignite Cluster Externally

Hello Ryan!

I might be terribly wrong, but my first guess is that you should use
TcpDiscoveryKubernetesIpFinder for server nodes inside containers, and
TcpDiscoveryVmIpFinder for outside client nodes. If you're reaching for
setAddresses(), that's the one to take.
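
A minimal sketch of the external-client side of that split (the server
addresses below are hypothetical; substitute whatever addresses your cluster
actually exposes to the outside):

import java.util.Arrays;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ExternalClient {
    public static void main(String[] args) {
        // Static IP finder: the external client is told where the servers
        // are instead of asking the Kubernetes API.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "203.0.113.10:47500",   // hypothetical externally reachable server
            "203.0.113.11:47500"));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        Ignition.start(cfg);
    }
}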

With regard to breakage, there's no active work on the TCP discovery IP
finders as far as I know, so it is unlikely. However, there is a branch of
development for ZooKeeper-based discovery (not just finding nodes, but
information exchange between nodes). When that is implemented, your code
might break or become irrelevant. I'm not sure there wouldn't be a legacy
discovery option, though.

Regards,
Ilya.



David Wimsey

Re: Kubernetes - Access Ignite Cluster Externally

In reply to this post by Ryan Samo-2
To access Ignite from outside of Kubernetes, you need to enable hostNetwork for your Ignite server pods (this is not enabled with default permissions, as it goes against many of the core principles of Kubernetes). This allows the Ignite server to attach directly to the host network interface of the machine it's running on, rather than to the internal networking Kubernetes sets up to hide the container from the outside world, so the pod can expose itself to, and connect to, the external network directly.

The Kubernetes discovery SPI can then be used by your external clients to connect into the cluster from outside. Your external clients will need to be able to authenticate to the Kubernetes API and read the service information the same way the internal pods do, except that now there is an address in the list that the external clients can actually connect to. You don't get this authentication for free as you would inside a Kubernetes pod, but it works fine with manual configuration.

https://apacheignite-mix.readme.io/docs/kubernetes-discovery
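
As a minimal sketch of the pod-spec side of this (the pod name and image tag
are placeholders; the hostNetwork line is the point):

apiVersion: v1
kind: Pod
metadata:
  name: ignite-server             # hypothetical name
spec:
  hostNetwork: true               # bind directly to the node's network interfaces
  containers:
    - name: ignite
      image: apacheignite/ignite:2.4.0   # hypothetical tag; use your own image
      ports:
        - containerPort: 47500    # discovery
        - containerPort: 47100    # communication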


> On Mar 15, 2018, at 4:33 PM, Ryan Samo <[hidden email]> wrote:
>
> Hey guys and gals!
> I have created a development environment for Ignite 2.3 Native Persistence
> on Kubernetes 1.9.3 and have it up and running successfully. I then
> attempted to activate one of my clusters via a Java client call, and
> discovered that the TcpDiscoveryKubernetesIpFinder doesn't support the
> "addresses" property; I received the following error:
>
> *Caused by: org.springframework.beans.NotWritablePropertyException: Invalid
> property 'addresses' of bean class
> [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]:
> Bean property 'addresses' is not writable or has an invalid setter method.
> Does the parameter type of the setter match the return type of the getter?*
>
> It turns out that in the documentation for the
> TcpDiscoveryKubernetesIpFinder class, there is a statement that says:
>
> *"An application that uses Ignite client nodes as a gateway to the cluster
> is required to be containerized as well. Applications and Ignite nodes
> running outside of Kubernetes will not be able to reach the containerized
> counterparts."*
>
> I get that in most cases, it's best to run all of the components from within
> Kubernetes for security purposes, but our use case is to create an Ignite
> cluster and then hit it from external clients. In digging through the
> TcpDiscoveryKubernetesIpFinder code, I see that it inherits
> TcpDiscoveryIpFinder which has the methods we need to specify Ignite server
> addresses. With that being said, my questions are...
>
> 1.) Is there any development going on around the
> TcpDiscoveryKubernetesIpFinder class to possibly add external client
> connections outside of Kubernetes?
>
> 2.) If I decided to build my own version of the
> TcpDiscoveryKubernetesIpFinder class that allows for external connections,
> would that be broken in upcoming releases?
>
> Thanks!

Ryan Samo-2

Re: Kubernetes - Access Ignite Cluster Externally

CONTENTS DELETED
The author has deleted this message.
dmagda

Re: Kubernetes - Access Ignite Cluster Externally

Hi Ryan,

I see you've already come across the ticket intended to bring the capability of connecting Ignite nodes running both inside and outside Kubernetes:

As you can see, there is no progress for now, and I'd appreciate it if you took over the ticket and developed a new version of the IP finder. Are you interested in that? The community will be happy to support you.

Also, you said that you're using Ignite persistence in production. Do you attach it to a StatefulSet? Does it work? We have a ticket to document this, but nobody from the community has tried to set it up yet:

--
Denis

On Thu, Mar 22, 2018 at 7:31 AM, Ryan Samo <[hidden email]> wrote:
Ok thank you all for the tips, I will give it a try!

sid_human

Re: Kubernetes - Access Ignite Cluster Externally

In reply to this post by David Wimsey
Hi

I recently came across this same issue. I have multiple Ignite server pods
up in a cluster, running across 3-4 nodes. At the same time, I have Ignite
clients that connect and work perfectly fine as long as they run as pods on
those nodes.

I tried to implement an external client connecting to the Ignite server pod
cluster, but to no avail. I followed the readme.io pages on Kubernetes
discovery. Here are the issues I ran into:

1) Setting hostNetwork = true in the YAML configuration of the Ignite pods
causes a networking problem: with multiple containers per node, each
container tries to bind the same host port, which produces an error. So I
resorted to running one container per node.
2) I added a comment to IGNITE-4161
<https://issues.apache.org/jira/browse/IGNITE-4161> with the exact client
configuration, which does seem to retrieve the server pods' IP addresses,
but no connection is made.

I'm afraid no one has tried this successfully before, and no development on
external clients has been done, as someone posted in the same JIRA issue.

Thank you.



Roman Shtykh

Re: Kubernetes - Access Ignite Cluster Externally

I have been playing with this for a while, and managed to get an external client node into the topology, but it failed to communicate with the cluster. Some points:
1. Used TcpDiscoveryKubernetesIpFinder for server nodes.
2. Exposed pods via NodePort on custom ports (higher than 30000); more flexible than hostNetwork = true (a sketch of such a Service follows this list).
3. Created an address resolver for the discovery and communication SPIs, so that pod addresses are conveyed to the external client (a sketch follows as well).
4. Used TcpDiscoveryVmIpFinder for the client to successfully join the topology.
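
For point 2, a minimal sketch of exposing the Ignite ports via NodePort (the
service name, selector, and port numbers are assumptions; adjust them to
your deployment):

apiVersion: v1
kind: Service
metadata:
  name: ignite-external           # hypothetical name
spec:
  type: NodePort
  selector:
    app: ignite                   # hypothetical pod label
  ports:
    - name: discovery
      port: 47500                 # port inside the pod
      nodePort: 30047             # externally reachable on every node (>30000)
    - name: communication
      port: 47100
      nodePort: 30147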

Unfortunately, TcpCommunicationSpi fails to connect to the server nodes. The client has all the addresses of the ClusterNode it attempts to connect to, including the internal pod IP, 127.0.0.1, 0:0:0:0:0:0:0:0, and the external IP, but fails to reach the external IP from the list (see TcpCommunicationSpi.createTcpClient). Having ClusterNode expose only the address registered by the address resolver might fix it (I haven't checked yet). In any case, I think the client should not be given internal addresses to communicate over when external addresses are provided, should it?
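
For point 3, one way to build such an address resolver is Ignite's built-in
BasicAddressResolver with a static internal-to-external mapping (the pod and
node addresses below are hypothetical and assume the NodePort sketch above):

import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.configuration.BasicAddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerAddressResolverConfig {
    public static IgniteConfiguration configure() throws UnknownHostException {
        // Map "internal pod address:port" to "externally reachable address:port".
        Map<String, String> map = new HashMap<>();
        map.put("10.244.1.5:47500", "203.0.113.10:30047"); // discovery
        map.put("10.244.1.5:47100", "203.0.113.10:30147"); // communication

        // The resolver is consulted by both the discovery and communication
        // SPIs, so remote nodes are handed the mapped (external) addresses.
        return new IgniteConfiguration()
            .setAddressResolver(new BasicAddressResolver(map));
    }
}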

-- Roman


Ryan Samo

Re: Kubernetes - Access Ignite Cluster Externally

In reply to this post by dmagda
Denis,
Sorry for the late reply; I have been away from a computer for a few days.
Today, I do not use Kubernetes in production; it is only in use for POC work
so that I can understand how to utilize the K8s platform and test whether it
works well with Ignite. So far, Ignite seems to be playing nicely with
Kubernetes, although I have not fully tested the performance, etc.

The POC environment is running Kubernetes 1.9.6 with Ignite Fabric 2.4. I am
using StatefulSets to make sure Ignite has some stickiness to the nodes in
case of failures, etc. I am also utilizing local storage on each of the K8s
nodes so that Ignite persistence will work, and it does, very well in fact.

*Making it work*

The following link will give you everything you need to know to enable local
storage on Kubernetes:
https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume

- In order to use persistence, you have to have K8s version 1.9.3 or above.
This is because there are some "feature gates" that you must enable for
persistence to operate properly. If your K8s version is >= 1.9.3 and
< 1.10.0, you need "feature-gates:
PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true". If
your K8s version is >= 1.10.0, you only need "feature-gates:
BlockVolume=true".

- Next, you will need to create a StorageClass with "volumeBindingMode:
WaitForFirstConsumer". This makes volume binding wait until a pod that
consumes the claim is actually scheduled, as sketched below.
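
A minimal sketch of such a StorageClass (the name is arbitrary;
"kubernetes.io/no-provisioner" is the conventional value for statically
provisioned local volumes):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage                        # referenced by the StatefulSet below
provisioner: kubernetes.io/no-provisioner    # PVs come from the DaemonSet, not dynamic provisioning
volumeBindingMode: WaitForFirstConsumer      # delay binding until a consuming pod is scheduled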

- Now we need a DaemonSet that monitors the disks we want to use for local
storage and checks for mounts. When it sees a new mount, it creates a
PersistentVolume (PV) in K8s. Fortunately, Quay has a Docker image that does
everything we need: "quay.io/external_storage/local-volume-provisioner".
This is awesome, because we can add more storage, or mounts, or both, and
our DaemonSet will pick them up automatically (see the sketch after this
item)!
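
A rough sketch of that DaemonSet, assuming disks are mounted under
/mnt/disks and treating the service account as hypothetical (the real
provisioner also expects a ConfigMap describing the discovery directory; see
the local-volume repository linked above for the complete manifests):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin   # hypothetical; needs RBAC to create PVs
      containers:
        - name: provisioner
          image: quay.io/external_storage/local-volume-provisioner
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName       # lets the provisioner tag PVs with the node
          volumeMounts:
            - name: discovery-dir
              mountPath: /mnt/disks              # each mount discovered here becomes a PV
              mountPropagation: HostToContainer
      volumes:
        - name: discovery-dir
          hostPath:
            path: /mnt/disks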

- Now you just need to create a StatefulSet that uses an Ignite Docker
image, has a "volumeClaimTemplates" section pointing to the
"volume.beta.kubernetes.io/storage-class" we specified earlier, and a
"volumeMounts" section pointing to a mount point that was auto-discovered by
the DaemonSet above. When you start the StatefulSet, it will create a
PersistentVolumeClaim for the mount you specified and bind it to the
StatefulSet (a sketch follows).

Bingo, you have persistent storage!

As to your request for contributing to the cause of external K8s discovery,
I would love to do so, but at this time I must respectfully decline, as I
have many irons in the fire. I appreciate the gesture and hope to help the
community in the future!

I also hope this post helps everyone in their K8s Ignite adventures!



dmagda

Re: Kubernetes - Access Ignite Cluster Externally

Hello Ryan,

Astonishing. Thanks for contributing this step-by-step guidance! We'll prepare a special documentation page for that:

--
Denis


dmagda

Re: Kubernetes - Access Ignite Cluster Externally

In reply to this post by Roman Shtykh
Roman,

Yes, it shouldn't be required to feed external addresses directly if they are listed in the address resolver. It looks like it's inevitable that a special IP finder is required here.

--
Denis


Rob Drawsky

Re: Kubernetes - Access Ignite Cluster Externally

In reply to this post by Roman Shtykh
Roman,

I know this is old, but thanks for your summary. I was able to get your
method to work, but with an additional step in the config.

It seems that if localAddress is not set via config, Ignite will enumerate
all interfaces (of which there are usually more than one for Docker
containers in general) and report those after discovery, causing problems
with joining the cluster -- I believe because I didn't have AddressResolver
entries for all of those interface IPs.

To solve this, I set the localAddress (for both the TcpDiscovery and
TcpCommunication SPIs) to the pod's address (available via env), and use
that address in the AddressResolver mapping to the external NodePorts.
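
A minimal sketch of that, assuming the pod IP is injected through the
downward API into a hypothetical POD_IP environment variable:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class PodLocalAddressConfig {
    public static IgniteConfiguration configure() {
        // POD_IP is assumed to be populated via the downward API (status.podIP).
        String podIp = System.getenv("POD_IP");

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setLocalAddress(podIp);   // report only the pod address in discovery

        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalAddress(podIp);    // same for the communication SPI

        // Use podIp as the internal side of the AddressResolver mapping
        // to the external NodePort addresses.
        return new IgniteConfiguration()
            .setDiscoverySpi(discoSpi)
            .setCommunicationSpi(commSpi);
    }
}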

After doing so, a remote 'thick' Ignite client is able to join the cluster
completely. However, I am only using a single Ignite node as a server so
far.

In the next week or so, I will try to get this to work with the
TcpDiscoveryKubernetesIpFinder, which we use to create multi-node server
clusters in Kubernetes. I am unsure whether using AddressResolver to do the
mapping will work in multi-node clusters without additional steps (in this
case the NodePorts would be load balanced for external access), and the
Kubernetes-internal Ignite pods may end up getting mapped to the external
ports as well. I am not sure Ignite will be happy with that. It seems
different contexts would be needed for AddressResolver: the internal nodes
should talk to each other over internal IPs, while the external client
should use external addresses... and that may be a problem.

I am not very familiar with the Kubernetes API yet, and wonder if there may
be a way to create a Kubernetes IP finder for an external client/node that
would be smart enough to query the service's external ports automatically. I
think this approach may still have a problem: the TcpCommunication address
the client receives after discovery will either be an internal address, if
the internal Kubernetes nodes aren't using an AddressResolver, or, if they
are, the internal Ignite nodes will end up using the external addresses to
talk to each other. I am not sure it will be possible for Ignite to let some
nodes in the cluster know another node by one address while other nodes know
it by a different one, but it may work. Even if it does, it may be a trick
to get the right IP for the different contexts (internal versus external) to
the other nodes.

I am motivated to find a solution to this, as it solves some
development-time problems for us; it would be great if someone could provide
some guidance or hints.

Thanks,
--Rob






