Ignite - Spark integration

Paolo Di Tommaso

Ignite - Spark integration

Hi, 

I'm giving the Spark integration provided by Ignite a try, using the embedded deployment mode described here

I've set up a local cluster made up of a master and a worker node.

This is my basic Ignite-Spark application: 

import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteOutClosure;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaLaunchIgnite {

    public static void main(String... args) {
        // -- spark context
        SparkConf sparkConf = new SparkConf().setAppName("Spark-Ignite");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        // -- ignite configuration (the closure is evaluated on the Spark workers)
        IgniteOutClosure<IgniteConfiguration> cfg = new IgniteOutClosure<IgniteConfiguration>() {
            @Override public IgniteConfiguration apply() {
                return new IgniteConfiguration();
            }};

        // -- ignite context
        JavaIgniteContext<Integer, Integer> ic = new JavaIgniteContext<Integer, Integer>(sc, cfg);
        final Ignite ignite = ic.ignite();

        // -- print the local node id on every node in the cluster
        ic.ignite().compute().broadcast(new IgniteRunnable() {
            @Override public void run() {
                System.out.println(">>> Hello Node: " + ignite.cluster().localNode().id());
            }});

        ic.close(true);
        System.out.println(">>> DONE");
    }
}

However, when I submit it, it simply hangs. Using the Spark web console, I can see that the application is correctly deployed and running, but it never stops.

In the Spark worker node I can't find any log produced by Ignite (which is supposed to deploy an Ignite worker). See here

Instead, I can see the Ignite output in the spark-submit log. See here


Does anybody have any clue why this app just hangs? 


Cheers,
Paolo

Denis Magda

Re: Ignite - Spark integration

Hi Paolo,

The application hangs because the Ignite client node that is used by the Spark worker can't connect to the cluster:

3797 [tcp-client-disco-msg-worker-#4%null%] WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - IP finder returned empty addresses list. Please check IP finder configuration and make sure multicast works on your network. Will retry every 2 secs.

To fix the issue you have to use one of the IP finder implementations [1] that lets the cluster nodes find each other.
One of the most common solutions is to use TcpDiscoveryVmIpFinder [2], listing the IPs of all the cluster nodes, and set this IP finder on the IgniteConfiguration at node startup.
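
Something along these lines should work (just a minimal sketch; the address below is a placeholder that has to be replaced with the real IPs and discovery ports of your cluster nodes):

import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Static IP finder: list the addresses (optionally with a discovery port
// range) of all the nodes that form the cluster. The address below is a
// placeholder.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("192.168.1.10:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

// Set the discovery SPI on the IgniteConfiguration used to start the node.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);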

You may also want to refer to the following discussion, where the user initially had a similar issue with IP finders.


Denis

Paolo Di Tommaso

Re: Ignite - Spark integration

Hi, 

I'm not sure that is the problem, because when I deploy a local Ignite cluster it works using multicast discovery.

However, I've tried using TcpDiscoveryVmIpFinder and providing the local addresses. It changes the warning message, but it continues to hang.

12516 [tcp-client-disco-msg-worker-#4%null%] WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed to connect to any address from IP finder (will retry to join topology every 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]
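
For reference, the configuration closure now looks roughly like this (a rough sketch; the addresses are the two local interfaces listed in the warning above):

import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteOutClosure;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Configuration closure passed to JavaIgniteContext; it builds an
// IgniteConfiguration with a static IP finder instead of multicast.
IgniteOutClosure<IgniteConfiguration> cfg = new IgniteOutClosure<IgniteConfiguration>() {
    @Override public IgniteConfiguration apply() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("192.168.1.36:47500", "192.168.99.1:47500"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration igniteCfg = new IgniteConfiguration();
        igniteCfg.setDiscoverySpi(discoSpi);
        return igniteCfg;
    }
};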

It looks to me like there's no Ignite daemon to connect to. My understanding is that an Ignite daemon is automatically launched in each Spark worker when using the embedded deployment mode (but I can't find any Ignite message in the Spark worker log).


Have I missed something?


Cheers,
Paolo
Denis Magda

Re: Ignite - Spark integration

Hi, 

As I see, you already got the answer in the following discussion.

Let’s keep discussing in one thread.

Denis
