How to deploy Ignite workers in a Spark cluster

Paolo Di Tommaso

How to deploy Ignite workers in a Spark cluster

Hi all, 

I'm struggling to deploy an Ignite application in a (local) Spark cluster using the embedded deployment described at this link.

The documentation seems to suggest that Ignite workers are automatically instantiated at runtime when the Ignite app is submitted.

Could you please confirm that this is the expected behaviour? 


In my tests, when the application starts it simply hangs, reporting this warning message:

WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed to connect to any address from IP finder (will retry to join topology every 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]

It looks like there are no Ignite daemons to connect to. Also, inspecting the Spark worker log, I'm unable to find any messages produced by Ignite; I expected instead to find the log messages produced by the Ignite daemon startup.


Any idea what's wrong? 


Cheers,
Paolo

Alexei Scherbakov

Re: How to deploy Ignite workers in a Spark cluster

Hi,

To automatically start Ignite nodes, you must pass false as the third IgniteContext constructor argument, like:

// Java
JavaSparkContext sc = ...
new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);

or

// Scala
val sc: SparkContext = ...
new IgniteContext[String, String](sc, () => configurationClo(), false)

--

Best regards,
Alexei Scherbakov
Paolo Di Tommaso

Re: How to deploy Ignite workers in a Spark cluster

Great, now it works! Thanks a lot. 


I only get an NPE during the application shutdown (you can find the stack trace at this link). Is this normal? And in any case, is there a way to avoid it?


Cheers,
Paolo



Alexei Scherbakov

Re: How to deploy Ignite workers in a Spark cluster

I don't think that's OK.

Which Ignite version do you use?

Paolo Di Tommaso

Re: How to deploy Ignite workers in a Spark cluster

The version is 1.6.0#20160518-sha1:0b22c45b, and the following is the script I'm using.




Cheers, p



Alexei Scherbakov

Re: How to deploy Ignite workers in a Spark cluster

This example (slightly modified) works fine for me.

In the logs I see a problem with multicast IP discovery.

Check whether multicast is enabled on your machine, or better, use static IP discovery [1].
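
For reference, static IP discovery can be set up programmatically along these lines. This is only a sketch: the address below is a placeholder for your own hosts, and `configurationClo` mirrors the closure name used earlier in this thread.

```scala
import java.util.Collections

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Sketch of a configuration closure using static IP discovery instead of multicast.
// The address below is a placeholder; list every host expected to run an Ignite node.
def configurationClo(): IgniteConfiguration = {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(Collections.singletonList("192.168.1.36:47500..47509"))

  val discoSpi = new TcpDiscoverySpi()
  discoSpi.setIpFinder(ipFinder)

  val cfg = new IgniteConfiguration()
  cfg.setDiscoverySpi(discoSpi)
  cfg
}
```

This closure would then be passed as the second IgniteContext argument. It needs the ignite-core and ignite-spark artifacts on the classpath, so it is a configuration fragment rather than a standalone program.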


Paolo Di Tommaso

Re: How to deploy Ignite workers in a Spark cluster

OK, using `ic.close(false)` instead of `ic.close(true)`, that exception is not reported.

However, I'm a bit confused. The close argument is named `shutdownIgniteOnWorkers`, so I was thinking it has to be set to true to shut down the Ignite daemons when the app terminates.

How is that flag supposed to be used?
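
For context, the two variants being compared, as I understand the flag (`ic` is the IgniteContext from my script):

```scala
// Sketch: `ic` was created in embedded mode (third constructor argument false).
ic.close(true)   // shutdownIgniteOnWorkers = true: also stop the Ignite nodes on the workers
ic.close(false)  // detach only: leave the Ignite nodes on the workers running
```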


Cheers,
Paolo



Alexei Scherbakov

Re: How to deploy Ignite workers in a Spark cluster

Your understanding is correct.

How many nodes do you have?

Please provide full logs from the started Ignite instances.



Paolo Di Tommaso

Re: How to deploy Ignite workers in a Spark cluster

Hi, 

I'm using a local Spark cluster made up of one master and one worker.

Using that version of the script, the exception is not raised. But it confuses me even more, because that application run is not reported in the Spark console; it looks like it is running on the master node. Does that make sense? You can find the output produced at this link.

My goal is to deploy an Ignite worker on *each* Spark node available in the cluster, run a hybrid application based on Spark+Ignite, and shut down the Ignite workers on completion.

What is the best approach to implement that?
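
A minimal sketch of that intended lifecycle, assuming the embedded mode discussed earlier. `configurationClo` stands in for your own configuration closure, and the app and cache names are arbitrary placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext

// Assumed minimal configuration closure; replace with your own (e.g. static IP discovery).
def configurationClo(): IgniteConfiguration = new IgniteConfiguration()

val sc = new SparkContext(new SparkConf().setAppName("spark-ignite-app"))

// Third argument false: an Ignite server node is started inside each Spark executor.
val ic = new IgniteContext[String, String](sc, () => configurationClo(), false)

val sharedRdd = ic.fromCache("sharedCache")          // IgniteRDD backed by an Ignite cache
sharedRdd.savePairs(sc.parallelize(Seq(("k", "v")))) // example write through Spark
// ... hybrid Spark + Ignite work here ...

ic.close(true)  // shut down the embedded Ignite nodes on completion
sc.stop()
```

This requires the spark-core, ignite-core and ignite-spark artifacts on the classpath and a running Spark master/worker, so treat it as a deployment sketch rather than a self-contained program.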


Thanks,
Paolo
 


Alexei Scherbakov

Re: How to deploy Ignite workers in a Spark cluster

Hi,

Have you tried providing the Spark master URL in the SparkConf instance?

I'm not a big expert on Spark, so you'd better follow the Spark docs for troubleshooting Spark configuration problems.
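
For example, something along these lines; the host and port are placeholders for your standalone master:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Point the application at the standalone master explicitly instead of relying
// on defaults; "master-host:7077" is a placeholder for your master's URL.
val conf = new SparkConf()
  .setAppName("spark-ignite-app")
  .setMaster("spark://master-host:7077")

val sc = new SparkContext(conf)
```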



