Altered sql table (adding new columns) does not reflect in Spark shell

Shravya Nethula
Altered sql table (adding new columns) does not reflect in Spark shell

Hi,

I created and altered the table using the following queries: 

a. CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id DOUBLE, zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
b. ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name VARCHAR(64))
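For reference, one quick way to double-check the server-side schema is through Ignite's thin JDBC driver and standard JDBC metadata (a sketch; it assumes a node listening on the default 127.0.0.1:10800 and the default PUBLIC schema):

```scala
import java.sql.DriverManager

// Connect via Ignite's thin JDBC driver and list the columns of PERSON.
// After the ALTER TABLE above, FIRST_NAME and LAST_NAME should be listed.
val conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")
val cols = conn.getMetaData.getColumns(null, "PUBLIC", "PERSON", null)
while (cols.next()) println(cols.getString("COLUMN_NAME"))
conn.close()
```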

The changes (the columns added by the ALTER TABLE statement above) show up correctly when verified from GridGain.

However, when I use the Spark shell, I cannot find the columns added through the ALTER TABLE statement (query (b) above).
Is there any configuration that I am missing? (The ignite-config file is attached for reference.)

Executed the following commands in Spark shell:


Step 1: Connected to Spark shell:
/usr/hdp/2.6.5.1100-53/spark2/bin/spark-shell --jars /opt/jar/ignite-core-2.7.0.jar,/opt/jar/ignite-spark-2.7.0.jar,/opt/jar/ignite-spring-2.7.0.jar,"/opt/jar/commons-logging-1.1.3.jar","/opt/jar/spark-core_2.11-2.3.0.jar","/opt/jar/spring-core-4.3.18.RELEASE.jar","/opt/jar/spring-beans-4.3.18.RELEASE.jar","/opt/jar/spring-aop-4.3.18.RELEASE.jar","/opt/jar/spring-context-4.3.18.RELEASE.jar","/opt/jar/spring-tx-4.3.18.RELEASE.jar","/opt/jar/spring-jdbc-4.3.18.RELEASE.jar","/opt/jar/spring-expression-4.3.18.RELEASE.jar","/opt/jar/cache-api-1.0.0.jar","/opt/jar/annotations-13.0.jar","/opt/jar/ignite-shmem-1.0.0.jar","/opt/jar/ignite-indexing-2.7.0.jar","/opt/jar/lucene-analyzers-common-7.4.0.jar","/opt/jar/lucene-core-7.4.0.jar","/opt/jar/h2-1.4.197.jar","/opt/jar/commons-codec-1.11.jar","/opt/jar/lucene-queryparser-7.4.0.jar","/opt/jar/spark-sql_2.11-2.3.0.jar" --driver-memory 4g

Step 2: Ran the import statements:

import org.apache.ignite.{Ignite, Ignition}
import org.apache.ignite.spark.IgniteDataFrameSettings._
import org.apache.spark.sql.{DataFrame, Row, SQLContext}

val CONFIG = "file:///opt/ignite-config.xml"

Step 3: Read the table:

val df = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, CONFIG).option(OPTION_TABLE, "person").load()

df.show()
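To make the missing columns easier to spot than in the df.show() output, it may also help to print the schema Spark actually resolved for the table:

```scala
// Print the resolved schema; the columns added via ALTER TABLE
// (first_name, last_name) are the ones that fail to appear here.
df.printSchema()
println(df.columns.mkString(", "))
```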





Regards,
Shravya Nethula,
BigData Developer,
Hyderabad.


Attachments: SparkIssue.png (462K), ignite-config.xml.zip (1K)
aealexsandrov
Re: Altered sql table (adding new columns) does not reflect in Spark shell

Hi,

Yes, I can confirm that this is an issue. I have filed the following ticket for it:

https://issues.apache.org/jira/browse/IGNITE-12159
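Until that is fixed, one possible workaround (an untested sketch, not part of the ticket) is to read the table through Spark's generic JDBC data source with Ignite's thin JDBC driver, which resolves the table schema at read time rather than from the cached query entity:

```scala
// Read the same table via Spark's JDBC source instead of FORMAT_IGNITE.
// Assumes a node listening on 127.0.0.1:10800; adjust the URL as needed.
val jdbcDf = spark.read
  .format("jdbc")
  .option("url", "jdbc:ignite:thin://127.0.0.1:10800")
  .option("driver", "org.apache.ignite.IgniteJdbcThinDriver")
  .option("dbtable", "PUBLIC.PERSON")
  .load()

jdbcDf.printSchema()
```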

BR,
Andrei

9/7/2019 10:00 PM, Shravya Nethula wrote:

Shravya Nethula
Re: Altered sql table (adding new columns) does not reflect in Spark shell

Thank you Andrei.

Regards,
Shravya Nethula,
BigData Developer,
Hyderabad.


From: Andrei Aleksandrov <[hidden email]>
Sent: Tuesday, September 10, 2019 8:22 PM
To: [hidden email] <[hidden email]>
Subject: Re: Altered sql table (adding new columns) does not reflect in Spark shell
 
