High heap on Ignite client

Anil

High heap on Ignite client

Hi,

I have implemented an export feature for Ignite data using a JDBC iterator:

ResultSet rs = statement.executeQuery();

while (rs.next()) {
    // do operations
}

The fetch size is 200.
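
For reference, here is a minimal, self-contained sketch of such an export loop. It assumes the configuration-based Ignite JDBC driver and reuses the config path mentioned later in this thread; the table name and query are placeholders, not the actual application code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IgniteExport {
    public static void main(String[] args) throws Exception {
        // The same driver class serves the jdbc:ignite:cfg:// URL scheme.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://file:///C:/Anil/ignite-client.xml");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT * FROM InstallBase")) { // placeholder query

            stmt.setFetchSize(200); // the fetch size described above

            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // write the current row to the export file
                }
            }
        }
    }
}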

When I run the export operation twice over 4L (0.4 million) records, the whole 6 GB heap fills up and is never released.

Initially I thought the operations transforming the result set to a file were causing the memory to fill up, but that is not the case.

I ran just the following, and the memory still grows and is not released:

while (rs.next()) {
    // nothing
}

num     #instances         #bytes  class name
----------------------------------------------
   1:      55072353     2408335272  [C
   2:      54923606     1318166544  java.lang.String
   3:        779006      746187792  [B
   4:        903548      304746304  [Ljava.lang.Object;
   5:        773348      259844928  net.juniper.cs.entity.InstallBase
   6:       4745694      113896656  java.lang.Long
   7:       1111692       44467680  sun.nio.cs.UTF_8$Decoder
   8:        773348       30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
   9:        895627       21495048  java.util.ArrayList
  10:         12427       16517632  [I
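
For reference, a class histogram like the one above can be captured from the running client JVM with jmap; the process ID below is a placeholder:

jmap -histo <pid>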


I am not sure why the number of String objects keeps increasing.

Could you please help me understand the issue?

Thanks
Anil

Re: High heap on Ignite client


JVM parameters used:

-Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof

Thanks.


Anil

Re: High heap on Ignite client

From reading the code, I understand that internally there is no cursor from the H2 database (or Ignite's embedded H2); all mapper responses are consolidated at the reducer. This means that when exporting a large number of records, all data is held in memory.

             if (send(nodes,
                    oldStyle ?
                        new GridQueryRequest(qryReqId,
                            r.pageSize,
                            space,
                            mapQrys,
                            topVer,
                            extraSpaces(space, qry.spaces()),
                            null,
                            timeoutMillis) :
                        new GridH2QueryRequest()
                            .requestId(qryReqId)
                            .topologyVersion(topVer)
                            .pageSize(r.pageSize)
                            .caches(qry.caches())
                            .tables(distributedJoins ? qry.tables() : null)
                            .partitions(convert(partsMap))
                            .queries(mapQrys)
                            .flags(flags)
                            .timeout(timeoutMillis),
                    oldStyle && partsMap != null ? new ExplicitPartitionsSpecializer(partsMap) : null,
                    false)) {

                    awaitAllReplies(r, nodes, cancel);

// once the responses from all nodes are received, proceed further?

          if (!retry) {
                    if (skipMergeTbl) {
                        List<List<?>> res = new ArrayList<>();

                        // Simple UNION ALL can have multiple indexes.
                        for (GridMergeIndex idx : r.idxs) {
                            Cursor cur = idx.findInStream(null, null);

                            while (cur.next()) {
                                Row row = cur.get();

                                int cols = row.getColumnCount();

                                List<Object> resRow = new ArrayList<>(cols);

                                for (int c = 0; c < cols; c++)
                                    resRow.add(row.getValue(c).getObject());

                                res.add(resRow);
                            }
                        }

                        resIter = res.iterator();
                    } else {
                        // in case of the split-query scenario
                    }
         
         }

      return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);


The query cursor is an iterator that performs column value mapping per page, but all records of the query are still held in memory. Correct?
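
As an aside, the per-page behavior under discussion is what one would aim for from the cache query API directly. Below is a minimal sketch using the standard SqlFieldsQuery/QueryCursor API with an explicit page size; the cache name is a hypothetical placeholder, and whether pages are streamed lazily on the reduce side depends on the Ignite version, which is exactly the question raised here.

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PagedSqlExport {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("C:/Anil/ignite-client.xml")) {
            // "installBaseCache" is a placeholder cache name.
            IgniteCache<Long, Object> cache = ignite.cache("installBaseCache");

            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM InstallBase");
            qry.setPageSize(200); // fetch results from server nodes in pages of 200

            try (QueryCursor<List<?>> cursor = cache.query(qry)) {
                for (List<?> row : cursor) {
                    // write the row out; ideally only the current page
                    // should be held on the client while iterating
                }
            }
        }
    }
}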

Please correct me if I am wrong.


Thanks




Anil

Re: High heap on Ignite client

Do you have any advice on implementing a large-records export from Ignite?

I cannot really use ScanQuery, as my whole application is built around the JDBC driver, and writing complex queries as scan queries is very difficult.
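
For comparison, a ScanQuery expresses the predicate as Java code instead of SQL, which is why complex query logic becomes awkward. A minimal sketch follows, with the cache name, key type, and filter as hypothetical placeholders:

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteBiPredicate;

import net.juniper.cs.entity.InstallBase;

public class ScanExport {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("C:/Anil/ignite-client.xml")) {
            // "installBaseCache" and the Long key type are placeholders.
            IgniteCache<Long, InstallBase> cache = ignite.cache("installBaseCache");

            // Everything a SQL WHERE clause would express must be hand-coded
            // here; the predicate is executed on the server nodes.
            IgniteBiPredicate<Long, InstallBase> filter = (key, val) -> true;

            ScanQuery<Long, InstallBase> qry = new ScanQuery<>(filter);
            qry.setPageSize(200);

            try (QueryCursor<Cache.Entry<Long, InstallBase>> cursor = cache.query(qry)) {
                for (Cache.Entry<Long, InstallBase> entry : cursor) {
                    // write entry.getValue() to the export file
                }
            }
        }
    }
}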

Thanks




afedotov

Re: High heap on Ignite client

Hi, Anil.

Could you please share the full code (class/method) you are using to read the data?

Kind regards,
Alex

Anil

Re: High heap on Ignite client

Sure, thanks.


Anil

Re: High heap on Ignite client

Hi Alex,


Please let us know if you have any suggestions or questions.

Thanks



afedotov

Re: High heap on Ignite client

Thanks. I'll take a look and let you know about any findings.

Kind regards,
Alex



afedotov

Re: High heap on Ignite client

Hi Anil.

Could you please also share C:/Anil/ignite-client.xml? It would also be useful if you took JFR reports for this case with allocation profiling enabled.
Just to clarify, by 4L do you mean 4 million entries?
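
On an Oracle JDK 8 client, a flight recording with allocation profiling can be started with flags along these lines (the duration and file name are placeholders):

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=120s,settings=profile,filename=C:/Anil/dumps/client.jfr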

Kind regards,
Alex.





Anil

Re: High heap on Ignite client

Hi Alex,

I have attached the Ignite client XML. 4L means 0.4 million records. Sorry, I didn't generate a JFR recording, but I did create a heap dump.

Do you agree that the JDBC driver is loading everything into memory, and that next() just does conversion?

Thanks

Attachment: ignite-client.xml (2K)
afedotov

Re: High heap on Ignite client

Actually, the JDBC driver should extract data page by page.
I need to take an in-depth look.

Kind regards,
Alex.


afedotov

Re: High heap on Ignite client

I don't see anything wrong with your config.
Could you please provide C:/Anil/dumps/gc-client.log?
There should be a reason for objects not being collected during GC.

Just one more thing: try replacing -XX:NewSize=512m with -XX:G1NewSizePercent=30.
-XX:NewSize won't let G1 adjust the young generation size properly.
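
As a sketch of the changed flags only (note that on JDK 8, G1NewSizePercent is an experimental option and needs to be unlocked):

-XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=30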



Kind regards,
Alex.

Anil Anil

Re: High heap on ignite client

Hi Alex,

Thanks for the suggestion; I will try the new setting.

I have attached the client GC log.

Did you find anything on the JDBC issue? I put a breakpoint in GridReduceQueryExecutor at "resIter = res.iterator();" and the res object was holding all the records.
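One way to double-check this outside the debugger (assuming a standard JDK install; the PID is a placeholder) is to compare live-object histograms while the export runs:

# Live objects only; triggers a full GC before counting
jmap -histo:live <client-pid> | head -20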

Thanks

gc-client-old.log (1M) Download Attachment
afedotov afedotov

Re: High heap on ignite client

Hi Anil,

I have not been able to reproduce your case based on the code and config you provided.
If you provide the corresponding JFR recordings, I will check them for any problems.

Please find the code attached. You can run it yourself and monitor the client's GC activity with, for example, VisualVM.

I tried running a server node (ServerRunner) with the following VM settings:
-Xms1024m -Xmx3072m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:+PrintFlagsFinal

The client node was run with:
-Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCCause -XX:+PrintGCDetails
-XX:+PrintAdaptiveSizePolicy -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+AlwaysPreTouch -XX:+PrintFlagsFinal
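For reference, the client side of such a reproducer reduces to roughly the following (a sketch of the test shape, not the attached code; the JDBC URL, config path and query are assumptions):

import java.sql.*;

public class ClientRunner {
    public static void main(String[] args) throws Exception {
        // Config-based Ignite JDBC driver: starts a client node from the Spring XML.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:cfg://file:///path/to/ignite-client.xml");
             Statement st = conn.createStatement()) {
            st.setFetchSize(200); // fetch size from the original report

            try (ResultSet rs = st.executeQuery("SELECT * FROM InstallBase")) {
                while (rs.next()) {
                    // intentionally empty: iteration alone was reported
                    // to show the heap growth
                }
            }
        }
    }
}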

Kind regards,
Alex.



test-ignite-jdbc-reproducer.tar.gz (60K) Download Attachment
Anil Anil

Re: High heap on ignite client

Thanks Alex. I will test it locally and share the results.

Did you get a chance to look at the JDBC driver's next() issue?

Thanks,
Anil