I have two physical hosts, each running one server node: node A and node B. I
created a table and inserted data into it via the JDBC thin driver, but the
insert performance is poor. With 100 Mbps of network bandwidth, the
throughput from node A to node B is only about 2 Mbps, and 1 million rows
take 800 s to finish. After changing only the network bandwidth to 1 Gbps,
the throughput from node A to node B rises to about 10 Mbps, and 1 million
rows take 250 s. The cacheMode is "partitioned".

How can this case be explained? Is there some limitation or configuration on
the network side? What did I miss? How can we use all of the network bandwidth?
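The insert path is essentially a single-threaded prepared-statement loop
along these lines (a simplified sketch; the table name, columns, and host
name are placeholders, not the actual schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertLoop {
        public static void main(String[] args) throws Exception {
            // Connect to node A via the JDBC thin driver (default port 10800).
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:ignite:thin://nodeA:10800");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO person (id, name) VALUES (?, ?)")) {
                for (long i = 0; i < 1_000_000; i++) {
                    ps.setLong(1, i);
                    ps.setString(2, "name-" + i);
                    ps.executeUpdate(); // one synchronous round trip per row
                }
            }
        }
    }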
Did you try non-default values for
1) socketSendBuffer and socketReceiveBuffer in the JDBC connection string?
2) socketSendBufferSize and socketReceiveBufferSize in the Ignite server
node's SqlConnectorConfiguration?
Please change them to 128 KB and give it a try; see the sketch below.
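A minimal sketch of both settings, assuming programmatic Java configuration
(the host name nodeA and port 10800 are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.SqlConnectorConfiguration;

    public class BufferTuning {
        public static void main(String[] args) throws Exception {
            // Server side: raise socket buffers in SqlConnectorConfiguration.
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setSqlConnectorConfiguration(new SqlConnectorConfiguration()
                    .setSocketSendBufferSize(128 * 1024)      // 128 KB
                    .setSocketReceiveBufferSize(128 * 1024)); // 128 KB
            Ignite ignite = Ignition.start(cfg);

            // Client side: pass matching buffer sizes as thin driver properties.
            Properties props = new Properties();
            props.setProperty("socketSendBuffer", "131072");    // 128 KB
            props.setProperty("socketReceiveBuffer", "131072"); // 128 KB
            Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://nodeA:10800", props);
        }
    }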
The JDBC thin driver works in a single thread, executing SQL statements one
by one in a request-by-response cycle, so I am not sure you can fully
utilize the network here.
Probably, you could use an IgniteDataStreamer to achieve better results.
Please have a look at the IgniteDataStreamer documentation.
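A minimal sketch of the streamer approach, assuming a client node that joins
the existing cluster; the cache name SQL_PUBLIC_PERSON (the default cache
name for a table PERSON created via CREATE TABLE) and the key/value types
are assumptions to adapt to your schema:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class StreamerLoad {
        public static void main(String[] args) {
            Ignition.setClientMode(true); // join the cluster as a client node
            try (Ignite ignite = Ignition.start()) {
                // Cache name and key/value types are assumptions; adjust
                // them to your table's underlying cache and schema.
                try (IgniteDataStreamer<Long, String> streamer =
                         ignite.dataStreamer("SQL_PUBLIC_PERSON")) {
                    streamer.perNodeBufferSize(1024); // entries batched per node
                    for (long i = 0; i < 1_000_000; i++)
                        streamer.addData(i, "name-" + i);
                } // close() flushes any remaining buffered entries
            }
        }
    }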
Or you could create multiple JDBC connections and parallelize your inserts
across several threads, for example:
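A sketch of that approach, with batching to cut round trips (host name,
table, and thread count are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelInsert {
        static final String URL = "jdbc:ignite:thin://nodeA:10800";
        static final int THREADS = 8;
        static final long ROWS = 1_000_000;

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            for (int t = 0; t < THREADS; t++) {
                final long from = t * (ROWS / THREADS);
                final long to = from + ROWS / THREADS;
                pool.submit(() -> {
                    // One connection per worker: JDBC connections are not
                    // safe to share across threads.
                    try (Connection conn = DriverManager.getConnection(URL);
                         PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO person (id, name) VALUES (?, ?)")) {
                        for (long i = from; i < to; i++) {
                            ps.setLong(1, i);
                            ps.setString(2, "name-" + i);
                            ps.addBatch();
                            if ((i - from) % 1000 == 999)
                                ps.executeBatch(); // flush every 1000 rows
                        }
                        ps.executeBatch(); // flush the tail of the batch
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }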