Yes, you can do that if you store the partition ID alongside the record in the database. I have updated the data-loading configuration page (see the partition-aware data loading section). Please let me know if it makes sense or if I need to add more information to that page.
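A minimal sketch of the idea, assuming a hypothetical PERSONS table with a PART_ID column: when writing a record, compute its partition via Ignite's affinity API (ignite.affinity(cacheName).partition(key)) and persist it; in your CacheStore.loadCache() implementation, each node then queries only the partitions it owns (obtainable from affinity.primaryPartitions(localNode)). The query-building part can be shown standalone:

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PartitionAwareLoad {
    // Builds the SQL one node would run inside its CacheStore.loadCache()
    // implementation, restricting the scan to its local partitions.
    // Table and column names (PERSONS, PART_ID) are hypothetical.
    static String buildLoadQuery(int[] localParts) {
        String in = IntStream.of(localParts)
            .mapToObj(Integer::toString)
            .collect(Collectors.joining(", "));
        return "SELECT id, name, PART_ID FROM PERSONS WHERE PART_ID IN (" + in + ")";
    }

    public static void main(String[] args) {
        // In real code the partition list would come from
        // ignite.affinity("myCache").primaryPartitions(ignite.cluster().localNode()).
        System.out.println(buildLoadQuery(new int[] {0, 3, 7}));
        // prints: SELECT id, name, PART_ID FROM PERSONS WHERE PART_ID IN (0, 3, 7)
    }
}
```

This way each node touches only its own slice of the table instead of scanning everything.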
1. When new nodes join a cluster, the partition-to-node assignment changes. Let's assume you have one backup, so every partition has two copies. When you have just one node, it is responsible for all partitions, so your first node will try to load all of them. When the second node joins the grid, it will rebalance existing data from the first node and, at the same time, try to load the same set of partitions from the database, since with one backup both nodes hold every partition. When a third node joins, it will again rebalance existing data from the first two nodes and try to load 2/3 of the partitions from the database, and so on. In this scenario every node except the last one loads more partitions than necessary. The best approach here is to wait until your topology has enough nodes (you can register an event listener for node join/leave events) and only then call loadCache().
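The "wait for the full topology, then load" logic can be sketched as follows. This is a plain-Java simulation of the trigger, not Ignite API; in real code you would register it through ignite.events().localListen(...) for EventType.EVT_NODE_JOINED and pass a runnable that calls cache.loadCache(null):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TopologyGate {
    private final int targetSize;            // expected number of server nodes
    private final Runnable loadCacheAction;  // e.g. () -> cache.loadCache(null)
    private final AtomicBoolean fired = new AtomicBoolean(false);
    private int currentSize;

    TopologyGate(int targetSize, Runnable loadCacheAction) {
        this.targetSize = targetSize;
        this.loadCacheAction = loadCacheAction;
    }

    // Call this from a discovery-event listener (EVT_NODE_JOINED in Ignite).
    // Fires the load action exactly once, when the expected topology is reached.
    synchronized void onNodeJoined() {
        currentSize++;
        if (currentSize >= targetSize && fired.compareAndSet(false, true))
            loadCacheAction.run();
    }

    public static void main(String[] args) {
        TopologyGate gate = new TopologyGate(3,
            () -> System.out.println("loadCache() triggered"));
        gate.onNodeJoined(); // node 1 - below target, nothing happens
        gate.onNodeJoined(); // node 2 - still below target
        gate.onNodeJoined(); // node 3 - target reached, triggers load once
        gate.onNodeJoined(); // node 4 - no second trigger
    }
}
```

Because the gate fires only once, a late-joining node cannot trigger a second redundant load.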
2. It should not run on a client node because that would not make sense; I would be surprised if it did.
3. In the ticket you created, the XML configuration does not specify a cache name, but the code does. So in your cluster you end up with two caches: one with a cache store (defined in the XML) and one without, and it looks like you are calling loadCache() on the cache without the store.
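To illustrate the mismatch (cache and bean names here are hypothetical): the XML below defines an unnamed cache with a store, while Java code asking for a named cache creates a second, store-less cache. Giving both sides the same name fixes it:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Without this property the XML defines the default (unnamed) cache. -->
    <property name="name" value="myCache"/>
    <property name="cacheStoreFactory">
        <!-- ... your CacheStore factory here ... -->
    </property>
</bean>
```

With the name present, `ignite.cache("myCache").loadCache(null)` in your code resolves to the XML-configured cache that actually has the store attached.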