I have to load several million child records distributed over about 2,000 parent records, so naturally I'm running into record locking on the parents. I'm trying to avoid locks as much as possible, so I came up with this:
I split my child record file into one file per parent record. Then I use the Data Loader CLI to start a separate serial Bulk API job for each of these files. My idea was that each job would run independently of the others, while the individual batches within each job would not interfere with each other.
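For reference, the splitting step can be sketched like this. This is just a minimal illustration, assuming the child CSV has a `ParentId` column (the actual lookup field name would differ per object) and that the output file naming is arbitrary:

```python
import csv
import os
from collections import defaultdict

def split_by_parent(src_path, out_dir):
    """Split a child-record CSV into one CSV per parent record.

    Assumes the parent lookup column is named 'ParentId' (hypothetical;
    substitute the real field name). Returns the number of files written.
    """
    rows_by_parent = defaultdict(list)
    with open(src_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        for row in reader:
            rows_by_parent[row["ParentId"]].append(row)

    for parent_id, rows in rows_by_parent.items():
        out_path = os.path.join(out_dir, f"children_{parent_id}.csv")
        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

    return len(rows_by_parent)
```

Each resulting file is then fed to its own Data Loader CLI process, each configured with a serial-concurrency Bulk API job.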
However, small-scale testing suggests the jobs don't run concurrently: a batch from Job A runs alone, and only when it finishes does a batch from Job B start, and so on.
Is this expected behavior, or just an anomaly? If several serial jobs are running at the same time, will they take the same total time as one large serial job?