We currently have core.parallelism set to 64, so we should theoretically be able to execute 64 dump tasks at the same time.
However, we will still be limited by:
- the number of available pool slots (adjustable at runtime)
- the DAG's max_active_tasks parameter (currently set to 16)
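The effective concurrency is the most restrictive of these limits. A minimal sketch of that calculation, where the pool size of 128 is an assumption (Airflow's default default_pool slot count) and the other numbers come from the settings above:

```python
# Hypothetical numbers: parallelism and max_active_tasks are taken from our
# current settings; the pool size of 128 is Airflow's default default_pool
# slot count and is only an assumption here.
parallelism = 64        # core.parallelism
pool_slots = 128        # default_pool slot count (assumed)
max_active_tasks = 16   # DAG-level max_active_tasks

# The scheduler will only run as many dump tasks concurrently as the most
# restrictive of these limits allows.
effective_concurrency = min(parallelism, pool_slots, max_active_tasks)
print(effective_concurrency)  # -> 16
```

So with the current max_active_tasks, we would cap out at 16 concurrent dump tasks regardless of core.parallelism.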
With kubernetes_executor.worker_pods_creation_batch_size set to 16, we won't be able to create all 64 pods in a single batch, but that's probably fine: the remaining pods will simply wait in the executor queue for a bit.
The same applies to max_tis_per_query being set to 16: that will likely delay the scheduling of some tasks by a couple of scheduler loops. We should check whether we can increase it.
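For reference, a sketch of the relevant airflow.cfg sections with the values discussed above (the comments reflect our reasoning, not official documentation):

```ini
[core]
# Upper bound on task instances running concurrently per scheduler.
parallelism = 64

[scheduler]
# Max task instances examined per scheduling-loop query; raising this
# could reduce the scheduling delay mentioned above.
max_tis_per_query = 16

[kubernetes_executor]
# Pods created per batch; creating 64 pods would take ~4 batches at 16.
worker_pods_creation_batch_size = 16
```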









