The scraper has configurable concurrency at the top level, processing each wiki in a separate process. Overall concurrency can be set via a variable in config/prod.exs.
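A minimal sketch of what that setting might look like. The application name (`:scraper`) and key (`:max_concurrency`) are assumptions for illustration; the actual names in config/prod.exs may differ.

```elixir
# config/prod.exs -- hypothetical concurrency setting (real key name may differ)
import Config

config :scraper,
  # maximum number of wikis processed concurrently, one worker process each
  max_concurrency: 16
```

The worker supervisor would then read it at startup with `Application.get_env(:scraper, :max_concurrency)`, e.g. as the `max_concurrency:` option to `Task.async_stream/3`.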
However, in practice concurrency starts at the intended level but immediately drops to something very low, with only one to four output files receiving writes at a time. Why is this happening? The most likely explanation is that the workers are blocking on a shared resource, probably the mapdata API requests. This could happen if the HTTP client effectively serializes requests by default, for example through a small shared connection pool.
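One way to test this hypothesis, assuming the scraper uses HTTPoison (backed by hackney): all requests share hackney's `:default` connection pool unless told otherwise, so heavy mapdata traffic from many workers can contend on it. Giving the mapdata requests a dedicated pool with an explicit connection limit separates them from other HTTP traffic. The pool name `:mapdata` and the limits below are illustrative assumptions.

```elixir
# Sketch, assuming HTTPoison/hackney. Start a dedicated pool for mapdata
# requests so they no longer contend with other callers on :default.
:ok = :hackney_pool.start_pool(:mapdata, timeout: 15_000, max_connections: 50)

# Route mapdata requests through that pool (url is whatever the scraper builds)
HTTPoison.get(url, [], hackney: [pool: :mapdata])
```

If concurrency recovers after this change, the shared pool was the bottleneck; if not, the next suspects would be a shared GenServer in the request path or the filesystem writes themselves.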