For testing multi-datacenter MediaWiki work, we need two production-like MediaWiki clusters, each with its own master/slave databases, application servers, a job runner, and memcached/redis servers. The workload these servers will be expected to handle is light, so they can be relatively modest virtual machines.
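To make the shape of each test cluster concrete, here is a minimal sketch in Python of the roles and VM counts involved; the cluster names, role names, and counts are illustrative assumptions for discussion, not a provisioning plan:

    # Illustrative composition of the two test clusters; names and counts
    # are assumptions, not an actual provisioning plan.
    TEST_CLUSTERS = {
        "test-dc1": {
            "db-master": 1,   # primary database
            "db-slave": 1,    # replica database
            "appserver": 2,   # MediaWiki application servers
            "jobrunner": 1,   # job queue runner
            "memcached": 1,   # object cache
            "redis": 1,       # queues / sessions
        },
    }
    # The second cluster mirrors the first.
    TEST_CLUSTERS["test-dc2"] = dict(TEST_CLUSTERS["test-dc1"])

    def total_vms(clusters: dict) -> int:
        """Total number of modest VMs needed across all test clusters."""
        return sum(sum(roles.values()) for roles in clusters.values())

    if __name__ == "__main__":
        for name, roles in TEST_CLUSTERS.items():
            print(name, roles)
        print("total VMs:", total_vms(TEST_CLUSTERS))

Even under these modest assumptions, that is on the order of a dozen VMs to configure by hand, which is what motivates the automation argument below.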
The effort required to configure a production-like MediaWiki instance is enormous. We have done it three times in three years (first Ashburn, then Beta Cluster, then Dallas), and each time it has involved a great deal of repetitive, manual work. We have to do it again now, and if the multi-datacenter project succeeds in making the business case for additional datacenters attractive to our users and the board, we will be doing it again in the future.
My fear is that if we don't find a way to automate more of this, we will end up inundated with unpleasant, menial, repetitive, error-prone work, and we will be both inefficient and unhappy as a result.
Because the next clusters we provision will be for testing rather than production, I think it would be OK to take a chance on an immature automation framework, provided we are satisfied that it is heading in the right direction and can expect it to be ready for some production use within a year.