Typically, modules that use FileBackend will want to read from the closest cluster and write to the master cluster (or to all clusters, depending on the implementation). The 'latest' flag (which is already in use) could force reads to go to the master (or do a cross-DC quorum read, etc.).
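As a rough sketch of that routing idea (hypothetical class and method names, not actual MediaWiki code): reads normally go to the closest backend, while 'latest' forces them to the master.

```python
# Hypothetical sketch of 'latest'-flag read routing in a multi-write
# backend: reads default to the closest cluster, 'latest' forces master.

class DictBackend:
    """Toy in-memory backend standing in for one Swift cluster."""
    def __init__(self, data):
        self.data = dict(data)

    def get(self, path):
        return self.data.get(path)


class MultiWriteBackend:
    def __init__(self, master, replicas):
        self.master = master      # authoritative backend (master DC)
        self.replicas = replicas  # nearby replica backends, closest first

    def get_file_contents(self, path, latest=False):
        # 'latest' bypasses replicas, analogous to BagOStuff::READ_LATEST
        backend = self.master if latest else self._closest()
        return backend.get(path)

    def _closest(self):
        # Placeholder policy: first replica if any, otherwise the master
        return self.replicas[0] if self.replicas else self.master
```

With a stale replica, a plain read returns the replica's copy while a 'latest' read returns the master's.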
One way to do this is to make MultiWriteFileBackend use the 'latest' flag in the same way ReplicatedBagOStuff uses BagOStuff::READ_LATEST. This works if the master cluster lives in one DC and the other cluster(s) replicate from it. On the other hand, a Swift setup using a single global cluster spread over the DCs (https://swiftstack.com/blog/2013/07/02/swift-1-9-0-release/) would not use this arrangement. Instead, callers could just use SwiftFileBackend directly, with proper read/write affinity settings in the Swift proxy server config. I assume we will at least start off with master/slave before trying the global-cluster approach.
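For reference, the proxy-side affinity settings for the global-cluster case would look something like the following proxy-server.conf fragment (the region/zone names here are just illustrative):

```ini
[app:proxy-server]
use = egg:swift#proxy
# Prefer the local region r1 (zone 1 first) for reads
sorting_method = affinity
read_affinity = r1z1=100, r1=200
# Send writes to local r1 nodes first; replication syncs the rest later
write_affinity = r1
write_affinity_node_count = 2 * replicas
```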
Additionally, deferring writes to the non-master backends should be a feature. This would handle updates for the 99% case, and a tool like swift-repl can fix any missed updates to ensure consistency within hours.
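A minimal sketch of that write path (hypothetical names; the real flush would run from a job queue, and the repair pass stands in for swift-repl): the master write is synchronous, replica writes are queued, and a catch-all pass re-syncs anything missed.

```python
# Hypothetical sketch of deferred non-master writes with a repair pass.
from collections import deque

class MemBackend:
    """Toy in-memory backend standing in for one cluster."""
    def __init__(self):
        self.data = {}


class DeferringMultiWriteBackend:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self.pending = deque()  # deferred (path, data) replica writes

    def put(self, path, data):
        self.master.data[path] = data      # synchronous master write
        self.pending.append((path, data))  # defer replica propagation

    def flush_deferred(self):
        # Would normally run asynchronously (e.g. from a job queue)
        while self.pending:
            path, data = self.pending.popleft()
            for replica in self.replicas:
                replica.data[path] = data

    def repair(self):
        # Consistency pass: copy over anything the replicas missed
        for path, data in self.master.data.items():
            for replica in self.replicas:
                if replica.data.get(path) != data:
                    replica.data[path] = data
```

The repair pass covers the rare case where a deferred write is lost before it is flushed.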