I propose adding an option for the Internet Archive's crawler (and/or other robots) to retrieve raw wikitext. That way, if a wiki goes down, it will be possible to create a successor wiki more easily by gathering that data from the Internet Archive. As it stands, all that can be obtained are the parsed pages. That's fine for a static archive, but one might want to revive the wiki for further editing.
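For reference, MediaWiki already serves the raw wikitext of any page via the index.php?action=raw parameter, so in principle a crawler only needs to request that URL alongside the rendered page. A minimal sketch of what such a fetch could look like (the wiki URL and page titles below are placeholders, not part of this proposal):

    import urllib.parse
    import urllib.request

    BASE = "https://wiki.example.org/index.php"

    def fetch_wikitext(title):
        """Fetch the current raw wikitext of one page via action=raw."""
        url = BASE + "?" + urllib.parse.urlencode({"title": title, "action": "raw"})
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    for title in ["Main_Page", "Template:Infobox"]:
        wikitext = fetch_wikitext(title)
        # An archiver would store this alongside the parsed HTML snapshot.
        print(title, len(wikitext), "bytes of wikitext")

The point of the proposal is that the archiver would do this automatically for every page it crawls, so the wikitext ends up preserved without anyone having to run a script like this by hand.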
The alternative would be for someone to write a script that converts the parsed pages retrieved from the Internet Archive back into wikitext, but that would run into problems with templates and the like, unless the script were designed to identify and recreate them. It would be a much easier and cleaner solution to just make the wikitext available from the get-go.
Version: 1.23.0
Severity: enhancement