It would be really nice to have a built-in way to query page content in a structured form for later use in bots or scripts, without having to rely on regular expressions or writing a wikitext parser yourself. What I want is the output of all elements of a page, serialized in a format of your choice. This is different from action=query&prop=x or action=parse: the output must match the layout of the originating wiki page exactly. Example output in XML:
```
<root>
<header level="2"> <wikilink target="Main page">Example</wikilink> </header>
<filelink target="File:Example.png" link="Click me" width="250px">Click the image or <wikilink target="Cookie">eat a cookie</wikilink>!</filelink>
</root>
```
for a page like this:
```
== [[Main page|Example]] ==
[[File:Example.png|link=Click me|Click the image or [[Cookie|eat a cookie]]!|250px]]
```
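To show what consuming such output could look like, here is a minimal sketch in Python. The XML literal is a hand-written assumption of what the proposed API might return for the page above, not output from any existing module:

```python
import xml.etree.ElementTree as ET

# Assumed structured serialization of the example page (hypothetical format).
STRUCTURED = """<root>
<header level="2"><wikilink target="Main page">Example</wikilink></header>
<filelink target="File:Example.png" link="Click me" width="250px">Click the image or <wikilink target="Cookie">eat a cookie</wikilink>!</filelink>
</root>"""

def wikilink_targets(xml_text):
    """Collect the target of every <wikilink> element, in document order."""
    root = ET.fromstring(xml_text)
    return [el.get("target") for el in root.iter("wikilink")]

print(wikilink_targets(STRUCTURED))  # ['Main page', 'Cookie']
```

A bot could use the same few lines to find, say, every page a template links to, without ever touching raw wikitext.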
So I propose adding a new action to get a page's structured view, for example action=query&prop=structure. There should also be a reverse action to convert structured content back into wikitext, and another to perform an edit using a modified structured view.
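For concreteness, a request against the proposed module might be built like this. Note that prop=structure does not exist today; the endpoint and parameter names are assumptions following the usual Action API conventions:

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # any MediaWiki API endpoint

def structure_url(title):
    """Build a URL for the *proposed* (non-existent) prop=structure module."""
    params = {
        "action": "query",
        "prop": "structure",   # hypothetical new property module
        "titles": title,
        "format": "xml",       # serialization format of your choice
    }
    return API + "?" + urlencode(params)

print(structure_url("Main Page"))
# https://en.wikipedia.org/w/api.php?action=query&prop=structure&titles=Main+Page&format=xml
```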
Some notes:
* It should be possible to recover the original source exactly from a structured view. No normalization or other changes should happen in between, and all parameters (in templates, magic words, other elements) should stay in their exact order.
* The example output above is not complete; I believe all plain text should in fact be encapsulated in a separate tag, such as <text>, and line breaks should also be preserved inside it.
* There could be a parameter to request particular sections, by index or by title.
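The lossless round-trip requirement in the first note can be stated as a one-line invariant. Below is a deliberately toy illustration (a line-based tokenizer, nothing like a real wikitext parser) whose only point is the property any real implementation must satisfy: converting to a structured view and back reproduces the source byte for byte.

```python
def to_structure(src):
    """Toy tokenizer: split a page into (kind, raw_text) tokens, keeping
    every character (including newlines) so nothing is lost."""
    tokens = []
    for line in src.splitlines(keepends=True):
        kind = "header" if line.startswith("==") else "text"
        tokens.append((kind, line))
    return tokens

def to_wikitext(tokens):
    """Reverse action: rebuild the exact original source."""
    return "".join(raw for _, raw in tokens)

page = ("== Example ==\n"
        "[[File:Example.png|link=Click me|Click the image or "
        "[[Cookie|eat a cookie]]!|250px]]\n")

# The invariant the proposal demands: a lossless round trip.
assert to_wikitext(to_structure(page)) == page
```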