Nov 23 2016
I'm happy to note the constructive attitude, thanks :).
Wikitext's processing model is based on generating snippets of HTML strings for wikitext markup [...] So, in this string concatenation model, in the general case, you cannot know how a piece of wikitext markup is going to render without processing the entire document.
This string-concatenation model has a number of issues:
- It is a poor fit for tools that operate on the document at a structural level and need to map those structures precisely back to the wikitext that generated them. VisualEditor (VE) is the best-known example of such a tool. VE operates on the DOM, and edits to the DOM need to be converted back to wikitext without introducing spurious diffs elsewhere in the document. To enable this, Parsoid does a lot of analysis and hard work to map each DOM node back to the wikitext string that generated it, and it relies on a number of hacks (some of them ugly) to provide this support.
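The contrast between the two models can be sketched with a toy converter (hypothetical illustration only, not Parsoid's actual implementation; Parsoid tracks source offsets in its own metadata, but the details differ). The concatenation model emits HTML and throws away where it came from; the structural model keeps, per node, the source range of the wikitext that produced it, which is what lets an editor working on the tree serialize an edit back without touching the rest of the document:

```python
import re

# String-concatenation model: substitute HTML snippets for '''bold'''
# and ''italic'' markup, discarding all source-position information.
def render_concat(wikitext):
    html = re.sub(r"'''(.+?)'''", r"<b>\1</b>", wikitext)
    return re.sub(r"''(.+?)''", r"<i>\1</i>", html)

# Structural model: for each generated node, also record the (start, end)
# range of the wikitext that produced it, so a tree-level edit can be
# mapped back to exactly that span of the original source.
def render_with_source_ranges(wikitext):
    nodes = []
    for m in re.finditer(r"'''(.+?)'''", wikitext):
        nodes.append({
            "html": f"<b>{m.group(1)}</b>",
            "source_range": (m.start(), m.end()),
        })
    return nodes
```

With `render_concat`, once the snippets are joined there is no way to tell which characters of the output came from which characters of the input; with `render_with_source_ranges`, an editor that changes one node knows precisely which slice of the wikitext to rewrite.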
Sep 7 2016
It has been two weeks since the last activity, and I would be delighted if someone could update me on the latest status. I would also like to know if the status is that nobody besides me cares about this behaviour of the TOC, and the only way forward is a crappy localised workaround that I will need to maintain with every MediaWiki upgrade. (Although obviously the software developer/architect in me would not like that.)
Aug 22 2016
@Anomie Thanks for your comment.
Aug 20 2016
@RobLa-WMF Unfortunately, nothing happened in the past three weeks.
Aug 1 2016
I checked, and this does not occur on ProofWiki with MW 1.27 (which I tested on a local VM).
So it seems that upgrading will resolve this for us automatically, which is great.
Jul 30 2016
Thanks for your time.
Jul 28 2016
I guess, with everyone seemingly indifferent or having moved on, it is safe to assume that this enhancement of the parser will not be implemented?
Apr 13 2016
Nearly three years have passed since the last activity in this thread. Does anyone know what the status of this is?