User Details
- User Since: May 15 2022
- MediaWiki User
- Theknightwho
Nov 3 2025
Aug 27 2025
Thank you.
Mar 18 2025
Mar 4 2025
I wrote this code. I give a detailed explanation of the issue below, but the tl;dr is that get_current_L2 relies on a nasty kludge to get the current section number (the number MediaWiki uses to work out which section to open when you edit a section), and that kludge seems to be incompatible with Parsoid.
May 22 2024
May 17 2024
May 15 2024
May 14 2024
The main issue that I see with both this and T331906 is that neither has a good way to handle (a) headings parsed by the preprocessor which fail on expansion (e.g. a heading containing a template which expands into multiline text), or (b) headings which aren't parsed as such by the preprocessor, but which are created via expansion (e.g. headings in template outputs).
May 12 2024
May 11 2024
Another idea I had for this is to make most of the data in mw.site available via a metatable, in the same way mw.loadData is. Removing mw.site and package.loaded["mw.site"] from _G speeds up mw.clone(_G) by about 4.5 times, which is a major reduction, and there's no reason for anyone to be writing to those tables anyway.
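The metatable idea above can be sketched in plain Lua. This is a minimal illustration, not the mw.site internals: makeReadOnlyProxy and the sample data are invented names, and the real implementation would mirror however mw.loadData builds its proxies.

```lua
-- Hypothetical sketch: expose a data table through an empty proxy so the
-- real table never needs to live in _G. Reads fall through via __index;
-- writes are rejected, matching the point that nobody should be writing
-- to these tables anyway.
local function makeReadOnlyProxy(data)
	return setmetatable({}, {
		__index = data,  -- reads fall through to the hidden table
		__newindex = function ()
			error("this table is read-only", 2)
		end,
		-- __pairs is honoured by pairs() in Lua 5.2+; Scribunto patches
		-- pairs() to the same effect for mw.loadData-style proxies.
		__pairs = function ()
			return next, data, nil
		end,
	})
end

-- Usage: consumers see the same keys, but writes fail.
local site = makeReadOnlyProxy({ server = "en.wiktionary.org" })
print(site.server)                                     --> en.wiktionary.org
local ok = pcall(function () site.server = "x" end)
print(ok)                                              --> false
```

Because the proxy itself is an empty table, a deep clone of _G only has to copy the (empty) proxy rather than walking the whole dataset, which is where the speed-up would come from.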
Apr 14 2024
Apr 13 2024
Apr 8 2024
Feb 28 2024
Feb 12 2024
Here's a version that's about 15% faster than the original:
- It avoids generating a new closure every time the main function is called.
- tableRefs isn't an upvalue, so access is faster.
- Avoids any global variables.
I should have mentioned why I chose for key, elt in next, val do: it's because it avoids any __pairs metamethod, which would prevent a true clone of the table being created. This wouldn't work with data loaded via mw.loadData, but (a) mw.clone already throws an error if you pass data loaded via mw.loadData into it, and (b) even if it didn't, it defeats the purpose of mw.loadData to make a local clone of the data, so you may as well just load it via require in the first place.
Feb 11 2024
From my tests, the speed improves by about 10% if the check for non-tables and already-seen values is in the for-loop, rather than at the start of the recursive call. I've also minimised the number of table accesses.
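The two points above (iterating with next to bypass __pairs, and putting the non-table/already-seen checks inside the for-loop rather than at the top of the recursive call) can be sketched like this. This is an illustration of the technique, not the actual mw.clone source:

```lua
-- Hypothetical sketch: deep-clone a table, bypassing any __pairs
-- metamethod by iterating with `next` directly, so the raw contents are
-- copied. The type() and seen-table checks sit inside the loop, so a
-- recursive call is only made for tables not yet encountered.
local function clone(val, seen)
	seen = seen or {}
	local out = {}
	seen[val] = out  -- record before recursing, so cycles resolve
	for key, elt in next, val do
		if type(elt) ~= "table" then
			out[key] = elt
		else
			out[key] = seen[elt] or clone(elt, seen)
		end
	end
	return out
end

-- Usage: cycles and shared subtables are preserved in the copy.
local t = { a = 1 }
t.self = t
local c = clone(t)
print(c ~= t, c.self == c, c.a)   --> true  true  1
```

Hoisting the checks into the loop saves a function call per non-table value, which is where the measured ~10% would come from on data-heavy tables.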
Strongly support this. At the English Wiktionary, we use a recursive backtracking parser to iterate over templates, since we have to do a lot of data scraping due to a large amount of info being spread across pages and to ensure we remain accessible (instead of shoving everything into intimidating data tables). The current performance is good, but it would be even better with coroutines, since there'd be no need to build the whole node tree of templates before traversing it. This is an issue on very large pages, where any breaks necessarily have to happen in the second pass.
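The coroutine idea can be sketched as a lazy iterator: each template node is yielded as soon as the scanner finds it, so the caller can stop early instead of building the whole node tree first. The names (templates, the node fields) and the deliberately naive non-nested matcher are illustrative, not the Wiktionary module's actual parser:

```lua
-- Hypothetical sketch: yield template nodes lazily via a coroutine
-- instead of materialising the full tree before traversal.
local function templates(wikitext)
	return coroutine.wrap(function ()
		local pos = 1
		while true do
			-- naive matcher for non-nested {{...}}; a real parser backtracks
			local s, e, name = string.find(wikitext, "{{([^{}|]+)[^{}]*}}", pos)
			if not s then
				break
			end
			coroutine.yield({ name = name, start = s, finish = e })
			pos = e + 1
		end
	end)
end

-- Usage: iteration is on demand, so breaking out of the loop early
-- means the rest of the page is never scanned.
for node in templates("a {{foo|1}} b {{bar}} c") do
	print(node.name)   --> foo, then bar
end
```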
Feb 5 2024
Just to reiterate Erutuon's point above: this affects approximately 1.6% of all mainspace pages on the English Wiktionary - that's a lot! Obviously we can adjust the modules to account for this, but it's a situation that was difficult to spot, and it causes sorting problems that are difficult to debug because the behaviour is so unintuitive. It really should be treated as a bug, not a feature request.
Jan 27 2024
Jan 7 2024
This has been pending for three months now with no action. Could we please get an update?
Dec 31 2023
Dec 19 2023
Thanks for the quick turnaround.
Dec 18 2023
Dec 3 2023
Oct 24 2023
Sep 17 2023
Why has this been closed? This ticket is for Wiktionary, not Wikiversity.
Aug 10 2023
Jul 7 2023
Jun 13 2023
May 30 2023
@Pcoombe [[en:wikt:summary]], [[en:wikt:heading]] and [[en:wikt:actions]] may also be exhibiting unintended behaviour, in that case. There are a few other classes in that format, but the titles are implausible for Wiktionary.
Hi @Aklapper - I've updated the initial report with the browsers I've tested it on.
Mar 1 2023
May 15 2022
Just to point out that this is common practice on the English Wiktionary when striking out a whole section, and it works without issue in other skins. So regardless of whether it's bad HTML from a technical perspective, it does need to be corrected, as some users will continue to do this if they aren't using Vector 2022.

