User Details
- User Since
- Nov 2 2015, 2:39 PM
- MediaWiki User
- LA2
Oct 7 2021
I'm in! Thanks!
Sep 26 2019
I'm quite sure that I will not run into any more problems with Wikisource,
simply because I don't intend to run into Wikisource anymore.
It appears my problem (with PDFs of Finlands Allmänna Tidning) is the same one I reported in Bugzilla in 2010 and gave up on solving in 2013, because it is oooh so difficult to solve a small problem. That report has since been imported into Phabricator:
https://phabricator.wikimedia.org/T25326
So once again, I am abandoning Wikisource. This time, my enthusiasm lasted for a week. Maybe I'll make a new try nine years later. Bye!
Nov 2 2015
This goes beyond dumps. Sometimes in MediaWiki, you want a structure that groups many pages together, or one that splits a page into sections.
- Grouping can be achieved by creating subpages. A typical example is Wikisource that uses one page per book and one subpage for each chapter of that book, e.g. https://en.wikisource.org/wiki/Eskimo_Life
- Splitting can be achieved by subheadings. A typical example is Wiktionary that has one page per word, and subheadings for each language that has a definition of that word, e.g. https://en.wiktionary.org/wiki/Eskimo
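As a sketch of what "grouping by subpages" could look like to a dump-processing tool, here is a small Python function that rebuilds the book/chapter hierarchy from a flat list of page titles by splitting on the MediaWiki subpage separator "/". The function name and the sample titles are illustrative assumptions, not part of any MediaWiki API:

```python
from collections import defaultdict

def group_subpages(titles):
    """Group a flat list of page titles into root pages and their
    subpages, splitting on the first "/" (MediaWiki's subpage separator)."""
    books = defaultdict(list)
    for title in titles:
        root, _, subpage = title.partition("/")
        if subpage:
            books[root].append(subpage)
        else:
            # A root page with no subpages still gets an entry.
            books.setdefault(root, [])
    return dict(books)

titles = [
    "Eskimo Life",
    "Eskimo Life/Chapter 1",
    "Eskimo Life/Chapter 2",
]
print(group_subpages(titles))
# {'Eskimo Life': ['Chapter 1', 'Chapter 2']}
```

A real dump tool would read titles from the XML export rather than a hard-coded list, but the grouping step is this simple because the hierarchy is already encoded in the titles.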
However, these structures are not reflected in dumps, templates, robot operations, or searches. But maybe they should be? It would be nice to be able to search for words only inside the current book without learning all of the intitle: syntax for searches.

Wiktionary has many templates with the argument lang=, which is almost always lang=fr in the ==French== section of the page and lang=de in the ==German== section. If the subheading could set this as a context (in this section, assume lang=fr, like a local variable in a programming language), the parameter would not be needed in each template call; it would only be needed when it deviates from the section default.

So, considering that the XML dump is a structured document, perhaps we should see the wiki as one large structure, rather than a flat set of pages.
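The "section heading as a local variable" idea can be sketched in a few lines of Python. This is purely hypothetical (MediaWiki has no such mechanism): the parser tracks the current ==Language== heading and supplies its code as the default lang= for any template call that omits the parameter. The heading-to-code mapping and the function name are my own assumptions:

```python
import re

# Assumed mapping from section heading to language code.
LANG_CODES = {"French": "fr", "German": "de"}

def infer_lang(wikitext):
    """Return (template_call, effective_lang) pairs, filling in the
    section-local default when a template omits lang=."""
    current = None          # default set by the enclosing section
    results = []
    for line in wikitext.splitlines():
        heading = re.fullmatch(r"==\s*(\w+)\s*==", line.strip())
        if heading:
            current = LANG_CODES.get(heading.group(1))
            continue
        # Find simple (non-nested) template calls on this line.
        for call in re.findall(r"\{\{[^{}]*\}\}", line):
            m = re.search(r"lang=(\w+)", call)
            # An explicit lang= deviates from (overrides) the default.
            results.append((call, m.group(1) if m else current))
    return results
```

For example, on a page with a ==French== and a ==German== section, a bare {{noun}} call under ==German== would resolve to lang=de without anyone typing it.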