<pagelist/> is just a fancy way of generating links to the pages (which you can actually do by hand if you want; this is still sometimes necessary when one has a group of media, like a collection of JPEG images, etc.).
Tue, Apr 16
If the issue goes away with a purge there isn't really an issue with the page anyway. The issue is with a stale page cache. If you manage to update a page with such an error in order to add processing to catch and categorize the error, then the error would already have gone away, because you necessarily had to update the cached page by editing the page either directly or through one of its transclusions.
Sat, Apr 6
I assume this also breaks (via the quoted code) when a <pages/> tag is included in an Index page indirectly (e.g., Xyzzy/ToC has <pages/> and an Index transcludes it via something like {{Xyzzy/ToC}}). Incidentally, is this an across-the-board restriction or just one to prevent circular transclusion? Meaning, can I use a <pages/> tag in an Index so long as the <pages/> tag refers to a different Index (e.g., a ToC in one volume of a multivolume work, where volumes not containing the ToC could still include the ToC in their Index pages using a <pages/> tag because it is not a circular reference)?
Tue, Mar 26
I am not really against removal of "slave" terms, as there are usually plenty of other more precise words that can be used that are unrelated to human slavery; however, I am against unnecessary removal of "master" terminology, as it came to be applied to slave owners considerably after it already had many other usages and meanings (e.g., mastering a skill, and being the origin of "Mr.", etc.). I have no issues with "master" branches and think it is silly and not useful to seriously consider trying to remove such references.
Feb 22 2024
It seems to me a superior solution would just be to use the existing wikitext redirects (necessitating a change in the content model upon rename/moves) and have Scribunto fetch the targets of such things before it #invokes, requires, etc.
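To illustrate, a hedged sketch of what that resolution could look like from the Lua side, assuming the redirect page carries the wikitext content model so that `isRedirect`/`redirectTarget` work; `requireResolved` is a hypothetical helper, not an existing API:

```lua
-- Hypothetical helper: follow a wikitext #REDIRECT in the Module:
-- namespace before requiring, so renamed modules keep working.
local function requireResolved( name )
	local title = mw.title.new( name )
	if title and title.isRedirect and title.redirectTarget then
		-- redirectTarget is a mw.title object for the redirect's target
		return require( title.redirectTarget.prefixedText )
	end
	return require( name )
end
```

The same lookup would need to happen for #invoke and mw.loadData as well for the scheme to be transparent.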
Mar 17 2023
Splitting the content model into sequential and non-sequential content models is an interesting concept but I am not sure that is really that useful or necessary.
It seems to me the issue is Parsoid introducing parallel processing across all the individual parts of parsing during a page render. This way it can memoize the results of these individual parts and potentially run them out-of-order. In order for it to accomplish such, Parsoid has to know all of the inputs to any part of the parse and it assumes that anytime the inputs are the same the output will be the same.
Mar 1 2023
I am not sure how this relates to ResourceLoader but this seems pertinent: T313514: Enable Wikistories for Desktop users.
Is this related to ResourceLoader and T329891: Remove mobile-only modules in Wikistories ?
Feb 28 2023
I would also like to see Special:BookSources get moved out of core to an Extension:ISBN that provides a Special:ISBN (with a Special:BookSources alias) for things like T148274: Implement a convenient way to link to ISBNs without magic links.
I do not really see the value here. First, I would like to see Special:BookSources moved out of core (e.g., into an extension) not unlike how magic links are likely to be handled anyway (see T252054). And what is wrong with links like [[Special:BookSources/{{{ISBN}}}]]? If you really want something shorter why not just make Special:ISBN be an alias for Special:BookSources (I believe several Special pages have aliases as well as language localization names)? Then ISBN {{{ISBN}}} magic links can just be changed to [[Special:ISBN/{{{ISBN}}}]] style links (e.g., via templates), etc.
Feb 24 2023
While I am for adding such revision tags (which are good for watching, etc.), I am against migrating to and depending on the value there.
One possible method towards this could be to use Wikibase in much the same way as Structured Data on Commons was deployed. We could develop something akin to Extension:WikibaseMediaInfo (or perhaps more like Extension:WikibaseLexeme since I am not sure we would need or want names, descriptions and aliases for these new objects) and for querying (per T172408) we could leverage things like WikibaseCirrusSearch, e.g., haswbstatement, wbstatementquantity, etc. or whatever else they are using.
@Tpt I also prefer the MCR route. I think that might allow better handling of the migration cost too.
Feb 22 2023
I find it strange that NAMESPACENUMBER was added to work on full pagenames but nothing was added to do the same with just namespace names—the corollary to ns: and nse:. It is easy enough to work around, as I can always just append a dummy pagename to a namespace name and then pass that to NAMESPACENUMBER:, but that seems crudely unnecessary and a strange oversight.
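From Scribunto, at least, the gap is bridgeable without the dummy-pagename trick, since `mw.site.namespaces` can be indexed by namespace name as well as by number. A minimal sketch:

```lua
-- Look up a namespace number from a bare namespace name; no dummy
-- pagename required. mw.site.namespaces accepts names as well as numbers.
local ns = mw.site.namespaces['Module']
mw.log( ns.id )  -- 828 on a default install with Scribunto

-- The reverse lookup also works: the main namespace (0) has the empty name.
mw.log( mw.site.namespaces[0].id )
```

The missing piece is only on the wikitext/parser-function side.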
Feb 15 2023
In T326480#8616653, @Umherirrender wrote: Changelog contains
Fixed bug GH-9296 (ksort behaves incorrectly on arrays with mixed keys).
https://github.com/php/php-src/issues/9296 wants to enforce the "saner" comparison linked by you to also be done for SORT_REGULAR places.
https://php.watch/versions/8.2/ksort-SORT_REGULAR-order-changes
Feb 13 2023
In addition to the "Central description" ("Zentrale Beschreibung") the "Page information" ("Seiteninformationen") also directly specifies "Page content language" ("Seiteninhaltssprache"):
Which is clearly "en" and not "de".
Using something simplistic like the proposed return require( "Module:Target" ) can easily be detected by the target module, perhaps even affecting its functionality. Specifically, ... will have a value during the module initialization because the module is required and not directly #invoked and mw.getCurrentFrame():getTitle() will refer to the #invoked title and not the redirection target. Also reported script errors will list the #invoked module in the call stack because the target module is not truly treated as the Lua "main" module.
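For illustration, a hedged sketch of how a target module might detect the difference; `Module:Target` is a placeholder name, and the behaviours relied on are as described above rather than documented guarantees:

```lua
-- Inside the target module's main chunk:
local loadedName = ...  -- nil when this module is the #invoked "main"
                        -- module; the module name when it was require()d
                        -- (e.g. by a `return require(...)` redirect shim)
local frame = mw.getCurrentFrame()
local invokedTitle = frame and frame:getTitle()

if loadedName and invokedTitle ~= 'Module:Target' then
	-- We were require()d from some other #invoked module, so the module
	-- could (deliberately or accidentally) behave differently here,
	-- which is exactly the problem described above.
end
```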
Feb 12 2023
Please make sure any solution works for both int-like and float-like numeric key values. It would be bad if 9.1 and "9.1" no longer sorted as larger than 9 and "9".
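To make the concern concrete, here is a sketch (in Lua rather than PHP) of a key comparison that keeps that property: keys that look numeric, whether int-like or float-like, number or string, compare numerically, so 9.1 and "9.1" still sort above 9 and "9":

```lua
-- Compare keys numerically when both are int-like or float-like
-- (as numbers or numeric strings); otherwise compare as strings.
local function keyLess( a, b )
	local na, nb = tonumber( a ), tonumber( b )
	if na and nb then
		return na < nb
	end
	return tostring( a ) < tostring( b )
end

local keys = { '9.1', 9, 10, '9', 9.1 }
table.sort( keys, keyLess )
-- 9 and '9' tie numerically; both still sort before '9.1', 9.1 and 10
```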
Jun 26 2022
In T181714#3807577, @Samwilson wrote: That's a good idea.
I don't think we can email them though (can bots do that?). Perhaps we can add a note to their Commons talk page?
In T179736#3791352, @Samwilson wrote: Is it the case that the zip files we want are identified by format = 'Abbyy GZ' in the files' list? That seems to identify the jp2 and tif zip files in the items I've looked at.
It would be nice if the job metadata and the logs were kept longer. That said, all jobs should have a master timeout after which they die, so that they all end up in the completed/aborted bucket. That bucket can then be cleaned after a longer period. This allows one to manually retry jobs by clicking a button on the aborted ones, if one can ascertain from the logs that the error was somehow transient.
In T161456#3139116, @Samwilson wrote: I think the issues with that could be:
- the OCR from IA is of better quality than we can do on Labs (I think I'm right in saying they use Abby FineReader, we use Tesseract or Google Cloud Vision API);
Jun 24 2022
@Samwilson After pruning old job items via the fix for T183338, the line moved from 150 to 111 but it is possible it was fixed. Is this still happening?
Why not just use direct URL upload to begin with? Let Commons pull it from IA Upload then we do not have to worry about teaching addwiki async chunked uploading as IA Upload's part would be downloading instead (and from the perspective of IA Upload, another request is inherently asynchronous). This has the added benefit of transparency as we would have to provide the media URL and file description metadata anyway.
Jun 23 2022
This depends on how you are trying to process that. That IA item does not have an existing DjVu file (it was created well after March 2016, when they stopped making those).
Currently IA Upload uploads DjVu obtained via three possible sources:
- Use existing DjVu
- From original scans (JP2)
- From PDF (maybe of lower quality)
I too have run into this issue, and I do not think it is so much an issue with the OCR layer being on the wrong pages per se as with the Jp2DjvuMaker including extraneous pages from the JP2 set, so that the OCR layers effectively no longer match up with the resultant pages.
Jan 18 2022
For reference, {{PAGELANGUAGE}} mentioned in the description was added in T59603; however, it only allows one to obtain the language of the page being rendered (since it does not take any arguments, unlike {{PAGENAME}} and friends), not the content language of arbitrary pages, despite arbitrary page content being available via getContent on mw.title objects.
I am hoping that a resolution here can lead to a resolution of T161976 for which the fix was reverted due to T298659 and ultimately resulted in this issue. I believe if page content is available during the rendering of another page that the purported content language of the available page content should also be available during the page render (much like the content model already is).
Jan 11 2022
In T298659#7602887, @Krinkle wrote: The entry point for this problem seems to be ContentHandler::getPageLanguage(), as called by the recently added Scribunto code for mw.title.pageLanguage. This method does not logically depend on the context/user language, except for where it passes $wgLang as third parameter to HookRunner::onPageContentLanguage.
This seems like a bad idea and not something that is imho reasonable to support in MediaWiki, and incompatible with the general model of how this feature works. For one, it would break any assumption that this is persistable to the database.
I feared the Translate extension might depend on this, but from a Codesearch query we find it is actually not recognising this parameter at all. Instead, it is LiquidThreads and WikimediaIncubator using these to make some of their wiki pages act like a special page, in that they automagically render in the UI language. This is strange since we already expose the user language to the parser and allow it to be used. Rather, the hook is additionally forcing the internal page language to be perceived as if it were the current user language. Perhaps in the short-mid term we can find a different way to support the underlying need there.
Jan 10 2022
The problem with this is that it affects the MediaWiki core code, since the Scribunto extension just uses the core template frame code to parse the parameters and arguments. That code makes no attempt to retain original order, and thus this information is lost (despite such order being available to parser functions like #invoke in general). To make matters more complex, parameters and arguments are available (likely out of order) not only for the current #invoke frame but also for the parent wikitext template frame. Wikitext templates also need both numbered and named parameters but so far have had no need to retain original ordering. Since Scribunto allows access to the parent frame args, which also do not retain the original ordering they were given in, fixing this necessitates a core change.
Nov 24 2021
It should probably be noted that there are Wikidata items that state they represent (P31) Wikimedia categories (Q4167836). Some of those have category sitelinks at Commons (e.g., Q9013822 sitelinks to Category:Text logos). These should probably not be considered in error despite also having P373 "Commons category" statements claiming the same value. Having a MediaInfo entity's statements linking to such Wikidata items might be considered erroneous (depending on the claims).
Nov 23 2021
I am not sure if this is actually unexpected. {{#statements:P195|from=M89709639}} yields <span><span>[[Category:Media contributed by Toledo-Lucas County Public Library|Toledo-Lucas County Public Library]]</span></span> because of the P195 claim on M89709639 that points to Q7814140, which in turn has the commons sitelink that points to Category:Media contributed by Toledo-Lucas County Public Library (I doubt that sitelink is really correct and it could stand to be fixed; e.g., {{#property:P373|from=Q7814140}} also yields Media contributed by Toledo-Lucas County Public Library).
Nov 14 2021
In T18691#7502612, @Ciencia_Al_Poder wrote: That doesn't seem feasible, and it's outside of the scope of this task
I never suggested such was feasible or in scope; however, I do think it deserves a discussion point, as the reason for wanting such external linking is actually even stronger for editor-created anchors than for sections. As such, they help define the problem here and possible arguments for or against the proposals here.
It should be noted that while these work:
- https://cs.wikipedia.org/w/index.php?title=Demokratura&veaction=edit&section=5
- https://cs.wikipedia.org/w/index.php?title=Demokratura&action=edit&section=5
the following does not work nor refer to the same thing:
but rather one has to use something like:
Jun 26 2020
I actually did not do that. I think somehow I must have edited/submitted an older version (though I am not sure how, as that was not my intention).
Jun 23 2020
@Aklapper Does it really need triage? There was already a patch for it (though it seems to need to be updated). I can see how the patch itself needs triage, but the issue seems well understood. Anomie already clarified that mw.language.getPageLanguage was not the right thing and demonstrated that a pageLanguage field of mw.title objects was the way to go. What further triage does this issue really need? I only assigned it to Anomie so that he would respond based on the patch he created. I understand if he wants to remove himself at this point in time, but the point was to get him to make such a statement.
Jun 21 2020
I highly doubt this sort of functionality will arrive anytime soon. The main issue is that if a Scribunto module supplies different output based on different input from remote wikis, how does MediaWiki track the links and maintain the page rendering caches (so cached output gets properly updated when a dependency changes)? To accomplish this sort of dependency tracking, the link tables would have to somehow be expanded to support cross-wiki linking so that things like [[Special:Whatlinkshere]] can list remote page transclusions, etc. (perhaps you read that getContent causes the page to be recorded as a transclusion and this is why).
@Anomie Can we get your change from over three years ago merged? This is an easy and straightforward fix but Gerrit is reporting some sort of merge conflict even though Jenkins had no issues with it.
Jun 16 2020
In T192462#5461691, @Ladsgroup wrote:
May 14 2020
This is actually a regression from when the extension moved from GeSHi (PHP) to Pygments (Python): T85794: Convert SyntaxHighlight_Geshi from Geshi to Pygments (was: Don't register 250+ modules). Since MW is PHP, the extension could just use GeSHi as a library without having to fork a separate process via one of the unsafe functions removed in the hardened PHP.
Jan 24 2020
The issue becomes how to represent multiple edition links in MediaWiki toolbars across multiple WMF wikis across their projects. Currently, as implemented via WD sitelinks, we only allow one link per wiki per project per WD item. This is in part owing to the limited space in the MediaWiki toolbars where such links are displayed. Even across wikis within a single project when only a single link is allowed per wiki, there can sometimes be a *very* large number of links (there are many languages in Wikipedia alone, and already there are mechanisms that limit the number of sitelinks displayed in the toolbar by default).
Jan 13 2020
@beleg_tal: I agree with your statements, especially "interwiki system needs to be flexible enough to accommodate different data models", however, I do not think this is an inherently Wikidata issue.
Jan 10 2020
I think we can remove OldWikisource from this task as concepts from this task that apply to it are now adequately covered by:
- T138332: Interwiki links to/from Multilingual Wikisource
- T206426: Storing multiple sitelinks to a multilingual wiki
This task can now focus on just Incubator and BetaWikiversity.
This should not be a blocking issue if we disregard T206426: Storing multiple sitelinks to a multilingual wiki which I do not think we should consider implementing except for possibly allowing sitelinks to be prefixed (a la Special:MyLanguage/) on their way to their target wiki. See my comment on that task.
Jan 9 2020
I am not sure whether this is a good idea (it seems like there may be a few proposals) and I am against implementing this in Wikidata beyond how it already implements things (multiple linked records with one sitelink per wiki per item). That said, I see no issue with some wiki client like a Wikisource one implementing such a thing locally, e.g., perhaps traversing through linked Wikidata items via Scribunto Lua modules to find all the sitelinks of all edition items of a specific work item they are all linked to, or other similar arrangements it wants to.
I am against having multiple site links per wiki per WD item. On the other hand, I am not against having a translation system for these sitelinks and it might be good to have some method to automatically prefix item sitelink links to multilingual wikis using something like Special:MyLanguage/.
Dec 9 2019
FYI: I made a comment on T185313: mw.wikibase.entity:getBacklinks (lua API in wikibase client) about the possibility of creating a query service that stores results in Tabular Data (which is available at page render time via Extension:JsonConfig and Scribunto Lua).
I agree that Special:WhatLinksHere is probably not the right semantics for this request, haswbstatement might be better semantics, however, those need to be well defined so people know if in fact they would address this request.
Oct 23 2019
I disagree. The availability of one's password does not fall into "security by obscurity" because it is, in general, not obscurely available from other sources (and if it is, that is an entirely different type of security issue). The point being, security should clearly delineate who has access to what, make all things available to such persons, and nothing available to those who should not have access. Since the concept of most-linked pages is available via other public means, this clearly falls into the category of "security by obscurity" (unless you plan to secure the data through all means of access, which seems to go beyond this proposal).
Oct 11 2019
I do not think this is a good idea. This amounts to security through obscurity, which in general is not a good practice. The same data could be found in a number of other ways (e.g., api.php, which would be even more useful to a potential vandal bot) and in the end it does nothing to actually prevent any vandalism to begin with (it just attempts to deter it by obscuring which pages have the most links). Also, unconfirmed users (e.g., anonymous IPs) might have valid reasons to want to know which pages are most linked. We already punish such editors enough for the faults of troublemakers. I do not see this as a great way to protect our content from vandalism, and it definitely punishes other users.
Oct 5 2019
@jeblad: This seems like a problem:
Apr 19 2018
This might get resolved by T112658, at least for the parser function.
This might get resolved by T112658.
This should probably be handled more generally along the lines of T157868 and T127169. This is exactly why I felt it was better to implement a mw.wikibase.resolveEntityId that returns the resolved eid or nil if it does not exist rather than just the true or false of mw.wikibase.entityExists. If we had a mw.wikibase.resolveEntityId we could funnel all code through it that needed to check for existence and redirection instead of making every other function handle redirection, etc.
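A hedged sketch of what such a funnel could look like, built only on the existing client API; `resolveEntityId` itself does not exist, and whether `getEntity` follows merge redirects to the target entity is an assumption drawn from this thread:

```lua
-- Hypothetical mw.wikibase.resolveEntityId built on existing calls:
-- returns the resolved entity ID, or nil if the entity does not exist.
local function resolveEntityId( eid )
	if not mw.wikibase.entityExists( eid ) then
		return nil
	end
	-- Assumption: getEntity on a merged (redirected) item returns the
	-- redirect target, whose .id field is the resolved entity ID.
	local entity = mw.wikibase.getEntity( eid )
	return entity and entity.id or nil
end
```

Of course this workaround pays the full getEntity cost, which is exactly why a cheap native resolveEntityId would be preferable.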
Apr 17 2018
Perhaps this ticket is old, but it seems to me we already have such filters with mw.wikibase.getBestStatements and mw.wikibase.getAllStatements. Of course those do not pull multiple properties in a single execution, but they do filter to a single property without pulling the expensive complete set/tree of property data. One could easily create a function to execute one of these multiple times and combine the results to get what is requested in this ticket.
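Such a function is only a few lines; a sketch, with `getBestStatementsMulti` being a hypothetical name:

```lua
-- Fetch best-rank statements for several properties at once, without
-- pulling the full entity via the expensive getEntity.
local function getBestStatementsMulti( entityId, propertyIds )
	local results = {}
	for _, pid in ipairs( propertyIds ) do
		results[pid] = mw.wikibase.getBestStatements( entityId, pid )
	end
	return results
end

-- e.g. getBestStatementsMulti( 'Q42', { 'P31', 'P569' } )
```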
Apr 12 2018
This seems partially redundant with T157868; however, I am not sure about mw.wikibase.entity.formatPropertyValues. I agree that Wikibase parser functions like {{#property:…}} should probably properly follow redirects; however, from Scribunto I would rather see a mw.wikibase.resolveEntityId added for use with mw.wikibase.getEntity, or perhaps also have mw.wikibase.getAllStatements and mw.wikibase.getBestStatements follow redirects.
T143970 seems like it was recently closed but I still think we need a resolveEntityId(eid) that returns nil when there is no such entity but redirects for merged items, etc. It could also potentially work like resolvePropertyId and return a valid entity ID when given an unambiguous label or alias.
Does entityExists properly handle redirects (e.g., merged entities) and if so how do we get the entity ID we are redirected to?
Feb 10 2018
I have no issue with discussion, and I believe this is an adequate forum for such a discussion. My point was that your requests are significantly lacking (and need discussion and focus) before they can be considered for possible implementation.
Feb 8 2018
I agree. This request is poorly specified. For one, labels, descriptions, and sitelinks are not properties. Also, how should these property values be handled? There are many property data types where the data is not necessarily a single scalar value. Also, property claims can have unknown or no value in addition to a value. This ignores what to do when there are multiple property claims for the same property (or any comment about ranks), and it is unclear how qualifiers or references should or should not be handled by this interface (although the second line in the description of this task quotes a qualifier access).
Jan 23 2018
In T185313#3916435, @Ghuron wrote: Well, although I can see some similarities between this one and T99899, I believe there are different use cases involved. Lookup by external identifier is mostly needed in javascript (e.g. notify the user that a wikidata instance with the same imdb-id already exists) and can be implemented either via Markus resolver or directly via the sparql endpoint (e.g. https://ru.wikipedia.org/w/index.php?diff=84445799). I can imagine some use cases where it would be nice to have it in lua (e.g. for additional decoration of wikipedia articles), but it's exotic stuff. In contrast, building lists based on wikidata is a more straightforward case and must be purely serverside (for performance reasons), so I'd see a need for backlinks in lua.
Jan 15 2018
@Tacsipacsi It sounds like you want: mw.wikibase.getEntityIdForTitle(mw.title.getCurrentTitle().subjectPageTitle.prefixedText)
We need a resolveEntityId(eid) that returns nil when there is no such entity. It should also handle redirects from merged items, etc. (also solving T157868). It could also potentially work like resolvePropertyId and return a valid entity ID when given an unambiguous label or alias.
In T182147#3815415, @thiemowmde wrote:
- A boolean entityExists (T143970).
- Use case: Currently, I see a lot of code that does if getEntity( … ) then, which is super-expensive for no reason. The cheapest workaround that currently exists is getEntityUrl, but that's awkward to use in an if.