This is a placeholder task for https://www.mediawiki.org/wiki/Requests_for_comment/Accessing_page_properties_from_wiki_pages.
Related to:
I'd like to meet to discuss the options listed on the wiki page and any other ideas that would resolve the linked tasks.
According to https://lists.wikimedia.org/pipermail/wikitech-l/2017-January/087406.html, this task is up for discussion at the next IRC meeting. I guess that would be Wednesday, January 25, 2017 (E464)?
@MZMcBride yes, indeed. I was traveling and forgot to ping you about this, sorry. Will you be available for the IRC meeting at 2pm PST on January 25?
No worries. Yep, I'm available then; I have the meeting on my calendar. I pinged @Jackmcbarn and @Legoktm about attending as well.
I posted to wikitech-l: https://lists.wikimedia.org/pipermail/wikitech-l/2017-January/087441.html.
It seems to me that the special case of a wiki page accessing its own page-props deserves another look: the problem is that page props come from the ParserOutput object; they are generated during and after parsing. That means some page-props may already be present when a construct in the wikitext asks for them, while others may not exist yet. Which properties already exist and which don't will depend on many things, including implementation details that may change, cache state, and whether we are using Parsoid.
Page props are a result of parsing. Because of this, I see no good way for a page to access its own page props during parsing. The only way around this would be to access information that is stored in a slot different from the wikitext, once we have T107595: [RFC] Multi-Content Revisions.
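To illustrate the ordering problem, here is a toy sketch in Python (not MediaWiki's actual parser, whose internals are in PHP; all names are made up): a property read that occurs earlier on the page than the construct that sets the property sees a missing value, and the outcome flips if the order changes.

```python
# Toy model of page props being filled in while a page is parsed.
# Everything here is illustrative; MediaWiki's real parser works differently.

class ToyParserOutput:
    def __init__(self):
        self.page_props = {}

def parse(constructs):
    """Walk the page top to bottom, setting and reading props as we go."""
    output = ToyParserOutput()
    reads = []
    for kind, name, value in constructs:
        if kind == "set_prop":
            output.page_props[name] = value
        elif kind == "read_prop":
            # At this point only props set *earlier on the page* exist.
            reads.append((name, output.page_props.get(name, "<missing>")))
    return reads

# A read placed before the set sees nothing; the same read after the set works.
print(parse([("read_prop", "displaytitle", None),
             ("set_prop", "displaytitle", "Foo"),
             ("read_prop", "displaytitle", None)]))
# [('displaytitle', '<missing>'), ('displaytitle', 'Foo')]
```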
As to accessing other pages' page-props: if we don't care about the information going stale, this should be trivial. If we do care, we need to track which page uses which property of which page, and then purge the using pages whenever a property they use changes. That would fit into the scope of T102476: RFC: Requirements for change propagation. (Ab)using templatelinks would be an option, though I think we should rather not.
Some issues:
Alternatively to relying on templatelinks, a more specialized dependency tracking mechanism could be implemented, similar to the usage tracking mechanism for Wikibase: https://phabricator.wikimedia.org/diffusion/EWBA/browse/master/docs/usagetracking.wiki
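For concreteness, here is a minimal sketch (in Python, with hypothetical table and function names; Wikibase's actual usage tracking is implemented in PHP on top of a database table) of what such a specialized tracking-and-purge scheme might look like: record which page reads which property of which other page, and purge the readers when that property changes.

```python
from collections import defaultdict

# Hypothetical in-memory stand-in for a tracking table of
# (user_page, source_page, prop_name) rows; a real implementation would
# use a database table analogous to templatelinks / wbc_entity_usage.
usages = defaultdict(set)          # (source_page, prop_name) -> {user_page, ...}

def record_usage(user_page, source_page, prop_name):
    """Called whenever user_page reads prop_name of source_page during parsing."""
    usages[(source_page, prop_name)].add(user_page)

def on_prop_changed(source_page, prop_name, purge):
    """Called after re-parsing source_page changed the value of prop_name."""
    for user_page in usages.get((source_page, prop_name), set()):
        purge(user_page)           # invalidate the cached rendering of the reader

# Example: "List of cities" reads the "population" prop of "Berlin";
# when that prop changes, "List of cities" gets purged.
record_usage("List of cities", "Berlin", "population")
on_prop_changed("Berlin", "population", purge=lambda p: print("purging", p))
```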
This was discussed on IRC on Wednesday, January 25. Full meetbot log: https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-01-25-22.02.log.html
Some notable comments/exchanges:
The essence of the meeting, as summarized above, seems to be:
For the DEFAULTSORT use case, it may be best to simply change MediaWiki so it always uses the subject page's DEFAULTSORT key for the associated talk page as well.
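A rough sketch of that fallback (Python pseudocode with hypothetical helpers; the real change would live in MediaWiki's PHP category-sorting code): when a talk page has no DEFAULTSORT of its own, reuse the one set on the subject page.

```python
def effective_sort_key(page, get_defaultsort, subject_page_of):
    """Return the category sort key to use for `page`.

    `get_defaultsort` and `subject_page_of` are hypothetical lookups standing
    in for MediaWiki internals; the point is only the fallback order.
    """
    key = get_defaultsort(page)
    if key is None and page.startswith("Talk:"):
        # Fall back to the DEFAULTSORT of the associated subject page.
        key = get_defaultsort(subject_page_of(page))
    return key if key is not None else page

# Usage: "Talk:Albert Einstein" inherits "Einstein, Albert" from its subject page.
print(effective_sort_key(
    "Talk:Albert Einstein",
    get_defaultsort=lambda p: "Einstein, Albert" if p == "Albert Einstein" else None,
    subject_page_of=lambda p: p[len("Talk:"):],
))
```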
The previous comments don't explain what/who exactly this task is stalled on ("If a report is waiting for further input (e.g. from its reporter or a third party) and can currently not be acted on"). Hence resetting task status.
(Smallprint, as general orientation for task management: If you wanted to express that nobody is currently working on this task, then the assignee should be removed and/or the priority could be lowered instead. If work on this task is blocked by another task, then that other task should be added via Edit Related Tasks... → Edit Subtasks. If this task is stalled on an upstream project, then the Upstream tag should be added. If this task requires info from the task reporter, then there should be instructions on which info is needed. If this task is out of scope and nobody should ever work on it, then the task should have the "Declined" status.)
Untagging an old RFC predating our current process. It appears to be a feature request for the parser, which I've tagged accordingly. If and when it is accepted and turns out to be cross-cutting or strategic, feel free to turn it into an RFC.