
RFC: Evolve hook system to support "filters" and "actions" only
Open, Medium, Public

Description

  • Affected components: MediaWiki core and extensions.
  • Engineer for initial implementation: TBD.
  • Code steward: TBD.

Motivation

I believe the following would be desirable outcomes for this RFC.

Status quo statements that we'd like to uphold:

  1. Extensions can make small changes to the default behaviour of a feature without significant code duplication or boilerplate, and in a way that isn't mutually exclusive with another extension also making similar changes (e.g. extensions can collaboratively make small changes to the same implementation).
  2. Extensions can replace an entire service. (e.g. an extension can replace a feature, in a way that is mutually exclusive with core or other extensions).

Statements we'd like to become true in the future:

  1. Most extensions can be written in a way that will remain functional in the latest stable MediaWiki release for a long time without any maintenance. ("a long time" is believed to be multiple major release cycles no shorter than the time between LTS releases; using a deprecated feature is considered functional; it was noted that while mandatory maintenance after an LTS release is acceptable, it should still be minimal in most cases)
  2. Contributors can usually improve core features without needing to break backward compatibility.
  3. Most extensions can and would by default implement their features in a way that won't significantly slow down user-facing web requests. ("can and would" is meant to signify that the performant way shouldn't require "extra care", but rather good performance should be a natural consequence of an extension author selecting the simplest approach available to them).
Requirements

With this RFC, I'd like to solve problems that affect MediaWiki contributors and extension authors primarily when using the Hooks system. They affect the other extension interfaces as well, but to a lesser extent.

  • Hooks expose a lot of (mutable) state.

Hooks often expose high-level class instances, and in their entirety (e.g. passing $this or other high-level objects as parameters). This means a hook handler is able to vary its return value based on anything in the current state, and also to modify any of that state, and to call any of the public methods on those objects at that time or at later times (deferred).

Allowing this kind of inspection and flexibility doesn't have to be a problem. But, I believe it is currently causing problems because we _also_ don't make a distinction between what is "recommended" and what is possible "at your own risk".

A recommended interface would be officially supported, with breaking changes happening rarely and with public announcements. Ideally with much documentation, examples, and migration guides. An at-your-own-risk approach would still be possible but might require more maintenance to keep compatible, and might sometimes change multiple times between LTSes without (much) documentation.

I believe the absence of this distinction causes the following problems for maintainers:

  • It is difficult to limit (or selectively encourage) how extensions will use a certain hook.
  • Authors and maintainers of core functionality cannot know what a hook is "supposed" to be used for, or how to change the surrounding code in core in a way that is backwards compatible.
  • Contributors to core functionality are unable to change code without potentially breaking an extension.
  • Maintaining or improving MediaWiki core is difficult.

And as consequence of the above, there are problems for extension authors as well:

  • Extension authors cannot know which hook they should use to implement their custom behaviour in a way that will (likely) remain stable, and that they will (likely) receive support for (in the form of documentation/migration) if and when it does change.
  • Extension authors have to deal with breaking changes almost every release cycle.
  • Maintaining MediaWiki extensions is difficult and/or requires a lot of time.

Note that this last point applies both to extensions maintained by the Wikimedia Foundation and to those from third parties.

Secondary outcomes

At TechConf 2018, a number of other possible outcomes were noted as positive, but they are not hard requirements for a first step. I believe these are important for MediaWiki in the long term, and that if they are not improved by this RFC (e.g. because we amend it or go with an alternate proposal), we should discuss them in a separate RFC. We should not regress on the factors below (and should either improve them here or later in separate proposals).

  • Encourage database interaction to be in a transaction separate from the main transaction that spans the "pre-send" time of web requests. (E.g. provide eventual consistency via deferred updates or job queue, instead of atomic consistency.)
  • Increase operational high-availability of MediaWiki by reducing the chance of cascading failures.

Status quo

First, a brief summary of the status quo. MediaWiki's current extension interfaces are:

  • Replace an implementation (high abstraction level).
  • Add an implementation (medium abstraction level).
  • Hooks (low abstraction level).
Replace an implementation

These are service-like abstractions in MediaWiki for which the implementation can be swapped.

Examples: JobQueue backend (db, redis, ..), FileRepo backend and LockManager (local filesystem, swift, ..), PoolCounter, RCFeed, Database system (mysql, postgres, sqlite, etc.).

The configuration variable has a (sometimes dynamic) default, and a site administrator would set this to a fully-qualified class name of their choosing. The class in question can either be one of the implementations that ship with core, or provided by an extension.

These services typically have only one instance, which will automatically be used by all components in core and other extensions. The reason for changing the implementation is usually to provide the same set of features but with different underlying technology. For example, to improve performance, scalability, or to share operational resources within an organisation etc.

Add an implementation

Several MediaWiki components allow extensions to register additional implementations, associated with a symbolic name. Typically all implementations are available simultaneously (e.g. not mutually exclusive). It is customary (by social convention, sometimes enforced technically) for extensions not to override existing entries, rather, they are only meant to add new ones.

Examples: Special pages, Skins, API modules, Content models, Media Handlers, Parser tags, Parser functions, and Magic words.

Extensions have to implement a class interface, subclass, or function signature, and add their implementation to a registry, usually through an extension attribute or site configuration.

The core code then provides the bridge to these implementations from the rest of the system. E.g. the Preferences page automatically lists available skins by localised label, the "useskin" parameter supports rendering with any of the installed skins, the Parser automatically scans for added magic words to call the appropriate function etc.

Hooks

A hook allows arbitrary code ("hook handlers") to run as callbacks at a specific point in core code. The core code declares that point with Hooks::run( 'TheHookName', [ $a, $b ] );, then extensions implement a function somewhere that accepts parameters $a and $b (e.g. MyExtensionHooks::onTheHookName) and add it to the hooks registry (e.g. in extension.json, by adding the function name to a Hooks.TheHookName array).

See also https://www.mediawiki.org/wiki/Manual:Hooks.
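
For illustration, roughly what that looks like today (the hook name and handler class are hypothetical):

// In core: declare the extension point.
Hooks::run( 'TheHookName', [ $a, $b ] );

// In the extension: a handler whose signature matches the hook's parameters.
class MyExtensionHooks {
	public static function onTheHookName( $a, $b ) {
		// Arbitrary code; returning false aborts the remaining handlers.
		return true;
	}
}

// Registered in extension.json:
// "Hooks": { "TheHookName": [ "MyExtensionHooks::onTheHookName" ] }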


Exploration

Proposal

Drafted by Daniel Kinzler and Timo Tijhof based on discussions during sessions on the first days of TechConf 2018. Presented in the session "Architecting Core: Extension Interface" on the final day. The below has been amended to incorporate the feedback from that session. (Notes on mediawiki.org).

We group hooks into two broad categories: actions, and filters.

Action hooks (aka listeners)

An asynchronous event handler, whereby a handler callback does additional work in response to an action that has occurred.

Their characteristics are:

  • Action hooks cannot modify or cancel the default result or behaviour during the user action.
  • Action hooks cannot be aborted, returning a value is considered an error.
  • Action hooks may do expensive work, which is acceptable because they are asynchronous in nature. That is, they are not part of the atomic transaction that spans the original action. Instead, they are run from a deferred update or job.
  • Action hooks are not given mutable values (e.g. value object with setters, or primitives passed by reference). Instead, they are only given immutable value objects and primitives by value, with any other read or write services obtained from the services container as needed.
  • An action hook callback cannot be prevented from running by other callbacks. (Exceptions are caught, logged, and then the hook continues; similar to deferred updates). {note: open question}
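
For illustration, a minimal sketch of what an action hook handler could look like under these rules (the hook name, parameters, and service name are hypothetical):

use MediaWiki\MediaWikiServices;

class MyExtensionHooks {
	public static function onPageDeletedAction( int $pageId, string $deleterName ): void {
		// Runs from a deferred update or job, after the user action has completed.
		// Only immutable values are passed in; services are obtained as needed.
		$auditLog = MediaWikiServices::getInstance()->getService( 'MyExtension.AuditLog' );
		$auditLog->record( 'delete', $pageId, $deleterName );
		// No return value: an action hook cannot abort or modify anything.
	}
}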

Filter hooks

A synchronous function to modify a specific value or state.

Their characteristics are:

  • Filter hooks allow the callback to modify a value (e.g. a mutable or replaceable value object, or primitive passed by reference).
  • Filter hooks may return false to prevent other callbacks for the same filter from running. For example, when an extension disables a piece of functionality it might empty the value and then return false.
  • Filter hooks must not do expensive work. Execution time will be profiled and violations above a configurable threshold will be logged as warnings.
  • Filter hooks are given only one mutable value, the other parameters must be immutable values, primitives by value, and/or read-only services.
  • Filter hooks must be deterministic and must not cause side-effects. Their behaviour should be based on the input parameters only. Other instances and services should not be obtained. {note: open question}
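
For illustration, a minimal sketch of what a filter hook handler could look like under these rules (the hook name and parameters are hypothetical):

class MyExtensionHooks {
	public static function onSkinModulesFilter( array &$modules, string $skinName ): bool {
		// Exactly one mutable value (the module list); deterministic, cheap,
		// and based only on the input parameters.
		if ( $skinName === 'minerva' ) {
			$modules[] = 'ext.myExtension.mobileTweaks';
		}
		// Returning false would prevent later callbacks for this filter from running.
		return true;
	}
}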

PHP interface

Note: The section for the PHP interface is still an early draft and has not yet received wider input.

We'll need a way to register callbacks, and a way to run them. Today that is Hooks::run(), Hooks::runWithoutAbort(), and an extension attribute holding an array of callables (Hooks.<Name>). We could follow the same naming and do it as follows:

  • Register via ActionHooks.<Name> and FilterHooks.<Name>.
  • Run via Hooks::action(), Hooks::filterWithAbort() and Hooks::filter().

In the above interfaces, Hooks::action could hold logic for asserting that we are in an async context, catching exceptions, and reporting errors about return values. Hooks::filter could measure execution time (e.g. cumulatively by extension name, with a shutdown handler reporting violations via Monolog).
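
As a rough sketch of how those runner methods might behave (getHandlers(), logError(), recordTiming(), and the wgHookRegistry global are placeholders, not existing APIs):

class Hooks {
	public static function action( string $name, array $params = [] ): void {
		foreach ( self::getHandlers( "ActionHooks.$name" ) as $callback ) {
			try {
				if ( $callback( ...$params ) !== null ) {
					// Action hooks must not return a value.
					self::logError( $name, 'handler returned a value' );
				}
			} catch ( Throwable $e ) {
				// Exceptions are caught and logged; the remaining handlers still run.
				self::logError( $name, $e->getMessage() );
			}
		}
	}

	public static function filterWithAbort( string $name, array $params = [] ): void {
		foreach ( self::getHandlers( "FilterHooks.$name" ) as $callback ) {
			$start = microtime( true );
			$result = $callback( ...$params );
			self::recordTiming( $name, microtime( true ) - $start );
			if ( $result === false ) {
				break; // A handler short-circuited the remaining filters.
			}
		}
	}

	// Hooks::filter() would be identical to filterWithAbort(), minus the abort check.

	/** @return callable[] */
	private static function getHandlers( string $key ): array {
		return $GLOBALS['wgHookRegistry'][$key] ?? [];
	}

	private static function logError( string $hook, string $message ): void {
		trigger_error( "Hook $hook: $message", E_USER_WARNING );
	}

	private static function recordTiming( string $hook, float $seconds ): void {
		// Placeholder: aggregate per extension; report threshold violations via Monolog at shutdown.
	}
}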

Policy

Note: The section about policy is still an early draft and has not yet received wider input.

  • New hooks must be documented in a way that indicates whether they are actions, filters or legacy hooks, and whether they are abortable.
  • New hooks of type "action" or "filter" must adhere to the "must" statements above. This means that if there is not yet a value class available to represent the information in question, those classes must be written first. {note: open question}
  • ... (more?) ..

Open questions
  1. If an action hook encounters a fatal error (e.g. class undefined, out of memory, execution timeout), should we ensure other callbacks still run? If so, how? If not, how should we document this?
  2. The current phrasing allows a filter hook to interact with services not passed as a parameter, as long as they don't have side-effects (because it says "should not" instead of "must not"). Should this change to a "must"? If not, what are the use cases? If we make it a "must not", what about site configuration? Should we pass Config as a parameter to all filters? Or whitelist services->getConfig as the one allowed external call? Or something else?
  3. Should we include a guideline about batching? It was raised at TechConf 2018 that consumers of "filter hooks" might prefer that they are always called for individual items (no batching within the hook), which encourages code reuse and makes pure functions easier. It was also raised that for "action hooks", we may want to standardise on always reflecting data shapes, or always reflecting user workflows. E.g. for an action hook about uploads where a user uploaded 3 files, should that invoke the hook 3 times with 1 file each (file-oriented), or once with a list of 3 files (user-oriented)? Or could we leave this open to be decided on a case-by-case basis? Or we could encourage implementations to provide both (seems pretty simple to do).
  4. Should we require that for all new hooks, if the feature in question does not have value objects, those are implemented first? If we don't want to require this, how do we move forward? Perhaps we can state that value objects are mandatory if using the new hook system, but that the old system may continue to be used and have new hooks added until a certain point. E.g. leave it non-deprecated until 1.34, and deprecate it (and disallow new hooks) in 1.35 or so, which would give us time to get value objects figured out over a year before it starts to block the introduction of new hooks.

Related Objects

Mentioned In
T321412: Create PageUndeleteComplete hook, analogous to PageDeleteComplete
T308017: Design Schema for page state and page state with content (enriched) streams
T140664: Achieve predictable MediaWiki routing and cacheable skin data
T209924: Specify PageTypeHandler
T294405: SearchDataForIndexHook and SearchIndexFieldsHook run too early, or not at all
T287622: Skin hooks should run after extension hooks
T275847: Grand Hook Revamp
T259955: Skin hooks should not have unexpected side effects to OutputPage
T245900: Introduce dependency injection into jobs
T243854: Evaluate how to structure internal calls to TemplateData in PHP
T240307: Hook container with strong types and DI
T128394: Deprecate and remove User::isSafeToLoad
T225968: Profile and visualise time spent per component/extension in MW entry points
T157402: Provide a reliable way to pass information between hook handlers, "hooked" objects
T221177: REST route handler extension interface RFC
T222827: WMHack19: TechCom office hour, live edition
T222413: PageContentSave does not allow modification of parameters
T208776: RFC: Introduce PageIdentity to be used instead of WikiPage
T193613: RFC: Establish stable interface policy for PHP code
T211849: A particular edit not showing on watchlist
T154677: Turn static Hooks class into HookManager service
T171515: Hooks system should support MediaWiki services
T206081: Wikimedia Technical Conference 2018 Session - Architecting core: extension interfaces
Mentioned Here
T120242: Eventually Consistent MediaWiki State Change Events
T291120: MediaWiki Event Carried State Transfer - Problem Statement
T308017: Design Schema for page state and page state with content (enriched) streams
T317768: Proposal: deprecate the mediawiki.revision-score stream in favour of more streams like mediawiki-revision-score-<model>
T240307: Hook container with strong types and DI
T157402: Provide a reliable way to pass information between hook handlers, "hooked" objects
T222409: Standardize declarative object construction in MediaWiki
T214358: Fatal error: Uncaught TypeError: Argument 2 passed to TranslateHooks::onPageContentLanguage() must be an instance of Language, string given
T171515: Hooks system should support MediaWiki services

Event Timeline


Assigning to @brion for the next step, which is to review the hooks for an existing extension with these recommendations in mind, and see how they hold up.

kchapman subscribed.

@CCicalese_WMF and I are interested in having an IRC meeting on this to keep it moving. This is tied closely to work Core Platform is doing on APIs.

daniel renamed this task from RFC: MediaWiki 2018 extension interfaces to Evolve hook system to support "filters" and "actions" only.Apr 11 2019, 9:27 AM

We should have at least a basic survey of the most-used hooks, and the way they are used, before we have an IRC meeting about this.

I like the idea of separation of concerns and having a codebase that is explicit and easy to use. From my understanding,

  • the action hooks can be used to modify state (expensive calls like update user in DB, log stuff, add tags, etc.)
  • the filter hooks can be used to build/filter things (like building UserDefaultOptions array, adding JS modules).

If I didn't understand it correctly, could you provide any examples?

At this moment I have a couple of concerns. Most of the time I develop extensions (Popups, TextExtracts, MobileFrontend, the Minerva skin), and I can say that the proposed rules fit MediaWiki core well, but when it comes to extensions, it looks troubling.

ActionHooks: "Instead, they are run from a deferred update or job."

If we decide to go with a deferred update, we need a way to specify whether this is done POSTSEND or PRESEND. In some cases we can encounter a race condition, where the output (a page redirect) is sent to the user and the browser starts rendering a new page, but the script is still processing the ActionHook. That leads to a state where the user sees a partially modified state. But sometimes we process things that are not visible to the user, and those should be executed POSTSEND.

FilterHooks: "Other instances and services should not be obtained" -> this might work excellent in the MediaWiki core, but extensions/skins need to access their services in some way. I'm slowly shifting old Singletons into Services (registered via ServiceWiring files).

During hook execution, I need to access services so I can decide what to do next, as I cannot keep all logic inside one Hooks file. Without being able to get things via MediaWikiServices, I need to keep things like the old MobileContext::getSingleton(), which I want to avoid. Singletons are difficult to test as there is no easy way to inject mocks. Additionally, it's very easy to create side effects when singletons are in use.

Once we decide to start naming hooks "ActionHooks.XYZ" and "FilterHooks.XYZ", there will be no easy way to switch one to the other. That's a good thing, but it may cause some trouble in the future. Each change in core requires changes in all extensions/skins. If possible, I'd like to have a hook system that allows us to make small changes to a hook's signature without fixing all extensions/skins that use that hook.

If I understood properly what FilterHooks are, some of those Hooks could be safely replaced by service tagging. There is no need to listen to a hook when we need to build things. Example:
MobileFrontend has an interface called IMobileTransform. This interface represents a single, small transformation to the DOM structure, and it's used during MobileFormatter execution. We have many transforms (like MoveLeadParagraphTransform, AddMobileTocTransform). Now, if we abandon hooks in favor of service tagging, instead of a structure like this:

$transforms = [];
Hooks::filter( 'MobileFrontend.RegisterTransform', [ &$transforms ] );
/** @var IMobileTransform $transform */
foreach ( $transforms as $transform ) {
	$transform->apply( $dom );
}

We could do something IMHO much cleaner:

$transforms = MediaWikiServices::getAllByTag( 'MobileFrontend.Transform' );
/** @var IMobileTransform $transform */
foreach ( $transforms as $transform ) {
	$transform->apply( $dom );
}

This reduces side effects to zero, as there is no Hook call and no arbitrary code execution, just a function that loads sets of services from MediaWikiServices.

My last comment is regarding the general approach. We need to get control over what's going on in the hooks world.
Passing variables or references to variables, and depending on which method from Hooks is called, doesn't give us much flexibility or any control. Some extensions will have to access MediaWikiServices::getService() or RequestContext::getMain() anyway to get required objects/services; hooks cannot provide all dependencies for all extensions. Instead of passing mutable/immutable arguments and depending on which Hooks method is called (action or filter), I'd love to see us passing an object that controls what can and cannot be done. We should fully embrace an object-oriented approach to managing the execution flow of hooks. We have the Status class, which IMHO fits here perfectly.

As a first step, we could come up with a straightforward interface (let's call it HookExecution for easier reading) for hook execution with only essential methods. Then, we could do something like this:

$execution = new ActionHookExecution( $name, $data );
Hooks::run( $execution );

Hooks::run() can be a factory, and depending on the $execution type it can fire filter or action methods. The HookExecution can have methods like abort() (filter only), fail() (action only) or getService(). This gives us control over what users can do: they can access only methods on the HookExecution, and because the HookExecution provides services, there will be no need to access global variables/singletons. Inside the HookExecution we could freeze some things (e.g. make some services read-only; when returning a service we can return a clone/frozen version, so we're sure it doesn't have side effects). Then, the most important bits of our system could have concrete HookExecution classes, something like Hooks/SpecialPagesInit that exposes an API to modify the set of SpecialPages. By default, the HookExecution class can have get( $key ) and set( $key, $value ). It can also implement ArrayAccess, so in some cases no changes to extensions would be needed (when we pass an array as the first argument, for example ChangeTagsListActive).
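
To make that concrete, a minimal sketch of the interface described above (all names hypothetical):

interface HookExecution {
	public function getName(): string;
	public function get( string $key );
	public function set( string $key, $value ): void; // filter executions only
	public function abort(): void;                    // filter executions only
	public function fail( string $reason ): void;     // action executions only
	public function getService( string $name );       // curated, possibly read-only services
}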

Instead of passing references to variables, we should first organize the code so that it exposes APIs for how to modify things, and then pass those APIs around. Making changes like adding 'FilterHook.' to the hook name and passing things as $data = [ 'mutable' => &$variableA, 'notmutable' => $variableB ]
is just a short-term improvement that will not fix the system. It makes the hooks system feel better, but under the hood, IMHO it's still the same.

As a final thought, I'd like to mention that some time ago I came up with the idea of T171515: Hooks system should support MediaWiki services. This is somewhat related to this task; in general, our hooks system should utilize MediaWikiServices more often. That approach would help us reduce the number of static functions (at the moment most hook handlers are static methods, which are usually difficult to test), and therefore it should minimize possible side effects.

moving back to "under discussion" until we have a better understanding of current hook usage.

Currently, passing values by reference makes it impossible for hook subscribers to know which types they are getting, making type hinting impossible. That causes issues like T214358.

Could the hook runner be extended to validate the types of mutable values after each subscriber returns?

The alternative would be for mutable values to be wrapped in container objects that expose an interface to modify the value. I imagine something generic like new ModifiableValue( $type, $value ) that exposes get and set methods enforcing the type is possible, but it would be far from elegant and would hide the real type.

The alternative would be for mutable values to be wrapped in container objects that expose an interface to modify the value. I imagine something generic like new ModifiableValue( $type, $value ) that exposes get and set methods enforcing the type is possible, but it would be far from elegant and would hide the real type.

@BPirkle suggested using anonymous objects that expose setters tailored for just the hook, and documenting them along with the hook signature. E.g.

$foo: an object that exposes setBar( $x, $y, $z ).

I really like this solution, because it allows us to define narrow, one-off interfaces that allow for readable code:

$foo->setBar( 1, 2, 3 )

@daniel Given a filter only mutating one value (previously written as "taking one parameter"; we have since loosened this to allow any number of parameters, only one of which should be mutated by the hook), this could also be solved by using the return value.

That's what WordPress does, for example. This means it can be validated by return type hints. And if we want to enforce it in the hook runner, this could be passed to the hook runner as an option, but I'm not sure the added complexity and verbosity throughout is worth it. It would be caught naturally soon enough by parameter type hints from other handlers, by other methods called after the hook in the source code, or by the source's return type hint, etc.

We can progressively enforce with PHPCS that filter hook callbacks must have a return type hint, for example. Might be more DRY that way than to have the source code also document it in multiple places, pass it around, and validate it in-process, etc.
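
For illustration, a sketch of this return-value style (the names and the applyFilters() runner are hypothetical):

class MyExtensionHooks {
	// Parameter and return type hints document and enforce the value type.
	public static function filterHtmlTitle( string $title ): string {
		return trim( $title );
	}
}

// The runner threads the value through each handler, WordPress-style.
function applyFilters( array $handlers, $value ) {
	foreach ( $handlers as $callback ) {
		$value = $callback( $value );
	}
	return $value;
}

$title = applyFilters( [ [ MyExtensionHooks::class, 'filterHtmlTitle' ] ], ' Some title ' );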

@daniel Given a filter only mutating one value (previously written as "taking one parameter"; we have since loosened this to allow any number of parameters, only one of which should be mutated by the hook), this could also be solved by using the return value.

That would mean that we'd lose the ability to break the chain of filters by returning false. This simplifies the logic, but may make it difficult to port some existing use cases. How does WordPress do chaining of filters?

If I understood properly what FilterHooks are, some of those Hooks could be safely replaced by service tagging. There is no need to listen to a hook when we need to build things. Example:
[...]
This reduces side effects to zero, as there is no Hook call and no arbitrary code execution, just a function that loads sets of services from MediaWikiServices.

Your example seems to have just pushed the hook call one level up the stack (now MediaWikiServices calls a bunch of registered callbacks instead of your code calling them directly), and possibly also requires that more things be created on every request rather than being determined when they are needed. Either way, I'm not sure that's much of an improvement.

Hooks::run() can be a factory, and depending on the $execution type it can fire filter or action methods. The HookExecution can have methods like abort() (filter only), fail() (action only) or getService(). This gives us control over what users can do: they can access only methods on the HookExecution, and because the HookExecution provides services, there will be no need to access global variables/singletons. Inside the HookExecution we could freeze some things (e.g. make some services read-only; when returning a service we can return a clone/frozen version, so we're sure it doesn't have side effects). Then, the most important bits of our system could have concrete HookExecution classes, something like Hooks/SpecialPagesInit that exposes an API to modify the set of SpecialPages. By default, the HookExecution class can have get( $key ) and set( $key, $value ). It can also implement ArrayAccess, so in some cases no changes to extensions would be needed (when we pass an array as the first argument, for example ChangeTagsListActive).

Do we really need complicated accessors wrapping everything just to try to enforce a policy on which services can be accessed from a hook callback? We discussed the same thing with respect to trying to force hook calls to use immutable objects in T212482#4841064 and T212482#4867960.

As a final thought, I'd like to mention that some time ago I came up with the idea of T171515: Hooks system should support MediaWiki services. This is somewhat related to this task; in general, our hooks system should utilize MediaWikiServices more often. That approach would help us reduce the number of static functions (at the moment most hook handlers are static methods, which are usually difficult to test), and therefore it should minimize possible side effects.

Static functions aren't always bad. They'd probably be a better approach to wiring than the current "array full of Closure objects" we have in ServiceWiring.php, as mentioned at https://gerrit.wikimedia.org/r/c/mediawiki/core/+/508972/5/includes/Rest/Router.php#152.

While calling hooks on services in MediaWikiServices may be a good idea when a service already exists, at what point are we putting too much into it? Would it really be a win for every extension to register a global public "FooBarHooks" service just to be able to hook hooks? Would it be better to register "private" services by allowing use of ObjectFactory specs (see T222409) when registering hooks?

Currently, passing values by reference makes it impossible for hook subscribers to know which types they are getting, making type hinting impossible. That causes issues like T214358.

The problem in T214358 seems to have been that the parameter was defined as \Language|string, was really \Language|\StubUserLang|string, and in a patch someone assumed it was just \Language. None of that seems to have much to do with passing values by reference beyond making it slightly harder to find where the string a later "filter" received was coming from.

Could the hook runner be extended to validate the types of mutable values after each subscriber returns?

The alternative would be for mutable values to be wrapped in container objects that expose an interface to modify the value. I imagine something generic like new ModifiableValue( $type, $value ) that exposes get and set methods enforcing the type is possible, but it would be far from elegant and would hide the real type.

Both of those require defining the type in the first place, and then a bunch of code to run on every hook call to check it. Does this sort of problem really come up often enough to justify all that complexity? Could we just enable some logging in such situations to log the types when they actually need to be investigated?

@daniel Given a filter only mutating one value

Is that a given, though? T212482#4869654 never received a reply.

I just came across T157402: Provide a reliable way to pass information between hook handlers, "hooked" objects and I'm wondering how the use cases described there fit into the model proposed here.

I just came across T157402: Provide a reliable way to pass information between hook handlers, "hooked" objects and I'm wondering how the use cases described there fit into the model proposed here.

Currently, it doesn't seem this proposal does anything more to address the issue than the current system does. But,

  • As mentioned in T212482#4869654, this proposal would seem to require more such communication.
  • The current workaround is adding properties on relevant high-level objects that are passed into the hooks. Changes that result in no persistent objects being passed would invalidate that workaround.

The point is to avoid having filters like "outputpage_render( OutputPage )" which don't relate to a tangible value, but a point in time ("event") given a monolithic object. Instead, I think these should have more granular filters like "outputpage_htmltitle", "outputpage_bodyclass", "outputpage_modules", which have a dedicated purpose and a limited set of factors by which we support their variance. [..]

[..] It seems pretty likely that an extension might need to manipulate the HTML [..], the modules [..], and the body class [..]. Having such granular hooks means [an extension] would need to register and coordinate across them all, and [..] its only option seems to be depending on global state [..]

Yeah, that'd be nasty. We already see that a lot in extensions that hook into Parser, and that's not a pattern we'd want to encourage further. My unverified gut feeling is that these "filter" hooks would not actually need any sort of coordination, though.

By virtue of only being allowed to vary on the given parameters (as our proposed design states), that implicitly also rules out varying on the outcome of other hooks. If such an outcome is relevant, and the maintainer of the code the hooks are subject to is okay with that side-effect being officially supported, that probably means the code needs some kind of primitive to support it, which could then be passed as a parameter to the relevant hook. For the most part, though, I expect filters to be deterministic given their parameters and to generally not rely on the outcome of other hooks.

As mentioned by Brion already, I think it's time we look at a practical example to see how this would turn out. Continuing in theory only would, I think, leave us vulnerable to either wasting time or considering cases that might not work out the way we think they would.

debt moved this task from Old to Under discussion on the TechCom-RFC board.
Krinkle renamed this task from Evolve hook system to support "filters" and "actions" only to RFC: Evolve hook system to support "filters" and "actions" only.Apr 4 2020, 2:34 AM

De-assigning; not actively working on this.

Flagging for PET roadmapping review, to see where this fits in with the decoupling "expedition".

Changing hook signatures is hard, due to how interfaces work in PHP. This problem can be avoided by having each hook take a single parameter, using a type specific to the hook. In case of action hooks, this parameter would model an event. Consider a signature like afterPageDeleted( PageDeletedEvent $event ).

Changing hook signatures is hard, due to how interfaces work in PHP. This problem can be avoided by having each hook take a single parameter, using a type specific to the hook. In case of action hooks, this parameter would model an event. Consider a signature like afterPageDeleted( PageDeletedEvent $event ).

I thought about that during the discussion of T240307: Hook container with strong types and DI; it would re-allow the same kind of signature changes that were possible with the old hook system, plus the event object would be a nice place for issuing deprecation warnings and for converting a legacy signature into the new signature. It would also allow grepping for usage, which used to be a common problem: one wants to remove the last argument of the hook signature, which is not useful anymore, but how does one tell which of the many subscribers of that hook actually use it?

OTOH, just like the interface, it would be static coupling that's fragile when refactoring (a little less fragile, as a method signature <-> caller signature match has less strict conditions than an interface signature <-> implementation signature match, but the difference is not that much). Instead of versioned interfaces, you'd need versioned factory methods for the event object, which is not much of an improvement. And the code for calling and implementing hook handlers becomes less readable. So on net, it didn't seem like an improvement.
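
For reference, a sketch of the single-parameter, typed-event style under discussion (all names hypothetical):

class PageDeletedEvent {
	private $pageId;
	private $reason;

	public function __construct( int $pageId, string $reason ) {
		$this->pageId = $pageId;
		$this->reason = $reason;
	}

	public function getPageId(): int {
		return $this->pageId;
	}

	public function getReason(): string {
		return $this->reason;
	}
}

interface PageDeletedHook {
	public function afterPageDeleted( PageDeletedEvent $event ): void;
}

Adding data later means adding a constructor argument and getter to the event class; subscriber signatures stay unchanged, and usage of the new getter can be grepped for.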

Change 838910 had a related patch set uploaded (by Daniel Kinzler; author: Daniel Kinzler):

[mediawiki/core@master] EXPERIMENT: Filter hooks and action hooks.

https://gerrit.wikimedia.org/r/838910

Consider a signature like afterPageDeleted( PageDeletedEvent $event ).

Of course, I like this. :) Doing something like this would, I think, allow the 'hook' runner to be pluggable or configurable too: sync, async DeferredUpdate, JobQueue, or X.

I'd suggest a more unified data model though, something more like:
afterPageDeleted( PageChangeEvent $event )
where PageChangeEvent can be used for the param to many of the page change hooks.

Or, perhaps, if we don't want to represent the change event, you could just provide page-state-before and page-state-after objects as params, and allow the hook function to do whatever it needs.
afterPageDeleted( PageState $before, PageState $after )

Not sure which is better, but maybe ^ would be more flexible?

FWIW, I find 'filter' a strange name for the type of hook that can mutate state. I'd expect a filter to be a boolean filter operation. I guess yall are saying it can cancel the calling of future callbacks, which is like a filter...but mutating state in a filter sounds weird :)

FWIW, I find 'filter' a strange name for the type of hook that can mutate state. I'd expect a filter to be a boolean filter operation. I guess yall are saying it can cancel the calling of future callbacks, which is like a filter...but mutating state in a filter sounds weird :)

I think Timo originally borrowed the terminology from WordPress, compare https://kinsta.com/blog/wordpress-hooks/#hooks-vs-actions-vs-filters.

To me, "actions" are events. And filters are... I dunno, mutators? Mutables? Mogrifiers?...

I'd suggest a more unified data model though, something more like:
afterPageDeleted( PageChangeEvent $event )
where PageChangeEvent can be used for the param to many of the page change hooks.

If we use the same event class to represent multiple types of changes, we run into the problem that some properties make sense for one kind of change but not for another. E.g. an edit has an old revision and a new revision; a delete doesn't. A page move has an old title and a new title; an edit doesn't. Etc.

We could of course have a hierarchy of event types, where PageDeletedEvent IS A PageChangeEvent.

Or, perhaps, if we don't want to represent the change event, you could just provide page-state-before and page-state-after objects as params, and allow the hook function to do whatever it needs.
afterPageDeleted( PageState $before, PageState $after )

Not sure which is better, but maybe ^ would be more flexible?

The system I have in mind relies on there being a single object representing the change. But we could of course have a PageChangeEvent that has methods like getPageStateBefore() and getPageStateAfter().

Since event objects are expected to be serialized and queued, I'd expect them to be relatively lightweight, though. So in my mind, the page state would not include the HTML, and probably also not the wikitext. It would however contain sufficient information to retrieve them (by revision ID and render ID).
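
A sketch of that lightweight shape (all names hypothetical; the state objects carry identifiers rather than content):

class PageState {
	public $revisionId; // enough to retrieve wikitext/HTML later
	public $renderId;
}

class PageChangeEvent {
	private $before;
	private $after;

	public function __construct( ?PageState $before, ?PageState $after ) {
		$this->before = $before; // null for a page creation
		$this->after = $after;   // null for a deletion
	}

	public function getPageStateBefore(): ?PageState {
		return $this->before;
	}

	public function getPageStateAfter(): ?PageState {
		return $this->after;
	}
}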

And filters are... I dunno, mutators? Mutables? Mogrifiers?...

Transforms?

We could of course have a hierarchy of event types, where PageDeletedEvent IS A PageChangeEvent.

This could work!

But we could of course have a PageChangeEvent that has methods like getPageStateBefore() and getPageStateAfter().

Could also work!

Since event objects are expected to be serialized and queued, I'd expect them to be relatively lightweight, though. So in my mind, the page state would not include the HTML, and probably also not the wikitext.

Agree. And this is exactly the data modeling exercise we are doing now in T308017: Design Schema for page state and page state with content (enriched) streams: trying to figure out what does and does not belong in a 'page state change' event. We're going to need the ability to have a page_change event with enriched data in other streams (with html or wikitext or ORES scores T317768, etc.), but to do that we'll just extend the page_change model to, I dunno, page_change_with_wikitext. We could probably do the same in these MediaWiki event value classes if we had to. (I don't think we'll have to :p )

I'd LOVE if these MW event classes could JSON serialize to the same (or very close to the same) event schema data model we'd produce to Kafka. This would be just...wow...amazing.

BTW, The reason we are moving to preferring a single entity change model rather than multiple models for each change, is that generally, a consumer of these events is trying to keep track of what is happening to the entity. In Kafka, work and state are distributed via keyed partitions, so you want all events with the same entity id to go to the same worker, and you'd prefer all the changes that happen to that entity to be in order. If there are different event data models for each kind of change, then those will be in different streams, and there will be no way to guarantee the order in which the changes will be consumed, or which workers they will be consumed by, because they are in different topic-partitions in Kafka. A big shuffle and order-by has to be done before work can be distributed.

BTW, The reason we are moving to preferring a single entity change model rather than multiple models for each change, is that generally, a consumer of these events is trying to keep track of what is happening to the entity.

Modeling an event as change-of-state-of-entity is intriguing, but I see two problems with it:

  1. you may want additional meta-data about the change itself, things that do not update the resource. E.g. the IP an edit came from.
  2. the state of an entity is potentially huge, e.g. a "page" could potentially be modeled as containing all its revisions, or maybe even all possible renderings... representing "before" and "after" fully in the event itself becomes impossible.

Perhaps it would make sense to model page changes a bit like git commits: they have meta-data, and contain a patch against a base state.

But we are getting off-topic for this ticket :)

Yaya, this discussion is for T308017: Design Schema for page state and page state with content (enriched) streams. I'll respond to your comment there.

But yeah, these are all the things we are balancing. Basically, we want a data model that can be used as a changelog stream, similar to what might be in a MySQL replication binlog, but at a higher (denormalized) domain level.

FWIW, I find 'filter' a strange name for the type of hook that can mutate state. I'd expect a filter to be a boolean filter operation. I guess yall are saying it can cancel the calling of future callbacks, which is like a filter...but mutating state in a filter sounds weird :)

I think Timo originally borrowed the terminology from WordPress, compare https://kinsta.com/blog/wordpress-hooks/#hooks-vs-actions-vs-filters.

To me, "actions" are events. And filters are... I dunno, mutators? Mutables? Mogrifiers?...

+1, "filter" does sound a bit odd. I like "mutate/mutator" vs "listen/listener".


I am in favor of the single-parameter hook idea, where the param is some kind of data object.
Significant changes to the data would likely still be cumbersome to manage, but it would at least reduce some change synchronization:

Adding a new param right now would probably look something like this:

  1. Change hook invocation to overload with new arg
  2. Changing all subscribers, which must start to accept a new optional argument ahead of time (while the interface doesn't yet have it)
  3. Change the hook interface to accept the new argument
  4. Change all subscribers to make new arg non-nullable

With a single-param system (see the sketch after this list):

  1. Change hook invocation to overload with new arg
  2. Change hook params object to accept new "argument"
  3. Update only the subscribers that have business with the new data
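
As a sketch of step 2 (the params class and field are hypothetical): the new "argument" becomes a nullable field and getter on the params object, so existing single-parameter subscribers are untouched and only interested ones call the new getter.

class PageSaveParams {
	// ... existing fields and getters ...
	private $editSummary; // newly added, nullable for compatibility

	public function setEditSummary( ?string $summary ): void {
		$this->editSummary = $summary;
	}

	public function getEditSummary(): ?string {
		return $this->editSummary;
	}
}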

I also like the separation of filter/mutable and action/listener hooks:

  • listeners could more easily be moved post-send (insofar as applicable)
  • listeners would receive reliable data (whereas right now they could be receiving data that a later subscriber will alter again, or that may not persist)

That said, while I like the separation, I'm wary of it introducing too much additional work.
I believe the following is true in reality:

  • Hooks are often only implemented on an as-needed basis. There often isn't time or a realization that a(nother) hook would be desirable.
  • Most existing hooks are mutators.
    • A quick count of functions in HookRunner reveals 350+ with variables passed by ref (explicitly intended to mutate data) and 150+ without (although some of those are still used to mutate, either by altering an object arg, or by interrupting process by returning false). That's 350+ "mutable" hooks vs. 150+ (possibly lower) "listener" hooks.
  • Given an existing hook with the required data is available, subscribers will tend to use that one.
    • If only a mutator hook is available, it will be subscribed to - even if the only intent is listening.

If these end up being totally separate instances, I fear it may often end up being too much work to provide both hook types.
For those creating/invoking new hooks, there usually is 0 incentive to provide more than what is required at the time.
Even more so because it can often take multiple years for a use case to emerge where "the other, not provided type of hook" would be useful.
Unless creating a separate listener hook requires (almost) no additional work, or unless there's a mechanism that heavily discourages mutator-only hooks, I would expect we end up with the same pattern of mostly-mutable hooks that even pure listeners subscribe to.

And from a subscriber's POV, they'll often use whatever is available, rather than implementing that other kind.
And often for good reason (e.g. lack of time, lack of permissions to alter code base where the hook lives, compatibility with older code desirable, ...)

I think we should strive for having all types (filter/mutable + actions/listener) available by default any time a hook is called.
I realize that having multiple hook variations is probably overkill for 90% of them, but that's how we ended up where we are in the first place: we only build what is needed, then find them being used for different purposes years later.


About action/listener hooks: thus far I've mostly seen this being discussed in the context of moving heavier workloads post-send, but I think we also have a slew of pre-send listeners that need to perform related side-effects immediately (note: I haven't actually gone over them; the actual number may be anywhere from tiny to massive)
While this type could also subscribe to a mutable hook, they have no guarantee that data doesn't change later on, or even ends up persisting at all.

Broadly speaking, we probably have 3 kinds of subscribers:

  1. ones that need to mutate data, or alter execution flow (e.g. when returning false prevents other code from running)
  2. ones that listen to data to perform related side-effects immediately
  3. ones that listen to data for later consumption/processing

#1 & #2 are pre-send by nature; #3 can be post-send.
#1 would have to happen prior to whatever action the hook precedes (e.g. storing data); #2 & #3 ideally happen afterwards, once data is final.
#1 would have to be mutable; #2 & #3 immutable.

This would probably make the ideal scenario something like:

$hookContainer->run( 'MutateMyHook', [ $param ], [ 'abortable' => true ] );

// execute code, persist data

$hookContainer->run( 'ListenMyHook', [ $param ], [ 'abortable' => false ] );

DeferredUpdates::addCallableUpdate(
	function () use ( $hookContainer, $param ) {
		$hookContainer->run( 'DeferredListenMyHook', [ $param ], [ 'abortable' => false ] );
	},
	DeferredUpdates::POSTSEND,
	$dbw
);

This, too, adds a bunch of extra boilerplate. It would be great if this too could end up being "magic"-ed away for the most part and become the standard implementation (while still being possible to deviate from this standard; for some hooks, not all of these may make sense)


TL;DR: while I think the separation of types of hooks has multiple benefits, I don't think we'd reach them unless we break the pattern. Invoking multiple types of hooks should require minimal extra work; ideally, it would be the default.
But since these types of hooks would likely mostly operate on the same data & follow a similar structure, I'm hopeful a solution can be found where no extra work (or at worst only a negligible amount) would be required to create/invoke them.

But since these types of hooks would likely mostly operate on the same data & follow a similar structure, I'm hopeful a solution can be found where no extra work (or at worst only a negligible amount) would be required to create/invoke them.

Perhaps some interface where the when/how the hook is called can be parameterized, I suppose at hook registration time?

But since these types of hooks would likely mostly operate on the same data & follow a similar structure, I'm hopeful a solution can be found where no extra work (or at worst only a negligible amount) would be required to create/invoke them.

Perhaps some interface where the when/how the hook is called can be parameterized, I suppose at hook registration time?

One way to do this would be by a convention around the handler method's name:

  • modifyFooBarResult would be called immediately
  • afterFooBarResult would be called eventually (ideally, with an immutable version of the parameter)

About action/listener hooks: thus far I've mostly seen this being discussed in the context of moving heavier workloads post-send, but I think we also have a slew of pre-send listeners that need to perform related side-effects immediately (note: I haven't actually gone over them; the actual number may be anywhere from tiny to massive)

I'm very curious about this one - so far, I have been operating on the assumption that this should not be needed. And I can't think of a use case where this is actually a requirement. Even updating links tables for core happens post-send, outside the primary transaction of an edit...

Unless creating a separate listener hook requires (almost) no additional work, or unless there's a mechanism that heavily discourages mutator-only hooks, I would expect we end up with the same pattern of mostly-mutable hooks that even pure listeners subscribe to.

Perhaps there should be two hook types, and two listener types, but with a mapping that isn't quite 1:1. Hook types:

  • ParameterHook called before the operation, to allow the parameters to be modified
  • ResultHook called after an operation, to allow the result to be modified and to notify listeners of the outcome.

Handler types:

  • Modifier: called immediately, calls setters, must not update persistent state
  • Listener: called eventually, calls only getters, may update persistent state

ParameterHooks would call only modifier handlers, using the "modify" prefix on the method name.

ResultHooks would call modifier handlers too, but would additionally schedule a call to listeners to be executed post-send, using the "after" method prefix.

I'd still like to avoid listeners calling setters - that would screw with other listeners that look at the same object. Perhaps the implementation of a ResultData object could be split in two: an immutable base class, and a subclass adding setters?

It's not yet clear to me how to distinguish listeners from result modifiers during registration, if the hook name is the same. I'll think about it.
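
For illustration, a sketch of that split (all names hypothetical):

class PageMoveResult {
	// Immutable base: getters only.
	protected $newTitle;

	public function __construct( string $newTitle ) {
		$this->newTitle = $newTitle;
	}

	public function getNewTitle(): string {
		return $this->newTitle;
	}
}

class PageMoveData extends PageMoveResult {
	// Mutable subclass: adds setters for modifier handlers.
	public function setNewTitle( string $title ): void {
		$this->newTitle = $title;
	}
}

interface PageMoveModifier {
	// Called immediately; may call setters, must not update persistent state.
	public function modifyPageMove( PageMoveData $data ): void;
}

interface PageMoveListener {
	// Scheduled post-send; sees only the immutable view, may update persistent state.
	public function afterPageMove( PageMoveResult $result ): void;
}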

About action/listener hooks: thus far I've mostly seen this being discussed in the context of moving heavier workloads post-send, but I think we also have a slew of pre-send listeners that need to perform related side-effects immediately (note: I haven't actually gone over them; the actual number may be anywhere from tiny to massive)

I'm very curious about this one - so far, I have been operating on the assumption that this should not be needed. And I can't think of a use case where this is actually a requirement. Even updating links tables for core happens post-send, outside the primary transaction of an edit...

I've looked around just a little bit, and examples were fairly hard to come by, so it looks as if this is a rather small number.

I did find a couple in Flow, though (a great source for hooks abuse):

This CategoryViewer::doCategoryQuery hook subscriber is basically a listener - it'll subscribe to this hook's data, and use it to mutate data in another hook: CategoryViewer::generateLink.

Something similar happens with RC onChangesListInitRows where it listens to a hook exposing a full dataset, preload a batch of related stuff, and then mutates other hooks' (onChangesListInsertArticleLink, onOldChangesListRecentChangesLine & EnhancedChangesList::getLogText) data.

Wikibase repo seems to subscribe to that same hook onChangesListInitRows in order to prefetch data to be reused elsewhere.

Then there are a couple of other listeners (onRevisionUndeleted + onArticleUndelete and onTitleMoveStarting + onPageMoveCompleting) where it looks like handling the changes deferred would've led to rendering errors (I could be misinterpreting the comments)

I would imagine there may be a few more handlers here and there that only intend to listen to (final) data in order to alter output elsewhere (in another hook or some other component, ...)

These above examples usually aren't shining examples of solid code architecture and could probably also have been dealt with in different, more elegant ways.
Making sure that certain hooks expose more data, or function differently, could've solved some of these cases in a different way (although that does raise the point of "not knowing use cases ahead of time" and "potentially not being able to make those changes")

Note also that these examples are 7+ years old; a lot of MW's internal mechanics have greatly changed/improved since then.
Perhaps a more modern codebase like Wikibase would be worth a closer look, to check whether similar examples continue to be a thing nowadays.

I concede that so far, these instances appear to be relatively small in number, and perhaps they tend to form a pattern that we may not want to encourage further anyway.

@matthiasmullie thank you for the investigation!

I have some unfinished thoughts around listeners signalling when and how they want to be called using the method name prefix. E.g. an async listener would implement afterPageMove(), while a listener that wants to be called synchronously while the transaction is in progress may implement duringPageMove(). Or something like that.

I imagine this would be detected while registering the hook, so we don't have to do anything with reflection when calling handlers. As Tim pointed out to me the other day, that code is "super hot", and needs to be optimized for performance.

Note to self (mostly) about naming conventions for parameter classes and handler methods:

  • filter hooks (alter hook, value hook): MediaWiki\FooBarValue -> alterFooBarValues( FooBarValue ). May also be plural ("XyzValues"). Base class is MediaWiki\Hooks\MutableValue.
  • event hooks: MediaWiki\FooBarEvent -> afterFooBarEvent( FooBarEvent ). Base class is MediaWiki\Hooks\Event. We could also use "handle"...
  • all other hook names use "on": FooBarCompleted -> onFooBarCompleted( ... )

Should hook names be FQNs?

It seems to me that Symfony's event system is close to what we want to aim for: https://symfony.com/doc/current/components/event_dispatcher.html. Symfony also has a mechanism to integrate "events" with async message queues: https://github.com/php-enqueue/enqueue-bundle

Interesting!

Also, from the convo in December's tech leadership CoP meeting, I started thinking about how what we want for T291120: MediaWiki Event Carried State Transfer - Problem Statement is pretty similar to what is in MW's logging table, except we need the data to be structured, comprehensive and consistent (meaning no missing state changes). In T120242: Eventually Consistent MediaWiki State Change Events, one of the solutions outlined is the 'Transactional Outbox' pattern, which is kinda similar to a comprehensive+structured logging table from which we can generate and externalize state change events. I betcha we could tie these ideas together somehow.