ChronologyProtector is responsible for ensuring that, after a user makes changes that result in database writes (e.g. editing a page, or changing user preferences), future page loads reflect those changes. This mainly accommodates the fact that replication from the db-master to the pool of db-replicas takes time.
The CP code is fairly small and lives in the wikimedia/rdbms library that is part of MediaWiki.
- After saving an edit and going to the history page, your edit should always be listed there. (Even in the edge case where the article content may be lagged for parser-stampede/Michael Jackson reasons.)
- When changing your skin from Vector to Minerva, the confirmation page seen thereafter and the next article you view should use Minerva.
Changes are made to the db-master. Information is read from a pool of multiple db replicas (for scale).
In general, a random replica is picked from the pool to read your preferences, render wiki content, etc.
If you recently submitted a form or otherwise did something that resulted in a db-master write, then ChronologyProtector will have observed the current "position" of the db-master and added it to a list of positions internally associated with your connection (ip/user-agent). A temporary cookie is set so that MW knows to look for this list, and which position index to expect. This information is then used to decide which db-replica to pick from the pool, and if needed to try another one or wait briefly for it to catch up.
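The flow described above can be sketched roughly as follows. This is an illustrative Python model, not the actual rdbms library code: all names (`record_master_position`, `wait_for_replica`, the in-memory `POSITION_STORE`) are made up, and the real implementation stores positions in an external store with an expiry rather than a process-local dict.

```python
import hashlib
import time

# Hypothetical in-memory stand-in for the external position store.
# Maps a client fingerprint key -> list of observed master positions.
POSITION_STORE = {}


def client_key(ip: str, user_agent: str) -> str:
    """Key the stored positions by the client's connection fingerprint."""
    return hashlib.sha1(f"{ip}\n{user_agent}".encode()).hexdigest()


def record_master_position(ip, user_agent, master_pos):
    """After a write: remember how far along the db-master was."""
    positions = POSITION_STORE.setdefault(client_key(ip, user_agent), [])
    positions.append(master_pos)
    # The index is sent back to the client in a short-lived cookie, so the
    # next request knows a position exists and which entry to wait for.
    return len(positions) - 1  # value for the position-index cookie


def wait_for_replica(replica, ip, user_agent, cookie_index, timeout=1.0):
    """Before a read: wait until the chosen replica has caught up."""
    positions = POSITION_STORE.get(client_key(ip, user_agent), [])
    if cookie_index is None or cookie_index >= len(positions):
        return True  # nothing recorded for this client; any replica is fine
    target = positions[cookie_index]
    deadline = time.monotonic() + timeout
    while replica.position() < target:
        if time.monotonic() >= deadline:
            return False  # caller may try another replica from the pool
        time.sleep(0.05)
    return True
```

The timeout matters: rather than blocking a page load indefinitely on a lagged replica, the caller can fall back to another replica or give up after a short wait.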
This information used to be kept in the session store (both logically and physically). It used the MW interface for adding data to the current session, under a subkey of the session object, much like any other session-related feature does. In 2015, CP was logically decoupled from the session store (T111264, 85c0f85e92), so that it could also be used without requiring a user session. Services such as RESTBase, ChangeProp, and JobQueue, as well as external API consumers, can use their cpPosIndex and enjoy the same guarantees: e.g. a job will execute knowing it can read back the information from the event that spawned it. Decoupling from sessions also makes it easier to use CP across domains and in the context of db writes to shared or external databases.
The decoupling meant that instead of using the session store interface, CP used the MainStash interface directly, and thus created its own top-level keys rather than adding data to the user's main session value. But physically, the session store and MainStash used the same Redis backends (until recently).
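The storage-layout change can be illustrated as follows. The key names and value shapes here are invented for illustration only; the real key formats differ.

```python
# Before 2015: CP data lived inside the user's session object, as a subkey
# alongside other session data, addressable only via the session id.
session_style = {
    "mediawiki:session:abc123": {
        "userId": 42,
        "chronologyProtector": {"positions": ["db1:1234"]},
    }
}

# After the decoupling: CP writes its own top-level keys to the MainStash,
# keyed by a client hash rather than a session id, so no session is needed.
stash_style = {
    "mediawiki:chronology-protector:deadbeef": {"positions": ["db1:1234"]},
}
```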
- Determine the guarantees that ChronologyProtector needs.
- Decide where to store it.
- Make it happen.
- Do it in Beta.
- Do it in prod.
- Solving this is a blocker for decommissioning the Redis cluster. – T243520
- Redis cluster decom is scheduled to complete ahead of the FY2020-2021 Q1 DC-switchover, as it would significantly reduce the amount of work and risk required for a switchover. – T243314
- Solving this is a blocker for moving MainStash away from Redis to a new, simpler backend (e.g. db-replicated), because ChronologyProtector cannot work with the simpler guarantees of the (new) MainStash.
- Solving this is a blocker for migrating MainStash, because ChronologyProtector (after SessionStore and Echo) is responsible for the largest amount of remaining Redis traffic, which would make MainStash more expensive to support. Its data is small and short-lived, but it has high reads-per-second and low-latency needs. – T229062#6186134
- Solving this is a blocker for multi-dc (long term). – T88445
About 14K operations per second. See also T212129#6190251.