Paste P13093

Files changed due to incidents since 2018

Authored by daniel on Oct 28 2020, 6:38 PM.
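The JSON below maps each changed file path to the incident tickets that referenced it and to the commits involved (subject, hash, date, and the bugs cited in each commit message). As a rough, hypothetical illustration of how this structure can be consumed (the file name incident_files.json and the summary logic are assumptions, not part of the original paste), a small Python sketch:

import json

# Hypothetical reader for this paste's structure: each top-level key is a file
# path mapped to "TicketCount", "CommitCount", "Tickets" and "Commits".
with open("incident_files.json") as fh:  # assumed local copy of this paste
    data = json.load(fh)

# List files by how many incident tickets referenced them, then show commits.
for path, info in sorted(data.items(), key=lambda kv: -kv[1]["TicketCount"]):
    print(f"{path}: {info['TicketCount']} tickets, {info['CommitCount']} commits")
    for commit in info["Commits"]:
        print(f"  {commit['date']}  {commit['hash'][:10]}  {commit['subject']}")
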
{
"includes/libs/rdbms/database/Database.php": {
"File": "includes/libs/rdbms/database/Database.php",
"TicketCount": 14,
"CommitCount": 14,
"Tickets": [
"T190960",
"T191916",
"T191863",
"T191282",
"T193668",
"T201900",
"T193565",
"T218388",
"T212284",
"T225682",
"T191668",
"T227708",
"T239877",
"T235357"
],
"Commits": [
{
"message": "Stop using SCRIPT_NAME where possible, rely on statically configured routing\n\nIt has become apparent that $_SERVER['SCRIPT_NAME'] may contain the same\nthing as REQUEST_URI, for example in WMF production. PATH_INFO is not\nset, so there is no way to split the URL into SCRIPT_NAME and PATH_INFO\ncomponents apart from configuration.\n\n* Revert the fix for T34486, which added a route for SCRIPT_NAME to the\n PathRouter for the benefit of img_auth.php. In T235357, the route thus\n added contained $1, breaking everything.\n* Remove calls to WebRequest::getPathInfo() from everywhere other than\n index.php. Dynamic modification of $wgArticlePath in order to make\n PathRouter work was weird and broken anyway. All that is really needed\n is a suffix of REQUEST_URI, so I added a function which provides that.\n* Add $wgImgAuthPath, for use as a last resort workaround for T34486.\n* Avoid the use of $_SERVER['SCRIPT_NAME'] to detect the currently\n running script.\n* Deprecated wfGetScriptUrl(), a fairly simple wrapper for SCRIPT_NAME.\n Apparently no callers in core or extensions.\n\nBug: T235357\nChange-Id: If2b82759f3f4aecec79d6e2d88cd4330927fdeca\n",
"bugs": [
"T235357"
],
"subject": "Stop using SCRIPT_NAME where possible, rely on statically configured routing",
"hash": "507501d6ee29eb1b8df443192971fe2b6a6addb6",
"date": "2020-04-01T16:33:38"
},
{
"message": "rdbms: Have Database::makeWhereFrom2d assume $subKey is string-based\n\nUntil I70473280, integer literals were always quoted as strings, because\nthe databases we support all have no problem with casting\nstring-literals for comparisons and such.\n\nBut it turned out that gave MySQL/MariaDB's planner problems in some\nqueries, so we changed it to not quote actual PHP integers.\n\nBut then we run into the fact that PHP associative arrays don't preserve\nthe types of keys, it converts integer-like strings into actual\nintegers. And when those are passed to the DB unquoted for comparison\nwith a string-typed column, MySQL/MariaDB's planner has problems again\nwhile PostgreSQL simply throws an error. Sigh.\n\nThis patch adjusts Database::makeWhereFrom2d to assume that the $subKey\ncolumn is going to need all values quoted, as is the case for all\ncallers in MediaWiki.\n\nBug: T239877\nChange-Id: I69c125e8ab9e4d463eab35c6833aabdc436d7674\n",
"bugs": [
"T239877"
],
"subject": "rdbms: Have Database::makeWhereFrom2d assume $subKey is string-based",
"hash": "9a1ecf2efdb7fa31a8be85ef18c80299da071ed3",
"date": "2019-12-06T11:55:31"
},
{
"message": "rdbms: avoid dbSchema() in Database::replaceLostConnection() and Database::__clone()\n\nSince dbSchema() always casts the result to a string, using this method\nwith a call to open() is broken if the RDBMs does not support DB schemas\nand thus requires null\n\nFollows-up 7911da9c6f (last week), which added the check in\nDatabaseMysqlBase::open() check. Also follows fe0af6cad (last year),\nwhich made dbSchema() consistently return string. Before that, an\nimplicit null was passed in from Database::factory for mysql, which hid\nthe class default of empty string.\n\nBug: T227708\nChange-Id: I67207fbaa39c5cc3fe062077cc654f048090e009\n",
"bugs": [
"T227708"
],
"subject": "rdbms: avoid dbSchema() in Database::replaceLostConnection() and Database::__clone()",
"hash": "aaf8d30204a23b3f5437b6a0417d3470fb3878dd",
"date": "2019-07-11T01:53:22"
},
{
"message": "rdbms: Document varargs for IDatabase::buildLike\n\nThis is needed in order for Phan not to consider calls to\nIDatabase::buildLike as invalid. Interestingly, it does not\nconsider calls to Database::buildLike invalid.\n\nBug: T191668\nChange-Id: I0e027f5ec66d20b1d11e3441086001f6a751e1f5\n",
"bugs": [
"T191668"
],
"subject": "rdbms: Document varargs for IDatabase::buildLike",
"hash": "725a59f0c70e37de6de8e26f91b0101839d37ff9",
"date": "2019-06-18T14:11:15"
},
{
"message": "Database: Recognize USE queries as non-write queries\n\nThis should unbreak Ie7341a0e6c41, which switched\nDatabaseMysql::doSelectDomain() from using doQuery() to using\nexecuteQuery() for its USE query, which means it no longer\nbypasses the isWriteQuery() check. This caused every USE query on a\nreplica DB to fail, because it was considered a write query.\n\nBug: T225682\nBug: T212284\nChange-Id: Iecb8b9f6e64d08df8c64b3133078b5324e654ed1\nFollows-Up: Ie7341a0e6c4149fc375cc357877486efe9e56eb9\n",
"bugs": [
"T225682",
"T212284"
],
"subject": "Database: Recognize USE queries as non-write queries",
"hash": "201c8d34975165405d7ba014f05656e586d882f0",
"date": "2019-06-12T23:33:24"
},
{
"message": "rdbms: add Database::executeQuery() method for internal use\n\nThis shares reconnection and retry logic but lacks some of the\nrestrictions applied to queries that go through the public query()\ninterface.\n\nUse this in a few places such as doSelectDomain() for mysql/mssql.\n\nBug: T212284\nChange-Id: Ie7341a0e6c4149fc375cc357877486efe9e56eb9\n",
"bugs": [
"T212284"
],
"subject": "rdbms: add Database::executeQuery() method for internal use",
"hash": "2866c9b7d4295314c3138166cdb671de6dbcb3ab",
"date": "2019-06-11T14:00:41"
},
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "rdbms: Document a bunch of stuff about query verbs\n\nThe decision to treat COMMIT/BEGIN as a \"read\" in isWriteQuery()\nfor the benefit of ChronologyProtector was first introduced\nin r47360 (8653947b, 2009).\n\n* Re-order strings in isTransactableQuery() to match the regular\n expression in isWriteQuery() for quicker mental comparison\n\n* Add missing visibility to DatabaseSqlite->isWriteQuery, matching\n the parent class implementation.\n\nBug: T201900\nChange-Id: Ic90f6455a2e696ba9428ad5835d0f4be6a0d9a5c\n",
"bugs": [
"T201900"
],
"subject": "rdbms: Document a bunch of stuff about query verbs",
"hash": "aeb6a921324770e475e6583aa69dab830d81144e",
"date": "2018-09-28T23:49:10"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
},
{
"message": "rdbms: improve log warnings in runMasterPostTrxCallbacks()\n\nBug: T191282\nChange-Id: Iba663c58224af920f90d7b401aab2eb21f921941\n",
"bugs": [
"T191282"
],
"subject": "rdbms: improve log warnings in runMasterPostTrxCallbacks()",
"hash": "b32325bd23d44afce7989505af5d3844de2cdbd7",
"date": "2018-05-01T20:24:39"
},
{
"message": "LoadBalancerTest: Clean up transaction handling for sqlite\n\nWe need to make sure a DBO_TRX transaction was started before doing the\nCREATE TABLE, because CREATE TABLE itself won't start one and sqlite\nbreaks if schema changes are done on one handle while another is open.\n\nAlso, incidentally, have the handles in these LoadBalancerTests log to\nthe standard channel. And clean up the auto-rollback of DBO_TRX\ntransactions to use ->rollback() instead of ->doRollback() plus\nincorrect manual setting of trxStatus.\n\nBug: T191863\nChange-Id: Ib422ef89e7eba21281e6ea98def9f98ae762b9fe\n",
"bugs": [
"T191863"
],
"subject": "LoadBalancerTest: Clean up transaction handling for sqlite",
"hash": "78c3b9c21a15030b7ac1cf15c2ae1806dd3d10db",
"date": "2018-04-13T17:42:47"
},
{
"message": "rdbms: fix transaction flushing in Database::close\n\nUse the right IDatabase constants for the $flush parameter to\nthe commit() and rollback() calls.\n\nThis fixes a regression from 3975e04cf4d1.\n\nAlso validate the mode/flush parameters to begin() and commit().\n\nBug: T191916\nChange-Id: I0992f9a87f2add303ed309efcc1adb781baecfdc\n",
"bugs": [
"T191916"
],
"subject": "rdbms: fix transaction flushing in Database::close",
"hash": "f9d10f9e0321151914159773edca3b8eb17dfa54",
"date": "2018-04-11T05:31:31"
},
{
"message": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots\n\nBug: T190960\nChange-Id: I57dd8d3d0ca96d6fb2f9e83f062f29b1d53224dd\n",
"bugs": [
"T190960"
],
"subject": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots",
"hash": "24353a60d2c860cd24593d721b45291782a8489f",
"date": "2018-03-31T01:39:57"
}
]
},
"includes/libs/rdbms/loadbalancer/LoadBalancer.php": {
"File": "includes/libs/rdbms/loadbalancer/LoadBalancer.php",
"TicketCount": 11,
"CommitCount": 13,
"Tickets": [
"T186764",
"T192611",
"T191282",
"T193668",
"T194308",
"T193565",
"T204531",
"T215611",
"T226678",
"T226770",
"T245280"
],
"Commits": [
{
"message": "Don't use 'host' as a log parameter\n\nBug: T245280\nChange-Id: Ic765677ea9171672bc99913f45ab7d585981e1ed\n",
"bugs": [
"T245280"
],
"subject": "Don't use 'host' as a log parameter",
"hash": "69988cea16b6c51f96e5936221171b9291729dbe",
"date": "2020-02-14T17:59:44"
},
{
"message": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load\n\nAdd and use IDatabase::getServerConnection() method to avoid loops caused\ncaused by pickReaderIndex() calling getConnection() for the master server.\nThat lead to getReadOnlyReason() triggering pickReaderIndex() again.\n\nMake getLaggedReplicaMode() apply when the master has non-zero load and\nthe replicas are all lagged.\n\nRemove \"allReplicasDownMode\" in favor of checking getExistingReaderIndex()\ninstead. This reduces the amount of state to keep track of a bit.\n\nFollow-up to 95e2c990940f\n\nBug: T226678\nBug: T226770\nChange-Id: Id932c3fcc00625e3960f76d054d38d9679d25ecc\n",
"bugs": [
"T226678",
"T226770"
],
"subject": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load",
"hash": "79d1881eded1537e739c92d3576c48e34b352f88",
"date": "2019-07-09T19:26:46"
},
{
"message": "rdbms: avoid duplicate spammy logging in LoadBalancer::getRandomNonLagged\n\nThis is already logged in LoadMonitor in getServerStates() in a less spammy way\n(due to APC caching of server states).\n\nBug: T215611\nChange-Id: Id70fdfa62eff9cb6446deea5e197f4c0af4928aa\n",
"bugs": [
"T215611"
],
"subject": "rdbms: avoid duplicate spammy logging in LoadBalancer::getRandomNonLagged",
"hash": "f0f3e77dd370681e054b121ac32cb825c7441b13",
"date": "2019-02-12T18:52:31"
},
{
"message": "rdbms: reduce LoadBalancer replication log spam\n\nLoadMonitor already has similar and less-frequent logging since\nit only happens on cache rebuilds.\n\nBug: T204531\nChange-Id: I270a65ab1d3f471bd49c8f54d85151c91827a518\n",
"bugs": [
"T204531"
],
"subject": "rdbms: reduce LoadBalancer replication log spam",
"hash": "38b54d71ece279f978246fefa21142f34cb6e07f",
"date": "2018-12-10T20:29:43"
},
{
"message": "rdbms: re-add DB domain sanity checks to LoadBalancer\n\nAlso clean up empty schema handling in DatabaseDomain\n\nThis reverts commit f23ac02f4fcf156767df66a5df2fa407310fe1d2.\n\nBug: T193565\nChange-Id: I95fde5c069f180ca888a023fade25ec81b846d44\n",
"bugs": [
"T193565"
],
"subject": "rdbms: re-add DB domain sanity checks to LoadBalancer",
"hash": "b06f02021762b3640de6c5a7a592580d7bb7ed95",
"date": "2018-10-16T23:35:05"
},
{
"message": "rdbms: add domain sanity checks to LoadBalancer connection methods\n\nBug: T193565\nChange-Id: I94d905277e01b8e30ac3f6532ece07388bb20cce\n",
"bugs": [
"T193565"
],
"subject": "rdbms: add domain sanity checks to LoadBalancer connection methods",
"hash": "b416e166a3eba1dcfc2e912e37e443981d7f60ef",
"date": "2018-10-12T02:16:49"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges\n\nJust like 082ed053b6 fixed pre-commit callback errors when new instances\nof LoadBalancer are made during that step, do the same for post-commit\ncallbacks.\n\nBug: T194308\nChange-Id: Ie79e0f22b3aced425cf067d0df6b67e368223e6c\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges",
"hash": "86af2ef383b6fc9c4032dad769c00e672922d530",
"date": "2018-05-10T04:26:41"
},
{
"message": "rdbms: fix LBFactory::commitAll() round handling\n\nThis avoids \"Transaction round stage must be approved (not cursory)\".\n\nBug: T194308\nChange-Id: I9dbfe9cede02b1b1904c1d5e5a9802306c2492a2\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix LBFactory::commitAll() round handling",
"hash": "205cfc185446ad9dd355d3a57f4ee60d0dc1de57",
"date": "2018-05-09T21:51:18"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
},
{
"message": "rdbms: improve log warnings in runMasterPostTrxCallbacks()\n\nBug: T191282\nChange-Id: Iba663c58224af920f90d7b401aab2eb21f921941\n",
"bugs": [
"T191282"
],
"subject": "rdbms: improve log warnings in runMasterPostTrxCallbacks()",
"hash": "b32325bd23d44afce7989505af5d3844de2cdbd7",
"date": "2018-05-01T20:24:39"
},
{
"message": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time\n\nOnce getMain() was called in setSchemaAliases(), the ChronologyProtector\nwas initialized and the setRequestInfo() call in Setup.php had no effect.\nOnly the request values read in LBFactory::__construct() were used, which\nreflect $_GET but not cookie values.\n\nUse the $wgDBtype variable to avoid this and add an exception when that\nsort of thing happens.\n\nFurther defer instantiation of ChronologyProtector so that methods like\nILBFactory::getMainLB() do not trigger construction.\n\nBug: T192611\nChange-Id: I735d3ade5cd12a5d609f4dae19ac88fec4b18b51\n",
"bugs": [
"T192611"
],
"subject": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time",
"hash": "628a3a9b267620914701a2a0a17bad8ab2e56498",
"date": "2018-04-23T15:44:02"
},
{
"message": "rdbms: make DOMAIN_ANY ignore bogus MySQL DB names in config\n\nThis regressed in 14ee3f2, trying to select the server config\n\"dbname\" value as the database on connect. If that config is\nbogus, then the connection attempt would fail. Instead, always\npass null to the driver's connection function if the DB domain\ndoesn't matter.\n\nThis affected WMF sites during lag checks, on account of the\n\"serverTemplate\" value in $wgLBFactoryConf always using the local\nwiki domain (whether the cluster had such a database or not).\n\nUse strlen() as mysqli sees null and \"\" as the same for $dbname.\n\nBug: T186764\nChange-Id: I6699d17c0125a08415046211fee7906bbaf2c366\n",
"bugs": [
"T186764"
],
"subject": "rdbms: make DOMAIN_ANY ignore bogus MySQL DB names in config",
"hash": "e59b0556f4531b87d80113fcefc845ed9d6e3500",
"date": "2018-02-09T06:43:24"
}
]
},
"includes/Title.php": {
"File": "includes/Title.php",
"TicketCount": 16,
"CommitCount": 13,
"Tickets": [
"T200456",
"T206130",
"T204800",
"T210739",
"T225585",
"T224814",
"T227700",
"T227816",
"T227817",
"T244300",
"T240083",
"T246720",
"T204793",
"T221763",
"T225366",
"T229443"
],
"Commits": [
{
"message": "Title: fix subpage split for degenerate cases\n\nThis ensures that getBaseText and getRootText always\nreturn a valid title, even for degenerate cases like \"/\" or\n\"//foo//bar//baz//\".\n\nBug: T229443\nChange-Id: I3d8c6f6ffc4234ac938a39f5cc55691d62e1d7ac\n",
"bugs": [
"T229443"
],
"subject": "Title: fix subpage split for degenerate cases",
"hash": "03397e46c5c8c3dd971c10d42636fc58145d7c1d",
"date": "2020-06-22T10:58:29"
},
{
"message": "RevisionStore and PageUpdater: handle stale page ID\n\nSometimes, an edit is done with a Title object that has gone\nout of sync with the database after a page move. In this case,\nwe should re-load the current page ID from the database,\ninstead of failing the update hard.\n\nBug: T246720\nBug: T204793\nBug: T221763\nBug: T225366\nChange-Id: If7701205ec2bf4d4495349d3e67cf53d32ee8357\n",
"bugs": [
"T246720",
"T204793",
"T221763",
"T225366"
],
"subject": "RevisionStore and PageUpdater: handle stale page ID",
"hash": "1fcd23878cfad4a40429aacec5381ad711d9bbe7",
"date": "2020-04-20T16:11:45"
},
{
"message": "CdnCacheUpdate: Accept Titles in addition to strings\n\nThe class was already documented as \"given a list of URLs or Title\ninstances\", this makes that work.\n\nTitle objects will have ->getCdnUrls() called when the update is\nresolved, which avoids problems like those encountered in T240083 where\nthat was being called too early.\n\nBug: T240083\nChange-Id: I30b29a7359a8f393fb19ffc199211a421d3ea4d9\n",
"bugs": [
"T240083"
],
"subject": "CdnCacheUpdate: Accept Titles in addition to strings",
"hash": "d83e00cb92f2c850dfe8ad7f31c65f8db78e47b7",
"date": "2020-03-19T13:56:19"
},
{
"message": "language: remove Language hints for type check as it breaks using of StubUserLang\n\nBug: T244300\nChange-Id: Iec1b5629617f1c171e8af507dc1dcebfef0666eb\n",
"bugs": [
"T244300"
],
"subject": "language: remove Language hints for type check as it breaks using of StubUserLang",
"hash": "ed18dba8f403377ebe6f6a69893eae098b77cf59",
"date": "2020-02-05T14:11:31"
},
{
"message": "Title::getTalkPage(): Restore behavior of interwiki-prefixed & fragment-only titles\n\nNamespaceInfo::getTalkPage will throw an exception for these.\nWith this patch, Title::getTalkPage() will be reverted to the old\nbehavior of returning an incorrect meaningless value. Logging has been\nadded to identify code paths that trigger this behavior.\n\nThis patch should be undone once all such code paths have been found and\nfixed.\n\nBug: T227817\nChange-Id: I4727c7bb54d6f712ddcab05ef278a08d728f5726\n",
"bugs": [
"T227817"
],
"subject": "Title::getTalkPage(): Restore behavior of interwiki-prefixed & fragment-only titles",
"hash": "53ef1fa67bad8ffde58bc3c9c0b232863578ea55",
"date": "2019-09-04T20:37:52"
},
{
"message": "When title contains only slashes, Title::getRootText() shouldn't return false\n\nOtherwise, Title::makeTitleSafe() will return null, which\nbreaks the assumption that Title::makeTitleSafe() always\nreturns something meaningful for strings\nreturned by Title::getRootText().\n\nBug: T227816\nChange-Id: If79a12bb8d23f1eafc10017d56c62566f39347ad\n",
"bugs": [
"T227816"
],
"subject": "When title contains only slashes, Title::getRootText() shouldn't return false",
"hash": "ed1ab4034e39878442fd389b4c1a22161afd935f",
"date": "2019-07-14T11:27:20"
},
{
"message": "Title: Title::getSubpage should not lose the interwiki prefix\n\nThis issue was discovered while investigating T227700, and added some\nconfusion. This patch is necessary for Special:MyLanguage to behave\ncorrectly in all cases, but it's not necessary for fixing the primary\ncritical problem.\n\nBug: T227700\nChange-Id: Ib4cbeec47a877c473cbd501cc964cc66d169b99e\n",
"bugs": [
"T227700"
],
"subject": "Title: Title::getSubpage should not lose the interwiki prefix",
"hash": "54626e5ce172ea469942b01e0ea73f34fb5de07a",
"date": "2019-07-13T17:49:11"
},
{
"message": "Ensure canHaveTalkPage returns false when getTalkPage would fail.\n\nThis causes Title::getTalkPage and NamespaceInfo::getTitle() to throw\nan MWException when called on a LinkTarget that is an interwiki link\nor a relative section link. These methods were already throwing\nMWException when called on a link to a Special page.\n\nBug: T224814\nChange-Id: I525c186a5b8b8fc22bca195da48afead3bfbd402\n",
"bugs": [
"T224814"
],
"subject": "Ensure canHaveTalkPage returns false when getTalkPage would fail.",
"hash": "dbce648a15ee7100383c3ec9781775f0f895c645",
"date": "2019-07-03T08:40:10"
},
{
"message": "Title: ensure getBaseTitle and getRootTitle return valid Titles\n\nSince getBaseText() and getRootText() may return text with trailing\nwhitespace, getBaseTitle and getRootTitle must use makeTitleSafe instead\nof makeTitle.\n\nBug: T225585\nChange-Id: Id92df552f05e6a9ed7c9259a8779fa94c3587a3e\n",
"bugs": [
"T225585"
],
"subject": "Title: ensure getBaseTitle and getRootTitle return valid Titles",
"hash": "3b3115e7f33dc91c05c62dd5b385a37f51f051b5",
"date": "2019-07-01T20:22:10"
},
{
"message": "Clone the Title object to prevent mutations.\n\nThe Title object that is loaded from master gets reloaded from the replicas\nand mutates the original object. When pages are moved, the Title no longer\nexists on master, but still exists on the replicas. Cloning the object allows\nthe item to be loaded from the replicas, without mutating the original Title.\n\nBug: T210739\nChange-Id: I9ad973e9a609124749909605f37bc1e1fc549585\n",
"bugs": [
"T210739"
],
"subject": "Clone the Title object to prevent mutations.",
"hash": "dedfe98eaad7015d52001988eeea9c00749e1a30",
"date": "2019-01-23T22:53:56"
},
{
"message": "Fix Title::getFragmentForURL for bad interwiki prefix.\n\nCalling Title::getLinkURL or any other method that relies on\ngetFragmentForURL on a title with an unknown interwiki prefix\nwas triggering a fatal error. With this patch, that situation is\nhandled more gracefully.\n\nBug: T204800\nChange-Id: I665cd5e983a80c15c68c89541d9c856082c460bb\n",
"bugs": [
"T204800"
],
"subject": "Fix Title::getFragmentForURL for bad interwiki prefix.",
"hash": "337311662d344c90590ca5cee34b8a87da933430",
"date": "2019-01-23T10:17:21"
},
{
"message": "SECURITY: Fix permissions check for patrol action\n\nReturn existing errors instead of empty array in checkUserConfigPermissions().\nReturning an empty array wiped out previously-found errors.\n\nAlso add test coverage for patrol action.\n\nBug: T206130\nChange-Id: I2df0551c5837adc578b27082ab6ba2ac95d937f8\n",
"bugs": [
"T206130"
],
"subject": "SECURITY: Fix permissions check for patrol action",
"hash": "890ffc619dd8fea0526cf76e794835f322d39d0c",
"date": "2018-10-03T19:07:46"
},
{
"message": "Handle $title === null in Title::newFromText\n\nThis relied on TitleCodec throwing MalformedTitleException in the\npast, but that is fragile as other parts of the logic do not\nexpect null.\n\nBug: T200456\nChange-Id: I1aca3971e2a9c0b1fe3adbcf34f3ee65b2271234\n",
"bugs": [
"T200456"
],
"subject": "Handle $title === null in Title::newFromText",
"hash": "bfd6406ff52f090dc2834cd099cd3cf903ca2eee",
"date": "2018-07-26T18:39:21"
}
]
},
"includes/Revision/RevisionStore.php": {
"File": "includes/Revision/RevisionStore.php",
"TicketCount": 16,
"CommitCount": 12,
"Tickets": [
"T193565",
"T208929",
"T219114",
"T236624",
"T139012",
"T239772",
"T205936",
"T246720",
"T204793",
"T221763",
"T225366",
"T212428",
"T252156",
"T258666",
"T259738",
"T235589"
],
"Commits": [
{
"message": "Add extra details to error log to debug why null revision creation is failing\n\nBug: T235589\nChange-Id: Icae5cdc5b010e23d4956269f64069c0271217af7\n",
"bugs": [
"T235589"
],
"subject": "Add extra details to error log to debug why null revision creation is failing",
"hash": "a6284cc69c9e995798732073a0f6744c54ca5464",
"date": "2020-08-17T01:14:56"
},
{
"message": "RevisionStoreCacheRecord: Fallback to master for update callback.\n\nBug: T259738\nChange-Id: Id76f7c5179fa351a4aba4d5226437ef6338bbdce\n",
"bugs": [
"T259738"
],
"subject": "RevisionStoreCacheRecord: Fallback to master for update callback.",
"hash": "b961feca6a8983f20cda9e7ef834056a1ff07880",
"date": "2020-08-05T19:27:29"
},
{
"message": "RevisionStore: fall back to master if no revision rows were found.\n\nThe fallback was previously done too late, re-trying based on the\nsame (empty) list of rows from the slots table.\n\nBug: T212428\nBug: T258666\nChange-Id: Ie4aa2be8d73e597e79b7e0c3e12a539acee427a0\n",
"bugs": [
"T212428",
"T258666"
],
"subject": "RevisionStore: fall back to master if no revision rows were found.",
"hash": "9bd04f78456623b3e8dded2f644560d0f8c84ef1",
"date": "2020-07-27T20:26:12"
},
{
"message": "RevisionStore: fall back to master db if main slot is missing.\n\nThis is to protect against race conditions. There may be situations\nwhere we already got the revision itself, but information is still\nmissing from the slots or content tables.\n\nBug: T212428\nBug: T252156\nChange-Id: Id0cd85ae93616ad91b07afeccb766e8345fa7c9c\n",
"bugs": [
"T212428",
"T252156"
],
"subject": "RevisionStore: fall back to master db if main slot is missing.",
"hash": "767aee7bb162c97644be3730a8f8afc5233a7a56",
"date": "2020-05-08T10:04:19"
},
{
"message": "RevisionStore: improve error handling in newRevisionsFromBatch\n\nWhen for some reason we can't determine the title for a revision\nin the batch, this should not trigger a fatal TypeError, but handled\ngracefully, with helpful information included in the error message.\n\nBug: T205936\nChange-Id: I0c7d2c1fee03d1c9208669a9b5ad66612494a47c\n",
"bugs": [
"T205936"
],
"subject": "RevisionStore: improve error handling in newRevisionsFromBatch",
"hash": "d351bb7e390b0f746b8159d2e2ba5cd8d518c8ba",
"date": "2020-05-03T19:54:02"
},
{
"message": "RevisionStore and PageUpdater: handle stale page ID\n\nSometimes, an edit is done with a Title object that has gone\nout of sync with the database after a page move. In this case,\nwe should re-load the current page ID from the database,\ninstead of failing the update hard.\n\nBug: T246720\nBug: T204793\nBug: T221763\nBug: T225366\nChange-Id: If7701205ec2bf4d4495349d3e67cf53d32ee8357\n",
"bugs": [
"T246720",
"T204793",
"T221763",
"T225366"
],
"subject": "RevisionStore and PageUpdater: handle stale page ID",
"hash": "1fcd23878cfad4a40429aacec5381ad711d9bbe7",
"date": "2020-04-20T16:11:45"
},
{
"message": "Add findBadBlobs script.\n\nThis script scans for content blobs that can't be loaded due to\ndatabase corruption, and can change their entry in the content table\nto an address starting with \"bad:\". Such addresses cause the content\nto be read as empty, with no log entry. This is useful to avoid\nerrors and log spam due to known bad revisions.\n\nThe script is designed to scan a limited number of revisions from a\ngiven start date. The assumption is that database corruption is\ngenerally caused by an intermedia bug or system failure which will\naffect many revisions over a short period of time.\n\nBug: T205936\nChange-Id: I6f513133e90701bee89d63efa618afc3f91c2d2b\n",
"bugs": [
"T205936"
],
"subject": "Add findBadBlobs script.",
"hash": "071ce36abdec44c5940720616ef3617d74f34858",
"date": "2020-04-17T13:04:59"
},
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
},
{
"message": "RevisionStore: force \"Unknown user\" instead of empty user name\n\nWhen restoring revisions that have an empty user name associated with\nthem, force \"Unknown user\" to be used instead, even if the actor table\nhas an entry for the empty user name.\n\nBug: T236624\nChange-Id: I31edd5f7d89d9b43806ad18e96f5e93cff0f8c6f\n",
"bugs": [
"T236624"
],
"subject": "RevisionStore: force \"Unknown user\" instead of empty user name",
"hash": "bb431303140d8ec95c21b56fb3166a6b6653c8f7",
"date": "2019-11-26T21:16:56"
},
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
},
{
"message": "RevisionStore: Avoid exception on prev/next of deleted revision\n\nThis makes RevisionStore a bit more robust against use with deleted\nrevisions.\n\nNote that this does not make I151019e336bda redundant, since detecting\nthis situation early and providing meaningful error message is useful.\n\nBug: T208929\nChange-Id: I7635ddc0176e21d873eacadebe02603b1fe51c38\n",
"bugs": [
"T208929"
],
"subject": "RevisionStore: Avoid exception on prev/next of deleted revision",
"hash": "6a9497769c9f1d24dfe922255d77691e270728b3",
"date": "2018-11-26T21:53:56"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/user/User.php": {
"File": "includes/user/User.php",
"TicketCount": 12,
"CommitCount": 12,
"Tickets": {
"0": "T188014",
"2": "T188437",
"3": "T202149",
"4": "T202715",
"5": "T208469",
"6": "T208398",
"7": "T208472",
"8": "T210621",
"9": "T227688",
"10": "T187731",
"11": "T259181",
"12": "T264363"
},
"Commits": [
{
"message": "Remove NonSerializableTrait from User object\n\nUser objects are apparently serialized somewhere, removing\nthe trait until this gets fixed.\n\nRemove UserTest::testSerialization_fails to\nallow this change for now.\n\nBug: T264363\nChange-Id: Id804755653452dc94184e5e481efcac3053e6535\n",
"bugs": [
"T264363"
],
"subject": "Remove NonSerializableTrait from User object",
"hash": "016d2e401c763c4df7ebe5b2eb2455562daabe2b",
"date": "2020-10-01T21:16:08"
},
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "User: support setting custom fields + array autocreation in non-existent field\n\nI889924037 added a __set method which did not actually handle fields being set.\nFor better or worse, setting custom fields on ubiquitous objects like User is a\ncommon form of in-process caching, so this is a B/C break; restore for now.\n\nPHP allows creating an array in a previously non-existent object property\nwith $o->foo['bar'] = $val, but doesn't properly handle that on objects\nwhich have magic getter/setter. Add an ugly hack to make it work (but warn).\n\nDepends on I15090ae9e4b66ac25f631f6179c4394ce8c445a9.\n\nBug: T227688\nChange-Id: I62b80ab4fa10de984cf2c879ab12d91b0fd9bc1c\n",
"bugs": [
"T227688"
],
"subject": "User: support setting custom fields + array autocreation in non-existent field",
"hash": "b530dca430a7553fd25df7c5dc327c1395928e51",
"date": "2019-07-10T20:35:56"
},
{
"message": "User: Bypass repeatable-read when creating an actor_id\n\nWhen MySQL is using repeatable-read transaction isolation (which is the\ndefault), the following sequence of events can occur:\n\n1. Request A: Begin a transaction.\n2. Request A: Try to select the actor ID for a user. Find no rows.\n3. Request B: Insert an actor ID for that user.\n4. Request A: Try to insert an actor ID for the user. Fails because one\n exists.\n5. Request A: Try to select the actor ID that must exist. Fail because of\n the snapshot created at step 2.\n\nIn MySQL we can avoid this issue at step #5 by using a locking select\n(FOR UPDATE or LOCK IN SHARE MODE), so let's do that.\n\nBug: T210621\nChange-Id: I6c1d255fdd14c6f49d2ea9790e7bd7d101e98ee4\n",
"bugs": [
"T210621"
],
"subject": "User: Bypass repeatable-read when creating an actor_id",
"hash": "37f48fdb25a78ba7623c57b50cdfd842292d3ccb",
"date": "2018-11-29T16:28:05"
},
{
"message": "Block: Clean up handling of non-User targets\n\nThe fix applied in d67121f6d took care of the immediate issue in\nT208398, but after further analysis it was not a correct fix.\n\n* Near line 770, the method shouldn't even be called unless the target\n is TYPE_USER.\n* Near line 1598, it isn't dealing with a target at all.\n* Near line 1813, you're not going to get a sensible result trying to\n call `$user->getTalkPage()` for a range or auto-block ID. What you\n would really need there to handle range and auto-blocks correctly is\n to pass in the User actually making the edit.\n\nBut after some pushback in code review about passing the User into\nBlock::preventsEdit() to make line 1813 work, we'll instead replace the\nmethod with Block::appliesToTitle() and put the check for user talk\npages back into User::isBlockedFrom().\n\nBug: T208398\nBug: T208472\nChange-Id: I23d3a3a1925e97f0cabe328c1cc74e978cb4d24a\n",
"bugs": [
"T208398",
"T208472"
],
"subject": "Block: Clean up handling of non-User targets",
"hash": "74ff87d291e6daddfd791270c6ee95ca587d3d46",
"date": "2018-11-02T16:33:57"
},
{
"message": "User: Don't fail mysteriously when passing a User object to idFromName()\n\nIf $name is a User object, some code magically works because the object\ngets converted to a string, but other code blows up because objects\naren't valid array keys. Prevent this from happening by explicitly\nforcing $name to be a string.\n\nBug: T208469\nChange-Id: Icc9ebec93d18609605e2633ccd23b90478e05e51\n",
"bugs": [
"T208469"
],
"subject": "User: Don't fail mysteriously when passing a User object to idFromName()",
"hash": "614dceed00bea7268bc3a2ddcd776e693ddb4b4d",
"date": "2018-10-31T22:38:07"
},
{
"message": "Make UserEditCountUpdate faster by using auto-commit mode\n\nBug: T202715\nChange-Id: I92c08694cb5e1c367809439cff42e33a56ff9878\n",
"bugs": [
"T202715"
],
"subject": "Make UserEditCountUpdate faster by using auto-commit mode",
"hash": "e2088f1170b2867ca3ababe205137cc6ad010068",
"date": "2018-10-27T20:52:45"
},
{
"message": "Move user_editcount updates to a mergeable deferred update\n\nThis should reduce excess contention and lock timeouts.\nPreviously, it used a pre-commit hook which ran just before the\nend of the DB transaction round.\n\nAlso removed unused User::incEditCountImmediate() method.\n\nBug: T202715\nDepends-on: I6d239a5ea286afb10d9e317b2ee1436de60f7e4f\nDepends-on: I0ad3d17107efc7b0e59f1dd54d5733cd1572a2b7\nChange-Id: I0d6d7ddd91bbb21995142808248d162e05696d47\n",
"bugs": [
"T202715"
],
"subject": "Move user_editcount updates to a mergeable deferred update",
"hash": "390fce6db1e008c53580cedbdfe18dff3de9c766",
"date": "2018-10-25T22:32:18"
},
{
"message": "user: Allow \"CAS update failed\" exceptions to be normalised\n\nTake the user_id variable out of the exception message.\nTo compensate and still allow one to correlate patterns relating\nto a specific user (e.g. a bot), add a warning message that\nstill contains the variable via context. This way that warning\nwill also be normalised/grouped, but with the extra context.\n\nThis is separate because exceptions do not currently support\ncontext placeholders.\n\nBug: T202149\nChange-Id: Ic0c25f66f23fdc65821da12f949c6224bc03f9b3\n",
"bugs": [
"T202149"
],
"subject": "user: Allow \"CAS update failed\" exceptions to be normalised",
"hash": "65f714e1e679127c4d38a40e7b23da8cee2195d6",
"date": "2018-09-12T17:33:34"
},
{
"message": "Add a missing check of $wgActorTableSchemaMigrationStage\n\nWe shouldn't be trying to update the table when it's MIGRATION_OLD.\n\nBug: T188437\nChange-Id: Id5aae5eaafc36bf7e65009e67fe91619fb1df295\n",
"bugs": [
"T188437"
],
"subject": "Add a missing check of $wgActorTableSchemaMigrationStage",
"hash": "73781d381679d7ab4fdf700b73cb0bdbd68bd6b1",
"date": "2018-02-27T21:08:02"
},
{
"message": "Force READ_LATEST for User::newFromId() if writes had been done\n\nThe User::newFromName() case already does this, there seems to be no\nreason not to do it for User::newFromId() too.\n\nBug: T188014\nChange-Id: Ic7fdef0cc1f5750cb5e6b2a7f48f1549862b41cb\n",
"bugs": [
"T188014"
],
"subject": "Force READ_LATEST for User::newFromId() if writes had been done",
"hash": "e93841a621cadfdbbccd0b5f6a34c9b77e51ec3d",
"date": "2018-02-24T00:00:17"
},
{
"message": "Have User::createNew() load the object from master\n\nWhen the new User is created, it's leaving it to be lazy-loaded from a\nreplica. That seems to be causing attempts to add groups immediately\nafter creation to fail because the load-from-replica doesn't find the\njust-created master row.\n\nBug: T188014\nChange-Id: I841c434086bfaaca1cf1ce23673f32dc5a77915d\n",
"bugs": [
"T188014"
],
"subject": "Have User::createNew() load the object from master",
"hash": "61b0c1930626a1088f012bf582262330f903f74a",
"date": "2018-02-23T23:44:19"
}
]
},
"includes/page/WikiPage.php": {
"File": "includes/page/WikiPage.php",
"TicketCount": 12,
"CommitCount": 12,
"Tickets": [
"T188479",
"T198350",
"T199762",
"T198176",
"T203942",
"T207876",
"T221577",
"T214035",
"T240083",
"T257499",
"T259181",
"T187731"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "Allow for safe deserialization of WikiPage objects.\n\nWikiPage and Revision obejcts really shouldn't be serialized,\nbut we still need to support unserializing them, to prevent errors with\nold data.\n\nBug: T259181\nChange-Id: I0a75c1a4e8ee9c458c048fd18306a4d4d13a9b5a\n",
"bugs": [
"T259181"
],
"subject": "Allow for safe deserialization of WikiPage objects.",
"hash": "8a0eb03b2e26cd8b3fe77946137364e12dd11db7",
"date": "2020-09-07T20:15:13"
},
{
"message": "Make EditResult::getRevertTags() call to be only when the update\nwas successful\n\nWe are calling the method too early even though there are many\nreturn statements before we reach where we need the value of the call.\n\nThis patch simply move the call after $updater->wasSuccessful() call.\nIf we did not return after $updater->wasSuccessful() then\n$updater->getEditResult() will not be null\n\nBug: T257499\nChange-Id: Ib59cd1d0cde0118f2a9299f777077ab77f3d5d19\n",
"bugs": [
"T257499"
],
"subject": "Make EditResult::getRevertTags() call to be only when the update\nwas successful",
"hash": "7fa5ce21af35dffd8565f7274cb2d06e611251c8",
"date": "2020-07-10T17:35:25"
},
{
"message": "CdnCacheUpdate: Accept Titles in addition to strings\n\nThe class was already documented as \"given a list of URLs or Title\ninstances\", this makes that work.\n\nTitle objects will have ->getCdnUrls() called when the update is\nresolved, which avoids problems like those encountered in T240083 where\nthat was being called too early.\n\nBug: T240083\nChange-Id: I30b29a7359a8f393fb19ffc199211a421d3ea4d9\n",
"bugs": [
"T240083"
],
"subject": "CdnCacheUpdate: Accept Titles in addition to strings",
"hash": "d83e00cb92f2c850dfe8ad7f31c65f8db78e47b7",
"date": "2020-03-19T13:56:19"
},
{
"message": "WikiPage: Reduce locking in doUpdateRestrictions()\n\nThe WikiPage::doUpdateRestrictions() method is responsible for updating\nthe page_restrictions table when protection settings are modified for a\ngiven page. Currently, it unconditionally issues a query to delete existing\nrows associated with the page being modified - using the page ID reference\nstored in the table's pr_page field - even if such rows may not exist.\nOn MySQL and derivatives, this statement will create a gap lock if no rows\nwere found.\n\nThis can cause a deadlock if another thread then attempts to manipulate\nprotection settings for a different page ID that falls into the same gap.\nThe contending thread will likewise issue a query to delete existing\nprotection rows for the other page ID, which will create a gap lock on the\nsame page ID gap. At this point, neither thread can proceed with inserting new\nprotection settings for their respective pages due to the gap lock held\nby the other, and a deadlock occurs. Report details in T214035 indicate that\nthis error condition can be observed when running mass protection scripts\noperating on user and user talk pages.\n\nThis patch aims to reduce locking by first querying for existing rows in the\npage_restrictions table associated with the page ID whose protection settings\nare being changed, then deleting those rows based on their autoincrement pr_id\nprimary key value. This will not attempt to delete rows that do not exist,\nthereby avoiding a gap lock.[1]\n\n---\n[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-locks-set.html\n\nBug: T214035\nChange-Id: Id796df6263362839bfe80b32d5e0582dbd5db9bb\n",
"bugs": [
"T214035"
],
"subject": "WikiPage: Reduce locking in doUpdateRestrictions()",
"hash": "613fd6529d3ca2cf4aba9c322bf8b2f9135769a1",
"date": "2019-11-14T15:20:21"
},
{
"message": "Make sure that each DataUpdate still has outer transaction scope\n\nBug: T221577\nChange-Id: I620e461d791416ca37fa9ca4fca501e28d778cf5\n",
"bugs": [
"T221577"
],
"subject": "Make sure that each DataUpdate still has outer transaction scope",
"hash": "3496f0fca3debf932598087607dc5547075e2cba",
"date": "2019-05-30T20:53:18"
},
{
"message": "WikiPage: Truncate redirect fragments before inserting them into the DB\n\nThe rd_fragment field is 255 bytes wide, but there is no limit on how\nlong title fragments can be. We don't want to let the database silently\ntruncate the fragment for us, because that can result in invalid UTF-8.\nInstead, truncate it before insertion in a UTF-8-aware way.\n\nBug: T207876\nChange-Id: I12745f3f4c174eaced56d80f3661a71d0e5637e6\n",
"bugs": [
"T207876"
],
"subject": "WikiPage: Truncate redirect fragments before inserting them into the DB",
"hash": "13a1d8957b9d639c69cef251b831ed03319182b1",
"date": "2018-10-25T00:33:56"
},
{
"message": "WikiPage: Fix viewing of wiki redirects to NS_MEDIA\n\nIf a user creates a redirect to a Media namespace title, a fatal\nerror is thrown on viewing such rediect because we protect against\nredirecting to virtual namespaces. This fix catches this kind of\nredirect and modifies the namespace to be File before the Title object\nis created.\n\nFollow-up from 613e2699.\n\nBug: T203942\nChange-Id: Ib211d98498f635862fea6bf3e7395f4f8718b3d8\n",
"bugs": [
"T203942"
],
"subject": "WikiPage: Fix viewing of wiki redirects to NS_MEDIA",
"hash": "d4a45f9ea8b9e68893f1742a2ff4db19e3725acd",
"date": "2018-10-11T01:29:47"
},
{
"message": "Use job queue for deletion of pages with many revisions\n\nPages with many revisions experience transaction size exceptions,\ndue to archiving revisions. Use the job queue to split the work\ninto batches and avoid exceptions.\n\nBug: T198176\nChange-Id: Ie800fb5a46be837ac91b24b9402ee90b0355d6cd\n",
"bugs": [
"T198176"
],
"subject": "Use job queue for deletion of pages with many revisions",
"hash": "ca9f1dabf3719c579fd117e7b9826a3269783e7e",
"date": "2018-10-04T00:16:14"
},
{
"message": "Reduce the rate of calls to Category::refreshCounts\n\nBug: T199762\nChange-Id: I23e2e1ebf187d21ea4bd22304aa622199a8b9c5b\n",
"bugs": [
"T199762"
],
"subject": "Reduce the rate of calls to Category::refreshCounts",
"hash": "70ed89ad436f9a5c9090e9927c40a701db6cf93f",
"date": "2018-07-17T23:46:38"
},
{
"message": "Fix table locking in WikiPage::doDeleteArticleReal\n\nThis reverts a recent change that caused the table array and the\njoin array to have mismatching keys, so that the select was a\ncartesian product of page and revision_comment_temp (ie. any\npage deletion locked the whole revision_comment_temp table).\n\nBug: T198350\nChange-Id: Ifb6f0409d4f210d3ecb1da03f59aaba7e229e89e\n",
"bugs": [
"T198350"
],
"subject": "Fix table locking in WikiPage::doDeleteArticleReal",
"hash": "9f4da9f177ea25bf5d912156d167c5f1f1aa251e",
"date": "2018-06-28T16:33:50"
},
{
"message": "WikiPage: Avoid $user variable reuse in doDeleteArticleReal()\n\n$user was being used to represent the person who was deleting the page as\nwell as a variable when dermining the person who made an edit in each\nrow as it was moved to the archive table.\n\nMake it unambigious which variable is used to represent the person deleting\nthe article by renaming it to $deleter.\n\nBug: T188479\nChange-Id: Ia06e7fb840ebc68446127352e336a7e33c813042\n",
"bugs": [
"T188479"
],
"subject": "WikiPage: Avoid $user variable reuse in doDeleteArticleReal()",
"hash": "a21ae2edac758807a2c466be2f5016d87fb526ef",
"date": "2018-02-28T05:56:50"
}
]
},
"includes/MediaWiki.php": {
"File": "includes/MediaWiki.php",
"TicketCount": 9,
"CommitCount": 11,
"Tickets": {
"0": "T194403",
"1": "T190082",
"4": "T203942",
"5": "T207809",
"6": "T214471",
"7": "T225655",
"8": "T227700",
"9": "T206283",
"10": "T255620"
},
"Commits": [
{
"message": "Fix redirects using Special:MyLanguage etc. when using a mobile domain\n\n'wgInternalRedirectTargetUrl' should be set using getLinkURL()\n(which doesn't contain the domain) instead of getFullURL().\nThis is already the case for normal article redirects (see how\n'wgInternalRedirectTargetUrl' is set in Article.php).\n\nBug: T255620\nChange-Id: I77473bedd52bc51c8ef53d6bc695b6bf2ebd0bfd\n",
"bugs": [
"T255620"
],
"subject": "Fix redirects using Special:MyLanguage etc. when using a mobile domain",
"hash": "7f0be4f03b20ae93aba2dd5b20eb6d07837cd622",
"date": "2020-06-17T22:41:49"
},
{
"message": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown\n\nSet appropriate headers and flush the output as needed to avoid blocking\nthe client on post-send updates for the stock apache2 server scenario.\nSeveral cases have bits of header logic to avoid delay:\n\na) basic GET/POST requests that succeed (e.g. HTTP 2XX)\nb) requests that fail with errors (e.g. HTTP 500)\nc) If-Modified-Since requests (e.g. HTTP 304)\nd) HEAD requests\n\nThis last two still block on deferred updates, so schedulePostSendJobs()\ndoes not trigger on them as a form of mitigation. Slow deferred updates\nshould only trigger on POST anyway (inline and redirect responses are\nOK), so this should not be much of a problem.\n\nDeprecate triggerJobs() and implement post-send job runs as a deferred.\nThis makes it easy to check for the existence of post-send updates by\ncalling DeferredUpdates::pendingUpdatesCount() after the pre-send stage.\nAlso, avoid running jobs on requests that had exceptions. Relatedly,\nremove $mode option from restInPeace() and doPostOutputShutdown()\nOnly one caller was using the non-default options.\n\nBug: T206283\nChange-Id: I2dd2b71f1ced0f4ef8b16ff41ffb23bb5b4c7028\n",
"bugs": [
"T206283"
],
"subject": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown",
"hash": "4f11b614544be8cb6198fbbef36e90206ed311bf",
"date": "2019-09-30T22:59:59"
},
{
"message": "Fix and re-apply \"RedirectSpecialPage: handle interwiki redirects\"\n\nThis re-applies commit 41106688abbe6dfff61c5642924ced42af3f0d33\n(thereby reverting commit 6c57748aeee6e4f2a197d64785102306fbd4a297)\nand fixes it for local interwiki redirects by adding and using a\nforcing parameter in Special:GoToInterwiki to treat local redirects\nlike external ones.\n\nBug: T227700\nChange-Id: I4bc2ed998430fc2bac71baf850b8988fdb24c1ac\n",
"bugs": [
"T227700"
],
"subject": "Fix and re-apply \"RedirectSpecialPage: handle interwiki redirects\"",
"hash": "d1e7d5e3b2c60d2da2da3c516a94c24599bf3ecc",
"date": "2019-07-24T03:55:49"
},
{
"message": "Revert \"RedirectSpecialPage: handle interwiki redirects.\"\n\nThis reverts commit 41106688abbe6dfff61c5642924ced42af3f0d33.\n\nThe original case is changed by this commit from a MediaWiki fatal\nexception with HTTP 500, to a blank 200 response due to silent\nfailure. Use of GoToInterwiki appears to be invalid at this point in\nthe code. Reverting to keep prod the same as last week, so as\nto unblock the train.\n\nBug: T227700\nChange-Id: Ieece956d2e2e4c21b5ed7a75890b9f11eaf07e66\n",
"bugs": [
"T227700"
],
"subject": "Revert \"RedirectSpecialPage: handle interwiki redirects.\"",
"hash": "b59ab95f916762546182de4da0486e2ae151ece0",
"date": "2019-07-16T11:50:30"
},
{
"message": "RedirectSpecialPage: handle interwiki redirects.\n\nPreviously, WikiPage::performRequest() would assume that Titles returned\nby RedirectSpecialPage::getRedirect() are local pages, and would set\n$wgTitle to whatever was returned. That would lead to a confused state\nwhere the skin would try to render for an interwiki Title.\n\nInstead, WikiPage::performRequest() should wrap the interwiki redirect\nin a call to Special:GoToInterwiki/xyz, just like\nTitle::getFullUrlForRedirect() does, but still avoid the HTTP redirect,\nto avoid leaking private information via view counters (T109724).\n\nThere are two things to test:\n1) call Special:MyLanguage with an interwiki prefix,\n e.g. Special:MyLanguage/wikipedia:XYZ.\n2) create a page that contains an interwiki redirect,\n e.g. #REDIRECT [[wikipedia:XYZ]], then call Special:MyLanguage\n for that page.\n\nFor these tests, the user language should be the same as the content\nlanguage. That is the critical case. If the user language differs\nfrom the content language, the problem would be obscured by another\nbug which is addressed by Ib4cbeec47a877c473.\n\nBug: T227700\nChange-Id: I2852c5a9774f0c76e49f1e3876fcfe85a305f9ce\n",
"bugs": [
"T227700"
],
"subject": "RedirectSpecialPage: handle interwiki redirects.",
"hash": "41106688abbe6dfff61c5642924ced42af3f0d33",
"date": "2019-07-12T12:04:11"
},
{
"message": "Various cleanups to MediaWiki::preOutputCommit\n\nDo not send headers if they were already flushed. Split off some\nchronology protection logic into a separate private method. Use\nILBFactory over LBFactory in a few places. Also, update various\ncode comments.\n\nBug: T225655\nChange-Id: Iecb574e11d8ba09147ff7b84ad57d8845069ba99\n",
"bugs": [
"T225655"
],
"subject": "Various cleanups to MediaWiki::preOutputCommit",
"hash": "f71c22df0adf8ce0410f52c9d82b5775b67cb5d9",
"date": "2019-06-18T10:47:40"
},
{
"message": "Persist sessions pre-send instead of post-send\n\nThis avoids race conditions with certain web request patterns\n\nBug: T214471\nChange-Id: I4dfee10326485e98b028585c7da2e6b30787bb91\n",
"bugs": [
"T214471"
],
"subject": "Persist sessions pre-send instead of post-send",
"hash": "276d065d1620289242cfad5ee877aa88d5db5a60",
"date": "2019-02-06T22:28:02"
},
{
"message": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()\n\nThis assures that MergeableUpdate tasks that lazy push job will actually\nhave those jobs run instead of being added after the lone callback update\nto call JobQueueGroup::pushLazyJobs() already ran.\n\nThis also makes it more obvious that push will happen, since a mergeable\nupdate is added each time lazyPush() is called and a job is buffered,\nrather than rely on some magic callback enqueued into DeferredUpdates at\njust the right point in multiple entry points.\n\nBug: T207809\nChange-Id: I13382ef4a17a9ba0fd3f9964b8c62f564e47e42d\n",
"bugs": [
"T207809"
],
"subject": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()",
"hash": "6030e9cf2c1df7929e2319602aa4d37aa641de11",
"date": "2018-10-28T22:19:06"
},
{
"message": "Replace Media namespace redirects with File namespace\n\nIf a user creates a redirect that goes to a [[Media:example.jpg]]\npage, then an exception is thrown because NS_MEDIA is a virtual\nnamespace. This change catches this case and changes the namespace\nto an NS_FILE namespace and the redirect works correctly. This\nchange only happens when we are dealing with a redirect so other\nuses of the NS_MEDIA namespace shouldn't be affected.\n\nBug: T203942\nChange-Id: Ia744059650e16510732a65d51b138b11cbd43eb4\n",
"bugs": [
"T203942"
],
"subject": "Replace Media namespace redirects with File namespace",
"hash": "613e269920a7d268b012d61fec24fc4d04b1cd7a",
"date": "2018-10-05T22:04:24"
},
{
"message": "rdbms: include client ID hash in ChronologyProtector cookies\n\nPreviously, if an internal service forwarded the cookies for a\nuser (e.g. for permissions) but not the User-Agent header or not\nthe IP address (e.g. XFF), ChronologyProtector could timeout\nwaiting for a matching writeIndex to appear for the wrong key.\n\nThe cookie now tethers the client to the key that holds the\nDB positions from their last state-changing request.\n\nBug: T194403\nBug: T190082\nChange-Id: I84f2cbea82532d911cdfed14644008894498813a\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "rdbms: include client ID hash in ChronologyProtector cookies",
"hash": "fb51330084b4bde1880c76589e55e7cd87ed0c6d",
"date": "2018-06-02T03:57:30"
},
{
"message": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector\n\nSince it takes time for the agent to get the response and set the\ncookie and, as well, the time into a request that a LoadBalancer is\ninitialized varies by many seconds (cookies loaded from the start),\ngive the cookie a much lower TTL than the DB positions in the stash.\n\nThis avoids having to wait for a position with a given cpPosIndex\nvalue, when the position already expired from the store, which is\na waste of time.\n\nAlso include the timestamp in \"cpPosIndex\" cookies to implement\nlogical expiration in case clients do not expire them correctly.\n\nBug: T194403\nBug: T190082\nChange-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector",
"hash": "52af356cad3799ebec3826e1e4743d76a114da3e",
"date": "2018-05-18T20:43:05"
}
]
},
"includes/libs/rdbms/lbfactory/LBFactory.php": {
"File": "includes/libs/rdbms/lbfactory/LBFactory.php",
"TicketCount": 8,
"CommitCount": 10,
"Tickets": [
"T190960",
"T192611",
"T193668",
"T194308",
"T194403",
"T190082",
"T225103",
"T230065"
],
"Commits": [
{
"message": "rdbms: make LBFactory close/rollback dangling handles like LoadBalancer\n\nThe application is expected to call shutdown(), so do not commit if that\nnever happens, which usually indicates some sort of severe error. This\nshould avoid \"already closed\" errors that pile on top of uncaught errors.\n\nBug: T225103\nBug: T230065\nChange-Id: I1bdfe87c47bcc2537d0dfc518f7212c2c9bf21d2\n",
"bugs": [
"T225103",
"T230065"
],
"subject": "rdbms: make LBFactory close/rollback dangling handles like LoadBalancer",
"hash": "c3900f9f8ae920af79ddad588b68a49b9e11df19",
"date": "2019-08-17T19:05:24"
},
{
"message": "rdbms: make getCPInfoFromCookieValue() stricter about allowed values\n\nAll components, not just the write index, must now be present.\n\nBug: T194403\nChange-Id: I279ba3e16d470aca09fdb74cec91d28efb5e2f95\n",
"bugs": [
"T194403"
],
"subject": "rdbms: make getCPInfoFromCookieValue() stricter about allowed values",
"hash": "44b47b43ee0df87d01ed40ecf0899137c5bb2b68",
"date": "2018-06-12T18:18:54"
},
{
"message": "rdbms: include client ID hash in ChronologyProtector cookies\n\nPreviously, if an internal service forwarded the cookies for a\nuser (e.g. for permissions) but not the User-Agent header or not\nthe IP address (e.g. XFF), ChronologyProtector could timeout\nwaiting for a matching writeIndex to appear for the wrong key.\n\nThe cookie now tethers the client to the key that holds the\nDB positions from their last state-changing request.\n\nBug: T194403\nBug: T190082\nChange-Id: I84f2cbea82532d911cdfed14644008894498813a\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "rdbms: include client ID hash in ChronologyProtector cookies",
"hash": "fb51330084b4bde1880c76589e55e7cd87ed0c6d",
"date": "2018-06-02T03:57:30"
},
{
"message": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector\n\nSince it takes time for the agent to get the response and set the\ncookie and, as well, the time into a request that a LoadBalancer is\ninitialized varies by many seconds (cookies loaded from the start),\ngive the cookie a much lower TTL than the DB positions in the stash.\n\nThis avoids having to wait for a position with a given cpPosIndex\nvalue, when the position already expired from the store, which is\na waste of time.\n\nAlso include the timestamp in \"cpPosIndex\" cookies to implement\nlogical expiration in case clients do not expire them correctly.\n\nBug: T194403\nBug: T190082\nChange-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector",
"hash": "52af356cad3799ebec3826e1e4743d76a114da3e",
"date": "2018-05-18T20:43:05"
},
{
"message": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges\n\nJust like 082ed053b6 fixed pre-commit callback errors when new instances\nof LoadBalancer are made during that step, do the same for post-commit\ncallbacks.\n\nBug: T194308\nChange-Id: Ie79e0f22b3aced425cf067d0df6b67e368223e6c\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges",
"hash": "86af2ef383b6fc9c4032dad769c00e672922d530",
"date": "2018-05-10T04:26:41"
},
{
"message": "rdbms: fix LBFactory::commitAll() round handling\n\nThis avoids \"Transaction round stage must be approved (not cursory)\".\n\nBug: T194308\nChange-Id: I9dbfe9cede02b1b1904c1d5e5a9802306c2492a2\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix LBFactory::commitAll() round handling",
"hash": "205cfc185446ad9dd355d3a57f4ee60d0dc1de57",
"date": "2018-05-09T21:51:18"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
},
{
"message": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges\n\nBug: T193668\nChange-Id: I50890ef17ea72481a14c4abcd93ae58b93f15d28\n",
"bugs": [
"T193668"
],
"subject": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges",
"hash": "a79b9737f1b05171af60a3127d6c628ea6a16a96",
"date": "2018-05-03T22:11:38"
},
{
"message": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time\n\nOnce getMain() was called in setSchemaAliases(), the ChronologyProtector\nwas initialized and the setRequestInfo() call in Setup.php had no effect.\nOnly the request values read in LBFactory::__construct() were used, which\nreflect $_GET but not cookie values.\n\nUse the $wgDBtype variable to avoid this and add an exception when that\nsort of thing happens.\n\nFurther defer instantiation of ChronologyProtector so that methods like\nILBFactory::getMainLB() do not trigger construction.\n\nBug: T192611\nChange-Id: I735d3ade5cd12a5d609f4dae19ac88fec4b18b51\n",
"bugs": [
"T192611"
],
"subject": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time",
"hash": "628a3a9b267620914701a2a0a17bad8ab2e56498",
"date": "2018-04-23T15:44:02"
},
{
"message": "Normalize and lower the default DB lag wait timeout\n\nBug: T190960\nChange-Id: I49aca118583b20314e6bf82f196f3413571f5bd9\n",
"bugs": [
"T190960"
],
"subject": "Normalize and lower the default DB lag wait timeout",
"hash": "7f24eb5d789966ec7c18c0a612fc9089229d4279",
"date": "2018-03-28T20:49:25"
}
]
},
"includes/libs/rdbms/database/DatabaseMysqlBase.php": {
"File": "includes/libs/rdbms/database/DatabaseMysqlBase.php",
"TicketCount": 8,
"CommitCount": 10,
"Tickets": [
"T186764",
"T190960",
"T201900",
"T193565",
"T212284",
"T218388",
"T247865",
"T251457"
],
"Commits": [
{
"message": "rdbms: don't treat lock() as a write operation\n\nI8ac4bc4d6 caused lock() to be counted as a write operation. Since\nacquiring a lock may by design take a long time (e.g. PageEditStash),\nthis was causing transactions to be flagged as problematic due to the\nlarge amount of time spent in this \"write query\".\n\nBug: T251457\nChange-Id: Ic54d6c78b43a463c8f6edc6d65baa671a39ee39c\n",
"bugs": [
"T251457"
],
"subject": "rdbms: don't treat lock() as a write operation",
"hash": "4b5984bcca66424f58abda0569d9adc842725d77",
"date": "2020-05-01T16:55:25"
},
{
"message": "rdbms: Don't silence errors in DatabaseMysqlBase::serverIsReadOnly()\n\nIt's highly unlikely that that query would ever error, but if it does\n(and apparently it does sometimes somehow) we need to either handle the\nerror or let it be raised.\n\nSince there doesn't seem to be any particularly sane thing to assume if\nthere is an error, let's go with \"let it be raised\".\n\nBug: T247865\nChange-Id: I6c0e08df90eb46953ba5eb6b5a3e8c6f52929564\n",
"bugs": [
"T247865"
],
"subject": "rdbms: Don't silence errors in DatabaseMysqlBase::serverIsReadOnly()",
"hash": "1ba4d0ccd4bb86bf836cb07c9917f5dd31df2a53",
"date": "2020-03-25T19:09:06"
},
{
"message": "rdbms: add Database::executeQuery() method for internal use\n\nThis shares reconnection and retry logic but lacks some of the\nrestrictions applied to queries that go through the public query()\ninterface.\n\nUse this in a few places such as doSelectDomain() for mysql/mssql.\n\nBug: T212284\nChange-Id: Ie7341a0e6c4149fc375cc357877486efe9e56eb9\n",
"bugs": [
"T212284"
],
"subject": "rdbms: add Database::executeQuery() method for internal use",
"hash": "2866c9b7d4295314c3138166cdb671de6dbcb3ab",
"date": "2019-06-11T14:00:41"
},
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql\n\nThis should give better error messages on failure.\n\nBug: T212284\nChange-Id: I55260c6e3db1770f01e3d6a6a363b917a57265be\n",
"bugs": [
"T212284"
],
"subject": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql",
"hash": "321640b117b775ba7feb26281922bfd7833b0618",
"date": "2019-03-26T18:50:28"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "Fix guarding of MySQL's numRows()\n\nIt can be true for successful write queries, not just false.\n\nf3a197e49b785 introduced a caller which calls numRows() on the return\nvalue of CREATE TEMPORARY TABLE queries, and it improved guarding of\nnumRows() in the PostgreSQL and SQLite cases accordingly, but it\nneglected MySQL.\n\nBug: T201900\nChange-Id: I8ae754a2518d9e47b093c31c20d98daaba913513\n",
"bugs": [
"T201900"
],
"subject": "Fix guarding of MySQL's numRows()",
"hash": "9fbbce857ccb64e64615b7bfb97c62a66f96c9ca",
"date": "2018-10-08T07:27:05"
},
{
"message": "rdbms: Avoid numRows() warnings for mysqli after table creation\n\nBug: T201900\nChange-Id: Ie86a7b8e680d79ad3f9be6ca4ec260b0589e5d0e\n",
"bugs": [
"T201900"
],
"subject": "rdbms: Avoid numRows() warnings for mysqli after table creation",
"hash": "d9fdc4098b5ca94397293f5140a472533c990cb7",
"date": "2018-08-14T18:52:27"
},
{
"message": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots\n\nBug: T190960\nChange-Id: I57dd8d3d0ca96d6fb2f9e83f062f29b1d53224dd\n",
"bugs": [
"T190960"
],
"subject": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots",
"hash": "24353a60d2c860cd24593d721b45291782a8489f",
"date": "2018-03-31T01:39:57"
},
{
"message": "rdbms: make DOMAIN_ANY ignore bogus MySQL DB names in config\n\nThis regressed in 14ee3f2, trying to select the server config\n\"dbname\" value as the database on connect. If that config is\nbogus, then the connection attempt would fail. Instead, always\npass null to the driver's connection function if the DB domain\ndoesn't matter.\n\nThis affected WMF sites during lag checks, on account of the\n\"serverTemplate\" value in $wgLBFactoryConf always using the local\nwiki domain (whether the cluster had such a database or not).\n\nUse strlen() as mysqli sees null and \"\" as the same for $dbname.\n\nBug: T186764\nChange-Id: I6699d17c0125a08415046211fee7906bbaf2c366\n",
"bugs": [
"T186764"
],
"subject": "rdbms: make DOMAIN_ANY ignore bogus MySQL DB names in config",
"hash": "e59b0556f4531b87d80113fcefc845ed9d6e3500",
"date": "2018-02-09T06:43:24"
}
]
},
"includes/libs/rdbms/database/IDatabase.php": {
"File": "includes/libs/rdbms/database/IDatabase.php",
"TicketCount": 8,
"CommitCount": 9,
"Tickets": [
"T191916",
"T193668",
"T201900",
"T193565",
"T218388",
"T212284",
"T191668",
"T239877"
],
"Commits": [
{
"message": "rdbms: Have Database::makeWhereFrom2d assume $subKey is string-based\n\nUntil I70473280, integer literals were always quoted as strings, because\nthe databases we support all have no problem with casting\nstring-literals for comparisons and such.\n\nBut it turned out that gave MySQL/MariaDB's planner problems in some\nqueries, so we changed it to not quote actual PHP integers.\n\nBut then we run into the fact that PHP associative arrays don't preserve\nthe types of keys, it converts integer-like strings into actual\nintegers. And when those are passed to the DB unquoted for comparison\nwith a string-typed column, MySQL/MariaDB's planner has problems again\nwhile PostgreSQL simply throws an error. Sigh.\n\nThis patch adjusts Database::makeWhereFrom2d to assume that the $subKey\ncolumn is going to need all values quoted, as is the case for all\ncallers in MediaWiki.\n\nBug: T239877\nChange-Id: I69c125e8ab9e4d463eab35c6833aabdc436d7674\n",
"bugs": [
"T239877"
],
"subject": "rdbms: Have Database::makeWhereFrom2d assume $subKey is string-based",
"hash": "9a1ecf2efdb7fa31a8be85ef18c80299da071ed3",
"date": "2019-12-06T11:55:31"
},
{
"message": "Use varargs for IDatabase::buildLike\n\nBug: T191668\nChange-Id: Id66af5c3de3e5bc5c2909316e1984eae95a0012a\n",
"bugs": [
"T191668"
],
"subject": "Use varargs for IDatabase::buildLike",
"hash": "e947a2691d4f737c32cf5084d8bad1028f32c5a2",
"date": "2019-10-04T09:52:42"
},
{
"message": "rdbms: Document varargs for IDatabase::buildLike\n\nThis is needed in order for Phan not to consider calls to\nIDatabase::buildLike as invalid. Interestingly, it does not\nconsider calls to Database::buildLike invalid.\n\nBug: T191668\nChange-Id: I0e027f5ec66d20b1d11e3441086001f6a751e1f5\n",
"bugs": [
"T191668"
],
"subject": "rdbms: Document varargs for IDatabase::buildLike",
"hash": "725a59f0c70e37de6de8e26f91b0101839d37ff9",
"date": "2019-06-18T14:11:15"
},
{
"message": "rdbms: add Database::executeQuery() method for internal use\n\nThis shares reconnection and retry logic but lacks some of the\nrestrictions applied to queries that go through the public query()\ninterface.\n\nUse this in a few places such as doSelectDomain() for mysql/mssql.\n\nBug: T212284\nChange-Id: Ie7341a0e6c4149fc375cc357877486efe9e56eb9\n",
"bugs": [
"T212284"
],
"subject": "rdbms: add Database::executeQuery() method for internal use",
"hash": "2866c9b7d4295314c3138166cdb671de6dbcb3ab",
"date": "2019-06-11T14:00:41"
},
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "Fix guarding of MySQL's numRows()\n\nIt can be true for successful write queries, not just false.\n\nf3a197e49b785 introduced a caller which calls numRows() on the return\nvalue of CREATE TEMPORARY TABLE queries, and it improved guarding of\nnumRows() in the PostgreSQL and SQLite cases accordingly, but it\nneglected MySQL.\n\nBug: T201900\nChange-Id: I8ae754a2518d9e47b093c31c20d98daaba913513\n",
"bugs": [
"T201900"
],
"subject": "Fix guarding of MySQL's numRows()",
"hash": "9fbbce857ccb64e64615b7bfb97c62a66f96c9ca",
"date": "2018-10-08T07:27:05"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
},
{
"message": "rdbms: fix transaction flushing in Database::close\n\nUse the right IDatabase constants for the $flush parameter to\nthe commit() and rollback() calls.\n\nThis fixes a regression from 3975e04cf4d1.\n\nAlso validate the mode/flush parameters to begin() and commit().\n\nBug: T191916\nChange-Id: I0992f9a87f2add303ed309efcc1adb781baecfdc\n",
"bugs": [
"T191916"
],
"subject": "rdbms: fix transaction flushing in Database::close",
"hash": "f9d10f9e0321151914159773edca3b8eb17dfa54",
"date": "2018-04-11T05:31:31"
}
]
},
"includes/Storage/RevisionStore.php": {
"File": "includes/Storage/RevisionStore.php",
"TicketCount": 13,
"CommitCount": 9,
"Tickets": {
"0": "T184559",
"1": "T183548",
"2": "T184689",
"3": "T184749",
"5": "T183716",
"6": "T183717",
"7": "T183550",
"8": "T183505",
"10": "T184595",
"11": "T187942",
"12": "T201194",
"13": "T195692",
"14": "T205675"
},
"Commits": [
{
"message": "Avoid fatal when finding no base revision for a null revision.\n\nBug: T205675\nChange-Id: Iae67649a1be9597086033ad34d9d00556ba35730\n",
"bugs": [
"T205675"
],
"subject": "Avoid fatal when finding no base revision for a null revision.",
"hash": "539cb2816aecee668c41b418552d75cc6329e0ab",
"date": "2018-10-04T17:54:31"
},
{
"message": "Use \"Unknown user\" instead of an empty user name.\n\nThis changes the user name to \"User unknown\" when constructing a RevisionRecord\nfrom a database row that has an empty ar_user_text resp rev_user_text field.\n\nThis may cause \"User unknown\" to be written to the database, if the\nRevisionRecord is used as the basis for a new revision that is being created,\nparticularly during undeletion. Since \"Unknown user\" is listed in\n$wgReservedUsernames, this should never lead to conflicts with actual user\nnames.\n\nIt is assumed that empty ar_user_text and rev_user_text fields will be\nfixed during migration to the new actor based database schema.\n\nBug: T195692\nChange-Id: I506c513b019778d83741e47f0d11093f5ab67a54\n",
"bugs": [
"T195692"
],
"subject": "Use \"Unknown user\" instead of an empty user name.",
"hash": "d6b989b5506c4afc5f4da47d5cbf394e278738c2",
"date": "2018-09-18T17:53:00"
},
{
"message": "Add safeguard against loading content across wikis.\n\nThe new MCR schema enables cross-wiki loading of page content,\nbut this mechanism doesn't work as long as the new code is reading from\nthe old schema. This is what caused T201194.\n\nBug: T201194\nChange-Id: I58af7a9e02780c55cd8fab20f19be36a0fa804da\n",
"bugs": [
"T201194"
],
"subject": "Add safeguard against loading content across wikis.",
"hash": "25e9a28fd035bb5dccc72d2ebb9413f2d0fb5a39",
"date": "2018-08-06T13:46:24"
},
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
},
{
"message": "[MCR] RevisionStore::getTitle final logged fallback to master\n\nThere have been many issues with RevisionStore and titles due\nto code paths that already know the title for a Revision not\npassing the title into Revision in various ways or not passing\nin the correct queryFlags.\nThe getTitle method now has a further fallback using Title::newFromID\nand Title::GAID_FOR_UPDATE if not already attempted.\n\nBug: T183548\nBug: T183716\nBug: T183717\nBug: T183550\nBug: T183505\nBug: T184559\nBug: T184595\nChange-Id: I6cf13e6baba354b08533a6151bbbc88a317be9d6\n",
"bugs": [
"T183548",
"T183716",
"T183717",
"T183550",
"T183505",
"T184559",
"T184595"
],
"subject": "[MCR] RevisionStore::getTitle final logged fallback to master",
"hash": "9a509a1792f1ac5c8aa3e16878c616b3be00110e",
"date": "2018-01-29T13:44:54"
},
{
"message": "Document expandBlob behavior when no flags are given.\n\nBug: T184749\nChange-Id: I5f1f029d928a7bc25877b0eae9f3822ec321b24a\n",
"bugs": [
"T184749"
],
"subject": "Document expandBlob behavior when no flags are given.",
"hash": "cb94d35c79266c1e6f05a2b8bae5b836d673c296",
"date": "2018-01-14T08:59:42"
},
{
"message": "RevisionStore, fix loadSlotContent with no $blobFlags\n\nThis includes tests that were previously created in:\nI6dcfc0497bfce6605fa5517c9f91faf7131f4334\n\nBug: T184749\nChange-Id: Ieb02ac593fc6b42af1692d03d9d578a76417eb54\n",
"bugs": [
"T184749"
],
"subject": "RevisionStore, fix loadSlotContent with no $blobFlags",
"hash": "90ca759f1527338d0745e8439973da01ff528c12",
"date": "2018-01-12T14:48:07"
},
{
"message": "Make Revision::__construct work with bad page ID\n\nFor backwards-copatibility, we need to be able to construct a Revision\nobject even for bad page IDs.\n\nBug: T184689\nChange-Id: I18c823d7b72504447982364d581b34e98924b67f\n",
"bugs": [
"T184689"
],
"subject": "Make Revision::__construct work with bad page ID",
"hash": "4589f4d78183fcfde2009e217ea8524102c95a31",
"date": "2018-01-11T16:03:48"
},
{
"message": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"\n\nThis is a partial revert of a revert that reverted a fix believed to\nhave had its underlying issue fixed in:\nhttps://gerrit.wikimedia.org/r/#/c/400577/\n\nThe compat layer (Revision), now passes a Title object into the\nRevisionStore, and this title is used to construct the Record and\nalso any new Revision objects.\n\nBug: T184559\nBug: T183548\nChange-Id: Id073265c173f60aa8c456550fdb4bb5196013be8\n",
"bugs": [
"T184559",
"T183548"
],
"subject": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"",
"hash": "3e2fdb71ed8dab35934ce289d5e559153326028c",
"date": "2018-01-10T17:05:53"
}
]
},
"includes/diff/DifferenceEngine.php": {
"File": "includes/diff/DifferenceEngine.php",
"TicketCount": 8,
"CommitCount": 8,
"Tickets": [
"T186163",
"T202454",
"T201218",
"T236320",
"T237709",
"T139012",
"T239772",
"T256298"
],
"Commits": [
{
"message": "DifferenceEngine: Don't pass false to DiffTools hook\n\nBug: T256298\nChange-Id: I2aeb96a63c30ed226d8d2f05a78baf37c50c6f4e\n",
"bugs": [
"T256298"
],
"subject": "DifferenceEngine: Don't pass false to DiffTools hook",
"hash": "1359f1533c5afb3ea0b7edb882d784a045e80c9a",
"date": "2020-06-25T01:20:03"
},
{
"message": "DifferenceEngine: Call DiffTools hook with correct parameter order\n\nmNewRevisionRecord is meant to be the first parameter, and\nmOldRevisionRecord the third, not the other way around\n\nBug: T256298\nChange-Id: I030a9834635a85ad464e5826926c26b41a9c31b0\n",
"bugs": [
"T256298"
],
"subject": "DifferenceEngine: Call DiffTools hook with correct parameter order",
"hash": "72010ca8186b8b8cac26bbf1ae29b5cdba1d2306",
"date": "2020-06-24T22:05:55"
},
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
},
{
"message": "DifferenceEngine: Don't try counting revs between deleted revs with different page IDs\n\nRecent changes have caused the method for counting the number of\nrevisions in between the two sides of a diff to throw an error when the\nrevisions have different page IDs.\n\nThis blows up some cases of trying to get a difference between deleted\nrevisions, as the page IDs on the deleted revisions might not match\nand/or might not match the current ID returned by the Title object.\n\nBug: T237709\nChange-Id: I415f7ebb57fa4e879396b0db0e3a79edc2880be5\n",
"bugs": [
"T237709"
],
"subject": "DifferenceEngine: Don't try counting revs between deleted revs with different page IDs",
"hash": "a8cbf85d1ded885b7d4a2891d99732d5bd8f3024",
"date": "2019-11-20T18:21:46"
},
{
"message": "Don't calculate amount of inbetween revisions for MCR undo\n\nIb404d29a662de7736b50a1d07380f651d332ad6b introduced a new sanity\ncheck in RevisionStore::countRevisionsBetween to check whether\nthe ids are not null (otherwise, they're considered unsaved)\n\nFor MCR undo actions, diffs could be generated for unsaved revisions\n(where one revision is a not yet pre-existing combination where\nsome slots remain unchanged, but some slot gets undone)\n\nThe existing if statement here was already trying to guard against\nunsaved revisions, but was doing so in a way different from the\nnew checks in RevisionStore::countRevisionsBetween - even though this\nwas documented to only diff saved revisions, it wasn't checking\nthoroughly enough, and an exception would crop up later.\n\nBug: T236320\nChange-Id: If34766675f50b67d8b0788a6eab07d8d4e6fe183\n",
"bugs": [
"T236320"
],
"subject": "Don't calculate amount of inbetween revisions for MCR undo",
"hash": "7fe98fb56ccacafbb8a00347b6a3f7bb3168cf13",
"date": "2019-11-04T10:17:37"
},
{
"message": "Fix DifferenceEngine revision loading logic\n\nBug: T201218\nBug: T202454\nChange-Id: I867900190cb45b983e89769c7fc0f965e2651918\n",
"bugs": [
"T201218",
"T202454"
],
"subject": "Fix DifferenceEngine revision loading logic",
"hash": "b7ed112908191f28077212a0ee2a83ceeb5a3763",
"date": "2018-08-24T11:20:07"
},
{
"message": "DifferenceEngine: use a fake title when there's no real title\n\nBug: T202454\nChange-Id: I9ee90acc833de93b5fa2579b5debc9637c9e9c5b\n",
"bugs": [
"T202454"
],
"subject": "DifferenceEngine: use a fake title when there's no real title",
"hash": "4a09b9ed528477517b11c22bdfe1eb27d3d06c9e",
"date": "2018-08-23T14:23:05"
},
{
"message": "Add missing PHPDoc block to DifferenceEngine::getParserOutput\n\nBug: T186163\nChange-Id: Ifde6f8e458d90b1ec250dc4d587cd428717fe509\n",
"bugs": [
"T186163"
],
"subject": "Add missing PHPDoc block to DifferenceEngine::getParserOutput",
"hash": "d77dfda69e491d536bd33fa098d7539db361086a",
"date": "2018-02-01T11:49:57"
}
]
},
"includes/MovePage.php": {
"File": "includes/MovePage.php",
"TicketCount": 8,
"CommitCount": 8,
"Tickets": [
"T205675",
"T213168",
"T210739",
"T248789",
"T255608",
"T250023",
"T235589",
"T265779"
],
"Commits": [
{
"message": "MovePage: Handle target page deletion failure gracefully.\n\nThrowing exception is not appropriate here and will prevent proper usage\nof ArticleDeleteHook, which is ostensibly meant to allow extensions to\ncleanly abort the deletion if they need to.\n\nWhen the target page deletion fails, instead of blowing everything up\nwith exception, just cancel the open atomic section and return the\nuser-friendly deletion error back to the user.\n\nBug: T265779\nChange-Id: I66b4458a103f1274715e22a784344c55a62a59ac\n",
"bugs": [
"T265779"
],
"subject": "MovePage: Handle target page deletion failure gracefully.",
"hash": "db20453b0ae3c97860a68a6673b84d3ad215639d",
"date": "2020-10-21T01:07:36"
},
{
"message": "Add extra details to error log to debug why null revision creation is failing\n\nBug: T235589\nChange-Id: Icae5cdc5b010e23d4956269f64069c0271217af7\n",
"bugs": [
"T235589"
],
"subject": "Add extra details to error log to debug why null revision creation is failing",
"hash": "a6284cc69c9e995798732073a0f6744c54ca5464",
"date": "2020-08-17T01:14:56"
},
{
"message": "Add PageMoveCompleting hook, to replace TitleMoveCompleting\n\nWe intially thought we wouldn't need this and would only need\nPageMoveComplete, but it turns out Flow does need it.\n\nBug: T250023\nBug: T255608\nChange-Id: I8e7308541d2fe6d02b9dad63e1c86c89f6e7cf53\n",
"bugs": [
"T250023",
"T255608"
],
"subject": "Add PageMoveCompleting hook, to replace TitleMoveCompleting",
"hash": "c8b9d849fc2ef302ec2adf5ec3bdb647d176c2ab",
"date": "2020-06-17T05:27:28"
},
{
"message": "Revert \"Hard deprecate the `TitleMoveCompleting` hook\"\n\nThis reverts commit 8f7f133ccfaaaa45b7c00835d3d64884bde0c5c3.\n\nPer I8cdef229bf0a3, we still need this in Flow for now, to fix a UBN.\n\nBug: T255608\nChange-Id: Id414c98161b9f560b14d4eaec8aedeec4659df27\n",
"bugs": [
"T255608"
],
"subject": "Revert \"Hard deprecate the `TitleMoveCompleting` hook\"",
"hash": "6fc783ad2b3ef92d4a160dc97e7896fea3904247",
"date": "2020-06-17T04:09:34"
},
{
"message": "MovePage: Use correct Title when creating the null revision\n\nPrior to I4a5fe41fe, the call to Revision::newNullRevision() would load\nthe Title from the database and use that to create the null revision.\n\nI4a5fe41fe, in removing uses of the deprecated Revision, changed that to\ncall RevisionStore::newNullRevision() directly. Instead of loading the\nTitle, that takes one as a parameter. $this->oldTitle was being passed,\nwhich at this point in the execution contains the correct page_id but\nthe *old* page_namespace/page_title. Passing $nt would also not work,\nsince that contains the target page's old page_id (likely 0).\n\nThe solution is to reset the ID in $nt (so it contains both the correct\nID and name), so we can then pass it to RevisionStore::newNullRevision().\n\nBug: T248789\nChange-Id: I8e0ae616006c0cebde60cfa53c0a842bd2cc1545\n",
"bugs": [
"T248789"
],
"subject": "MovePage: Use correct Title when creating the null revision",
"hash": "3309ead776b092a824793d0bb10e2cec5e584989",
"date": "2020-04-02T15:59:08"
},
{
"message": "Fix error reporting in MovePage\n\nBug: T210739\nChange-Id: I8f6c9647ee949b33fd4daeae6aed6b94bb1988aa\n",
"bugs": [
"T210739"
],
"subject": "Fix error reporting in MovePage",
"hash": "e1d2c92ac69b6537866c742d8e9006f98d0e82e8",
"date": "2019-01-17T02:14:52"
},
{
"message": "Fix missing ATOMIC_CANCELABLE in MovePage::move()\n\nFollow-up to I4aaa8af50d684de.\n\nBug: T213168\nChange-Id: I0566b37117b6c69d4043e77e6368bf79fa84e325\n",
"bugs": [
"T213168"
],
"subject": "Fix missing ATOMIC_CANCELABLE in MovePage::move()",
"hash": "eda915cb7af1e63c0678696094e8ad99d990918c",
"date": "2019-01-08T19:37:27"
},
{
"message": "Avoid fatal when finding no base revision for a null revision.\n\nBug: T205675\nChange-Id: Iae67649a1be9597086033ad34d9d00556ba35730\n",
"bugs": [
"T205675"
],
"subject": "Avoid fatal when finding no base revision for a null revision.",
"hash": "539cb2816aecee668c41b418552d75cc6329e0ab",
"date": "2018-10-04T17:54:31"
}
]
},
"includes/Linker.php": {
"File": "includes/Linker.php",
"TicketCount": 8,
"CommitCount": 8,
"Tickets": [
"T222529",
"T222857",
"T224050",
"T222628",
"T224095",
"T226448",
"T227656",
"T200055"
],
"Commits": [
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
},
{
"message": "Linker: Fix incorrect test added in Ib9816d8b\n\nThe test was intended to check for whether we have a user ID *or* a user\nname. Instead, it's checking if we have a user ID *and* a user name. And\nit also failed to consider User:0.\n\nBug: T227656\nChange-Id: Ia1b5c4a6ae028513b73a65cd2c885459327d29c3\n",
"bugs": [
"T227656"
],
"subject": "Linker: Fix incorrect test added in Ib9816d8b",
"hash": "1ef6deed0062415dff5671d0a43a396599d1028b",
"date": "2019-07-10T17:30:03"
},
{
"message": "Fix LocalFile::move\n\nFixes a wfFindFile/wfLocalFile mixup in I9437494d.\n\nAlso restore the original behavior in Linker::makeBrokenImageLinkObj\nfor paranoia - findFile has a local cache so calling it and then\ndiscarding the results is not completely a noop.\n\nBug: T226448\nChange-Id: Ibb9d6f6383eb96ba27e0edd60423552e5cea4688\n",
"bugs": [
"T226448"
],
"subject": "Fix LocalFile::move",
"hash": "b71610c069c69b2a991cb43b059642aca1882f4a",
"date": "2019-06-25T16:38:13"
},
{
"message": "Make userLink() not fail too hard on false and null.\n\nThis works around an issue in Flow, which sometimes passes false\nfor a user name.\n\nBug: T224095\nChange-Id: I14dc52f7199012dc35605f3170b06eb1719165a7\n",
"bugs": [
"T224095"
],
"subject": "Make userLink() not fail too hard on false and null.",
"hash": "e9e50ad014e64113f0f7ba1f36fe7cc823959da9",
"date": "2019-06-05T12:51:57"
},
{
"message": "Fix empty auto-summaries triggering a fatal error.\n\nAka: Streamline Linker::formatAutocomments() and add tests\n\nThis uses the \"streamlining\" for the code proposed by Thiemo\nin I38edc1ad7720. I have squashed the two commits, so it now\nhas his code in Linker, but still has my tests as well as his.\n\nThiemo wrote on his patch:\nThis also changes the output in case there is no fragment to link to.\nBefore an empty `/* */` in a summary this would have created a link to\nthe page. I would like to argue this is not what a user expects.\n\nBug: T222628\nChange-Id: I05408ede0e20dfd976f4057fc5baab461d2ef769\n",
"bugs": [
"T222628"
],
"subject": "Fix empty auto-summaries triggering a fatal error.",
"hash": "0c4410a6233da6ee3e844f60b06a68f78a27ebd3",
"date": "2019-05-30T14:34:34"
},
{
"message": "Switch empty username logging from warning to debug.\n\nChange ddd1d4b9203a added logging when an empty username is\npassed to various Linker.php functions. The logging revealed\nthis occuring on live, but it is occuring frequently and\ncausing distracting log noise. Switch the logging type from\nwarning to debug while the root cause is investigated.\n\nBug: T224050\nChange-Id: I93826e486951e992afdf778f446792d3c209996a\n",
"bugs": [
"T224050"
],
"subject": "Switch empty username logging from warning to debug.",
"hash": "1449fa776966e5ce7373ca257155f46de079fee3",
"date": "2019-05-25T02:14:42"
},
{
"message": "Linker: Fix fatal error for \"/* */\" in an edit summary\n\nFollows-up b6e1e99bec8, which switched the method from Title::makeTitleSafe\nto TitleValue. The latter throws fatal on non-string $fragment.\n\nTitle::makeTitleSafe, on the other hand, uses makeName(), which casts\n$fragment to a string, and ignores if it ends up as empty string\n(boolean false becomes empty string and thus did \"the right thing\").\n\nBug: T222857\nChange-Id: Iecc2140fabd31ef0f193740c7fab0fc698c38e51\n",
"bugs": [
"T222857"
],
"subject": "Linker: Fix fatal error for \"/* */\" in an edit summary",
"hash": "63cf83e089a962cec3603fac209b33679430e075",
"date": "2019-05-19T11:20:34"
},
{
"message": "Log warning and show error on empty username\n\nHistorically it seems that if Linker::userLink or friends were passed an\nempty username (probably due to an incorrect database entry), they would\nproduce bogus output, e.g., an <a> with no contents or a link to the\ninvalid page \"User_talk:\" or similar.\n\nIn b6e1e99bec8d we replaced an occurrence of Title::makeTitle() (no\nsafety checks!) with creating a TitleValue, which asserts in its\nconstructor that the title text is not empty. This made such pages fail\nan assertion and stop displaying at all.\n\nNow there's a proper check for the error. Such cases will log a\nproduction error and return \"(no username available)\".\n\nBug: T222529\nChange-Id: Id65bdf9666b0d16e5553b8f38c7cf8fce2e37a25\n",
"bugs": [
"T222529"
],
"subject": "Log warning and show error on empty username",
"hash": "ddd1d4b9203aa3b516afbb099b6819a42df6faec",
"date": "2019-05-06T19:44:23"
}
]
},
"includes/parser/Parser.php": {
"File": "includes/parser/Parser.php",
"TicketCount": 8,
"CommitCount": 8,
"Tickets": [
"T184123",
"T187833",
"T203583",
"T208000",
"T209236",
"T220854",
"T251952",
"T264257"
],
"Commits": [
{
"message": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"\n\nThis reverts commit deacee9088948e074722af0148000ad9455b07df.\n\nBug: T264257\nChange-Id: Ie68d8081a42e7d8103e287b6d6857a30dc522f75\n",
"bugs": [
"T264257"
],
"subject": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"",
"hash": "3254e41a4cc0bcf85b5b0de3e3d237d4ebd7a987",
"date": "2020-10-01T18:03:41"
},
{
"message": "Fix impedance mismatch with Parser::getRevisionRecordObject()\n\nParser::getRevisionRecordObject() returns `null` if the revision is\nmissing, but it invokes ParserOptions::getCurrentRevisionRecordCallback()\n(ie, Parser::statelessFetchRevisionRecord() by default) which returns\n`false` as its error condition.\n\nThis reverts commit ae74a29af3116eb73a4cb775736b3acee0a65c59, and instead\nfixes the bug at its root.\n\nBug: T251952\nChange-Id: If36b35391f7833a1aded8b5a0de706d44187d423\n",
"bugs": [
"T251952"
],
"subject": "Fix impedance mismatch with Parser::getRevisionRecordObject()",
"hash": "2712cb8330d8e9b4755d89b6909c5589ee6da8d3",
"date": "2020-05-06T16:44:05"
},
{
"message": "Add vary-revision-exist flag to handle {{REVISIONID}} and parser cache\n\nFollow-up to c537eb186862b3\n\nBug: T220854\nChange-Id: Idc19cc29764a38e3671ca1dea158bd5fb46eaf4d\n",
"bugs": [
"T220854"
],
"subject": "Add vary-revision-exist flag to handle {{REVISIONID}} and parser cache",
"hash": "5e6d9340cb81d0040cbeb2371a83e884db90cd68",
"date": "2019-04-13T00:20:50"
},
{
"message": "Protect legacy URL parameter syntax in link and alt options\n\nHTML doesn't allow certain semicolon-less HTML entities in attribute\nvalues to avoid breaking legacy markup like:\n <a href=\"http://example.com?foo&param=bar\">...</a>\n(Note that the & in that URL is not properly entity-escaped as `&amp;`.)\n\nUnlike wikitext, HTML generally allows semicolon-less legacy entities\nin text.\n\nOur alt and link option processing shove text through\nSanitizer::stripAllTags, which does entity decoding including these\nlegacy semicolon-less entities. Wikitext doesn't allow semicolon-less\nentities, so escape & characters where appropriate to protect alt/link\noptions and avoid breaking URLs.\n\nThis was a \"regression\" in how alt options were handled starting in\nddb4913f53624c8ee0a2a91bd44bf750e378569d when we switched to using\nRemex for Sanitizer::stripAllTags -- semicolon-less entities (previously\ninvalid in wikitext) were now being decoded when stripAllTags was\ncalled on alt text. This change became a problem when\nad80f0bca27c2b0905b2b137977586bfab80db34 sent link option text through\nSanitizer::stripAllTags (with the new semicolon-less entity decode)\ninstead of PHP's strip_tags (which, in addition to its other faults,\ndoesn't do entity decode at all). This suddenly started decoding\n\"non-wikitext\" entities like `&para` inside URLs, breaking links.\nFiled T210437 as a follow-up to consider changing the behavior\nof Sanitizer::stripAllTags() globally to prevent it from decoding\nsemicolon-less entities for all callers.\n\nBug: T209236\nChange-Id: I5925e110e335d83eafa9de935c4e06806322f4a9\n",
"bugs": [
"T209236"
],
"subject": "Protect legacy URL parameter syntax in link and alt options",
"hash": "f87898b4885c2def58cab81e7481a5e96569f288",
"date": "2018-11-27T15:12:05"
},
{
"message": "Fix use of non-existent variable Parser::$config\n\nFix bug from Ib4394f370cb561ccf195338a1c2e9e465dcb3dc3\n\nAdd test.\n\nBug: T208000\nChange-Id: Ia81cca1b64afef2af3cb8dff19719a7f0de9d306\n",
"bugs": [
"T208000"
],
"subject": "Fix use of non-existent variable Parser::$config",
"hash": "a6a017cea454458216aca5f06251b2e9b86bd9e3",
"date": "2018-10-25T23:27:55"
},
{
"message": "Provide new, unsaved revision to PST to fix magic words.\n\nThis injects the new, unsaved RevisionRecord object into the Parser used\nfor Pre-Save Transform, and sets the user and timestamp on that revision,\nto allow {{subst:REVISIONUSER}} and {{subst:REVISIONTIMESTAMP}} to function.\n\nBug: T203583\nChange-Id: I31a97d0168ac22346b2dad6b88bf7f6f8a0dd9d0\n",
"bugs": [
"T203583"
],
"subject": "Provide new, unsaved revision to PST to fix magic words.",
"hash": "465954aa23cec76ca47e51a58ff342f46fbbdcab",
"date": "2018-09-06T16:33:44"
},
{
"message": "Limit total expansion size in StripState and improve limit handling\n\n* Add a new limit to the parser which limits the size of the output\n generated by StripState. The relevant bug shows exponential blowup in\n output size.\n* Remove the $prefix parameter from the StripState constructor. Used by\n no Gerrit-hosted extensions, hard-deprecated since 1.26.\n* Convert the existing unstrip recursion depth limit to a normal parser\n limit with limit report row, warning and tracking category. Provide\n the same features in the new limit.\n* Add an optional $parser parameter to the StripState constructor so\n that warnings and tracking categories can be added.\n\nBug: T187833\nChange-Id: Ie5f6081177610dc7830de4a0a40705c0c8cb82f1\n",
"bugs": [
"T187833"
],
"subject": "Limit total expansion size in StripState and improve limit handling",
"hash": "3dfda8c1552a6d43eaf85e3e38427833114ddf06",
"date": "2018-03-05T05:16:04"
},
{
"message": "Follow-up 6f07389ef2eb: fix variable name\n\nCaused Notice: Undefined variable: text\n\nBug: T184123\nChange-Id: I950a02134b145a2928af33995ca37a6965f265e4\n",
"bugs": [
"T184123"
],
"subject": "Follow-up 6f07389ef2eb: fix variable name",
"hash": "7f68220db69e99b374557499ccebaa710c89c750",
"date": "2018-01-04T21:31:41"
}
]
},
"includes/ServiceWiring.php": {
"File": "includes/ServiceWiring.php",
"TicketCount": 15,
"CommitCount": 8,
"Tickets": [
"T183548",
"T183716",
"T183717",
"T183550",
"T183505",
"T184559",
"T184595",
"T192611",
"T202483",
"T231183",
"T231200",
"T231198",
"T231220",
"T201405",
"T249045"
],
"Commits": [
{
"message": "UserNameUtils: use ITextFormatter instead of MessageLocalizer\n\nBug: T249045\nChange-Id: Ica1e1e4788d4b9f9dfcf9f8c8b4136147d92b32e\n",
"bugs": [
"T249045"
],
"subject": "UserNameUtils: use ITextFormatter instead of MessageLocalizer",
"hash": "7f643f2ab6ea5c8e3734adbcefbf6f4d01787de0",
"date": "2020-04-13T16:28:02"
},
{
"message": "Make LocalisationCache a service\n\nThis removes Language::$dataCache without deprecation, because 1) I\ndon't know of a way to properly simulate it in the new paradigm, and 2)\nI found no direct access to the member outside of the Language and\nLanguageTest classes.\n\nAn earlier version of this patch (e4468a1d6b6) had to be reverted\nbecause of a massive slowdown on test runs. Based on some local testing,\nthis should fix the problem. Running all tests in languages is slowed\ndown by only around 20% instead of a factor of five, and memory usage is\nactually reduced greatly (~350 MB -> ~200 MB). The slowdown is still not\ngreat, but I assume it's par for the course for converting things to\nservices and is acceptable. If not, I can try to optimize further.\n\nBug: T231220\nBug: T231198\nBug: T231200\nBug: T201405\nChange-Id: Ieadbd820379a006d8ad2d2e4a1e96241e172ec5a\n",
"bugs": [
"T231220",
"T231198",
"T231200",
"T201405"
],
"subject": "Make LocalisationCache a service",
"hash": "043d88f680cf66c90e2bdf423187ff8b994b1d02",
"date": "2019-10-07T20:18:47"
},
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
},
{
"message": "Pass correct store to rebuildLocalisationCache.php\n\ne4468a1d6b6 completely broke rebuildLocalisationCache.php by\nunconditionally passing in LCStoreDB( [] ) instead of constructing the\ncorrect object.\n\nBug: T231183\nChange-Id: I0d52662e8745cf0e10091169b3b08eff48ef2b8f\n",
"bugs": [
"T231183"
],
"subject": "Pass correct store to rebuildLocalisationCache.php",
"hash": "76a940350d36c323ebedb4ab45cc81ed1c6b6c92",
"date": "2019-08-26T09:56:52"
},
{
"message": "Correctly register storeDirectory in l10n cache\n\ne4468a1d6b6 made LocalisationCache a service and refactored a bunch of\nsetup code. In doing so, when processing 'storeDirectory' from\n$wgLocalisationCacheConf, it accidentally started treating empty\nnon-null values (such as the default \"false\") as storage paths instead\nof meaning \"fall back to $wgCacheDirectory\". This would have broken all\nconfig that used file store for LocalisationCache and did not specify\n'storeDirectory'.\n\nBug: T231183\nChange-Id: I9ff16be628996b202599e3bb2feed088af03775f\n",
"bugs": [
"T231183"
],
"subject": "Correctly register storeDirectory in l10n cache",
"hash": "b78b8804d076618e967c7b31ec15a1bd9e35d1d0",
"date": "2019-08-26T09:41:56"
},
{
"message": "The BlobStoreFactory constructor needs an LBFactory\n\nBlobStoreFactory::newBlobStore() takes a wiki ID as a parameter, so it\nneeds an LBFactory to fetch the correct LoadBalancer from.\n\nBug: T202483\nChange-Id: I834cd95251d76cb862600362525faf60d4e170b9\n",
"bugs": [
"T202483"
],
"subject": "The BlobStoreFactory constructor needs an LBFactory",
"hash": "f2f82dcb948811ac5abc1beda4431822c99e76cf",
"date": "2018-08-22T06:47:04"
},
{
"message": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time\n\nOnce getMain() was called in setSchemaAliases(), the ChronologyProtector\nwas initialized and the setRequestInfo() call in Setup.php had no effect.\nOnly the request values read in LBFactory::__construct() were used, which\nreflect $_GET but not cookie values.\n\nUse the $wgDBtype variable to avoid this and add an exception when that\nsort of thing happens.\n\nFurther defer instantiation of ChronologyProtector so that methods like\nILBFactory::getMainLB() do not trigger construction.\n\nBug: T192611\nChange-Id: I735d3ade5cd12a5d609f4dae19ac88fec4b18b51\n",
"bugs": [
"T192611"
],
"subject": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time",
"hash": "628a3a9b267620914701a2a0a17bad8ab2e56498",
"date": "2018-04-23T15:44:02"
},
{
"message": "[MCR] RevisionStore::getTitle final logged fallback to master\n\nThere have been many issues with RevisionStore and titles due\nto code paths that already know the title for a Revision not\npassing the title into Revision in various ways or not passing\nin the correct queryFlags.\nThe getTitle method now has a further fallback using Title::newFromID\nand Title::GAID_FOR_UPDATE if not already attempted.\n\nBug: T183548\nBug: T183716\nBug: T183717\nBug: T183550\nBug: T183505\nBug: T184559\nBug: T184595\nChange-Id: I6cf13e6baba354b08533a6151bbbc88a317be9d6\n",
"bugs": [
"T183548",
"T183716",
"T183717",
"T183550",
"T183505",
"T184559",
"T184595"
],
"subject": "[MCR] RevisionStore::getTitle final logged fallback to master",
"hash": "9a509a1792f1ac5c8aa3e16878c616b3be00110e",
"date": "2018-01-29T13:44:54"
}
]
},
"includes/DefaultSettings.php": {
"File": "includes/DefaultSettings.php",
"TicketCount": 9,
"CommitCount": 7,
"Tickets": [
"T195692",
"T198176",
"T207433",
"T231200",
"T231198",
"T231220",
"T201405",
"T235357",
"T264799"
],
"Commits": [
{
"message": "Log IP/device changes within the same session\n\nStore IP and device information in the session and log when\nit changes. The goal is to detect session leakage when the\nsession is accidentally sent to another user, which is a\nhypothetical cause of T264370. The log will be noisy since\nusers do change IP addresses for a number of reasons,\nbut we are mainly interested in the ability of correlating\nuser-reported incidents where we have a username to filter\nby, so that's OK.\n\nBased on I27468a3f6d58.\n\nBug: T264799\nChange-Id: Ifa14fa637c1b199159ea11e983a25212ae005565\n",
"bugs": [
"T264799"
],
"subject": "Log IP/device changes within the same session",
"hash": "d5d3c90152299abf73f7747d5c53984d9fb53ea1",
"date": "2020-10-08T20:13:25"
},
{
"message": "Stop using SCRIPT_NAME where possible, rely on statically configured routing\n\nIt has become apparent that $_SERVER['SCRIPT_NAME'] may contain the same\nthing as REQUEST_URI, for example in WMF production. PATH_INFO is not\nset, so there is no way to split the URL into SCRIPT_NAME and PATH_INFO\ncomponents apart from configuration.\n\n* Revert the fix for T34486, which added a route for SCRIPT_NAME to the\n PathRouter for the benefit of img_auth.php. In T235357, the route thus\n added contained $1, breaking everything.\n* Remove calls to WebRequest::getPathInfo() from everywhere other than\n index.php. Dynamic modification of $wgArticlePath in order to make\n PathRouter work was weird and broken anyway. All that is really needed\n is a suffix of REQUEST_URI, so I added a function which provides that.\n* Add $wgImgAuthPath, for use as a last resort workaround for T34486.\n* Avoid the use of $_SERVER['SCRIPT_NAME'] to detect the currently\n running script.\n* Deprecated wfGetScriptUrl(), a fairly simple wrapper for SCRIPT_NAME.\n Apparently no callers in core or extensions.\n\nBug: T235357\nChange-Id: If2b82759f3f4aecec79d6e2d88cd4330927fdeca\n",
"bugs": [
"T235357"
],
"subject": "Stop using SCRIPT_NAME where possible, rely on statically configured routing",
"hash": "507501d6ee29eb1b8df443192971fe2b6a6addb6",
"date": "2020-04-01T16:33:38"
},
{
"message": "Make LocalisationCache a service\n\nThis removes Language::$dataCache without deprecation, because 1) I\ndon't know of a way to properly simulate it in the new paradigm, and 2)\nI found no direct access to the member outside of the Language and\nLanguageTest classes.\n\nAn earlier version of this patch (e4468a1d6b6) had to be reverted\nbecause of a massive slowdown on test runs. Based on some local testing,\nthis should fix the problem. Running all tests in languages is slowed\ndown by only around 20% instead of a factor of five, and memory usage is\nactually reduced greatly (~350 MB -> ~200 MB). The slowdown is still not\ngreat, but I assume it's par for the course for converting things to\nservices and is acceptable. If not, I can try to optimize further.\n\nBug: T231220\nBug: T231198\nBug: T231200\nBug: T201405\nChange-Id: Ieadbd820379a006d8ad2d2e4a1e96241e172ec5a\n",
"bugs": [
"T231220",
"T231198",
"T231200",
"T201405"
],
"subject": "Make LocalisationCache a service",
"hash": "043d88f680cf66c90e2bdf423187ff8b994b1d02",
"date": "2019-10-07T20:18:47"
},
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
},
{
"message": "Include BCP 47 codes in $wgDummyLanguageCodes, but deprecate it\n\nAdd BCP 47 codes to $wgDummyLanguageCodes to ensure that\nLanguage::factory() will return a valid MediaWiki-internal code if\ngiven a BCP 47 alias. We will want to make $wgDummyLanguageCodes a\nprivate property of LanguageCode eventually, but let's start with\nremoving it from user configuration.\n\nSetting $wgDummyLanguageCodes in LocalSettings.php has been deprecated\nsince 1.29. Hard deprecate adding entries to $wgDummyLanguageCodes so\nthat we can eventually remove manual overrides from user\nconfiguration.\n\nThis is a follow-up to 48ab87d0a37da80c2e2ae3a20f645548d2a787f9,\nwhich described the various categories of codes, and\n21ead7a98d1a103b77f1e3ba29a85493782d398b, which added the correct\nBCP 47 mappings.\n\nBug: T207433\nChange-Id: I9f6dda3360f79ab65f6392f44c98926588d851c8\n",
"bugs": [
"T207433"
],
"subject": "Include BCP 47 codes in $wgDummyLanguageCodes, but deprecate it",
"hash": "f2e0516934c489bd6883b7602d57accf959018e0",
"date": "2018-10-19T18:31:21"
},
{
"message": "Use job queue for deletion of pages with many revisions\n\nPages with many revisions experience transaction size exceptions,\ndue to archiving revisions. Use the job queue to split the work\ninto batches and avoid exceptions.\n\nBug: T198176\nChange-Id: Ie800fb5a46be837ac91b24b9402ee90b0355d6cd\n",
"bugs": [
"T198176"
],
"subject": "Use job queue for deletion of pages with many revisions",
"hash": "ca9f1dabf3719c579fd117e7b9826a3269783e7e",
"date": "2018-10-04T00:16:14"
},
{
"message": "Use \"Unknown user\" instead of an empty user name.\n\nThis changes the user name to \"User unknown\" when constructing a RevisionRecord\nfrom a database row that has an empty ar_user_text resp rev_user_text field.\n\nThis may cause \"User unknown\" to be written to the database, if the\nRevisionRecord is used as the basis for a new revision that is being created,\nparticularly during undeletion. Since \"Unknown user\" is listed in\n$wgReservedUsernames, this should never lead to conflicts with actual user\nnames.\n\nIt is assumed that empty ar_user_text and rev_user_text fields will be\nfixed during migration to the new actor based database schema.\n\nBug: T195692\nChange-Id: I506c513b019778d83741e47f0d11093f5ab67a54\n",
"bugs": [
"T195692"
],
"subject": "Use \"Unknown user\" instead of an empty user name.",
"hash": "d6b989b5506c4afc5f4da47d5cbf394e278738c2",
"date": "2018-09-18T17:53:00"
}
]
},
"includes/libs/JavaScriptMinifier.php": {
"File": "includes/libs/JavaScriptMinifier.php",
"TicketCount": 1,
"CommitCount": 7,
"Tickets": [
"T201606"
],
"Commits": [
{
"message": "JavaScriptMinifier: Fix bad state after ternary in object literal\n\nThe following pattern of input (found in jquery.js) triggered\nthis bug:\n\n call( {\n key: 1 ? 0 : function () {\n return this;\n }\n } );\n\nThe open brace changes state to PROPERTY_ASSIGNMENT (for object literals).\nThe colon after 'key' sets state to PROPERTY_EXPRESSION.\n\nEach individual parts of an expression (identifiers and literal values)\nis recognised with state *_EXPRESSION_OP, such as PROPERTY_EXPRESSION_OP.\n\nThe '1' after 'key:' correctly sets the state to PROPERTY_EXPRESSION_OP.\nUpto there it goes well, but after that it goes wrong.\n\nThe question mark (TYPE_HOOK) in this context was wrongly switching\nback to PROPERTY_EXPRESSION. That is a problem because that does not\nhandle TYPE_COLON, which meant '0: function' was seen together as a\nsequence of continuous PROPERTY_EXPRESSION_OP where TYPE_FUNC may\nnot be handled.\n\nFixed by changing handling of TYPE_HOOK in PROPERTY_EXPRESSION to\nswitch states to EXPRESSION_TERNARY, and also performing a push\nso that ternary handling can pop back to the property expression.\n\nThis mirrors the handling that already exists for ternaries in\nthe regular handling of EXPRESSION/EXPRESSION_OP (as opposed to\nthe variant for object literal properties).\n\nBug: T201606\nChange-Id: I6104c839cfc3416257543b54a91b74cb4aa4193b\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Fix bad state after ternary in object literal",
"hash": "15110b1ca9bdc6be69ab3fb47a8dc7e34ebc22d2",
"date": "2018-08-16T17:02:07"
},
{
"message": "JavaScriptMinifier: Merge $push and $pop into $model\n\nBug: T201606\nChange-Id: Ie28ccca2c44461e67964913cc76a414dce296dce\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Merge $push and $pop into $model",
"hash": "ff5b0acbc98773e8d6bb457298d8a7e2498ad438",
"date": "2018-08-13T18:12:43"
},
{
"message": "JavaScriptMinifier: Turn $goto into a generic $model\n\nIn preparation for merging $push and $pop into it as well, so\nthat the state changes that happen for a particular state/type\nare declared in the same place.\n\nBug: T201606\nChange-Id: Idd12786d625621af949e7e6487e4c1655f61f295\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Turn $goto into a generic $model",
"hash": "48f64af342419a7dbfc334d5a2d3cb0f82e397c7",
"date": "2018-08-13T18:12:35"
},
{
"message": "JavaScriptMinifier: Fix bad state after '{}' in property value\n\nPreviously, $push contained:\n\n\tself::PROPERTY_EXPRESSION_OP => [\n\t\tself::TYPE_PAREN_OPEN => self::PROPERTY_EXPRESSION_OP\n\t],\n\nBut $pop contained:\n\n\tself::PROPERTY_EXPRESSION_OP => [ self::TYPE_BRACE_CLOSE => true ]\n\nThis meant that when a closing brace was found inside a property\nexpression, it would wrongly pop the stack, eventhough we are still\ninside the property expression.\n\nThe impact is that everything after this is one level higher in\nthe stack than it should be, causing various other types to be\nmisinterpreted. Including in the following contrived example:\n\n\tcall( function () {\n\t\ttry {\n\t\t} catch (e) {\n\t\t\tobj = {\n\t\t\t\tkey: 1 ? 0 : {} // A\n\t\t\t}; // B\n\t\t} // C\n\t\treturn name === 'input';\n\t} );\n\nIn the above, the closing brace at A would close the 'obj.key' assignment\n(PROPERTY_EXPRESSION_OP), instead of waiting for the closing brace at B to\ndecide that.\n\nThen the closing brace at B would wrongly close the 'catch' block (instead of\nthe 'obj' assignment). And lastly, the closing brace at C would close the\nfunction body (STATEMENT).\n\nThis resulted in keyword 'return' being interpreted while in state\nPAREN_EXPRESSION_OP instead of STATEMENT, where PAREN_EXPRESSION_OP is the\narguments list to `call()`. In an argument list, TYPE_RETURN is not valid,\nwhich means we stay in that state, instead of progressing to EXPRESSION_NO_NL,\nwhich then wrongly allows for a line break to be inserted.\n\nBug: T201606\nChange-Id: I07b809a7ca56e282ecb48b5c89c217b4b8da6856\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Fix bad state after '{}' in property value",
"hash": "1f5f6fc2048269335a6b7df71ae6cc6f03de97b7",
"date": "2018-08-10T23:16:47"
},
{
"message": "JavaScriptMinifier: Add better line-breaker tests\n\nSet maxLineLength to '1' instead of messing with filler content.\nThis makes it easy to see all potential line-breaks, instead of only\nat the 999th offset.\n\nBug: T201606\nChange-Id: I220b145c5bc8e7d1a41efacd2a6cea738545f006\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Add better line-breaker tests",
"hash": "8cfe4abf8d707b68141a281c8f8c8d2c0f5190c9",
"date": "2018-08-10T22:22:49"
},
{
"message": "JavaScriptMinifier: Document every operator and token type, with spec ref\n\n* Group related operators and token types together.\n\n* Document what each group's members have in common for the purposes of\n parsing and the state machine.\n\n* Add spec reference.\n\nAlso confirmed that each group matches the spec (nothing missing,\nnothing extra).\n\nBug: T201606\nChange-Id: I9128beed9ab5dcf831d4655854565f826f81c602\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Document every operator and token type, with spec ref",
"hash": "8f857f362169a1a02c8c9e03414c70f470c933c5",
"date": "2018-08-10T20:04:58"
},
{
"message": "JavaScriptMinifier: Disambiguate token constants from state constants\n\nMakes debugging a bit easier.\n\nBug: T201606\nChange-Id: Icc660bc2dfa6af823722dd6567fb185308ac74e7\n",
"bugs": [
"T201606"
],
"subject": "JavaScriptMinifier: Disambiguate token constants from state constants",
"hash": "d45f9434271e5de2f12c3bbcfc8f5816528f739d",
"date": "2018-08-10T20:04:58"
}
]
},
"includes/deferred/DeferredUpdates.php": {
"File": "includes/deferred/DeferredUpdates.php",
"TicketCount": 6,
"CommitCount": 6,
"Tickets": [
"T193668",
"T206288",
"T206283",
"T218456",
"T221577",
"T225103"
],
"Commits": [
{
"message": "Clean up DeferredUpdates transactions and push failed updates as jobs\n\nBail out in attemptUpdate() if the transaction state is dirty rather\nthat failing at some later point. Also, flush implicit transaction\nrounds before calling DeferrableUpdate::doUpdate() for fresher data.\n\nNote that only instances of EnqueueableDataUpdate can become jobs.\nMake handleUpdateQueue() defer throwing the exception until every task\nwas attempted for the special unit test logic.\n\nClean up some of the logging from 34427e7d7b.\n\nBug: T206283\nChange-Id: I84ba1f2f8c4bf7c8ef21a907f73ad1065dd8f330\n",
"bugs": [
"T206283"
],
"subject": "Clean up DeferredUpdates transactions and push failed updates as jobs",
"hash": "d0b7a1b4f38e5b140d8d3ba50039a0e18a94d71b",
"date": "2019-11-14T13:24:53"
},
{
"message": "Clean up DeferredUpdates transaction handling\n\nBail out in attemptUpdate() if the transaction state is dirty rather\nthat failing at some later point. Also, flush implicit transaction\nrounds before calling DeferrableUpdate::doUpdate() for fresher data.\n\nBug: T225103\nChange-Id: I4f5d2f9814a562069619f05e003663fcedbd3f64\n",
"bugs": [
"T225103"
],
"subject": "Clean up DeferredUpdates transaction handling",
"hash": "b39f8289ff56e50213129abee91f3791bc6ffeb5",
"date": "2019-07-18T20:16:28"
},
{
"message": "Make sure that each DataUpdate still has outer transaction scope\n\nBug: T221577\nChange-Id: I620e461d791416ca37fa9ca4fca501e28d778cf5\n",
"bugs": [
"T221577"
],
"subject": "Make sure that each DataUpdate still has outer transaction scope",
"hash": "3496f0fca3debf932598087607dc5547075e2cba",
"date": "2019-05-30T20:53:18"
},
{
"message": "Revert \"Split out new RefreshSecondaryDataUpdate class\"\n\nThis reverts commits a1f7fd3adaa3, 0ef02cd018901.\n\nBug: T218456\nChange-Id: I9bbea3d13460ed44755d77fc61ff23fb906cf71e\n",
"bugs": [
"T218456"
],
"subject": "Revert \"Split out new RefreshSecondaryDataUpdate class\"",
"hash": "6b7ddf9c9bfa7f651a50d462f2e4e918a9407dab",
"date": "2019-03-19T14:51:27"
},
{
"message": "Make DeferredUpdates enqueue updates that failed to run when possible\n\nBug: T206288\nBug: T206283\nChange-Id: I6025bcc7d68cf214d291191d4044a66cdeff108b\n",
"bugs": [
"T206288",
"T206283"
],
"subject": "Make DeferredUpdates enqueue updates that failed to run when possible",
"hash": "0ef02cd018901d68f96130b073f01daf3c8c9bf5",
"date": "2019-03-15T19:14:07"
},
{
"message": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges\n\nBug: T193668\nChange-Id: I50890ef17ea72481a14c4abcd93ae58b93f15d28\n",
"bugs": [
"T193668"
],
"subject": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges",
"hash": "a79b9737f1b05171af60a3127d6c628ea6a16a96",
"date": "2018-05-03T22:11:38"
}
]
},
"includes/export/XmlDumpWriter.php": {
"File": "includes/export/XmlDumpWriter.php",
"TicketCount": 4,
"CommitCount": 6,
"Tickets": {
"0": "T217329",
"2": "T228614",
"3": "T228720",
"4": "T253468"
},
"Commits": [
{
"message": "Xml dumps should not die when the page redirect target cannot be determined\n\nBug: T253468\nChange-Id: I4212073df993d60669d199c254fec242bde653d7\n",
"bugs": [
"T253468"
],
"subject": "Xml dumps should not die when the page redirect target cannot be determined",
"hash": "29992f25271983bc0fe22a1fd7471d11d40d5059",
"date": "2020-05-24T14:33:33"
},
{
"message": "make XmlDumpwriter more resilient to blob store corruption\n\nLoading content can also throw InvalidArgumentException when\nthe cluster address is an unknown cluster.\n\nBug: T228720\nChange-Id: I313f9a5a27b21a33e90639abae3f505640c30e23\n",
"bugs": [
"T228720"
],
"subject": "make XmlDumpwriter more resilient to blob store corruption",
"hash": "a27820692f61842f15e1b75d153b9626a8854fbd",
"date": "2019-07-24T05:59:38"
},
{
"message": "Make XmlDumpwriter resilient to blob store corruption.\n\nIn the WMF databases, we have several revisions for which we cannot\nload the content. They typically (but not necessarily) have\ncontent_address = \"tt:0\" and content_sha1 = \"\" and rev_sha1 = \"\"\nand content_size = 0 and rev_len = 0.\n\nThis patch makes sure we can still generate dumps in the presence of\nsuch revisions.\n\nBug: T228720\nChange-Id: Iaadad44eb5b5fe5a4f2e60da406ffc11f39c735b\n",
"bugs": [
"T228720"
],
"subject": "Make XmlDumpwriter resilient to blob store corruption.",
"hash": "30bb36f210e562af9ce0ecb2c8c0e39f69195510",
"date": "2019-07-23T11:59:57"
},
{
"message": "don't load revision text content unless requested to\n\nBug: T228614\nChange-Id: Idef4d9684560110a16c6a7c074402c5a5a6e59db\n",
"bugs": [
"T228614"
],
"subject": "don't load revision text content unless requested to",
"hash": "accecbc9a8ff43e12a88361f077dc324979f5c99",
"date": "2019-07-22T07:50:06"
},
{
"message": "redo: don't die producing xml files if rev text export conversion fails\n\nRegresson introduced in If4c31b7975b4d901afa8c194c10446c99e27eadf\n\nBug: T217329\nChange-Id: I003a8c230db293d37ae05e0157b3447775a95e59\n",
"bugs": [
"T217329"
],
"subject": "redo: don't die producing xml files if rev text export conversion fails",
"hash": "7fdcc1d319ff0e1726e9e73c7535861db1fc6136",
"date": "2019-04-01T15:30:35"
},
{
"message": "don't die producing xml files if rev text export conversion fails\n\nIn abstracts for the specific case, we don't care at all, since the\nproblem is that it's a self redirect. Redirects are filtered out of\nthe stream at the end so it won't even show up.\n\nIn anything else, we do what dumpTextPass does already, which is to\nleave the text alone and emit it as is.\n\nBug: T217329\nChange-Id: I39cdf89531c67962b1a9bba4e0a91f7c655ad6f3\n",
"bugs": [
"T217329"
],
"subject": "don't die producing xml files if rev text export conversion fails",
"hash": "45831b2213a83262c1c77e771bd4aaf665cd629c",
"date": "2019-03-13T23:16:24"
}
]
},
"includes/filerepo/file/LocalFile.php": {
"File": "includes/filerepo/file/LocalFile.php",
"TicketCount": 5,
"CommitCount": 6,
"Tickets": [
"T187942",
"T189985",
"T207419",
"T226448",
"T221812"
],
"Commits": [
{
"message": "LocalFile: avoid hard failures on non-existing files.\n\nSome methods on LocalFile will fatal if called on a non-existing file.\nApiQueryImageInfo did not take that into account.\n\nThis patch changes LocalFile to avoid fatal errors, and ApiQueryImageInfo\nto not try and report information on non-existing files.\n\nNOTE: the modified code has NO test coverage! This should be fixed\nbefore this patch is applied, or the patch needs to be thoroughly tested\nmanually.\n\nBug: T221812\nChange-Id: I9b74545a393d1b7a25c8262d4fe37a6492bbc11e\n",
"bugs": [
"T221812"
],
"subject": "LocalFile: avoid hard failures on non-existing files.",
"hash": "bdc6b4e378c6872a20f6fb5842f1a49961af91b4",
"date": "2019-09-18T09:18:44"
},
{
"message": "Fix LocalFile::move\n\nFixes a wfFindFile/wfLocalFile mixup in I9437494d.\n\nAlso restore the original behavior in Linker::makeBrokenImageLinkObj\nfor paranoia - findFile has a local cache so calling it and then\ndiscarding the results is not completely a noop.\n\nBug: T226448\nChange-Id: Ibb9d6f6383eb96ba27e0edd60423552e5cea4688\n",
"bugs": [
"T226448"
],
"subject": "Fix LocalFile::move",
"hash": "b71610c069c69b2a991cb43b059642aca1882f4a",
"date": "2019-06-25T16:38:13"
},
{
"message": "Fix comment handling on image upload or deletion\n\nBefore Iab5f5215, the call to CommentStore::insertWithTempTable() also\nhappened to populate image_comment_temp for the later call to\ninsertSelect() when moving rows from the image table to oldimage or\nfilearchive. There was nothing in the image table itself that needed\nupdating.\n\nIn that change those calls were changed to CommentStore::insert(), but\nit was missed that in that case we do have to update the image table\nitself.\n\nBug: T207419\nChange-Id: I26c417c9ab8a9160a7c7ec548ffdfabf17f01980\n",
"bugs": [
"T207419"
],
"subject": "Fix comment handling on image upload or deletion",
"hash": "a2f8caa37193ccf63e88ead2269cb5d1880954f0",
"date": "2018-10-18T21:12:44"
},
{
"message": "Add COALESCE for image deletion and imgcomment_description_id\n\nI have no idea why this suddenly started raising an error rather than it\ndoing so since February (I0dd7258fe). But this should fix it.\n\nBug: T207419\nChange-Id: Id97e1c7c2655d90928c777bc3377e5ea23f49f6b\n",
"bugs": [
"T207419"
],
"subject": "Add COALESCE for image deletion and imgcomment_description_id",
"hash": "b53f0c278f6cb84c3cbbcb3100231c8c7e2fd870",
"date": "2018-10-18T20:55:55"
},
{
"message": "Move image_comment_temp entries when the file is moved\n\nBug: T189985\nChange-Id: I437102d62cb94fd3195ff06ee8185ce5a2dc941e\n",
"bugs": [
"T189985"
],
"subject": "Move image_comment_temp entries when the file is moved",
"hash": "9ceb2e08a049ed60e669c4636bcb8eb5eb1d87c6",
"date": "2018-03-18T15:37:05"
},
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
}
]
},
"includes/EditPage.php": {
"File": "includes/EditPage.php",
"TicketCount": 6,
"CommitCount": 6,
"Tickets": [
"T187378",
"T203583",
"T237570",
"T251404",
"T262463",
"T237467"
],
"Commits": [
{
"message": "EditPage: ensure we only try to formatNum() numeric strings\n\nBug: T237467\nChange-Id: I03ffa99f7de1dcc48535ba1e1251567dbf3db116\n",
"bugs": [
"T237467"
],
"subject": "EditPage: ensure we only try to formatNum() numeric strings",
"hash": "1dd1958929193d724998ce8a17b99c741aef273b",
"date": "2020-09-18T02:42:55"
},
{
"message": "EditPage: Fix member call on boolean when undo is impossible\n\nUgh, my mistake.\nAlso added a test that should cover this. It fails on the previous\nversion of code, succeeds after applying this patch.\n\nBug: T262463\nChange-Id: Ifda30daadea5a908505423caaf818b9f88f989ad\n",
"bugs": [
"T262463"
],
"subject": "EditPage: Fix member call on boolean when undo is impossible",
"hash": "603cf919ee4a2e12b7999d24bc779c5c5fe2fceb",
"date": "2020-09-09T20:59:00"
},
{
"message": "EditPage::showHeader - only warn editing an old revision if it exists\n\nCaused by ca8aa4d6b25eca09fd60b0c29974d2be9e54dcbd\n\nBug: T251404\nChange-Id: I772c322cfe9450e1bc444a32753c81f48a1f3210\n",
"bugs": [
"T251404"
],
"subject": "EditPage::showHeader - only warn editing an old revision if it exists",
"hash": "c757ec5fd9277bfe9feeef8e87953b284c60eb78",
"date": "2020-04-29T12:47:32"
},
{
"message": "EditPage: Improve handling of missing revision contents\n\nHistorically, if the content of a revision was missing (e.g. a bad entry\nin the `text` table), EditPage would display an empty textarea.\nIa94521b7 accidentally broke that, causing an uncaught exception.\n\nThis patch restores the previous behavior, with the addition of a notice\nat the top of the page that the content couldn't be loaded.\n\nThis also cleans up the missing section handling so it isn't so easily\nconfused with other failures (and also so it doesn't pass false to a\nmethod declared as taking Content|null).\n\nBug: T237570\nChange-Id: Ia70de11c2e4833b202fde3028a1a94dfc741f0a5\n",
"bugs": [
"T237570"
],
"subject": "EditPage: Improve handling of missing revision contents",
"hash": "6070db31402cfbccb09a67932675b1ce622d0f15",
"date": "2019-11-07T16:47:59"
},
{
"message": "Provide new, unsaved revision to PST to fix magic words.\n\nThis injects the new, unsaved RevisionRecord object into the Parser used\nfor Pre-Save Transform, and sets the user and timestamp on that revision,\nto allow {{subst:REVISIONUSER}} and {{subst:REVISIONTIMESTAMP}} to function.\n\nBug: T203583\nChange-Id: I31a97d0168ac22346b2dad6b88bf7f6f8a0dd9d0\n",
"bugs": [
"T203583"
],
"subject": "Provide new, unsaved revision to PST to fix magic words.",
"hash": "465954aa23cec76ca47e51a58ff342f46fbbdcab",
"date": "2018-09-06T16:33:44"
},
{
"message": "EditPage::getBaseRevision can return null. (fix phpdoc)\n\nIf !$this->mBaseRevision then the code to populte $this->mBaseRevision\nis run.\nThis code either calls Revision::newFromId or Revision::loadFromTimestamp\nboth of which are documented as being able to return null.\nAs a result EditPage::getBaseRevision can alos return null.\n\nBug: T187378\nChange-Id: I60ad9ddcfbe6e1060cab1ad6aa2194c1a3406cbf\n",
"bugs": [
"T187378"
],
"subject": "EditPage::getBaseRevision can return null. (fix phpdoc)",
"hash": "9f62f0a1d54902f9a5ed0de1fc3a39e17a3143f5",
"date": "2018-02-15T10:08:20"
}
]
},
"includes/libs/rdbms/database/DBConnRef.php": {
"File": "includes/libs/rdbms/database/DBConnRef.php",
"TicketCount": 4,
"CommitCount": 6,
"Tickets": [
"T193668",
"T193565",
"T218388",
"T191668"
],
"Commits": [
{
"message": "Use varargs for IDatabase::buildLike\n\nBug: T191668\nChange-Id: Id66af5c3de3e5bc5c2909316e1984eae95a0012a\n",
"bugs": [
"T191668"
],
"subject": "Use varargs for IDatabase::buildLike",
"hash": "e947a2691d4f737c32cf5084d8bad1028f32c5a2",
"date": "2019-10-04T09:52:42"
},
{
"message": "rdbms: Document varargs for IDatabase::buildLike\n\nThis is needed in order for Phan not to consider calls to\nIDatabase::buildLike as invalid. Interestingly, it does not\nconsider calls to Database::buildLike invalid.\n\nBug: T191668\nChange-Id: I0e027f5ec66d20b1d11e3441086001f6a751e1f5\n",
"bugs": [
"T191668"
],
"subject": "rdbms: Document varargs for IDatabase::buildLike",
"hash": "725a59f0c70e37de6de8e26f91b0101839d37ff9",
"date": "2019-06-18T14:11:15"
},
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "rdbms: Disable DBConnRef::selectDB() for sanity\n\nBug: T193565\nChange-Id: I4276d1a7d77a019e0e60dab4b9ec36c93e418037\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Disable DBConnRef::selectDB() for sanity",
"hash": "5891967297a5e1f69925baacbf0c3a4ed13291d6",
"date": "2018-08-15T03:03:51"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
}
]
},
"includes/parser/ParserOutput.php": {
"File": "includes/parser/ParserOutput.php",
"TicketCount": 4,
"CommitCount": 5,
"Tickets": [
"T186927",
"T229366",
"T261347",
"T264257"
],
"Commits": [
{
"message": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"\n\nThis reverts commit deacee9088948e074722af0148000ad9455b07df.\n\nBug: T264257\nChange-Id: Ie68d8081a42e7d8103e287b6d6857a30dc522f75\n",
"bugs": [
"T264257"
],
"subject": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"",
"hash": "3254e41a4cc0bcf85b5b0de3e3d237d4ebd7a987",
"date": "2020-10-01T18:03:41"
},
{
"message": "ParserOutput: don't throw on bad editsection\n\nWhen ParserOutput encounters a bad page title in an editsection\nplaceholder, this should not cause a fatal error. We can just not\nproduce an edit link and continue.\n\nIt's still worth logging though, since the parser shouldn't be putting\ninvalid links into editsection placeholders.\n\nBug: T261347\nChange-Id: I154e85aec4b408e659e6281b02473c51f370865d\n",
"bugs": [
"T261347"
],
"subject": "ParserOutput: don't throw on bad editsection",
"hash": "e6f37dc1d8a642d865ec0923ef2a50cd7042d6a3",
"date": "2020-09-23T22:30:59"
},
{
"message": "Suppress notice from ParserOutput::__sleep()\n\nBug: T229366\nChange-Id: I8f0a537f0b6b76aac0c52e691ec4653c51c49940\n",
"bugs": [
"T229366"
],
"subject": "Suppress notice from ParserOutput::__sleep()",
"hash": "bdedfb8ffae69b01ded6769f253060db84616d9c",
"date": "2019-08-01T00:11:12"
},
{
"message": "Revert rename of mSpeculativeRevId to speculativeRevIdUsed\n\nAnd add a test which is confirmed to fail on HHVM prior to this change\nwith the error message \"serialize(): \"\" returned as member variable from\n__sleep() but does not exist\".\n\nBug: T229366\nChange-Id: I236bb4d64bc2e9f7756885e8c418399804eac5e1\n",
"bugs": [
"T229366"
],
"subject": "Revert rename of mSpeculativeRevId to speculativeRevIdUsed",
"hash": "212ae934cd0f53c117a512d52ce5cacde478ed57",
"date": "2019-07-31T02:42:27"
},
{
"message": "Fix ParserOutput::getText 'unwrap' flag for end-of-doc comment\n\nThe closing div might be followed by debug information in HTML comments.\n\nBug: T186927\nChange-Id: I72d4079dfe9ca9b3a14476ace1364eb5efdee846\n",
"bugs": [
"T186927"
],
"subject": "Fix ParserOutput::getText 'unwrap' flag for end-of-doc comment",
"hash": "5859eff1b0c6e2438f4e21a4532e7f547d7cf2c2",
"date": "2018-02-09T22:17:22"
}
]
},
"includes/Storage/DerivedPageDataUpdater.php": {
"File": "includes/Storage/DerivedPageDataUpdater.php",
"TicketCount": 5,
"CommitCount": 5,
"Tickets": [
"T203583",
"T206288",
"T218456",
"T221577",
"T206283"
],
"Commits": [
{
"message": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater\n\nThis class implements EnqueueableDataUpdate and can be pushed as a\njob if it fails to run via DeferredUpdates.\n\nUnlike a1f7fd3adaa3, make RefreshSecondaryDataUpdate skip failing\nupdates in doUpdate(). Instead of throwing the first exception from\nany update, log any exceptions that occur and try all the other\nupdates. The first error will be re-thrown afterwards.\n\nAlso, make sure that each DataUpdate still has outer transaction\nscope. This property is documented at mediawiki.org and should not\nbe changed.\n\nAdd integration tests for RefreshSecondaryDataUpdateTest.\n\nBug: T218456\nBug: T206283\nChange-Id: I7c6554a4d4cd76dfe7cd2967afe30b3aa1069fcb\n",
"bugs": [
"T218456",
"T206283"
],
"subject": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater",
"hash": "1f4efc6c34aa363c9f5c4d8fd860a39faac4ae2d",
"date": "2020-03-11T07:42:48"
},
{
"message": "Make sure that each DataUpdate still has outer transaction scope\n\nBug: T221577\nChange-Id: I620e461d791416ca37fa9ca4fca501e28d778cf5\n",
"bugs": [
"T221577"
],
"subject": "Make sure that each DataUpdate still has outer transaction scope",
"hash": "3496f0fca3debf932598087607dc5547075e2cba",
"date": "2019-05-30T20:53:18"
},
{
"message": "Revert \"Split out new RefreshSecondaryDataUpdate class\"\n\nThis reverts commits a1f7fd3adaa3, 0ef02cd018901.\n\nBug: T218456\nChange-Id: I9bbea3d13460ed44755d77fc61ff23fb906cf71e\n",
"bugs": [
"T218456"
],
"subject": "Revert \"Split out new RefreshSecondaryDataUpdate class\"",
"hash": "6b7ddf9c9bfa7f651a50d462f2e4e918a9407dab",
"date": "2019-03-19T14:51:27"
},
{
"message": "Split out new RefreshSecondaryDataUpdate class\n\nMake DerivedPageDataUpdater bundle all the related DataUpdate tasks\non page change with a RefreshSecondaryDataUpdate wrapper. If one of\nthe DataUpdate tasks fails, then the entire bundle of updates can be\nre-run in the form of enqueueing a RefreshLinksJob instance (these\njobs are idempotent). If several of the bundled tasks fail, it is easy\nfor DeferredUpdates to know that only one RefreshLinksJob should be\nenqueued.\n\nThe goal is to make DataUpdate tasks more reliable and resilient.\nMost of these deferred update failures are due to ephemeral problems\nlike lock contention. Since the job queue is already able to reliably\nstore and retry jobs, and the time that a regular web request can spend\nin post-send is more limited, it makes the most sense to just enqueue\ntasks as jobs if they fail post-send.\n\nMake LinkUpdate no longer defined as enqueueable as RefreshLinksJob\nsince they are not very congruent (LinksUpdate only does some of the\nwork that RefreshLinksJob does). Only the wrapper, with the bundle of\nDataUpdate instances, is congruent to RefreshLinksJob.\n\nThis change does not itself implement the enqueue-on-failure logic\nin DeferredUpdates, but is merely a prerequisite.\n\nBug: T206288\nChange-Id: I191103c1aeff4c9fedbf524ee387dad9bdf5fab8\n",
"bugs": [
"T206288"
],
"subject": "Split out new RefreshSecondaryDataUpdate class",
"hash": "a1f7fd3adaa380b276aefa467ec90fce4c916ce6",
"date": "2019-03-15T17:14:50"
},
{
"message": "Provide new, unsaved revision to PST to fix magic words.\n\nThis injects the new, unsaved RevisionRecord object into the Parser used\nfor Pre-Save Transform, and sets the user and timestamp on that revision,\nto allow {{subst:REVISIONUSER}} and {{subst:REVISIONTIMESTAMP}} to function.\n\nBug: T203583\nChange-Id: I31a97d0168ac22346b2dad6b88bf7f6f8a0dd9d0\n",
"bugs": [
"T203583"
],
"subject": "Provide new, unsaved revision to PST to fix magic words.",
"hash": "465954aa23cec76ca47e51a58ff342f46fbbdcab",
"date": "2018-09-06T16:33:44"
}
]
},
"includes/changetags/ChangeTags.php": {
"File": "includes/changetags/ChangeTags.php",
"TicketCount": 6,
"CommitCount": 5,
"Tickets": [
"T201934",
"T207313",
"T207881",
"T225564",
"T139012",
"T239772"
],
"Commits": [
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
},
{
"message": "[bugfix] Fetch tag ID before calling undefineTag()\n\nundefineTag() also deletes the tag if it's not used anywhere,\nwhich is breaking the rest of deleteTagEverywhere() in that case.\n\nBug: T225564\nChange-Id: I7ca5db9efd0088b266e33c0a9ce78d73a4fa87c9\n",
"bugs": [
"T225564"
],
"subject": "[bugfix] Fetch tag ID before calling undefineTag()",
"hash": "27cbfaa54f1e699b1b2f61ef2f33eff1c999819b",
"date": "2019-06-16T14:17:53"
},
{
"message": "Use a pre-commit hook for change_tag_def count updates\n\nBug: T207881\nChange-Id: I3000f14d0e49482b0c90ffcfc494211fbc198a20\n",
"bugs": [
"T207881"
],
"subject": "Use a pre-commit hook for change_tag_def count updates",
"hash": "30d0c549501f2cc7a0f821859f303e5ad89af4ee",
"date": "2018-10-24T21:09:41"
},
{
"message": "Fix bad join on ChangeTag subquery\n\nBug: T207313\nChange-Id: Iae6440630a533dfbcee3ccec34a9f231d3d013b5\n",
"bugs": [
"T207313"
],
"subject": "Fix bad join on ChangeTag subquery",
"hash": "a5500e7a0d0a2de049f64af920f6084b5a79ffb9",
"date": "2018-10-22T13:53:29"
},
{
"message": "Swap SET and WHERE statements in ChangeTags::undefineTag\n\nFollows-up 417d8036ae9.\n\nIt should work the other way around, I'm so stupid.\n\nBug: T201934\nChange-Id: I69132b2c237d05242ec6ed1a1e3aca7886edf2bc\n",
"bugs": [
"T201934"
],
"subject": "Swap SET and WHERE statements in ChangeTags::undefineTag",
"hash": "abd115b3ab971b6ceece38d2300ecfb7dc41d029",
"date": "2018-08-14T19:11:25"
}
]
},
"includes/api/ApiMain.php": {
"File": "includes/api/ApiMain.php",
"TicketCount": 6,
"CommitCount": 5,
"Tickets": [
"T199949",
"T208926",
"T228758",
"T233752",
"T261030",
"T264200"
],
"Commits": [
{
"message": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"\n\nThis reverts commit 07e547f47cae761489a33e9ebb8a9b108298f34e.\n\nReason for revert: LiquidThreads extends the ApiEditPage class,\neven though it shouldn't, and thus fails when the dependencies\nare not injected.\n\nBug: T261030\nBug: T264200\nChange-Id: Ib14f8a04bb6c723aa502a47ef9ccde6fe96a0ac7\n",
"bugs": [
"T261030",
"T264200"
],
"subject": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"",
"hash": "149e99f07230d041945871ddb6e0647ccc83dc21",
"date": "2020-09-30T15:29:59"
},
{
"message": "API: Use ConvertibleTimestamp::setFakeTime for testing curtimestamp\n\nMainly to avoid spurious test failures when CI is being extremely slow.\n\nBug: T233752\nChange-Id: Ie2cdd84dc076a852fbdce52f661ef893f9a2d45b\n",
"bugs": [
"T233752"
],
"subject": "API: Use ConvertibleTimestamp::setFakeTime for testing curtimestamp",
"hash": "995aad376af72419dd2fe8870954c9b400be4766",
"date": "2019-09-26T16:35:00"
},
{
"message": "API: Only take HTTP code from ApiUsageException\n\nCodes set on other Exception types are unlikely to be intended as HTTP\ncodes.\n\nBug: T228758\nChange-Id: Ia6a53cb621f87ff97d5f16215a1b09ae11ca8f53\n",
"bugs": [
"T228758"
],
"subject": "API: Only take HTTP code from ApiUsageException",
"hash": "acb2e15615a5cc792b08dd62494dd0608b1a9d33",
"date": "2019-07-23T14:24:18"
},
{
"message": "API: Validate API error codes\n\nValidate them in ApiMessageTrait when the message is created, and again\nin ApiMain before they're included in the header.\n\nThis also introduces an \"api-warning\" log channel, since \"api\" is too\nspammy for real use, and converts a few existing things to use it.\n\nBug: T208926\nChange-Id: Ib2d8bd4d4a5d58af76431835ba783c148de7792a\nDepends-On: Iced44f2602d57eea9a2d15aee5b8c9a50092b49c\nDepends-On: I5c2747f527c30ded7a614feb26f5777d901bd512\nDepends-On: I9c9bd8f5309518fcbab7179fb71d209c005e5e64\n",
"bugs": [
"T208926"
],
"subject": "API: Validate API error codes",
"hash": "4eace785e66d199cb8fe1ec224bdc49831949a6d",
"date": "2018-11-26T18:41:08"
},
{
"message": "ApiMain: Always create a new printer in getPrinterByName()\n\nApiMain already caches the printer in ->mPrinter, so if\ngetPrinterByName() is being called more than once that's because we\nreally want a new printer instance, without any cached errors or other\nbehavior that results from reusing the same instance.\n\nBug: T199949\nChange-Id: I779cbbaa8aab9b049a8eed732416edd828121ec4\n",
"bugs": [
"T199949"
],
"subject": "ApiMain: Always create a new printer in getPrinterByName()",
"hash": "78955203c3c1f810cae04c7c92790c99c1de9e7f",
"date": "2018-07-19T14:45:28"
}
]
},
"includes/Setup.php": {
"File": "includes/Setup.php",
"TicketCount": 6,
"CommitCount": 5,
"Tickets": {
"0": "T194403",
"1": "T190082",
"4": "T207433",
"5": "T232140",
"6": "T244370",
"7": "T264799"
},
"Commits": [
{
"message": "Log IP/device changes within the same session\n\nStore IP and device information in the session and log when\nit changes. The goal is to detect session leakage when the\nsession is accidentally sent to another user, which is a\nhypothetical cause of T264370. The log will be noisy since\nusers do change IP addresses for a number of reasons,\nbut we are mainly interested in the ability of correlating\nuser-reported incidents where we have a username to filter\nby, so that's OK.\n\nBased on I27468a3f6d58.\n\nBug: T264799\nChange-Id: Ifa14fa637c1b199159ea11e983a25212ae005565\n",
"bugs": [
"T264799"
],
"subject": "Log IP/device changes within the same session",
"hash": "d5d3c90152299abf73f7747d5c53984d9fb53ea1",
"date": "2020-10-08T20:13:25"
},
{
"message": "Follow-up 8cd2e13: Setup: Check that 1x key has been set in wgLogos before using\n\nBug: T232140\nBug: T244370\nChange-Id: If1e0a384db6584d8854d7f39313ee62ae8423a7f\n",
"bugs": [
"T232140",
"T244370"
],
"subject": "Follow-up 8cd2e13: Setup: Check that 1x key has been set in wgLogos before using",
"hash": "ec60dbbaea660fa747fc99da7c73f3aae3f8be2e",
"date": "2020-02-05T15:44:16"
},
{
"message": "Include BCP 47 codes in $wgDummyLanguageCodes, but deprecate it\n\nAdd BCP 47 codes to $wgDummyLanguageCodes to ensure that\nLanguage::factory() will return a valid MediaWiki-internal code if\ngiven a BCP 47 alias. We will want to make $wgDummyLanguageCodes a\nprivate property of LanguageCode eventually, but let's start with\nremoving it from user configuration.\n\nSetting $wgDummyLanguageCodes in LocalSettings.php has been deprecated\nsince 1.29. Hard deprecate adding entries to $wgDummyLanguageCodes so\nthat we can eventually remove manual overrides from user\nconfiguration.\n\nThis is a follow-up to 48ab87d0a37da80c2e2ae3a20f645548d2a787f9,\nwhich described the various categories of codes, and\n21ead7a98d1a103b77f1e3ba29a85493782d398b, which added the correct\nBCP 47 mappings.\n\nBug: T207433\nChange-Id: I9f6dda3360f79ab65f6392f44c98926588d851c8\n",
"bugs": [
"T207433"
],
"subject": "Include BCP 47 codes in $wgDummyLanguageCodes, but deprecate it",
"hash": "f2e0516934c489bd6883b7602d57accf959018e0",
"date": "2018-10-19T18:31:21"
},
{
"message": "rdbms: include client ID hash in ChronologyProtector cookies\n\nPreviously, if an internal service forwarded the cookies for a\nuser (e.g. for permissions) but not the User-Agent header or not\nthe IP address (e.g. XFF), ChronologyProtector could timeout\nwaiting for a matching writeIndex to appear for the wrong key.\n\nThe cookie now tethers the client to the key that holds the\nDB positions from their last state-changing request.\n\nBug: T194403\nBug: T190082\nChange-Id: I84f2cbea82532d911cdfed14644008894498813a\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "rdbms: include client ID hash in ChronologyProtector cookies",
"hash": "fb51330084b4bde1880c76589e55e7cd87ed0c6d",
"date": "2018-06-02T03:57:30"
},
{
"message": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector\n\nSince it takes time for the agent to get the response and set the\ncookie and, as well, the time into a request that a LoadBalancer is\ninitialized varies by many seconds (cookies loaded from the start),\ngive the cookie a much lower TTL than the DB positions in the stash.\n\nThis avoids having to wait for a position with a given cpPosIndex\nvalue, when the position already expired from the store, which is\na waste of time.\n\nAlso include the timestamp in \"cpPosIndex\" cookies to implement\nlogical expiration in case clients do not expire them correctly.\n\nBug: T194403\nBug: T190082\nChange-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector",
"hash": "52af356cad3799ebec3826e1e4743d76a114da3e",
"date": "2018-05-18T20:43:05"
}
]
},
"includes/libs/rdbms/ChronologyProtector.php": {
"File": "includes/libs/rdbms/ChronologyProtector.php",
"TicketCount": 3,
"CommitCount": 5,
"Tickets": [
"T187942",
"T194403",
"T190082"
],
"Commits": [
{
"message": "rdbms: fix value of ChronologyProtector::POSITION_COOKIE_TTL\n\nThis was supposed to be 10 (~LoadBalancer::MAX_WAIT_DEFAULT) but\nended up as 60 by mistake in 52af356cad (when the constant was added).\n\nBug: T194403\nChange-Id: Ie94949eebaafde2e0c4e2fcffabcb78866363a27\n",
"bugs": [
"T194403"
],
"subject": "rdbms: fix value of ChronologyProtector::POSITION_COOKIE_TTL",
"hash": "6cce704da1335078aa98838e7a7da9eeb4e46b67",
"date": "2018-07-11T12:40:17"
},
{
"message": "Add more logging to ChronologyProtector::initPositions()\n\nBug: T194403\nChange-Id: I8f1ccb3c6e257ae48ae6bbecd8f2a8f51cd2ed41\n",
"bugs": [
"T194403"
],
"subject": "Add more logging to ChronologyProtector::initPositions()",
"hash": "9130b0395c2c1a0e39172b358169f28c1c29d13a",
"date": "2018-06-08T22:14:05"
},
{
"message": "rdbms: include client ID hash in ChronologyProtector cookies\n\nPreviously, if an internal service forwarded the cookies for a\nuser (e.g. for permissions) but not the User-Agent header or not\nthe IP address (e.g. XFF), ChronologyProtector could timeout\nwaiting for a matching writeIndex to appear for the wrong key.\n\nThe cookie now tethers the client to the key that holds the\nDB positions from their last state-changing request.\n\nBug: T194403\nBug: T190082\nChange-Id: I84f2cbea82532d911cdfed14644008894498813a\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "rdbms: include client ID hash in ChronologyProtector cookies",
"hash": "fb51330084b4bde1880c76589e55e7cd87ed0c6d",
"date": "2018-06-02T03:57:30"
},
{
"message": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector\n\nSince it takes time for the agent to get the response and set the\ncookie and, as well, the time into a request that a LoadBalancer is\ninitialized varies by many seconds (cookies loaded from the start),\ngive the cookie a much lower TTL than the DB positions in the stash.\n\nThis avoids having to wait for a position with a given cpPosIndex\nvalue, when the position already expired from the store, which is\na waste of time.\n\nAlso include the timestamp in \"cpPosIndex\" cookies to implement\nlogical expiration in case clients do not expire them correctly.\n\nBug: T194403\nBug: T190082\nChange-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector",
"hash": "52af356cad3799ebec3826e1e4743d76a114da3e",
"date": "2018-05-18T20:43:05"
},
{
"message": "rdbms: make DBMasterPos implement Serializable\n\nChronologyProtector uses these classes to briefly store positions\nand everytime the fields change then errors can happen when old\nvalues are unserialized and used. Use a simple two-element map\nformat for serialized positions. The fields are recomputed back\nfrom the data map.\n\nValues from before this change will issue the warning\n\"Erroneous data format for unserializing\". To avoid that, bump\nthe ChronologyProtector key version. Future field changes will\nnot require this.\n\nThis change should be deployed on all wikis at once.\n\nBug: T187942\nChange-Id: I71bbbc9b9d4c7e02ac02f1d8750b70bda08d4db1\n",
"bugs": [
"T187942"
],
"subject": "rdbms: make DBMasterPos implement Serializable",
"hash": "26d87a26fee1b6e66e221c4452a9f1d23cc003b6",
"date": "2018-02-23T20:46:28"
}
]
},
"includes/watcheditem/WatchedItemStore.php": {
"File": "includes/watcheditem/WatchedItemStore.php",
"TicketCount": 5,
"CommitCount": 5,
"Tickets": [
"T204729",
"T208003",
"T207941",
"T226741",
"T243449"
],
"Commits": [
{
"message": "When clearing don't load the watchlist if we must clear through a job\n\nIt looks like this bug has existed in the special page since before the\ntime of the refactoring into WatchedItemStore although apparently\nit has only just surfaced now!\n\nThis adds a new method into WatchedItemStore that decides if the\nwatchlist can be cleared interactively or must use the job queue.\nThis can then be used in the special page instead of the old logic\nwhich would load the watchlist and then count the loaded items\n(inefficient if you know your clearing the list anyway)\n\nBug: T243449\nChange-Id: I810d89e3e1142a223430f7fc5f8598a493637a72\n",
"bugs": [
"T243449"
],
"subject": "When clearing don't load the watchlist if we must clear through a job",
"hash": "8d3f6b417f2baece555d365dc518ebaf9bcbd7bd",
"date": "2020-01-29T18:36:47"
},
{
"message": "WatchedItemStore: Fix fatal when revision is deleted\n\nPeople following links from their watchlist emails to since-deleted\nrevisions were getting fatals from\nWatchedItemStore::getNotificationTimestamp().\n\nIt's hard to tell whether returning null or false is the right thing to\ndo here, because this function can return both and the difference is not\ndocumented, and I wasn't able to find any callers that care about the\ndistinction.\n\nBug: T226741\nChange-Id: Ib2a836099343f4c161227266dbeeafbc76dccc2b\n",
"bugs": [
"T226741"
],
"subject": "WatchedItemStore: Fix fatal when revision is deleted",
"hash": "881e17a0c712076daacf06a65bce3b64fafec477",
"date": "2019-07-10T20:15:20"
},
{
"message": "WatchedItemStore: Use batching in setNotificationTimestampsForUser\n\nUpdate rows in batches, using the same logic as is used by\nremoveWatchBatchForUser().\n\nAlso remove the functionality for updating all rows, and move that to\nresetAllNotificationTimestampsForUser() instead. To that end, add a\ntimestamp parameter to that method and to the job it uses, and make\nsetNotificationTimestampsForUser() behave like a backwards-compatibility\nwrapper around resetAllNotificationTimestampsForUser() when no list of\ntitles is specified.\n\nBug: T207941\nChange-Id: I58342257395de6fcfb4c392b3945b12883ca1680\nFollows-Up: I2008ff89c95fe6f66a3fd789d2cef0e8fe52bd93\n",
"bugs": [
"T207941"
],
"subject": "WatchedItemStore: Use batching in setNotificationTimestampsForUser",
"hash": "1da7573bb77beff9e4466430b57551986e6be248",
"date": "2019-03-21T04:41:42"
},
{
"message": "Use commit-and-wait when processing more than updateRowsPerQuery\n\nThis is needed to avoid triggering \"does not have outer scope\" errors when\naddWatchBatchForUser() is invoked from WatchAction::doWatch() or\nWatchAction::doUnwatch() in the context of uploading a file, moving a page, or\nother instances when we don't have outer scope.\n\nBug: T208003\nChange-Id: Ice5cb8fced64883476daea5cdac36e47dfcccb61\n",
"bugs": [
"T208003"
],
"subject": "Use commit-and-wait when processing more than updateRowsPerQuery",
"hash": "ee63d2bf20ea797e26391a1f08c1220950bca991",
"date": "2018-11-05T20:49:33"
},
{
"message": "WatchedItemStore::countVisitingWatchersMultiple() shouldn't query all titles when asked for none\n\nIf a caller gives an empty array for $targetsWithVisitThresholds, per\nthe documentation it should be expecting an empty array in return, not a\ncount of watchers for every title in the database.\n\nBug: T204729\nChange-Id: I0f25fae301450d077bb30597281aaef0fba209d4\n",
"bugs": [
"T204729"
],
"subject": "WatchedItemStore::countVisitingWatchersMultiple() shouldn't query all titles when asked for none",
"hash": "f5469d36602cb2a95396830b14e9a631d698f3a6",
"date": "2018-09-18T16:01:32"
}
]
},
"includes/parser/CoreParserFunctions.php": {
"File": "includes/parser/CoreParserFunctions.php",
"TicketCount": 3,
"CommitCount": 5,
"Tickets": {
"0": "T251952",
"2": "T253725",
"3": "T237467"
},
"Commits": [
{
"message": "CoreParserFunctions: ensure formatNum is only called on numeric strings\n\nThe {{formatnum}} parser function can take anything, not just numeric\nstrings. We'd like to restrict Language::commafy() to operate only on\nnumeric strings, however (see T237467). Split the argument to the\n{{formatnum}} parser function so that we only invoke\nLanguage::commafy() on numeric strings. Add a tracking category so we\ncan (gradually) lint our content appropriately.\n\nBug: T237467\nChange-Id: Ib6c832df1f69aa4579402701fad1f77e548291ee\n",
"bugs": [
"T237467"
],
"subject": "CoreParserFunctions: ensure formatNum is only called on numeric strings",
"hash": "bfa4357d9184c978b4c7b674d4efa7da3b116959",
"date": "2020-09-15T20:23:41"
},
{
"message": "Fix impedance mismatch with Parser::fetchCurrentRevisionRecordOfTitle\n\nThis newly-added method returns `false` on error; the caller expects\nit to return `null`.\n\nBug: T253725\nFollowup-To: If36b35391f7833a1aded8b5a0de706d44187d423\nChange-Id: I6af7aeabbba9f95338497026fd08d9ae23f75c22\n",
"bugs": [
"T253725"
],
"subject": "Fix impedance mismatch with Parser::fetchCurrentRevisionRecordOfTitle",
"hash": "1113039771d85526c1763ab1e88e4f2670bfb4f1",
"date": "2020-05-27T16:10:27"
},
{
"message": "Partially revert \"Fix impedance mismatch with Parser::getRevisionRecordObject()\"\n\nReason for revert: issue arose again when deployed with wmf.34\n\nPartial revert: keep the intended fix in Parser.php, revert\nremoval of fail-safe logic in CoreParserFunctions.hp\n\nThis reverts commit 2712cb8330d8e9b4755d89b6909c5589ee6da8d3.\n\nBug: T253725\nChange-Id: I06266ca8bd29520b2c8f86c430d0f1e2d5dd20c0\n",
"bugs": [
"T253725"
],
"subject": "Partially revert \"Fix impedance mismatch with Parser::getRevisionRecordObject()\"",
"hash": "c45ccd7ca8e6f2f643c3c89cf8a2bd1e936e6dea",
"date": "2020-05-27T08:10:50"
},
{
"message": "Fix impedance mismatch with Parser::getRevisionRecordObject()\n\nParser::getRevisionRecordObject() returns `null` if the revision is\nmissing, but it invokes ParserOptions::getCurrentRevisionRecordCallback()\n(ie, Parser::statelessFetchRevisionRecord() by default) which returns\n`false` as its error condition.\n\nThis reverts commit ae74a29af3116eb73a4cb775736b3acee0a65c59, and instead\nfixes the bug at its root.\n\nBug: T251952\nChange-Id: If36b35391f7833a1aded8b5a0de706d44187d423\n",
"bugs": [
"T251952"
],
"subject": "Fix impedance mismatch with Parser::getRevisionRecordObject()",
"hash": "2712cb8330d8e9b4755d89b6909c5589ee6da8d3",
"date": "2020-05-06T16:44:05"
},
{
"message": "CoreParserFunctions::revisionuser - only call getUser on RevisionRecord\n\nBug: T251952\nChange-Id: Ib74f8546a955a41b119fce00ccaa9a0b635245ef\n",
"bugs": [
"T251952"
],
"subject": "CoreParserFunctions::revisionuser - only call getUser on RevisionRecord",
"hash": "ae74a29af3116eb73a4cb775736b3acee0a65c59",
"date": "2020-05-05T21:11:40"
}
]
},
"includes/deferred/LinksUpdate.php": {
"File": "includes/deferred/LinksUpdate.php",
"TicketCount": 6,
"CommitCount": 5,
"Tickets": [
"T191282",
"T193668",
"T206288",
"T218456",
"T206283",
"T250551"
],
"Commits": [
{
"message": "LinksUpdate: report title when no page ID found\n\nBug: T250551\nChange-Id: I97fac4cb4094195d79ad1569d73a77e5805573f3\n",
"bugs": [
"T250551"
],
"subject": "LinksUpdate: report title when no page ID found",
"hash": "7bf7022c54b26777ff2b6fe9e1020e887afd87fd",
"date": "2020-04-30T17:24:39"
},
{
"message": "Make LinksUpdate no longer extend EnqueueableDataUpdate\n\nLinksUpdate does not match RefreshLinksJob since the former is only a subset\nof the later. Also, DeferredUpdates::doUpdates() only runs in \"enqueue\" mode\nfor cases in MediaWiki::restInPeace() if there is no post-send support.\n\nIn a future commit, the deferred callback in which LinksUpdate runs\ncurrently, will be abstracted into its own deferred update, which\nwill then bring back EnqueueableDataUpdate for this update.\n\nBug: T206283\nChange-Id: I0680be445e8b8e8d0dba85df135b84640f4fcb81\n",
"bugs": [
"T206283"
],
"subject": "Make LinksUpdate no longer extend EnqueueableDataUpdate",
"hash": "29fcd02ef855d0d986f44a722d9b223e63532e0d",
"date": "2019-08-05T17:10:33"
},
{
"message": "Revert \"Split out new RefreshSecondaryDataUpdate class\"\n\nThis reverts commits a1f7fd3adaa3, 0ef02cd018901.\n\nBug: T218456\nChange-Id: I9bbea3d13460ed44755d77fc61ff23fb906cf71e\n",
"bugs": [
"T218456"
],
"subject": "Revert \"Split out new RefreshSecondaryDataUpdate class\"",
"hash": "6b7ddf9c9bfa7f651a50d462f2e4e918a9407dab",
"date": "2019-03-19T14:51:27"
},
{
"message": "Split out new RefreshSecondaryDataUpdate class\n\nMake DerivedPageDataUpdater bundle all the related DataUpdate tasks\non page change with a RefreshSecondaryDataUpdate wrapper. If one of\nthe DataUpdate tasks fails, then the entire bundle of updates can be\nre-run in the form of enqueueing a RefreshLinksJob instance (these\njobs are idempotent). If several of the bundled tasks fail, it is easy\nfor DeferredUpdates to know that only one RefreshLinksJob should be\nenqueued.\n\nThe goal is to make DataUpdate tasks more reliable and resilient.\nMost of these deferred update failures are due to ephemeral problems\nlike lock contention. Since the job queue is already able to reliably\nstore and retry jobs, and the time that a regular web request can spend\nin post-send is more limited, it makes the most sense to just enqueue\ntasks as jobs if they fail post-send.\n\nMake LinkUpdate no longer defined as enqueueable as RefreshLinksJob\nsince they are not very congruent (LinksUpdate only does some of the\nwork that RefreshLinksJob does). Only the wrapper, with the bundle of\nDataUpdate instances, is congruent to RefreshLinksJob.\n\nThis change does not itself implement the enqueue-on-failure logic\nin DeferredUpdates, but is merely a prerequisite.\n\nBug: T206288\nChange-Id: I191103c1aeff4c9fedbf524ee387dad9bdf5fab8\n",
"bugs": [
"T206288"
],
"subject": "Split out new RefreshSecondaryDataUpdate class",
"hash": "a1f7fd3adaa380b276aefa467ec90fce4c916ce6",
"date": "2019-03-15T17:14:50"
},
{
"message": "Use AutoCommitUpdate in LinksUpdate::doUpdate\n\nThe hook handlers are likely to write to secondary databases, in which\ncase it is better to wrap the callback in its own transaction round.\n\nThis lowers the chance of pending write warnings happening in\nrunMasterTransactionIdleCallbacks() as well as DBTransactionError\nexceptions in LBFactory due to recursion during commit.\n\nBug: T191282\nBug: T193668\nChange-Id: Ie207ca312888b6bb076f783d41f05b701f70a52e\n",
"bugs": [
"T191282",
"T193668"
],
"subject": "Use AutoCommitUpdate in LinksUpdate::doUpdate",
"hash": "8659c59dcfa7d7b1be83d6048ef477757f38047e",
"date": "2018-05-03T23:47:17"
}
]
},
"includes/Revision.php": {
"File": "includes/Revision.php",
"TicketCount": 7,
"CommitCount": 5,
"Tickets": [
"T184559",
"T183548",
"T184687",
"T184693",
"T184690",
"T184689",
"T205675"
],
"Commits": [
{
"message": "Avoid fatal when finding no base revision for a null revision.\n\nBug: T205675\nChange-Id: Iae67649a1be9597086033ad34d9d00556ba35730\n",
"bugs": [
"T205675"
],
"subject": "Avoid fatal when finding no base revision for a null revision.",
"hash": "539cb2816aecee668c41b418552d75cc6329e0ab",
"date": "2018-10-04T17:54:31"
},
{
"message": "Make Revision::__construct work with bad page ID\n\nFor backwards-copatibility, we need to be able to construct a Revision\nobject even for bad page IDs.\n\nBug: T184689\nChange-Id: I18c823d7b72504447982364d581b34e98924b67f\n",
"bugs": [
"T184689"
],
"subject": "Make Revision::__construct work with bad page ID",
"hash": "4589f4d78183fcfde2009e217ea8524102c95a31",
"date": "2018-01-11T16:03:48"
},
{
"message": "Handle failure to load content in Revision getSize, etc\n\nThe Revision class used to just return null if size or hsash were unknown\nand could nto be determined. This patch restores this behavior by\ncatching any RevisionAccessExceptions raised by RevisionRecord when\nfailing to load content.\n\nBug: T184693\nBug: T184690\nChange-Id: I393ea19b9fb48219583fc65ce81ea14d8d0a2357\n",
"bugs": [
"T184693",
"T184690"
],
"subject": "Handle failure to load content in Revision getSize, etc",
"hash": "04bac0dee1c919c2f5c63527c90412b0b8fac081",
"date": "2018-01-11T14:23:03"
},
{
"message": "Revision::newNullRevision don't pass null to RevisionStore\n\nRevisionStore::newNullRevision must be passed a Title object when\nbeing used, passing null will result in a fatal.\n\nTitle::newFromID can return null, so check and return null early if we\nhave no Title object.\n\nAlso use Title::GAID_FOR_UPDATE for a higher chance of getting a Title.\nPrior to the Revision overhaul newNullRevision would have always done a\nselect from master, it is documented as accepting $dbw and also passed\nFOR UPDATE as an option to selectRow.\n\nBug: T184687\nChange-Id: If1f99d091ab9cd37d514a4f4cbf3c28b64426cb7\n",
"bugs": [
"T184687"
],
"subject": "Revision::newNullRevision don't pass null to RevisionStore",
"hash": "54f0872c7f8549745e10930dc99eb83444c62c21",
"date": "2018-01-11T09:22:16"
},
{
"message": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"\n\nThis is a partial revert of a revert that reverted a fix believed to\nhave had its underlying issue fixed in:\nhttps://gerrit.wikimedia.org/r/#/c/400577/\n\nThe compat layer (Revision), now passes a Title object into the\nRevisionStore, and this title is used to construct the Record and\nalso any new Revision objects.\n\nBug: T184559\nBug: T183548\nChange-Id: Id073265c173f60aa8c456550fdb4bb5196013be8\n",
"bugs": [
"T184559",
"T183548"
],
"subject": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"",
"hash": "3e2fdb71ed8dab35934ce289d5e559153326028c",
"date": "2018-01-10T17:05:53"
}
]
},
"includes/libs/rdbms/loadbalancer/ILoadBalancer.php": {
"File": "includes/libs/rdbms/loadbalancer/ILoadBalancer.php",
"TicketCount": 5,
"CommitCount": 5,
"Tickets": [
"T192611",
"T193668",
"T194308",
"T226678",
"T226770"
],
"Commits": [
{
"message": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load\n\nAdd and use IDatabase::getServerConnection() method to avoid loops caused\ncaused by pickReaderIndex() calling getConnection() for the master server.\nThat lead to getReadOnlyReason() triggering pickReaderIndex() again.\n\nMake getLaggedReplicaMode() apply when the master has non-zero load and\nthe replicas are all lagged.\n\nRemove \"allReplicasDownMode\" in favor of checking getExistingReaderIndex()\ninstead. This reduces the amount of state to keep track of a bit.\n\nFollow-up to 95e2c990940f\n\nBug: T226678\nBug: T226770\nChange-Id: Id932c3fcc00625e3960f76d054d38d9679d25ecc\n",
"bugs": [
"T226678",
"T226770"
],
"subject": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load",
"hash": "79d1881eded1537e739c92d3576c48e34b352f88",
"date": "2019-07-09T19:26:46"
},
{
"message": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges\n\nJust like 082ed053b6 fixed pre-commit callback errors when new instances\nof LoadBalancer are made during that step, do the same for post-commit\ncallbacks.\n\nBug: T194308\nChange-Id: Ie79e0f22b3aced425cf067d0df6b67e368223e6c\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix callback stage errors in LBFactory::commitMasterChanges",
"hash": "86af2ef383b6fc9c4032dad769c00e672922d530",
"date": "2018-05-10T04:26:41"
},
{
"message": "rdbms: fix LBFactory::commitAll() round handling\n\nThis avoids \"Transaction round stage must be approved (not cursory)\".\n\nBug: T194308\nChange-Id: I9dbfe9cede02b1b1904c1d5e5a9802306c2492a2\n",
"bugs": [
"T194308"
],
"subject": "rdbms: fix LBFactory::commitAll() round handling",
"hash": "205cfc185446ad9dd355d3a57f4ee60d0dc1de57",
"date": "2018-05-09T21:51:18"
},
{
"message": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges\n\nIf a pre-commit callback caused a new LoadBalancer object to be created,\nthat object will be in the \"cursory\" state rather than the \"finalized\"\nstate. If any callbacks run on an LB instance, make LBFactory iterate\nover them all again to finalize these new instances.\n\nMake LoadBalancer::finializeMasterChanges allow calls to\nalready-finalized instances for simplicity.\n\nBug: T193668\nChange-Id: I4493e9571625a350c0a102219081ce090967a4ac\n",
"bugs": [
"T193668"
],
"subject": "rdbms: fix finalization stage errors in LBFactory::commitMasterChanges",
"hash": "082ed053b699cfd52555f3432a1b4a823a259236",
"date": "2018-05-07T18:04:43"
},
{
"message": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time\n\nOnce getMain() was called in setSchemaAliases(), the ChronologyProtector\nwas initialized and the setRequestInfo() call in Setup.php had no effect.\nOnly the request values read in LBFactory::__construct() were used, which\nreflect $_GET but not cookie values.\n\nUse the $wgDBtype variable to avoid this and add an exception when that\nsort of thing happens.\n\nFurther defer instantiation of ChronologyProtector so that methods like\nILBFactory::getMainLB() do not trigger construction.\n\nBug: T192611\nChange-Id: I735d3ade5cd12a5d609f4dae19ac88fec4b18b51\n",
"bugs": [
"T192611"
],
"subject": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time",
"hash": "628a3a9b267620914701a2a0a17bad8ab2e56498",
"date": "2018-04-23T15:44:02"
}
]
},
"includes/api/ApiBase.php": {
"File": "includes/api/ApiBase.php",
"TicketCount": 5,
"CommitCount": 5,
"Tickets": [
"T195777",
"T211769",
"T217382",
"T231582",
"T245280"
],
"Commits": [
{
"message": "Don't pass 'ip' through to logging\n\nBug: T245280\nChange-Id: Ib89be9f799e79f46dab661a387822cab43703f61\n",
"bugs": [
"T245280"
],
"subject": "Don't pass 'ip' through to logging",
"hash": "511f9b5d95ebfb3e0f073ebc9a137b564bfd28ff",
"date": "2020-02-14T17:46:10"
},
{
"message": "ApiBase: Always validate that 'limit' is numeric\n\nBug: T231582\nChange-Id: I956d4d623bfeace1b542039283e04a970fd40121\n",
"bugs": [
"T231582"
],
"subject": "ApiBase: Always validate that 'limit' is numeric",
"hash": "a9199bf8548a228584f69ca32724ee296c931d92",
"date": "2019-09-03T22:27:12"
},
{
"message": "API: Handle Messages in errorArrayToStatus()\n\nTwo bugs here:\n* If the error array contains an entry using a Message object instead of\n a string as the key, it'll blow up trying to do\n `self::$blockMsgMap[$error[0]]`.\n* If the error array contains a Message object not wrapped in an array,\n it'll blow up trying to do `...(array)$error`.\n\nBug: T217382\nChange-Id: I2a08e02bca0fb194416b3f2e6a1d6192d5c13cb2\n",
"bugs": [
"T217382"
],
"subject": "API: Handle Messages in errorArrayToStatus()",
"hash": "426df4cd70face4fb9596ddd0c11ef1b83ca9d13",
"date": "2019-03-01T14:53:01"
},
{
"message": "ApiBase: Pass empty string, not null, for $conds to ->select\n\nBug: T211769\nChange-Id: I4cf224c19b340fac5fc895bbee7507d77fd3fdfa\n",
"bugs": [
"T211769"
],
"subject": "ApiBase: Pass empty string, not null, for $conds to ->select",
"hash": "8e965a9d276505edf58065810fe9a59ba49ddc32",
"date": "2018-12-12T14:35:18"
},
{
"message": "API: ApiBase::getParameter() shouldn't throw on other params' errors\n\nThis regression was introduced in Ia19a1617b7.\n\nBug: T195777\nChange-Id: I1e1eb3861ced83f79e56d2325ab693ef4e393999\n",
"bugs": [
"T195777"
],
"subject": "API: ApiBase::getParameter() shouldn't throw on other params' errors",
"hash": "24be43b9aaffa5b723f5074bef10485cf34c3788",
"date": "2018-05-28T22:45:24"
}
]
},
"includes/api/ApiStashEdit.php": {
"File": "includes/api/ApiStashEdit.php",
"TicketCount": 5,
"CommitCount": 5,
"Tickets": [
"T204742",
"T203786",
"T221689",
"T245928",
"T255700"
],
"Commits": [
{
"message": "API: Handle ContentHandler exception for content model mismatch\n\nEnsure the content we are trying to save and the base content have\nidentical content models before proceeding to save so as to forestall\nException that may be thrown by ContentHandler if it founds they're not.\n\nThere are two cases where the models are allowed to differ: Edit that\nundoes content model change or edit that's meant to explicitly change\nthe model. The logic for these is handled separately and may succeed\nor fail, but exception will not be thrown.\n\nBug: T255700\nChange-Id: I8782732bb0fc3059693cd7035b7ebb43fd71d333\n",
"bugs": [
"T255700"
],
"subject": "API: Handle ContentHandler exception for content model mismatch",
"hash": "7af5678847cc64c0030a717121775ea041a28819",
"date": "2020-09-04T02:41:58"
},
{
"message": "stashedit: Ensure that $summary is a string\n\nIt's only documented as string.\n\nBug: T245928\nChange-Id: I8fa287f335e90a59ac18365e7401a5cf703130a3\n",
"bugs": [
"T245928"
],
"subject": "stashedit: Ensure that $summary is a string",
"hash": "60dc4b41f833ae036c1d34e9399a4b1b1b1ca33a",
"date": "2020-02-22T19:55:28"
},
{
"message": "Keep ERROR_* constants in ApiStashEdit for backwards compatibility\n\nFollow-up to 285930668495b\n\nBug: T221689\nChange-Id: Ibe275c69d5b47fd36efac4a91b2334970dd02fe8\n",
"bugs": [
"T221689"
],
"subject": "Keep ERROR_* constants in ApiStashEdit for backwards compatibility",
"hash": "56443daf78e524391e5515caae2b294ae224e68e",
"date": "2019-04-23T19:43:38"
},
{
"message": "Prune old edit stash entries from users as they create more\n\nThis should reduce pressure on certain medium-large sized memcached\nslabs. Pre-1.5 memcached versions have a harder time pruning expired\nentries in time to avoid evictions, so it will be most useful that\nscenario.\n\nBug: T203786\nChange-Id: Ic357dbfcd9abd525b02e8d631d1344db3745d24c\n",
"bugs": [
"T203786"
],
"subject": "Prune old edit stash entries from users as they create more",
"hash": "c3348833097788dd8600e7e8cf826b217245633c",
"date": "2019-03-26T18:24:56"
},
{
"message": "Make ApiStashEdit use a separate key for the parser output due to size\n\nBug: T204742\nChange-Id: Ibab189c8e0dee5e840770bdb0336516fdfc75e4b\n",
"bugs": [
"T204742"
],
"subject": "Make ApiStashEdit use a separate key for the parser output due to size",
"hash": "0dc015c87bfb521ae66768b09458d92fd12d9e64",
"date": "2019-03-06T17:11:07"
}
]
},
"includes/specials/SpecialBlock.php": {
"File": "includes/specials/SpecialBlock.php",
"TicketCount": 3,
"CommitCount": 4,
"Tickets": {
"0": "T259212",
"2": "T263642",
"3": "T265552"
},
"Commits": [
{
"message": "Revert \"Hard deprecate SpecialBlock::canBlockEmail\"\n\nThis reverts commit d1784d139a29f294e6b396b59d8b9c37b7e71eb0.\n\nReason for revert: per T265552\n\nBug: T265552\nChange-Id: I2c25c81d2e7820d91175373a49bb1fe52228321e\n",
"bugs": [
"T265552"
],
"subject": "Revert \"Hard deprecate SpecialBlock::canBlockEmail\"",
"hash": "a4a706524951d26c6b095bcb79736367fe7f9f8e",
"date": "2020-10-15T12:06:29"
},
{
"message": "SpecialUnblock: Allow getTargetAndType to accept null $par\n\nAlso add documentation and type hinting for\nSpecialBlock::getTargetAndType\n\nBug: T263642\nChange-Id: If3e78537cd813d8aabbc6ba53c4d0be949ede9a1\n",
"bugs": [
"T263642"
],
"subject": "SpecialUnblock: Allow getTargetAndType to accept null $par",
"hash": "7432a7c7ad8e1ff077e9e20f19279057f139c40d",
"date": "2020-09-23T14:00:34"
},
{
"message": "SpecialBlock: Make error more generic if block not inserted/found\n\nIf a block cannot be inserted, but no existing block can be found\nagainst the target, display a generic error explaining this, and\nasking the user to report the problem. Also log a warning.\n\nFollow up to I1737e3a69748ebaa743e87b185ba1e3b92afec8c, which\nassumed the error was caused by the databases being out of sync.\nWe have since seen other causes.\n\nBug: T259212\nChange-Id: If0ddbc2de3855c3c7e6c9d78875cbc47a81fddc5\n",
"bugs": [
"T259212"
],
"subject": "SpecialBlock: Make error more generic if block not inserted/found",
"hash": "1c7b878c99cc9d55d30a2244e2a424923b3ffb42",
"date": "2020-08-20T10:43:34"
},
{
"message": "SpecialBlock: Show error if a block could not be inserted or found\n\nIf a block cannot be inserted, it is assumed that there is an\nexisting block against the target. That block is then retrieved\nfrom a replica database using DatabaseBlock::newFromTarget. We are\nseeing errors as a result of assuming that newFromTarget always\nreturns a block object in this situation.\n\nSometimes a block could not be inserted because there is an existing\nblock, but the existing block could not be retrieved. This could\nhappen if:\n* there is an existing block but it is not in the replicas yet\n* an existing block was removed after the insert attempt, but before\n the retrieve attempt\n\nCheck whether DatabaseBlock::newFromTarget returns a block object,\nand display a message explaining the situation if not.\n\nBug: T259212\nChange-Id: I1737e3a69748ebaa743e87b185ba1e3b92afec8c\n",
"bugs": [
"T259212"
],
"subject": "SpecialBlock: Show error if a block could not be inserted or found",
"hash": "d17af1729063be27f0d1a69cce23e42416903e51",
"date": "2020-08-11T07:19:45"
}
]
},
"includes/api/ApiUpload.php": {
"File": "includes/api/ApiUpload.php",
"TicketCount": 3,
"CommitCount": 4,
"Tickets": [
"T208539",
"T223448",
"T228749"
],
"Commits": [
{
"message": "Validate name for async uploads\n\nIf we don't validate the name, we risk running into PHP\nnotices if the name turns out to be invalid: later code\nwill check the title for warnings (e.g. duplicate), but\nthat title will not exist (though it is assumed to be)\n\nIf we'd also validate the file, we'd end up running into\nerrors where the file being verified doesn't yet exist\n(because it's being assembled in a background job)\n\nBug: T208539\nChange-Id: I8e35fe54e8a6356b6670c2a5c88c3e97b0a3ba00\n",
"bugs": [
"T208539"
],
"subject": "Validate name for async uploads",
"hash": "82458c0ce0f5421abe5907957b954848ee3463ee",
"date": "2020-07-07T16:06:12"
},
{
"message": "Don't try to store File objects to the upload session\n\nFile objects can contain closures which can't be serialized.\n\nInstead, add makeWarningsSerializable(), which converts the warnings\nto a serializable array. Make ApiUpload::transformWarnings() act on this\nserializable array instead. For consistency, ApiUpload::getApiWarnings()\nalso needs to convert the result of checkWarnings() before transforming\nit.\n\nBug: T228749\nChange-Id: I8236aaf3683f93a03a5505803f4638e022cf6d85\n",
"bugs": [
"T228749"
],
"subject": "Don't try to store File objects to the upload session",
"hash": "51e837f68f6df7fdc6cb35803e497bfc0532c861",
"date": "2019-07-26T06:15:30"
},
{
"message": "Revert \"Always validate uploads over api\"\n\nThe verification is broken with chunken uploads and ultimately\ncause large files to no more be uploadable.\n\nThis reverts commit 38ec6d8a344d4eda0307dd3a72653dd2171305d6.\n\nBug: T223448\nChange-Id: If414a8f751a3e1488a2ab099abd8b598c973c1f4\n",
"bugs": [
"T223448"
],
"subject": "Revert \"Always validate uploads over api\"",
"hash": "723db87d9d1f9000d04146d6db1ccc92b4677af8",
"date": "2019-05-17T14:42:44"
},
{
"message": "Always validate uploads over api\n\nfilesize and title are validated in UploadBase::verifyUpload with more\naccurate error message\n\nUsing stashed async with a long title can cause null errors later on\n\nBug: T208539\nChange-Id: I545435e2baa222ae1544673011c5527874d1d2cb\n",
"bugs": [
"T208539"
],
"subject": "Always validate uploads over api",
"hash": "38ec6d8a344d4eda0307dd3a72653dd2171305d6",
"date": "2019-05-09T19:20:10"
}
]
},
"includes/api/ApiEditPage.php": {
"File": "includes/api/ApiEditPage.php",
"TicketCount": 3,
"CommitCount": 4,
"Tickets": [
"T255700",
"T261030",
"T264200"
],
"Commits": [
{
"message": "Revert \"Revert \"ApiEditPage: Show existing watchlist expiry if status\nis not being changed.\"\"\n\nThis reverts commit 149e99f07230d041945871ddb6e0647ccc83dc21.\n\nIt's not necessary to change the constructor now, the module is already\nusing service locator to fetch RevisionLookup and ContentHandlerFactory.\n\nThe WatchedItemStore can also be gotten from there, voiding the need for\naltering the constructor now. As Daniel said in T259960#6380471 dependency\ninjection for API modules is good but not urgent.\n\nBug: T261030\nBug: T264200\nChange-Id: I16aa942cc800cd66a2cd538680a02b10cb0b1bfe\n",
"bugs": [
"T261030",
"T264200"
],
"subject": "Revert \"Revert \"ApiEditPage: Show existing watchlist expiry if status\nis not being changed.\"\"",
"hash": "30b947ad5f9ab9249b800f22873fe07a25ca147c",
"date": "2020-09-30T19:28:47"
},
{
"message": "ApiEditPage: Document that it is extended\n\nTo avoid future issues like T264200\n\nBug: T264200\nChange-Id: I0eafbad96be5037fb7795559fe6a62e69d54f0c5\n",
"bugs": [
"T264200"
],
"subject": "ApiEditPage: Document that it is extended",
"hash": "bd7ecc3b06b63f1afa6655ff16f39f6089ac6143",
"date": "2020-09-30T16:47:11"
},
{
"message": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"\n\nThis reverts commit 07e547f47cae761489a33e9ebb8a9b108298f34e.\n\nReason for revert: LiquidThreads extends the ApiEditPage class,\neven though it shouldn't, and thus fails when the dependencies\nare not injected.\n\nBug: T261030\nBug: T264200\nChange-Id: Ib14f8a04bb6c723aa502a47ef9ccde6fe96a0ac7\n",
"bugs": [
"T261030",
"T264200"
],
"subject": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"",
"hash": "149e99f07230d041945871ddb6e0647ccc83dc21",
"date": "2020-09-30T15:29:59"
},
{
"message": "API: Handle ContentHandler exception for content model mismatch\n\nEnsure the content we are trying to save and the base content have\nidentical content models before proceeding to save so as to forestall\nException that may be thrown by ContentHandler if it founds they're not.\n\nThere are two cases where the models are allowed to differ: Edit that\nundoes content model change or edit that's meant to explicitly change\nthe model. The logic for these is handled separately and may succeed\nor fail, but exception will not be thrown.\n\nBug: T255700\nChange-Id: I8782732bb0fc3059693cd7035b7ebb43fd71d333\n",
"bugs": [
"T255700"
],
"subject": "API: Handle ContentHandler exception for content model mismatch",
"hash": "7af5678847cc64c0030a717121775ea041a28819",
"date": "2020-09-04T02:41:58"
}
]
},
"includes/cache/MessageCache.php": {
"File": "includes/cache/MessageCache.php",
"TicketCount": 3,
"CommitCount": 4,
"Tickets": [
"T201893",
"T208897",
"T218918"
],
"Commits": [
{
"message": "MessageCache: Restore 'loadedLanguages' tracking for load()\n\nThis was removed in 97e86d934b3 in 2018 in favour of using\n`$this->cache->has($code)`. This is a problem because there\nare cases where only a narrow subset of that structure is\npopulated (by MessageCache->replace) without things like\n$this->overridable (or anything else that MessageCache->load does)\nhaving ocurred yet.\n\nThe assumption that keys are only added to $this->cache by\nMessageCache->load (or after that method has been called) was\nactually true at some point. But, this changed in 2017 when\ncommit c962b480568e optimised MessageCache->replace to not call\nMessageCache->load.\n\nBug: T208897\nChange-Id: Ie8bb4a4793675e5f1454e65c427f3100035c8b4d\n",
"bugs": [
"T208897"
],
"subject": "MessageCache: Restore 'loadedLanguages' tracking for load()",
"hash": "a5c984cc5978f869516e8a0c4892e8113d1c139f",
"date": "2019-07-29T18:22:10"
},
{
"message": "Only load latest revision in MessageCache::loadFromDB\n\nIn Id044b8dcd7c, we lost a condition that ensured that the cache would\nbe populated with the latest revision. Now it was being populated with\nall revisions, with a random one winning.\n\nBug: T218918\nChange-Id: I1a47356ea35f0abf35bb1a3489d0d3442a3400a5\n",
"bugs": [
"T218918"
],
"subject": "Only load latest revision in MessageCache::loadFromDB",
"hash": "f8dc579261dcea5744cb909785821b8996c8d312",
"date": "2019-03-22T14:44:14"
},
{
"message": "Log error when array_flip fails in MessageCache load\n\nBug: T208897\nChange-Id: If6e7a6a3019abbdc11b6604ec706cc88bfddf128\n",
"bugs": [
"T208897"
],
"subject": "Log error when array_flip fails in MessageCache load",
"hash": "de3fc8f765d9a8e891492615a76cc3223499db89",
"date": "2018-11-19T10:49:23"
},
{
"message": "Avoid MapCacheLRU error when MessageCache fails to load\n\nBug: T201893\nChange-Id: I6093113a3ffa8092dea3351a6ed6c815c7ff7162\n",
"bugs": [
"T201893"
],
"subject": "Avoid MapCacheLRU error when MessageCache fails to load",
"hash": "65ad02955c1766e594483cb7374447b7820cd9d0",
"date": "2018-08-14T06:03:27"
}
]
},
"includes/skins/Skin.php": {
"File": "includes/skins/Skin.php",
"TicketCount": 4,
"CommitCount": 4,
"Tickets": [
"T214735",
"T227817",
"T258001",
"T257996"
],
"Commits": [
{
"message": "Deprecate Skin::mainPageLink\n\nBug: T257996\nDepends-On: I892b3180949383e47f0cf0f2826c82fe9f9756ac\nChange-Id: If3a68cc4e41a56ad94abce8db409830ed7ee3afc\n",
"bugs": [
"T257996"
],
"subject": "Deprecate Skin::mainPageLink",
"hash": "dcc88f70f414afbdb52c901960aacc112f18770d",
"date": "2020-09-09T17:25:50"
},
{
"message": "Deprecate Skin methods that are aliases for footerLink\n\nThese methods are deprecated.\n* Skin::aboutLink()\n* Skin::disclaimerLink()\n* Skin::privacyLink()\n\nTheir usages can be replaced with Skin::footerLink() by providing relevant keys.\n\nBug: T258001\nChange-Id: Ib0b0a3be52f09e43e70a333672a1429b0ad926b4\n",
"bugs": [
"T258001"
],
"subject": "Deprecate Skin methods that are aliases for footerLink",
"hash": "85aaf46160a4147e65588ff0be46ae00213b0ac8",
"date": "2020-09-03T19:20:48"
},
{
"message": "Fix exception when viewing special pages with relative related titles\n\nBug: T227817\nChange-Id: I17e4acae81792c6d13c706741ec2a953300ac004\n",
"bugs": [
"T227817"
],
"subject": "Fix exception when viewing special pages with relative related titles",
"hash": "e4bc582217224c848923d6a8c1e3786ef08b535f",
"date": "2019-07-26T01:44:28"
},
{
"message": "Avoid making master connection from Skin::getUndeleteLink\n\nDo this by calling Title:quickUserCan instead of Title::userCan\nfrom Skin::getUndeleteLink.\n\nBug: T214735\nChange-Id: I24dfd86275638e52012a5647ab3e5c848af840c2\n",
"bugs": [
"T214735"
],
"subject": "Avoid making master connection from Skin::getUndeleteLink",
"hash": "7fc8b36288adcd140da212cbe0cf443b98c2cd16",
"date": "2019-01-30T22:51:27"
}
]
},
"includes/deferred/RefreshSecondaryDataUpdate.php": {
"File": "includes/deferred/RefreshSecondaryDataUpdate.php",
"TicketCount": 4,
"CommitCount": 4,
"Tickets": [
"T206288",
"T218456",
"T206283",
"T248003"
],
"Commits": [
{
"message": "RefreshSecondaryDataUpdate: Commit before running sub-updates\n\nThe call to `$this->updater->getSecondaryDataUpdates()` probably started\nan implicit transaction, and a sub-update may have tried to register a\ncallback to watch for a rollback which would make that implicit\ntransaction non-empty.\n\nBug: T248003\nChange-Id: Ib9db8ec9c43c8b2871f283733ed6a05d2dec6dd1\n",
"bugs": [
"T248003"
],
"subject": "RefreshSecondaryDataUpdate: Commit before running sub-updates",
"hash": "b8858d6d5bf43a785b39d0ba1eb8f177cb7d7444",
"date": "2020-04-09T15:17:00"
},
{
"message": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater\n\nThis class implements EnqueueableDataUpdate and can be pushed as a\njob if it fails to run via DeferredUpdates.\n\nUnlike a1f7fd3adaa3, make RefreshSecondaryDataUpdate skip failing\nupdates in doUpdate(). Instead of throwing the first exception from\nany update, log any exceptions that occur and try all the other\nupdates. The first error will be re-thrown afterwards.\n\nAlso, make sure that each DataUpdate still has outer transaction\nscope. This property is documented at mediawiki.org and should not\nbe changed.\n\nAdd integration tests for RefreshSecondaryDataUpdateTest.\n\nBug: T218456\nBug: T206283\nChange-Id: I7c6554a4d4cd76dfe7cd2967afe30b3aa1069fcb\n",
"bugs": [
"T218456",
"T206283"
],
"subject": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater",
"hash": "1f4efc6c34aa363c9f5c4d8fd860a39faac4ae2d",
"date": "2020-03-11T07:42:48"
},
{
"message": "Revert \"Split out new RefreshSecondaryDataUpdate class\"\n\nThis reverts commits a1f7fd3adaa3, 0ef02cd018901.\n\nBug: T218456\nChange-Id: I9bbea3d13460ed44755d77fc61ff23fb906cf71e\n",
"bugs": [
"T218456"
],
"subject": "Revert \"Split out new RefreshSecondaryDataUpdate class\"",
"hash": "6b7ddf9c9bfa7f651a50d462f2e4e918a9407dab",
"date": "2019-03-19T14:51:27"
},
{
"message": "Split out new RefreshSecondaryDataUpdate class\n\nMake DerivedPageDataUpdater bundle all the related DataUpdate tasks\non page change with a RefreshSecondaryDataUpdate wrapper. If one of\nthe DataUpdate tasks fails, then the entire bundle of updates can be\nre-run in the form of enqueueing a RefreshLinksJob instance (these\njobs are idempotent). If several of the bundled tasks fail, it is easy\nfor DeferredUpdates to know that only one RefreshLinksJob should be\nenqueued.\n\nThe goal is to make DataUpdate tasks more reliable and resilient.\nMost of these deferred update failures are due to ephemeral problems\nlike lock contention. Since the job queue is already able to reliably\nstore and retry jobs, and the time that a regular web request can spend\nin post-send is more limited, it makes the most sense to just enqueue\ntasks as jobs if they fail post-send.\n\nMake LinkUpdate no longer defined as enqueueable as RefreshLinksJob\nsince they are not very congruent (LinksUpdate only does some of the\nwork that RefreshLinksJob does). Only the wrapper, with the bundle of\nDataUpdate instances, is congruent to RefreshLinksJob.\n\nThis change does not itself implement the enqueue-on-failure logic\nin DeferredUpdates, but is merely a prerequisite.\n\nBug: T206288\nChange-Id: I191103c1aeff4c9fedbf524ee387dad9bdf5fab8\n",
"bugs": [
"T206288"
],
"subject": "Split out new RefreshSecondaryDataUpdate class",
"hash": "a1f7fd3adaa380b276aefa467ec90fce4c916ce6",
"date": "2019-03-15T17:14:50"
}
]
},
"includes/specials/pagers/ContribsPager.php": {
"File": "includes/specials/pagers/ContribsPager.php",
"TicketCount": 4,
"CommitCount": 4,
"Tickets": [
"T199066",
"T221380",
"T231540",
"T220447"
],
"Commits": [
{
"message": "ContribsPage: Re-remove the getContribs() method\n\nDependencies remove the use from Flow and ArticleFeedback.\n\nThis reverts commit e6a8e5268d8b70867e58b1c827d42fec56bb315f.\n\nDepends-On: If77a646344b3ee89505bb17be7571f63cff16a5a\nDepends-On: I3b2fa1c65cfc32e8ebc21166d32f174557694d88\nBug: T220447\nBug: T231540\nChange-Id: I3f87c0310f2f2de674d8c2fa017642bcc69fd834\n",
"bugs": [
"T220447",
"T231540"
],
"subject": "ContribsPage: Re-remove the getContribs() method",
"hash": "da64cb8fc70564d7f447aaf792ff11974c315d51",
"date": "2019-08-30T21:12:54"
},
{
"message": "ContribsPage: bring back getContribs() method\n\nFollows-up 73664393f8da2ec.\n\nApparently the method still has callers. Hot fix (UBN).\n\nBug: T231540\nChange-Id: I09ba81fc7ac4afe4c5cc54c3a589a54e31e9c419\n",
"bugs": [
"T231540"
],
"subject": "ContribsPage: bring back getContribs() method",
"hash": "e6a8e5268d8b70867e58b1c827d42fec56bb315f",
"date": "2019-08-29T19:16:41"
},
{
"message": "ContribsPager: Fix slow queries\n\nWhen ContribsPager is using an auxiliary table like ip_changes or\nrevision_actor_temp for the main action of the query, we already had\ncode in place to let it use the auxiliary table's denormalized timestamp\nfield for the ordering. What we didn't have was code to let it also use\nthe auxiliary table's denormalized timestamp field for *continuation*.\n\nWith the schema defined in tables.sql, the simplest thing to do would be\nto be to add a redundant JOIN condition between rev_timestamp and the\ndenormalized timestamp field which would be enough to allow\nMySQL/MariaDB to propagate the continuation conditional on rev_timestamp\nto the denormalized timestamp field.\n\nUnfortunately many Wikimedia wikis have rev_timestamp defined\ndifferently from table.sql (P8433), and that difference is enough to\nbreak that propagation. So we need to take a more difficult route,\nrestructuring the code tell IndexPager to explicitly use the\ndenormalized fields for ordering and continuation.\n\nOn the plus side, since we're doing that anyway we can get rid of the\ncode mentioned in the first paragraph.\n\nBug: T221380\nChange-Id: Iad6c0c2f1ac5e1c610de15fe6e85a637c287bcd8\n",
"bugs": [
"T221380"
],
"subject": "ContribsPager: Fix slow queries",
"hash": "c1db9d74430a8038766a62d32e06135aceb05c21",
"date": "2019-05-01T01:07:32"
},
{
"message": "ContribsPager: Factor revision check out of formatRow\n\nThis is needed by MobileFrontend. Also helps with readability\nand understanding what is going on inside the larger formatRow\nmethod.\n\nBug: T199066\nChange-Id: I679f4bf4305ca5b0fd523e844a01f06b4bd38b5c\n",
"bugs": [
"T199066"
],
"subject": "ContribsPager: Factor revision check out of formatRow",
"hash": "19c3472b909fb7dfc14195465a9ec1838a20d079",
"date": "2018-09-11T22:32:40"
}
]
},
"includes/GlobalFunctions.php": {
"File": "includes/GlobalFunctions.php",
"TicketCount": 4,
"CommitCount": 4,
"Tickets": [
"T190960",
"T208544",
"T206283",
"T235357"
],
"Commits": [
{
"message": "Stop using SCRIPT_NAME where possible, rely on statically configured routing\n\nIt has become apparent that $_SERVER['SCRIPT_NAME'] may contain the same\nthing as REQUEST_URI, for example in WMF production. PATH_INFO is not\nset, so there is no way to split the URL into SCRIPT_NAME and PATH_INFO\ncomponents apart from configuration.\n\n* Revert the fix for T34486, which added a route for SCRIPT_NAME to the\n PathRouter for the benefit of img_auth.php. In T235357, the route thus\n added contained $1, breaking everything.\n* Remove calls to WebRequest::getPathInfo() from everywhere other than\n index.php. Dynamic modification of $wgArticlePath in order to make\n PathRouter work was weird and broken anyway. All that is really needed\n is a suffix of REQUEST_URI, so I added a function which provides that.\n* Add $wgImgAuthPath, for use as a last resort workaround for T34486.\n* Avoid the use of $_SERVER['SCRIPT_NAME'] to detect the currently\n running script.\n* Deprecated wfGetScriptUrl(), a fairly simple wrapper for SCRIPT_NAME.\n Apparently no callers in core or extensions.\n\nBug: T235357\nChange-Id: If2b82759f3f4aecec79d6e2d88cd4330927fdeca\n",
"bugs": [
"T235357"
],
"subject": "Stop using SCRIPT_NAME where possible, rely on statically configured routing",
"hash": "507501d6ee29eb1b8df443192971fe2b6a6addb6",
"date": "2020-04-01T16:33:38"
},
{
"message": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown\n\nSet appropriate headers and flush the output as needed to avoid blocking\nthe client on post-send updates for the stock apache2 server scenario.\nSeveral cases have bits of header logic to avoid delay:\n\na) basic GET/POST requests that succeed (e.g. HTTP 2XX)\nb) requests that fail with errors (e.g. HTTP 500)\nc) If-Modified-Since requests (e.g. HTTP 304)\nd) HEAD requests\n\nThis last two still block on deferred updates, so schedulePostSendJobs()\ndoes not trigger on them as a form of mitigation. Slow deferred updates\nshould only trigger on POST anyway (inline and redirect responses are\nOK), so this should not be much of a problem.\n\nDeprecate triggerJobs() and implement post-send job runs as a deferred.\nThis makes it easy to check for the existence of post-send updates by\ncalling DeferredUpdates::pendingUpdatesCount() after the pre-send stage.\nAlso, avoid running jobs on requests that had exceptions. Relatedly,\nremove $mode option from restInPeace() and doPostOutputShutdown()\nOnly one caller was using the non-default options.\n\nBug: T206283\nChange-Id: I2dd2b71f1ced0f4ef8b16ff41ffb23bb5b4c7028\n",
"bugs": [
"T206283"
],
"subject": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown",
"hash": "4f11b614544be8cb6198fbbef36e90206ed311bf",
"date": "2019-09-30T22:59:59"
},
{
"message": "Prevent PHP notice on SpecialDeletedContributions\n\nBug: T208544\nChange-Id: Ie8d5c3d7257134857713853eec8e0eb42890366a\n",
"bugs": [
"T208544"
],
"subject": "Prevent PHP notice on SpecialDeletedContributions",
"hash": "2c8db66fa496c6c9935873d604d99170201a8ab8",
"date": "2019-02-15T18:45:09"
},
{
"message": "Normalize and lower the default DB lag wait timeout\n\nBug: T190960\nChange-Id: I49aca118583b20314e6bf82f196f3413571f5bd9\n",
"bugs": [
"T190960"
],
"subject": "Normalize and lower the default DB lag wait timeout",
"hash": "7f24eb5d789966ec7c18c0a612fc9089229d4279",
"date": "2018-03-28T20:49:25"
}
]
},
"includes/filerepo/file/File.php": {
"File": "includes/filerepo/file/File.php",
"TicketCount": 3,
"CommitCount": 4,
"Tickets": [
"T198279",
"T221812",
"T263014"
],
"Commits": [
{
"message": "Hard deprecate File::userCan() with $user=null\n\nThe ArchivedFile::userCan and OldLocalFile::userCan() methods, along\nwith a number of other methods where the user parameter was optional,\nwere deprecated in 1.35, but this case was overlooked. This patch is\nintended for backport to 1.35, so that the $user parameter can be\nremoved in 1.36 in accordance with the deprecation policy.\n\nThis path is known to be used by LocalRepo::findFile(),\nFileRepo::findFile(), and FileRepo::findFileFromKey(), so hacky\nworkarounds have been added in this patch to avoid triggering\ndeprecation warnings in 1.35. T263033 has been filed to fix these\n'correctly' in 1.36.\n\nBug: T263014\nChange-Id: I17cab8ce043a5965aeab089392068b91c686025e\n",
"bugs": [
"T263014"
],
"subject": "Hard deprecate File::userCan() with $user=null",
"hash": "5e703cdf669f85e2924dee76c5a8831633259ce0",
"date": "2020-09-16T16:27:51"
},
{
"message": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"\n\nThis reverts commit 264d043d0400d4a0353df03eb5b431dcf25ad0ae.\n\nReason for revert: Parsoid is still using this.\n\nBug: T263014\nChange-Id: I9d6b65b319a45bbdbd479eda0d0580296ceb7f62\n",
"bugs": [
"T263014"
],
"subject": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"",
"hash": "a66e7a6b0c7c1cc46af9c938355e04d955d0d29a",
"date": "2020-09-16T10:47:49"
},
{
"message": "LocalFile: avoid hard failures on non-existing files.\n\nSome methods on LocalFile will fatal if called on a non-existing file.\nApiQueryImageInfo did not take that into account.\n\nThis patch changes LocalFile to avoid fatal errors, and ApiQueryImageInfo\nto not try and report information on non-existing files.\n\nNOTE: the modified code has NO test coverage! This should be fixed\nbefore this patch is applied, or the patch needs to be thoroughly tested\nmanually.\n\nBug: T221812\nChange-Id: I9b74545a393d1b7a25c8262d4fe37a6492bbc11e\n",
"bugs": [
"T221812"
],
"subject": "LocalFile: avoid hard failures on non-existing files.",
"hash": "bdc6b4e378c6872a20f6fb5842f1a49961af91b4",
"date": "2019-09-18T09:18:44"
},
{
"message": "filerepo: clean up remote description cache keys\n\nHash the file name portion and make the string constant portions\nmore relevant to what the keys are actually used for (e.g. there\nis no URL parameter in the key)\n\nBug: T198279\nChange-Id: Idf6f97db26f5be291cdd3a50a91346677fe9c3e6\n",
"bugs": [
"T198279"
],
"subject": "filerepo: clean up remote description cache keys",
"hash": "8804df2da5d0bc1b4cbd57696d770e77b76d4434",
"date": "2018-06-27T07:25:47"
}
]
},
"includes/libs/rdbms/database/DatabaseSqlite.php": {
"File": "includes/libs/rdbms/database/DatabaseSqlite.php",
"TicketCount": 4,
"CommitCount": 4,
"Tickets": [
"T201900",
"T193565",
"T208331",
"T218388"
],
"Commits": [
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "DatabaseSqlite::insert: Fix affected row count\n\nFollow up to 633eb437a3b808518469c6eaf4e86a436941d837\n\nBug: T208331\nChange-Id: I142bb8c8abd43242d098932da212aa58323a0863\n",
"bugs": [
"T208331"
],
"subject": "DatabaseSqlite::insert: Fix affected row count",
"hash": "13f1ce8244aa5b0ea553625ba2b6c9d36ed45c1a",
"date": "2018-10-31T16:13:45"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
},
{
"message": "rdbms: Document a bunch of stuff about query verbs\n\nThe decision to treat COMMIT/BEGIN as a \"read\" in isWriteQuery()\nfor the benefit of ChronologyProtector was first introduced\nin r47360 (8653947b, 2009).\n\n* Re-order strings in isTransactableQuery() to match the regular\n expression in isWriteQuery() for quicker mental comparison\n\n* Add missing visibility to DatabaseSqlite->isWriteQuery, matching\n the parent class implementation.\n\nBug: T201900\nChange-Id: Ic90f6455a2e696ba9428ad5835d0f4be6a0d9a5c\n",
"bugs": [
"T201900"
],
"subject": "rdbms: Document a bunch of stuff about query verbs",
"hash": "aeb6a921324770e475e6583aa69dab830d81144e",
"date": "2018-09-28T23:49:10"
}
]
},
"includes/libs/objectcache/WANObjectCache.php": {
"File": "includes/libs/objectcache/WANObjectCache.php",
"TicketCount": 1,
"CommitCount": 4,
"Tickets": [
"T203786"
],
"Commits": [
{
"message": "objectcache: make WANObjectCache prefer ADD over GET/CAS for misses\n\nThis avoids an extra cache query and also avoids I/O if another thread\nalready saved the value in the meantime, bloating the GET response.\n\nBug: T203786\nChange-Id: I05539873f55d3254e2b9ecad0df158db1e6a1a1a\n",
"bugs": [
"T203786"
],
"subject": "objectcache: make WANObjectCache prefer ADD over GET/CAS for misses",
"hash": "35af086e57d5faa0ed282d0c67121ee684a0d2f8",
"date": "2019-03-28T09:42:55"
},
{
"message": "objectcache: optimize WAN cache key updates during HOLDOFF_TTL\n\nAvoid the ADD operation spam from all threads trying to access\na tombstoned key by checking the interim value cache timestamp.\nThis also avoids the GET/CAS spam from threads that manage to\nget the mutex. If a single thread repeatedly accesses the same\ntombstoned value in rapid succession, there will significantly\nless cache operation spam.\n\nDo the same for cache updates to keys in the holdoff state\ndue to \"check keys\" or the \"touchedCallback\" function.\n\nRelatedly, fix getWithSetCallback() to disregard interim values\nset prior to or at the same time as the latest delete() call.\nThis can slightly reduce the chance of the cache being behind\nreplica DBs for a second. It also avoids unit test failures\nwere a series of deletes and cache access happen at the same\ntimestamp (via time injection or regular system time calls).\n\nIn addition:\n* Add PASS_BY_REF flag with backwards compatibility to avoid\n bloating the signature of get()/getMulti() with the new\n tombstone information needed for the above changes.\n* Avoid confusing pass-by-reference in getInterimValue() and\n fix use of incorrect $asOf parameter.\n* Move some more logic into setInterimValue().\n* Update some comments regarding broadcasted operations that\n were not true for the currently assumed mcrouter setup.\n* Rename $cValue => $curValue and $versioned => $needsVersion\n for better readability.\n\nBug: T203786\nChange-Id: I0eb3f9b697193d39a70dd3c0967311ad7e194f20\n",
"bugs": [
"T203786"
],
"subject": "objectcache: optimize WAN cache key updates during HOLDOFF_TTL",
"hash": "b707afa7f485e4908a069931f689852c493bac8b",
"date": "2019-03-04T10:00:29"
},
{
"message": "objectcache: avoid duplicate cache sets for missing keys with lockTSE\n\nFollow-up to 70bf85d4626 which only affected the case of tombstoned keys.\n\nImprove documentation about getWithSetCallback() options.\n\nBug: T203786\nChange-Id: I683a38f65a79cb98a4ae71cbc5dd88aefe48d022\n",
"bugs": [
"T203786"
],
"subject": "objectcache: avoid duplicate cache sets for missing keys with lockTSE",
"hash": "2347a15c84d6e119d710636065fd9af2ef0da180",
"date": "2019-02-20T02:56:27"
},
{
"message": "objectcache: avoid duplicate set() calls with lockTSE when no value is in cache\n\nEach thread will still run the callback, but only one will save the value back\n\nBug: T203786\nChange-Id: Idc4738aa005cc44ec0f1adc6dcf2e3f87d0c9480\n",
"bugs": [
"T203786"
],
"subject": "objectcache: avoid duplicate set() calls with lockTSE when no value is in cache",
"hash": "70bf85d46262f693d62c1f10acd6069265013334",
"date": "2019-02-03T02:17:59"
}
]
},
"includes/Storage/SqlBlobStore.php": {
"File": "includes/Storage/SqlBlobStore.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T184749",
"T187942",
"T205936"
],
"Commits": [
{
"message": "Add findBadBlobs script.\n\nThis script scans for content blobs that can't be loaded due to\ndatabase corruption, and can change their entry in the content table\nto an address starting with \"bad:\". Such addresses cause the content\nto be read as empty, with no log entry. This is useful to avoid\nerrors and log spam due to known bad revisions.\n\nThe script is designed to scan a limited number of revisions from a\ngiven start date. The assumption is that database corruption is\ngenerally caused by an intermedia bug or system failure which will\naffect many revisions over a short period of time.\n\nBug: T205936\nChange-Id: I6f513133e90701bee89d63efa618afc3f91c2d2b\n",
"bugs": [
"T205936"
],
"subject": "Add findBadBlobs script.",
"hash": "071ce36abdec44c5940720616ef3617d74f34858",
"date": "2020-04-17T13:04:59"
},
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
},
{
"message": "Document expandBlob behavior when no flags are given.\n\nBug: T184749\nChange-Id: I5f1f029d928a7bc25877b0eae9f3822ec321b24a\n",
"bugs": [
"T184749"
],
"subject": "Document expandBlob behavior when no flags are given.",
"hash": "cb94d35c79266c1e6f05a2b8bae5b836d673c296",
"date": "2018-01-14T08:59:42"
}
]
},
"includes/Category.php": {
"File": "includes/Category.php",
"TicketCount": 2,
"CommitCount": 3,
"Tickets": {
"0": "T195397",
"2": "T199762"
},
"Commits": [
{
"message": "Reduce the rate of calls to Category::refreshCounts\n\nBug: T199762\nChange-Id: I23e2e1ebf187d21ea4bd22304aa622199a8b9c5b\n",
"bugs": [
"T199762"
],
"subject": "Reduce the rate of calls to Category::refreshCounts",
"hash": "70ed89ad436f9a5c9090e9927c40a701db6cf93f",
"date": "2018-07-17T23:46:38"
},
{
"message": "Reduce frequency of refreshCounts() calls in LinksDeletionUpdate\n\nBug: T195397\nChange-Id: I0a39c735ec516b70c43c7a40583c43289550b687\n",
"bugs": [
"T195397"
],
"subject": "Reduce frequency of refreshCounts() calls in LinksDeletionUpdate",
"hash": "9a2ba8e21d820478f96adead39b544d92d1d6306",
"date": "2018-06-12T17:50:52"
},
{
"message": "Category: Lock the category row before the categorylinks rows\n\nWe've noticed a large increase in deadlocks between\nLinksDeletionUpdate deleting categorylinks rows and\nCategory::refreshCounts() trying to update the category table.\n\nMy best guess as to what's going on there is that LinksDeletionUpdate\nlocks the category row via the call to WikiPage::updateCategoryCounts()\nthen the categorylinks rows via its own deletions, while Category first\nlocks the categorylinks rows (in share mode) and then the category row\nwhen it tries to update or delete it.\n\nTo break the deadlock, let's have Category do a SELECT FOR UPDATE on the\ncategory row first before it locks the categorylinks rows.\n\nBug: T195397\nChange-Id: Ie11baadf2ff0ba2afbc86b10bc523525c570a490\n",
"bugs": [
"T195397"
],
"subject": "Category: Lock the category row before the categorylinks rows",
"hash": "4aa09d47591337bf1da38fd3c8a11f58d872a33c",
"date": "2018-06-12T15:28:25"
}
]
},
"includes/specials/pagers/NewPagesPager.php": {
"File": "includes/specials/pagers/NewPagesPager.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T188555",
"T240924",
"T263655"
],
"Commits": [
{
"message": "NewPagesPager: Ignore nonexistent namespaces\n\nBug: T263655\nChange-Id: I65d8f049f065d08e168a3ae411a500065830c11f\n",
"bugs": [
"T263655"
],
"subject": "NewPagesPager: Ignore nonexistent namespaces",
"hash": "a6767fc35e7fb6663f0f7226e59b030565173079",
"date": "2020-09-23T17:58:31"
},
{
"message": "NewPagesPager: Fix namespace query conditions\n\nBug: T240924\nChange-Id: I28d276cae0518386cac3f9d571ba09e9eff6678b\n",
"bugs": [
"T240924"
],
"subject": "NewPagesPager: Fix namespace query conditions",
"hash": "b390ef6e5825e8906667d7a755d70b3478ce47b7",
"date": "2019-12-17T06:26:33"
},
{
"message": "NewPagesPages: Use array_merge rather than + for RC query info fields\n\nUnlike CommentStore::getJoin() and ActorMigration::getJoin(), the tables\nand fields of various ::getQueryInfo() methods aren't guaranteed to be\nsafe to use with array '+'.\n\nBug: T188555\nChange-Id: Ibe99edcb93d1729935fed6232ba4fe2e7d39cea6\n",
"bugs": [
"T188555"
],
"subject": "NewPagesPages: Use array_merge rather than + for RC query info fields",
"hash": "d6992713fdb81b89651053df3b57c42c9fdf6d69",
"date": "2018-03-01T14:22:45"
}
]
},
"includes/parser/ParserCache.php": {
"File": "includes/parser/ParserCache.php",
"TicketCount": 2,
"CommitCount": 3,
"Tickets": [
"T204742",
"T264257"
],
"Commits": [
{
"message": "HACK/ParserCache: Force cache-miss if mUsedOptions is undefined\n\nThese are causing thousands of errors from wmf.11-cached pages\nsince we rolled back to wmf.10.\n\nBug: T264257\nChange-Id: Ia3357b2f593ca16fc12241d7ea22bbfd222f2536\n(cherry picked from commit 71ee44aabba5c10187ad6d5cb26b5ef072cbf9b2)\n",
"bugs": [
"T264257"
],
"subject": "HACK/ParserCache: Force cache-miss if mUsedOptions is undefined",
"hash": "b52660a1f1a330d0bb5270df74cbf46abb61a255",
"date": "2020-10-01T18:25:47"
},
{
"message": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"\n\nThis reverts commit deacee9088948e074722af0148000ad9455b07df.\n\nBug: T264257\nChange-Id: Ie68d8081a42e7d8103e287b6d6857a30dc522f75\n",
"bugs": [
"T264257"
],
"subject": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"",
"hash": "3254e41a4cc0bcf85b5b0de3e3d237d4ebd7a987",
"date": "2020-10-01T18:03:41"
},
{
"message": "parsercache: use WRITE_ALLOW_SEGMENTS for cached ParserOutput values\n\nThis lets large output entries fit into memcached via key segmentation.\n\nFollows b09b3980f99 which applied the feature to PageEditStash.\n\nBug: T204742\nChange-Id: I33a60f5d718cd9033ea12d1d16046d2bede87b5b\n",
"bugs": [
"T204742"
],
"subject": "parsercache: use WRITE_ALLOW_SEGMENTS for cached ParserOutput values",
"hash": "6c31ca3f257243daeb0b4cde3f110d2879cecde8",
"date": "2019-08-25T00:53:28"
}
]
},
"includes/libs/rdbms/loadmonitor/LoadMonitor.php": {
"File": "includes/libs/rdbms/loadmonitor/LoadMonitor.php",
"TicketCount": 4,
"CommitCount": 3,
"Tickets": [
"T190960",
"T226678",
"T226770",
"T245280"
],
"Commits": [
{
"message": "rdbms: Avoid using reserved word \"host\" in Monolog context keys\n\nBug: T245280\nChange-Id: Iee9814aa59e84b4156bf42df43a9ffa5c20bc704\n",
"bugs": [
"T245280"
],
"subject": "rdbms: Avoid using reserved word \"host\" in Monolog context keys",
"hash": "d155717434eda39e872a43186fe510a7f16cc30f",
"date": "2020-02-29T19:15:57"
},
{
"message": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load\n\nAdd and use IDatabase::getServerConnection() method to avoid loops caused\ncaused by pickReaderIndex() calling getConnection() for the master server.\nThat lead to getReadOnlyReason() triggering pickReaderIndex() again.\n\nMake getLaggedReplicaMode() apply when the master has non-zero load and\nthe replicas are all lagged.\n\nRemove \"allReplicasDownMode\" in favor of checking getExistingReaderIndex()\ninstead. This reduces the amount of state to keep track of a bit.\n\nFollow-up to 95e2c990940f\n\nBug: T226678\nBug: T226770\nChange-Id: Id932c3fcc00625e3960f76d054d38d9679d25ecc\n",
"bugs": [
"T226678",
"T226770"
],
"subject": "rdbms: avoid recursion in LoadBalancer when the master has non-zero load",
"hash": "79d1881eded1537e739c92d3576c48e34b352f88",
"date": "2019-07-09T19:26:46"
},
{
"message": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots\n\nBug: T190960\nChange-Id: I57dd8d3d0ca96d6fb2f9e83f062f29b1d53224dd\n",
"bugs": [
"T190960"
],
"subject": "rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots",
"hash": "24353a60d2c860cd24593d721b45291782a8489f",
"date": "2018-03-31T01:39:57"
}
]
},
"includes/libs/rdbms/lbfactory/ILBFactory.php": {
"File": "includes/libs/rdbms/lbfactory/ILBFactory.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T193668",
"T194403",
"T190082"
],
"Commits": [
{
"message": "rdbms: include client ID hash in ChronologyProtector cookies\n\nPreviously, if an internal service forwarded the cookies for a\nuser (e.g. for permissions) but not the User-Agent header or not\nthe IP address (e.g. XFF), ChronologyProtector could timeout\nwaiting for a matching writeIndex to appear for the wrong key.\n\nThe cookie now tethers the client to the key that holds the\nDB positions from their last state-changing request.\n\nBug: T194403\nBug: T190082\nChange-Id: I84f2cbea82532d911cdfed14644008894498813a\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "rdbms: include client ID hash in ChronologyProtector cookies",
"hash": "fb51330084b4bde1880c76589e55e7cd87ed0c6d",
"date": "2018-06-02T03:57:30"
},
{
"message": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector\n\nSince it takes time for the agent to get the response and set the\ncookie and, as well, the time into a request that a LoadBalancer is\ninitialized varies by many seconds (cookies loaded from the start),\ngive the cookie a much lower TTL than the DB positions in the stash.\n\nThis avoids having to wait for a position with a given cpPosIndex\nvalue, when the position already expired from the store, which is\na waste of time.\n\nAlso include the timestamp in \"cpPosIndex\" cookies to implement\nlogical expiration in case clients do not expire them correctly.\n\nBug: T194403\nBug: T190082\nChange-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178\n",
"bugs": [
"T194403",
"T190082"
],
"subject": "Avoid unnecessary WaitConditionLoop delays in ChronologyProtector",
"hash": "52af356cad3799ebec3826e1e4743d76a114da3e",
"date": "2018-05-18T20:43:05"
},
{
"message": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges\n\nBug: T193668\nChange-Id: I50890ef17ea72481a14c4abcd93ae58b93f15d28\n",
"bugs": [
"T193668"
],
"subject": "Make DeferredUpdates avoid running during LBFactory::commitMasterChanges",
"hash": "a79b9737f1b05171af60a3127d6c628ea6a16a96",
"date": "2018-05-03T22:11:38"
}
]
},
"includes/specials/SpecialUserrights.php": {
"File": "includes/specials/SpecialUserrights.php",
"TicketCount": 4,
"CommitCount": 3,
"Tickets": [
"T240574",
"T253909",
"T254417",
"T251534"
],
"Commits": [
{
"message": "Revert \"Don't show email link if the user cannot be emailed.\"\n\nReason for revert: UserRightsProxy lacks `canReceiveEmail`\nmethod, and thus Special:GlobalUserRights fails\n\nThis reverts commit a6ad9a3b231b6ca48c7cdebfa1adbe5122cb5a76.\n\nBug: T254417\nBug: T251534\nChange-Id: Ic0f9fbb8eff18fedffd22cf3faffebccface753a\n",
"bugs": [
"T254417",
"T251534"
],
"subject": "Revert \"Don't show email link if the user cannot be emailed.\"",
"hash": "bde3f75a31cca5c5e12f658dbe61f2f12d32e906",
"date": "2020-06-04T00:02:23"
},
{
"message": "UserrightsPage: Restore visibility (previously implicitely public)\n\nMethods had no visibility specified and were made private in\n1c009dffe49a13ee66ac1bf30bbe2df0eed5ec9d, but are\noverriden by CentralAuth's SpecialGlobalGroupMembership.\n\nMake protected so that the overrides work.\n\nBug: T253909\nChange-Id: Ib57237c2398ac91f8cebb5fe254ec5b28bac9255\n",
"bugs": [
"T253909"
],
"subject": "UserrightsPage: Restore visibility (previously implicitely public)",
"hash": "3618aac953a8baea68e4d81b7e5fb6c441fe34fd",
"date": "2020-05-28T20:46:04"
},
{
"message": "Prevent Call to undefined method CentralAuthGroupMembershipProxy::isSystemUser()\n\nBug: T240574\nChange-Id: Ie7a3e246563fb3d114b03d43944f52a6f39737d3\n",
"bugs": [
"T240574"
],
"subject": "Prevent Call to undefined method CentralAuthGroupMembershipProxy::isSystemUser()",
"hash": "3081e157b7051402d1c1bd0e3a6d6cc1d158ad92",
"date": "2019-12-12T14:20:16"
}
]
},
"includes/title/NamespaceInfo.php": {
"File": "includes/title/NamespaceInfo.php",
"TicketCount": 2,
"CommitCount": 3,
"Tickets": [
"T224814",
"T253098"
],
"Commits": [
{
"message": "NamespaceInfo::makeValidNamespace: Don't throw for -1 or -2\n\nBug: T253098\nChange-Id: Ifa5a3d587fe8298d356f513bcbf2a432cc70712b\n",
"bugs": [
"T253098"
],
"subject": "NamespaceInfo::makeValidNamespace: Don't throw for -1 or -2",
"hash": "d443d5e114034ab63317be4db87b073d7528b390",
"date": "2020-06-10T20:37:52"
},
{
"message": "NamespaceInfo: Throw specifically if called on a non-int/non-int-like namespace\n\nMostly just for debugging purposes; later we can enforce this with a type hint,\nbut first I want to track down the uses.\n\nBug: T253098\nDepends-On: Ieecbc91a8f26076775149a96fbe1b19a7f39dcef\nChange-Id: I66ea07f1acf6db2d13de488b775361f45b69020c\n",
"bugs": [
"T253098"
],
"subject": "NamespaceInfo: Throw specifically if called on a non-int/non-int-like namespace",
"hash": "0349fc8b4a3bb5ec90725193cf3d69fbdff27cee",
"date": "2020-06-03T19:55:44"
},
{
"message": "Ensure canHaveTalkPage returns false when getTalkPage would fail.\n\nThis causes Title::getTalkPage and NamespaceInfo::getTitle() to throw\nan MWException when called on a LinkTarget that is an interwiki link\nor a relative section link. These methods were already throwing\nMWException when called on a link to a Special page.\n\nBug: T224814\nChange-Id: I525c186a5b8b8fc22bca195da48afead3bfbd402\n",
"bugs": [
"T224814"
],
"subject": "Ensure canHaveTalkPage returns false when getTalkPage would fail.",
"hash": "dbce648a15ee7100383c3ec9781775f0f895c645",
"date": "2019-07-03T08:40:10"
}
]
},
"includes/jobqueue/jobs/RefreshLinksJob.php": {
"File": "includes/jobqueue/jobs/RefreshLinksJob.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T208147",
"T220037",
"T221577"
],
"Commits": [
{
"message": "Make sure that each DataUpdate still has outer transaction scope\n\nBug: T221577\nChange-Id: I620e461d791416ca37fa9ca4fca501e28d778cf5\n",
"bugs": [
"T221577"
],
"subject": "Make sure that each DataUpdate still has outer transaction scope",
"hash": "3496f0fca3debf932598087607dc5547075e2cba",
"date": "2019-05-30T20:53:18"
},
{
"message": "Add missing transaction round commit calls to RefreshLinksJob\n\nFollow-up to 83933a436e1ee9d\n\nBug: T220037\nChange-Id: Ib1ac31365f9c325c56bae11aefe825ad2b2be881\n",
"bugs": [
"T220037"
],
"subject": "Add missing transaction round commit calls to RefreshLinksJob",
"hash": "203ae7e10419c5b89cbd295c5191c6ec51ddd81b",
"date": "2019-04-04T23:25:40"
},
{
"message": "Improve handling of invalid titles in RefreshLinksJob\n\nRun updates for as many titles as possible and mark the job as failed\nif a title is invalid. Set the error message used by the job executer.\n\nBug: T208147\nChange-Id: I7f5fafe9439d8a7b45166515532075202af7d013\n",
"bugs": [
"T208147"
],
"subject": "Improve handling of invalid titles in RefreshLinksJob",
"hash": "b4290a6b6b327472305c91f1c63367b92cd65c63",
"date": "2018-11-06T19:53:37"
}
]
},
"includes/content/ContentHandler.php": {
"File": "includes/content/ContentHandler.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T202686",
"T244300",
"T247859"
],
"Commits": [
{
"message": "contenthandler: Load revision from PAGE_LATEST when not found\n\nWhen retrieving indexable content for a WikiPage the page may be\nbrand new and not replicated yet. When no revision is found try\nagain reading from the master, then fail with a clear message instead\nof the previous behaviour of dereferencing null.\n\nBug: T247859\nChange-Id: Iab93984b26d40de00f3331ee1bbdde46b3370733\n",
"bugs": [
"T247859"
],
"subject": "contenthandler: Load revision from PAGE_LATEST when not found",
"hash": "6c73dd25487eccae89063c0e7cb7eab92c436d0f",
"date": "2020-03-17T22:48:09"
},
{
"message": "language: remove Language hints for type check as it breaks using of StubUserLang\n\nBug: T244300\nChange-Id: Iec1b5629617f1c171e8af507dc1dcebfef0666eb\n",
"bugs": [
"T244300"
],
"subject": "language: remove Language hints for type check as it breaks using of StubUserLang",
"hash": "ed18dba8f403377ebe6f6a69893eae098b77cf59",
"date": "2020-02-05T14:11:31"
},
{
"message": "Make warning about deprecated SlotDiffRenderer wrapper less noisy\n\nBug: T202686\nChange-Id: I7623daa32e460ce131e1b27eb6e4397dc572612f\n",
"bugs": [
"T202686"
],
"subject": "Make warning about deprecated SlotDiffRenderer wrapper less noisy",
"hash": "34e4e81011cc2525e5b9cd379f504a7fc3a8aae1",
"date": "2018-08-26T19:00:02"
}
]
},
"includes/MediaWikiServices.php": {
"File": "includes/MediaWikiServices.php",
"TicketCount": 6,
"CommitCount": 3,
"Tickets": {
"0": "T231200",
"1": "T231198",
"2": "T231220",
"5": "T201405",
"6": "T187731",
"7": "T259181"
},
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "Make LocalisationCache a service\n\nThis removes Language::$dataCache without deprecation, because 1) I\ndon't know of a way to properly simulate it in the new paradigm, and 2)\nI found no direct access to the member outside of the Language and\nLanguageTest classes.\n\nAn earlier version of this patch (e4468a1d6b6) had to be reverted\nbecause of a massive slowdown on test runs. Based on some local testing,\nthis should fix the problem. Running all tests in languages is slowed\ndown by only around 20% instead of a factor of five, and memory usage is\nactually reduced greatly (~350 MB -> ~200 MB). The slowdown is still not\ngreat, but I assume it's par for the course for converting things to\nservices and is acceptable. If not, I can try to optimize further.\n\nBug: T231220\nBug: T231198\nBug: T231200\nBug: T201405\nChange-Id: Ieadbd820379a006d8ad2d2e4a1e96241e172ec5a\n",
"bugs": [
"T231220",
"T231198",
"T231200",
"T201405"
],
"subject": "Make LocalisationCache a service",
"hash": "043d88f680cf66c90e2bdf423187ff8b994b1d02",
"date": "2019-10-07T20:18:47"
},
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
}
]
},
"includes/api/i18n/en.json": {
"File": "includes/api/i18n/en.json",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T203255",
"T208929",
"T255700"
],
"Commits": [
{
"message": "API: Handle ContentHandler exception for content model mismatch\n\nEnsure the content we are trying to save and the base content have\nidentical content models before proceeding to save so as to forestall\nException that may be thrown by ContentHandler if it founds they're not.\n\nThere are two cases where the models are allowed to differ: Edit that\nundoes content model change or edit that's meant to explicitly change\nthe model. The logic for these is handled separately and may succeed\nor fail, but exception will not be thrown.\n\nBug: T255700\nChange-Id: I8782732bb0fc3059693cd7035b7ebb43fd71d333\n",
"bugs": [
"T255700"
],
"subject": "API: Handle ContentHandler exception for content model mismatch",
"hash": "7af5678847cc64c0030a717121775ea041a28819",
"date": "2020-09-04T02:41:58"
},
{
"message": "ApiComparePages: Don't try to find next/prev of a deleted revision\n\nRevisionStore::getPreviousRevision() and ::getNextRevision() don't\nhandle being passed a RevisionArchiveRecord very well. Even if they\nwould, it's not clear whether the user wants to be comparing with the\nnext/previous deleted revision or the next/previous revision even if not\ndeleted. So let's just make it an error, at least for now.\n\nBug: T208929\nChange-Id: I151019e336bda92aa4040ba4162eb2588c909652\n",
"bugs": [
"T208929"
],
"subject": "ApiComparePages: Don't try to find next/prev of a deleted revision",
"hash": "3c95d3bdd935768b246d0688eaaa258feeecd4e6",
"date": "2018-12-03T20:31:34"
},
{
"message": "ApiComparePages: Clean up handling of slot deletion\n\nWe can't allow the main slot to be deleted. DifferenceEngine assumes it\nexits.\n\nWe also shouldn't allow parameters such as `tosection-{role}` to be used\nwithout the corresponing `totext-{role}`. This will help prevent people\nfrom being confused into thinking that `tosection-{role}` will do\nanything in that situation (as opposed to `tosection`, which did).\n\nBug: T203255\nChange-Id: I58573bb2c1ee68e6907ef2e88385fe36e5184076\n",
"bugs": [
"T203255"
],
"subject": "ApiComparePages: Clean up handling of slot deletion",
"hash": "07530dfb63fe24c06ab4508c6cb8436896104e7c",
"date": "2018-08-31T15:26:07"
}
]
},
"includes/libs/filebackend/SwiftFileBackend.php": {
"File": "includes/libs/filebackend/SwiftFileBackend.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T204174",
"T219114",
"T259023"
],
"Commits": [
{
"message": "filebackend: Fix index error in SwiftFileBackend\n\nThis fixes a regression introduced by If3d2f18.\n\nBug: T259023\nChange-Id: I654b1bcf0ff65f68ed7d92d51a6b39325da731f3\n",
"bugs": [
"T259023"
],
"subject": "filebackend: Fix index error in SwiftFileBackend",
"hash": "81362c930fc2472f87748a5d653c3924d501111d",
"date": "2020-07-28T13:30:53"
},
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
},
{
"message": "filebackend: avoiding computing file SHA-1 hashes unless needed\n\nFileBackendStore already supports stat info not returning SHA-1.\nBuild on that logic with a \"requireSHA1\" parameter to getFileStat()\nto move some logic from SwiftFileBackend to the parent class and\navoid computing missing SHA-1's for Swift when nothing actually\nrequested the SHA-1. Only getFileSha1Base36() needs to trigger this\nlazy-population logic.\n\nNote that thumbnails only use doQuickOperations(), which does not\nneed to examine SHA-1s, it only does regular getFileStat() calls.\n\nAlso renamed addMissingMetadata() to addMissingHashMetadata().\n\nBug: T204174\nChange-Id: I2a378cb2a34608a6da2f8abe604861ff391ffaa7\n",
"bugs": [
"T204174"
],
"subject": "filebackend: avoiding computing file SHA-1 hashes unless needed",
"hash": "e1497e3593154f579cf5f6597bbf8b88cf16dce6",
"date": "2018-12-10T22:51:26"
}
]
},
"includes/specials/SpecialLog.php": {
"File": "includes/specials/SpecialLog.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T186950",
"T201411",
"T219114"
],
"Commits": [
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
},
{
"message": "SpecialLog: Don't throw exceptions on invalid date from user input\n\nIf users provide invalid input to the date option on Special:Log (most likely\nan intentional thing given the calendar input widget), don't let the\nTimestampException bubble up - just discard the invalid date.\n\nIntegration test included, which fails without this patch.\n\nBug: T201411\nChange-Id: Ie1a9a84343ae4e78e076586f759917e5fd5af33c\n",
"bugs": [
"T201411"
],
"subject": "SpecialLog: Don't throw exceptions on invalid date from user input",
"hash": "f198154d767824e3ade13e39774f20641917674e",
"date": "2018-09-24T15:56:48"
},
{
"message": "SpecialLog: Fix results when no offender is specified\n\nWith 467ee1e82f15 offender is not ignored when it's a\nnonexistent username and a empty list is showed.\nIgnore offender search when no offender is specified.\n\nBug: T186950\nChange-Id: I93c9dafc9299d0ba1d01090471c5b49dcc904fe8\n",
"bugs": [
"T186950"
],
"subject": "SpecialLog: Fix results when no offender is specified",
"hash": "e6b2491736a9549c984c0942d337fb46ea065be6",
"date": "2018-02-10T11:54:28"
}
]
},
"includes/specials/pagers/ActiveUsersPager.php": {
"File": "includes/specials/pagers/ActiveUsersPager.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T204767",
"T199044",
"T216200"
],
"Commits": [
{
"message": "Follow-up 262fd58: Correctly apply the ORDER BY in ActiveUsersPager\n\nIn 262fd58, an ORDER BY clause was added, but the direction it\nis supposed to use was not loaded correctly from the $data array.\nIt is available under the key 'order', not 'dir'.\n\nBug: T216200\nChange-Id: I5e675c98820813cd9107865e11e82ae57828a974\n",
"bugs": [
"T216200"
],
"subject": "Follow-up 262fd58: Correctly apply the ORDER BY in ActiveUsersPager",
"hash": "b4cdfaa09557db9144c1d84f2ea3cc57810c8374",
"date": "2019-02-19T15:27:04"
},
{
"message": "Improve performance of ActiveUsersPager query\n\nThe query can be very slow, as it has to scan all the recentchanges rows\nfor all the users in querycachetwo (for activeusers). We can speed that\nup at the cost of not filtering out users who were active when\nquerycachetwo was last updated but aren't anymore.\n\nAlso in testing this I found that the query is extremely slow when the\nactor table migration stage is in one of the transitional states. This\ntoo can be sped up with some custom logic.\n\nBug: T199044\nChange-Id: Ia9d2ff00cfcdcc6191d854eb4365ecbf67f60b1c\n",
"bugs": [
"T199044"
],
"subject": "Improve performance of ActiveUsersPager query",
"hash": "262fd585d0e6d7e821461cc7f191c8d2f5644c4c",
"date": "2019-02-07T04:06:13"
},
{
"message": "Add join conditions to ActiveUsersPager\n\nWe're (very slowly and somewhat unofficially) moving towards using join\nconditions everywhere, and here they're needed to avoid errors once the\nactor migration reaches the READ_NEW stage.\n\nBug: T204767\nChange-Id: I8bfe861fac7874f8938bed9bfac3b7ec6f478238\n",
"bugs": [
"T204767"
],
"subject": "Add join conditions to ActiveUsersPager",
"hash": "15441cabe60d84e17ffb25824aeb095d92bc375a",
"date": "2018-10-11T19:34:21"
}
]
},
"includes/libs/rdbms/database/DatabaseMysqli.php": {
"File": "includes/libs/rdbms/database/DatabaseMysqli.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T193565",
"T212284",
"T228092"
],
"Commits": [
{
"message": "rdbms: suppress warnings during DatabaseMysqli::doQuery()\n\nThere is already logging and error handling via SPI and exceptions.\nHaving warnings just makes redundant log entries, some of which are\nfor minor things like recoverable disconnections.\n\nBug: T228092\nChange-Id: I582b5e431c80cebeab177bacfb6445f8588e8cdb\n",
"bugs": [
"T228092"
],
"subject": "rdbms: suppress warnings during DatabaseMysqli::doQuery()",
"hash": "d9702e90689c015c4876e83506e8c331893fc647",
"date": "2019-09-05T18:22:34"
},
{
"message": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql\n\nThis should give better error messages on failure.\n\nBug: T212284\nChange-Id: I55260c6e3db1770f01e3d6a6a363b917a57265be\n",
"bugs": [
"T212284"
],
"subject": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql",
"hash": "321640b117b775ba7feb26281922bfd7833b0618",
"date": "2019-03-26T18:50:28"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/registration/ExtensionRegistry.php": {
"File": "includes/registration/ExtensionRegistry.php",
"TicketCount": 2,
"CommitCount": 3,
"Tickets": {
"0": "T197535",
"2": "T245629"
},
"Commits": [
{
"message": "ExtensionRegistry: Avoid losing 'defines' when loading lazy-loaded attributes\n\nWhen fetching lazy-loaded attributes in ExtensionRegistry to cache, we are saving all the values\nfrom extension.json a second time. In doing so, we are wrongfully omitting the values\npreviously being defined for the \"defines\" attribute\n(the attribute that is responsible for setting namespace constants).\nThis patch will make sure that we keep the value set in the \"defines\" attribute\nso that the constants are defined when we load from cache the second time.\n\nBug: T245629\nChange-Id: I4f151f88ece56cf718749b9de11fc8e204ccf29d\n",
"bugs": [
"T245629"
],
"subject": "ExtensionRegistry: Avoid losing 'defines' when loading lazy-loaded attributes",
"hash": "478f7e032da6cb7cadf07b40e8ca62eee06285a0",
"date": "2020-03-18T17:32:48"
},
{
"message": "registration: Let extensions add PHP extension requirements\n\nThis change adds the possibility to specify ext-* keys under the 'platform'\nkey introduced in I6744cc0be2 to require given PHP extensions. Note that\nit's impossible to add constraints different from '*', as there is no universal\nway to retrieve PHP extension versions.\n\nBug: T197535\nChange-Id: I510de1e6d80f5c1d92dc1d1665aaa6c25bf28bf7\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP extension requirements",
"hash": "c7e45b62113b1d9d6ff80705eff25ee93b722e19",
"date": "2018-09-30T17:55:57"
},
{
"message": "registration: Let extensions add PHP version requirements\n\nWhile MediaWiki Core already sets requirements for PHP versions, it should be\npossible for extensions to tighten these requirements. This mirrors the PHP\nparameter of extension infoboxes as well.\n\nThis change introduces a new 'platform' key (in addition to 'MediaWiki', 'skins'\nand 'extensions', where non-MediaWiki software requirements will be listed\nin the future, starting with a PHP version constraint. Further keys are\nsupposed to be added to allow setting constraints against php extensions\nand other abilities of the platform.\n\nBug: T197535\nChange-Id: I6744cc0be2363b603331af9dc860eb8603a1a89a\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP version requirements",
"hash": "8af76decf8799f9ebb7fa45990c95bfb13e817c3",
"date": "2018-09-22T01:43:28"
}
]
},
"includes/resourceloader/ResourceLoaderSkinModule.php": {
"File": "includes/resourceloader/ResourceLoaderSkinModule.php",
"TicketCount": 4,
"CommitCount": 3,
"Tickets": [
"T244405",
"T245778",
"T245182",
"T262507"
],
"Commits": [
{
"message": "resourceloader: Fix incorrect order of feature stylesheets\n\nFollow up to I755e5e6784481b419e35, which used array_unshift\nto prepend the 'feature' stylesheets. This works as expected when\nthere is only one 'feature' enabled.\n\nWhen there are multiple, use of unshift will effectivel reverse\nthe order as it unshifts then one at a time.\n\nTo mitigate this, collect them normally in the correct order,\nand then prepend them all at once with array_merge.\n\nBug: T262507\nChange-Id: Ibe2c9f8d024f6be06588a59df10a37681b60d6bc\n",
"bugs": [
"T262507"
],
"subject": "resourceloader: Fix incorrect order of feature stylesheets",
"hash": "f12e35012db7140980c185b641c12637550f9ca1",
"date": "2020-09-10T16:20:10"
},
{
"message": "ResourceLoaderSkinModule: Don't hard-deprecate wgLogoHD just now\n\nBug: T245778\nBug: T245182\nChange-Id: I5f6773516134651ee88079c3f5a7c12d9f3d4f31\n",
"bugs": [
"T245778",
"T245182"
],
"subject": "ResourceLoaderSkinModule: Don't hard-deprecate wgLogoHD just now",
"hash": "56fd5aaaeb77d33f5aa3c25f3b63a56104854caf",
"date": "2020-02-21T13:18:22"
},
{
"message": "ResourceLoaderSkinModule: Restore previous behavior in getLogoData()\n\ngetAvailableLogos() can now also return multiple items if a 'wordmark'\nlogo is defined, but this method only cares about the DPI variants\n('1x'/'1.5x'/'2x') and should return a string if there's only '1x'.\n\nBug: T244405\nChange-Id: I69ddb1f9f97d06253b661caf112b48343cd2453f\n",
"bugs": [
"T244405"
],
"subject": "ResourceLoaderSkinModule: Restore previous behavior in getLogoData()",
"hash": "68d10ec12dc62b5084891a8052015824116877cb",
"date": "2020-02-07T02:10:26"
}
]
},
"includes/installer/Installer.php": {
"File": "includes/installer/Installer.php",
"TicketCount": 5,
"CommitCount": 3,
"Tickets": {
"0": "T231200",
"1": "T231198",
"2": "T231876",
"3": "T231220",
"6": "T201405"
},
"Commits": [
{
"message": "Make LocalisationCache a service\n\nThis removes Language::$dataCache without deprecation, because 1) I\ndon't know of a way to properly simulate it in the new paradigm, and 2)\nI found no direct access to the member outside of the Language and\nLanguageTest classes.\n\nAn earlier version of this patch (e4468a1d6b6) had to be reverted\nbecause of a massive slowdown on test runs. Based on some local testing,\nthis should fix the problem. Running all tests in languages is slowed\ndown by only around 20% instead of a factor of five, and memory usage is\nactually reduced greatly (~350 MB -> ~200 MB). The slowdown is still not\ngreat, but I assume it's par for the course for converting things to\nservices and is acceptable. If not, I can try to optimize further.\n\nBug: T231220\nBug: T231198\nBug: T231200\nBug: T201405\nChange-Id: Ieadbd820379a006d8ad2d2e4a1e96241e172ec5a\n",
"bugs": [
"T231220",
"T231198",
"T231200",
"T201405"
],
"subject": "Make LocalisationCache a service",
"hash": "043d88f680cf66c90e2bdf423187ff8b994b1d02",
"date": "2019-10-07T20:18:47"
},
{
"message": "Revert \"Modify -—with-extensions to throw extension dependency errors\"\n\nThis reverts commit d9eec3c9124d87fd44e6917d5b1512b78352afb3.\n\nReason for revert: Breaking most of CI\n\nBug: T231876\nChange-Id: I9b64a2bb770ee2e7ee717669070843814f37e81e\n",
"bugs": [
"T231876"
],
"subject": "Revert \"Modify -—with-extensions to throw extension dependency errors\"",
"hash": "cc7ec36a577161bba06159abd15efddc8a4f745d",
"date": "2019-09-03T16:35:48"
},
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
}
]
},
"includes/parser/BlockLevelPass.php": {
"File": "includes/parser/BlockLevelPass.php",
"TicketCount": 2,
"CommitCount": 3,
"Tickets": [
"T218817",
"T208070"
],
"Commits": [
{
"message": "Avoid counting input lines twice in BlockLevelPass::execute()\n\nIn T208070 / I120ca25a77b7b933de4afddd1d458e36a95e26da we added a\ncheck whether we were processing the last line of input, in order\nto avoid emitting extra trailing newlines. But if the number of\ninput lines is large, StringUtils::explode() will return an\niterator which doesn't implement Countable for efficiency.\nI22eebb70af1b19d7c25241fc78bfcced4470e78a fixed this, but at the\ncost of scanning the string twice: once just to count the number\nof newlines before we begin to iterate over the lines.\n\nThis patch uses Iterator::valid() to determine if we're on the\nlast iteration without having to scan the string twice.\n\nBug: T208070\nBug: T218817\nChange-Id: I41a45266d266195aa6002d3854e018cacf052ca6\n",
"bugs": [
"T208070",
"T218817"
],
"subject": "Avoid counting input lines twice in BlockLevelPass::execute()",
"hash": "ad89079a44dc195d1d495dc0207364f03f870589",
"date": "2019-03-20T21:35:14"
},
{
"message": "BlockLevelPass: further fixes for T218817\n\nThe previous fix for T218817 (I22eebb70af1b19d7c25241fc78bfcced4470e78a)\nwas a bit premature: we didn't notice that ExplodeIterator *also*\nused a different Iterator::key() than ArrayIterator -- it used\nthe string position as a key, not the line number. Combined with\nan inequality test for \"not the last line\" meant that almost every\nline was now the \"last line\" and we were missing a lot of needed\nnewlines.\n\nCount the lines ourselves to fix the problem.\n\nBug: T208070\nBug: T218817\nChange-Id: I55a2c4c0ec304292162c51aa88b206fea0142392\n",
"bugs": [
"T208070",
"T218817"
],
"subject": "BlockLevelPass: further fixes for T218817",
"hash": "73239ee9bdb2598ada26c20a1b0009c501b162bc",
"date": "2019-03-20T21:31:29"
},
{
"message": "parser: Count occurrences of newlines\n\nStringUtils::explode() returns an ExplodeIterator if the number of\nseparators is too high, which doesn't implement count.\n\nSo count the way that explode does.\n\nBug: T218817\nChange-Id: I22eebb70af1b19d7c25241fc78bfcced4470e78a\n",
"bugs": [
"T218817"
],
"subject": "parser: Count occurrences of newlines",
"hash": "4d7dcf5c963ee39a197b9e02a7aeb09c1f3ac8aa",
"date": "2019-03-20T20:33:55"
}
]
},
"includes/cache/localisation/LocalisationCache.php": {
"File": "includes/cache/localisation/LocalisationCache.php",
"TicketCount": 5,
"CommitCount": 3,
"Tickets": [
"T231183",
"T231200",
"T231198",
"T231220",
"T201405"
],
"Commits": [
{
"message": "Make LocalisationCache a service\n\nThis removes Language::$dataCache without deprecation, because 1) I\ndon't know of a way to properly simulate it in the new paradigm, and 2)\nI found no direct access to the member outside of the Language and\nLanguageTest classes.\n\nAn earlier version of this patch (e4468a1d6b6) had to be reverted\nbecause of a massive slowdown on test runs. Based on some local testing,\nthis should fix the problem. Running all tests in languages is slowed\ndown by only around 20% instead of a factor of five, and memory usage is\nactually reduced greatly (~350 MB -> ~200 MB). The slowdown is still not\ngreat, but I assume it's par for the course for converting things to\nservices and is acceptable. If not, I can try to optimize further.\n\nBug: T231220\nBug: T231198\nBug: T231200\nBug: T201405\nChange-Id: Ieadbd820379a006d8ad2d2e4a1e96241e172ec5a\n",
"bugs": [
"T231220",
"T231198",
"T231200",
"T201405"
],
"subject": "Make LocalisationCache a service",
"hash": "043d88f680cf66c90e2bdf423187ff8b994b1d02",
"date": "2019-10-07T20:18:47"
},
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
},
{
"message": "Pass correct store to rebuildLocalisationCache.php\n\ne4468a1d6b6 completely broke rebuildLocalisationCache.php by\nunconditionally passing in LCStoreDB( [] ) instead of constructing the\ncorrect object.\n\nBug: T231183\nChange-Id: I0d52662e8745cf0e10091169b3b08eff48ef2b8f\n",
"bugs": [
"T231183"
],
"subject": "Pass correct store to rebuildLocalisationCache.php",
"hash": "76a940350d36c323ebedb4ab45cc81ed1c6b6c92",
"date": "2019-08-26T09:56:52"
}
]
},
"includes/specials/pagers/BlockListPager.php": {
"File": "includes/specials/pagers/BlockListPager.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T189251",
"T219695",
"T236425"
],
"Commits": [
{
"message": "Ensure that the $target is a UserIdentity before calling ::getId()\n\nAbstractBlock::parseTarget() may return a UserIdentity, a string, or null for\nthe target. It is possible for the method to return a string for a non-range\nblock if the IP is invalid. To prevent a fatal error, check the return type\nrather than the block type.\n\nBug: T236425\nChange-Id: I98dfcafe06b8621041cd4f72815b418e913eb1b0\n",
"bugs": [
"T236425"
],
"subject": "Ensure that the $target is a UserIdentity before calling ::getId()",
"hash": "6100226d3b3150b665ac14c2c214b5e8ffcf8b76",
"date": "2019-10-24T21:13:28"
},
{
"message": "Fix flaky test BlockListPagerTest::testFormatValue\n\nThe test relied on being executed within the same minute as when it starts\nwhich is not always the case.\n\nBug: T219695\nChange-Id: I99eb0d25138069ca08c2975ff2e60c7f1df0a99f\n",
"bugs": [
"T219695"
],
"subject": "Fix flaky test BlockListPagerTest::testFormatValue",
"hash": "af0720e5ed66bf00d8165c64ad0aa3c613560e22",
"date": "2019-04-01T16:06:59"
},
{
"message": "Typo fix\n\nBug: T189251\nChange-Id: I5e7af2629d566181f3280049b6847c0126850ff2\n",
"bugs": [
"T189251"
],
"subject": "Typo fix",
"hash": "3d20679f2dc5c15cf4d6336de11dee011ee07641",
"date": "2018-03-08T21:46:53"
}
]
},
"includes/logging/LogPager.php": {
"File": "includes/logging/LogPager.php",
"TicketCount": 4,
"CommitCount": 3,
"Tickets": [
"T220999",
"T221458",
"T222324",
"T237026"
],
"Commits": [
{
"message": "LogPager: Add IGNORE INDEX to avoid another MariaDB optimizer bug\n\nThis is very similar to I53a7ed59 (T223151). In that bug it was choosing\nthe `times` index when it should have used `actor_time` due to a\nnon-empty performer. This time it's choosing `times` when it should be\nusing `type_time` due to a non-empty set of types being queried.\n\nBug: T237026\nChange-Id: I02c77dde6b5b54d7bea801135051b006d39459b0\n",
"bugs": [
"T237026"
],
"subject": "LogPager: Add IGNORE INDEX to avoid another MariaDB optimizer bug",
"hash": "fd7228444645ce1e2812943cb86f9f463faa843f",
"date": "2019-11-14T09:58:33"
},
{
"message": "SECURITY: LogPager: Don't STRAIGHT_JOIN when using log_search\n\nWe'll hope MariaDB doesn't trigger T221458 in that situation.\n\nBug: T222324\nSigned-off-by: Scott Bassett <sbassett@deploy1001.eqiad.wmnet>\nSigned-off-by: James D. Forrester <jforrester@wikimedia.org>\nChange-Id: I06364e9d0bce45bd97b2ec837d1f479feff76699\n",
"bugs": [
"T222324"
],
"subject": "SECURITY: LogPager: Don't STRAIGHT_JOIN when using log_search",
"hash": "77ca1430e4cab74dc7cf2db8553e9868836d0732",
"date": "2019-05-02T17:54:42"
},
{
"message": "Add STRAIGHT_JOIN to ApiQueryLogEvents and LogPager to avoid planner oddness\n\nFor some unknown reason, when the `actor` table has few enough rows (or\nfew enough compared to `logging`) MariaDB 10.1.37 decides it makes more\nsense to fetch everything from `actor` + `logging` and filesort rather than\nfetching the limited number of rows from `logging`.\n\nWe can work around it by telling it to not reorder the query.\n\nBug: T220999\nBug: T221458\nChange-Id: I9da981c09f18ba72efeeb8279aad99eb21af699a\n",
"bugs": [
"T220999",
"T221458"
],
"subject": "Add STRAIGHT_JOIN to ApiQueryLogEvents and LogPager to avoid planner oddness",
"hash": "3d1fb0c0443ee92531ca8e1aafbcf9b403879e44",
"date": "2019-04-23T14:00:21"
}
]
},
"includes/logging/LogFormatter.php": {
"File": "includes/logging/LogFormatter.php",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T185049",
"T212742",
"T222038"
],
"Commits": [
{
"message": "Add permission check for user is permitted to view the log type\n\nNote: formatter patch only\n\nNote: cherry-picked I064f563cb here as well\n\nBug: T222038\nChange-Id: I1c4e57a513e3a0e616b862a5b9d684f463ad9981\n",
"bugs": [
"T222038"
],
"subject": "Add permission check for user is permitted to view the log type",
"hash": "0b91327754153e066a01c552d2704d7954447f7d",
"date": "2019-07-25T20:32:37"
},
{
"message": "LogFormatter: ignore unrecoverable data\n\nIt is possible for the log_params column of the logging\ntable to contain serialized data that cannot be deserialized\nanymore because the types it references are missing.\n\nIt is currently the case with old Flow log entries on enwiki.\nThe extension is uninstalled and the UUID class is not found.\n\nThis patch proposes to simply skip the params and log a\nwarning in that case.\n\nBug: T212742\nChange-Id: I3226b8fb338dd2b81e087af5d798d8f35368282d\n",
"bugs": [
"T212742"
],
"subject": "LogFormatter: ignore unrecoverable data",
"hash": "99e3a646ba2ac2cf0bd55bd572aa549ac9481320",
"date": "2019-04-03T12:50:15"
},
{
"message": "LogFormatter: Fail softer when trying to link an invalid titles\n\nOld log entries contain titles that used to be valid, but now are not.\n\nBug: T185049\nChange-Id: Ia66d901aedf1b385574b3910b29f020b3fd4bd97\n",
"bugs": [
"T185049"
],
"subject": "LogFormatter: Fail softer when trying to link an invalid titles",
"hash": "26bb9d9b23eb2075eefca2097ca393a9d4aa3264",
"date": "2018-08-01T14:19:55"
}
]
},
"includes/api/i18n/qqq.json": {
"File": "includes/api/i18n/qqq.json",
"TicketCount": 3,
"CommitCount": 3,
"Tickets": [
"T203255",
"T208929",
"T255700"
],
"Commits": [
{
"message": "API: Handle ContentHandler exception for content model mismatch\n\nEnsure the content we are trying to save and the base content have\nidentical content models before proceeding to save so as to forestall\nException that may be thrown by ContentHandler if it founds they're not.\n\nThere are two cases where the models are allowed to differ: Edit that\nundoes content model change or edit that's meant to explicitly change\nthe model. The logic for these is handled separately and may succeed\nor fail, but exception will not be thrown.\n\nBug: T255700\nChange-Id: I8782732bb0fc3059693cd7035b7ebb43fd71d333\n",
"bugs": [
"T255700"
],
"subject": "API: Handle ContentHandler exception for content model mismatch",
"hash": "7af5678847cc64c0030a717121775ea041a28819",
"date": "2020-09-04T02:41:58"
},
{
"message": "ApiComparePages: Don't try to find next/prev of a deleted revision\n\nRevisionStore::getPreviousRevision() and ::getNextRevision() don't\nhandle being passed a RevisionArchiveRecord very well. Even if they\nwould, it's not clear whether the user wants to be comparing with the\nnext/previous deleted revision or the next/previous revision even if not\ndeleted. So let's just make it an error, at least for now.\n\nBug: T208929\nChange-Id: I151019e336bda92aa4040ba4162eb2588c909652\n",
"bugs": [
"T208929"
],
"subject": "ApiComparePages: Don't try to find next/prev of a deleted revision",
"hash": "3c95d3bdd935768b246d0688eaaa258feeecd4e6",
"date": "2018-12-03T20:31:34"
},
{
"message": "ApiComparePages: Clean up handling of slot deletion\n\nWe can't allow the main slot to be deleted. DifferenceEngine assumes it\nexits.\n\nWe also shouldn't allow parameters such as `tosection-{role}` to be used\nwithout the corresponing `totext-{role}`. This will help prevent people\nfrom being confused into thinking that `tosection-{role}` will do\nanything in that situation (as opposed to `tosection`, which did).\n\nBug: T203255\nChange-Id: I58573bb2c1ee68e6907ef2e88385fe36e5184076\n",
"bugs": [
"T203255"
],
"subject": "ApiComparePages: Clean up handling of slot deletion",
"hash": "07530dfb63fe24c06ab4508c6cb8436896104e7c",
"date": "2018-08-31T15:26:07"
}
]
},
"includes/registration/ExtensionDependencyError.php": {
"File": "includes/registration/ExtensionDependencyError.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T197535"
],
"Commits": [
{
"message": "registration: Let extensions add PHP extension requirements\n\nThis change adds the possibility to specify ext-* keys under the 'platform'\nkey introduced in I6744cc0be2 to require given PHP extensions. Note that\nit's impossible to add constraints different from '*', as there is no universal\nway to retrieve PHP extension versions.\n\nBug: T197535\nChange-Id: I510de1e6d80f5c1d92dc1d1665aaa6c25bf28bf7\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP extension requirements",
"hash": "c7e45b62113b1d9d6ff80705eff25ee93b722e19",
"date": "2018-09-30T17:55:57"
},
{
"message": "registration: Let extensions add PHP version requirements\n\nWhile MediaWiki Core already sets requirements for PHP versions, it should be\npossible for extensions to tighten these requirements. This mirrors the PHP\nparameter of extension infoboxes as well.\n\nThis change introduces a new 'platform' key (in addition to 'MediaWiki', 'skins'\nand 'extensions', where non-MediaWiki software requirements will be listed\nin the future, starting with a PHP version constraint. Further keys are\nsupposed to be added to allow setting constraints against php extensions\nand other abilities of the platform.\n\nBug: T197535\nChange-Id: I6744cc0be2363b603331af9dc860eb8603a1a89a\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP version requirements",
"hash": "8af76decf8799f9ebb7fa45990c95bfb13e817c3",
"date": "2018-09-22T01:43:28"
}
]
},
"includes/api/ApiSetNotificationTimestamp.php": {
"File": "includes/api/ApiSetNotificationTimestamp.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T207941"
],
"Commits": [
{
"message": "WatchedItemStore: Use batching in setNotificationTimestampsForUser\n\nUpdate rows in batches, using the same logic as is used by\nremoveWatchBatchForUser().\n\nAlso remove the functionality for updating all rows, and move that to\nresetAllNotificationTimestampsForUser() instead. To that end, add a\ntimestamp parameter to that method and to the job it uses, and make\nsetNotificationTimestampsForUser() behave like a backwards-compatibility\nwrapper around resetAllNotificationTimestampsForUser() when no list of\ntitles is specified.\n\nBug: T207941\nChange-Id: I58342257395de6fcfb4c392b3945b12883ca1680\nFollows-Up: I2008ff89c95fe6f66a3fd789d2cef0e8fe52bd93\n",
"bugs": [
"T207941"
],
"subject": "WatchedItemStore: Use batching in setNotificationTimestampsForUser",
"hash": "1da7573bb77beff9e4466430b57551986e6be248",
"date": "2019-03-21T04:41:42"
},
{
"message": "ApiSetNotificationTimestamp: Make entirewatchlist more efficient\n\nUse WatchedItemStore's built-in feature for clearing the entire\nwatchlist when no timestamp is specified. When a timestamp is specified,\nthis will still use the inefficient page-by-page method, which I'll\nimprove in a follow-up commit.\n\nBug: T207941\nChange-Id: I2008ff89c95fe6f66a3fd789d2cef0e8fe52bd93\n",
"bugs": [
"T207941"
],
"subject": "ApiSetNotificationTimestamp: Make entirewatchlist more efficient",
"hash": "ebcac0053eeed3fc28b8f326d111933345061803",
"date": "2019-01-16T22:06:46"
}
]
},
"includes/registration/VersionChecker.php": {
"File": "includes/registration/VersionChecker.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T197535"
],
"Commits": [
{
"message": "registration: Let extensions add PHP extension requirements\n\nThis change adds the possibility to specify ext-* keys under the 'platform'\nkey introduced in I6744cc0be2 to require given PHP extensions. Note that\nit's impossible to add constraints different from '*', as there is no universal\nway to retrieve PHP extension versions.\n\nBug: T197535\nChange-Id: I510de1e6d80f5c1d92dc1d1665aaa6c25bf28bf7\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP extension requirements",
"hash": "c7e45b62113b1d9d6ff80705eff25ee93b722e19",
"date": "2018-09-30T17:55:57"
},
{
"message": "registration: Let extensions add PHP version requirements\n\nWhile MediaWiki Core already sets requirements for PHP versions, it should be\npossible for extensions to tighten these requirements. This mirrors the PHP\nparameter of extension infoboxes as well.\n\nThis change introduces a new 'platform' key (in addition to 'MediaWiki', 'skins'\nand 'extensions', where non-MediaWiki software requirements will be listed\nin the future, starting with a PHP version constraint. Further keys are\nsupposed to be added to allow setting constraints against php extensions\nand other abilities of the platform.\n\nBug: T197535\nChange-Id: I6744cc0be2363b603331af9dc860eb8603a1a89a\n",
"bugs": [
"T197535"
],
"subject": "registration: Let extensions add PHP version requirements",
"hash": "8af76decf8799f9ebb7fa45990c95bfb13e817c3",
"date": "2018-09-22T01:43:28"
}
]
},
"includes/Storage/PageEditStash.php": {
"File": "includes/Storage/PageEditStash.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T204742",
"T245928"
],
"Commits": [
{
"message": "stashedit: Ensure that $summary is a string\n\nIt's only documented as string.\n\nBug: T245928\nChange-Id: I8fa287f335e90a59ac18365e7401a5cf703130a3\n",
"bugs": [
"T245928"
],
"subject": "stashedit: Ensure that $summary is a string",
"hash": "60dc4b41f833ae036c1d34e9399a4b1b1b1ca33a",
"date": "2020-02-22T19:55:28"
},
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/rdbms/database/DatabasePostgres.php": {
"File": "includes/libs/rdbms/database/DatabasePostgres.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T193565",
"T218388"
],
"Commits": [
{
"message": "rdbms: treat cloned temporary tables as \"effective write\" targets\n\nMake IDatabase::lastDoneWrites() reflect creation and changes to\nthe cloned temporary unit test tables but not other temporary tables.\nThis effects the LB method hasOrMadeRecentMasterChanges(). Other tables\nare assumpted to really just be there for temporary calculations rather\nacting as test-only ephemeral versions of permanent tables. Treating\nwrites to the \"fake permanent\" temp tables more like real permanent\ntables means that the tests better align with production.\n\nAt the moment, temporary tables still have to use DB_MASTER, given\nthe assertIsWritableMaster() check in query(). This restriction\ncan be lifted at some point, when RDBMs compatibility is robust.\n\nBug: T218388\nChange-Id: I4c0d629da254ac2aaf31aae35bd2efc7bc064ac6\n",
"bugs": [
"T218388"
],
"subject": "rdbms: treat cloned temporary tables as \"effective write\" targets",
"hash": "108fd8b18c1084de7af0bf05831ee9360f595c96",
"date": "2019-03-26T21:24:42"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/libs/objectcache/MultiWriteBagOStuff.php": {
"File": "includes/libs/objectcache/MultiWriteBagOStuff.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T198280",
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
},
{
"message": "objectcache: make MultiWriteBagOStuff handle duplicate add() operations\n\nBug: T198280\nChange-Id: Ib1bcde2b3fbfb452f80d8d840c494be2eb70eb87\n",
"bugs": [
"T198280"
],
"subject": "objectcache: make MultiWriteBagOStuff handle duplicate add() operations",
"hash": "6612407f9cbefbfd8664faffc39f18d2258aeb4c",
"date": "2018-06-28T16:04:35"
}
]
},
"includes/libs/rdbms/database/DatabaseMssql.php": {
"File": "includes/libs/rdbms/database/DatabaseMssql.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T193565",
"T212284"
],
"Commits": [
{
"message": "rdbms: add Database::executeQuery() method for internal use\n\nThis shares reconnection and retry logic but lacks some of the\nrestrictions applied to queries that go through the public query()\ninterface.\n\nUse this in a few places such as doSelectDomain() for mysql/mssql.\n\nBug: T212284\nChange-Id: Ie7341a0e6c4149fc375cc357877486efe9e56eb9\n",
"bugs": [
"T212284"
],
"subject": "rdbms: add Database::executeQuery() method for internal use",
"hash": "2866c9b7d4295314c3138166cdb671de6dbcb3ab",
"date": "2019-06-11T14:00:41"
},
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/specials/SpecialEditWatchlist.php": {
"File": "includes/specials/SpecialEditWatchlist.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T200055",
"T243449"
],
"Commits": [
{
"message": "When clearing don't load the watchlist if we must clear through a job\n\nIt looks like this bug has existed in the special page since before the\ntime of the refactoring into WatchedItemStore although apparently\nit has only just surfaced now!\n\nThis adds a new method into WatchedItemStore that decides if the\nwatchlist can be cleared interactively or must use the job queue.\nThis can then be used in the special page instead of the old logic\nwhich would load the watchlist and then count the loaded items\n(inefficient if you know your clearing the list anyway)\n\nBug: T243449\nChange-Id: I810d89e3e1142a223430f7fc5f8598a493637a72\n",
"bugs": [
"T243449"
],
"subject": "When clearing don't load the watchlist if we must clear through a job",
"hash": "8d3f6b417f2baece555d365dc518ebaf9bcbd7bd",
"date": "2020-01-29T18:36:47"
},
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
}
]
},
"includes/page/ImageHistoryPseudoPager.php": {
"File": "includes/page/ImageHistoryPseudoPager.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T204796",
"T244937"
],
"Commits": [
{
"message": "ImageHistoryPseudoPager: Update doQuery() for IndexPager changes\n\nThis class duplicates a bunch of code from IndexPager, and that code was\nchanged in 6786aa5d8e2e87f00a3b1f81d9684ff4eac676ef in a way that broke\nthis code. Update it to account for the fact that mFirstShown,\nmLastShown and mPastTheEndIndex are now arrays. Also update the\ndocumentation in IndexPager to reflect that.\n\nBug: T244937\nChange-Id: I51a50b6d3be1467f4ee399446d1d12cfed71a06c\n",
"bugs": [
"T244937"
],
"subject": "ImageHistoryPseudoPager: Update doQuery() for IndexPager changes",
"hash": "bf9ae0768d99a9e19533cc845a87dd182b1fb63f",
"date": "2020-02-11T23:36:28"
},
{
"message": "ImageHistoryPseudoPager: Protect against TimestampException from bad user input\n\nBug: T204796\nChange-Id: I17455fef0d899c56ce10f0df0db3457d944e353d\n",
"bugs": [
"T204796"
],
"subject": "ImageHistoryPseudoPager: Protect against TimestampException from bad user input",
"hash": "485547cd805a5b06a6c79cc6c9f18dbe5793e026",
"date": "2018-09-26T00:32:07"
}
]
},
"includes/libs/objectcache/RedisBagOStuff.php": {
"File": "includes/libs/objectcache/RedisBagOStuff.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T204742",
"T228303"
],
"Commits": [
{
"message": "Revert \"objectcache: fix race conditions in RedisBagOStuff::incr()\"\n\nThis commit reverts most of 7e647d2f0f5b, but keeps unrelated code\nclean ups from it, as well as the conflicting changes from d8b952ae47.\n\nFrom WMF production nutcracker:\n\n> nc_redis.c parsed unsupported command 'WATCH'\n\nThe use of WATCH, in addition to failing the commands that use it,\nalso appears to also have caused a chain reaction making nutcracker\nintermittently unavailable to other web requests.\n\nBug: T228303\nChange-Id: Ic37efc2963b147e461837571ae0b65acf3f60cb4\n",
"bugs": [
"T228303"
],
"subject": "Revert \"objectcache: fix race conditions in RedisBagOStuff::incr()\"",
"hash": "7b1b22ac80d669263979eda1c992103e8321695d",
"date": "2019-07-18T18:02:01"
},
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/api/ApiQueryImageInfo.php": {
"File": "includes/api/ApiQueryImageInfo.php",
"TicketCount": 3,
"CommitCount": 2,
"Tickets": [
"T231340",
"T231353",
"T221812"
],
"Commits": [
{
"message": "LocalFile: avoid hard failures on non-existing files.\n\nSome methods on LocalFile will fatal if called on a non-existing file.\nApiQueryImageInfo did not take that into account.\n\nThis patch changes LocalFile to avoid fatal errors, and ApiQueryImageInfo\nto not try and report information on non-existing files.\n\nNOTE: the modified code has NO test coverage! This should be fixed\nbefore this patch is applied, or the patch needs to be thoroughly tested\nmanually.\n\nBug: T221812\nChange-Id: I9b74545a393d1b7a25c8262d4fe37a6492bbc11e\n",
"bugs": [
"T221812"
],
"subject": "LocalFile: avoid hard failures on non-existing files.",
"hash": "bdc6b4e378c6872a20f6fb5842f1a49961af91b4",
"date": "2019-09-18T09:18:44"
},
{
"message": "BadFileLookup::isBadFile() expects null, not false\n\nThis deviation in behavior from wfIsBadImage() is accounted for in that\nfunction, but I didn't account for it when changing callers to use the\nservice.\n\nBug: T231340\nBug: T231353\nChange-Id: Iddf177770fb1763ed295d694ed6bab441ea9ab73\n",
"bugs": [
"T231340",
"T231353"
],
"subject": "BadFileLookup::isBadFile() expects null, not false",
"hash": "bc0405d52ff578338aa319c948011e973bfc57c2",
"date": "2019-08-27T17:50:36"
}
]
},
"includes/specials/SpecialDeletedContributions.php": {
"File": "includes/specials/SpecialDeletedContributions.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T187619",
"T208544"
],
"Commits": [
{
"message": "Prevent PHP notice on SpecialDeletedContributions\n\nBug: T208544\nChange-Id: Ie8d5c3d7257134857713853eec8e0eb42890366a\n",
"bugs": [
"T208544"
],
"subject": "Prevent PHP notice on SpecialDeletedContributions",
"hash": "2c8db66fa496c6c9935873d604d99170201a8ab8",
"date": "2019-02-15T18:45:09"
},
{
"message": "Remove trailing spaces from IP addr in Special:DeletedContributions\n\n* Trim \"target\" to remove trailing spaces from IP address in\n Special:DeletedContributions that triggers MW internal error.\n\nBug: T187619\nChange-Id: Ic6b0d8020553ecce4dcf97f9c78487d3174444d8\n",
"bugs": [
"T187619"
],
"subject": "Remove trailing spaces from IP addr in Special:DeletedContributions",
"hash": "4bc24477876b9ffa9ebb93a3d9bf646a701a9dec",
"date": "2018-10-05T14:55:29"
}
]
},
"includes/deferred/UserEditCountUpdate.php": {
"File": "includes/deferred/UserEditCountUpdate.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T202715"
],
"Commits": [
{
"message": "Make UserEditCountUpdate faster by using auto-commit mode\n\nBug: T202715\nChange-Id: I92c08694cb5e1c367809439cff42e33a56ff9878\n",
"bugs": [
"T202715"
],
"subject": "Make UserEditCountUpdate faster by using auto-commit mode",
"hash": "e2088f1170b2867ca3ababe205137cc6ad010068",
"date": "2018-10-27T20:52:45"
},
{
"message": "Move user_editcount updates to a mergeable deferred update\n\nThis should reduce excess contention and lock timeouts.\nPreviously, it used a pre-commit hook which ran just before the\nend of the DB transaction round.\n\nAlso removed unused User::incEditCountImmediate() method.\n\nBug: T202715\nDepends-on: I6d239a5ea286afb10d9e317b2ee1436de60f7e4f\nDepends-on: I0ad3d17107efc7b0e59f1dd54d5733cd1572a2b7\nChange-Id: I0d6d7ddd91bbb21995142808248d162e05696d47\n",
"bugs": [
"T202715"
],
"subject": "Move user_editcount updates to a mergeable deferred update",
"hash": "390fce6db1e008c53580cedbdfe18dff3de9c766",
"date": "2018-10-25T22:32:18"
}
]
},
"includes/logging/LogEventsList.php": {
"File": "includes/logging/LogEventsList.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T200136",
"T222038"
],
"Commits": [
{
"message": "SECURITY: Add permission check for user is permitted to view the log type\n\nBug: T222038\nChange-Id: I92ec2adfd9c514b3be1c07b7d22b9f9722d24a82\n",
"bugs": [
"T222038"
],
"subject": "SECURITY: Add permission check for user is permitted to view the log type",
"hash": "5de4402b5909f40fccb1fe6c1d1c9317da345c09",
"date": "2019-06-06T19:06:24"
},
{
"message": "LogEventsList: Use DerivativeContext\n\nBug: T200136\nChange-Id: Ie2b7753684dc0257b0b53d9c9314feeb14d99182\n",
"bugs": [
"T200136"
],
"subject": "LogEventsList: Use DerivativeContext",
"hash": "f8a777b9035b351e90bb89edf5be300317a2ee83",
"date": "2018-07-23T04:22:36"
}
]
},
"includes/jobqueue/JobQueueGroup.php": {
"File": "includes/jobqueue/JobQueueGroup.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T199594",
"T207809"
],
"Commits": [
{
"message": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()\n\nThis assures that MergeableUpdate tasks that lazy push job will actually\nhave those jobs run instead of being added after the lone callback update\nto call JobQueueGroup::pushLazyJobs() already ran.\n\nThis also makes it more obvious that push will happen, since a mergeable\nupdate is added each time lazyPush() is called and a job is buffered,\nrather than rely on some magic callback enqueued into DeferredUpdates at\njust the right point in multiple entry points.\n\nBug: T207809\nChange-Id: I13382ef4a17a9ba0fd3f9964b8c62f564e47e42d\n",
"bugs": [
"T207809"
],
"subject": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()",
"hash": "6030e9cf2c1df7929e2319602aa4d37aa641de11",
"date": "2018-10-28T22:19:06"
},
{
"message": "JobQueueGroup: Allow readOnlyReason to be specified per JQ type\n\nWe use $wgReadOnly for various reasons, one of which is to disallow\nwrites in the currently-non-active DC. However, we should allow the\nreadOnlyReason configuration variable to be available per JobQueue type\nand have it the code respect that.\n\nIn our current set-up, we use JobQueueEventBus which ever only uses the\nenqueue execution path and does not depend upon which DC it is executed\nin, so this will allow us to enqueue jobs in both DCs.\n\nNote that this is an alternative approach to the one outlined in\nIbbad6063b6b154d7f7d172c79f7be324bf80eb7e\n\nBug: T199594\nChange-Id: I8f1a57a81ea11c1c587c0057fa8bb3454b0e0b56\n",
"bugs": [
"T199594"
],
"subject": "JobQueueGroup: Allow readOnlyReason to be specified per JQ type",
"hash": "a5aa44567b3312401d6dcb0ffc9706da7e81c629",
"date": "2018-07-19T16:13:38"
}
]
},
"includes/jobqueue/JobSpecification.php": {
"File": "includes/jobqueue/JobSpecification.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T204761"
],
"Commits": [
{
"message": "Return deduplication to CategoryMembershipJob\n\nAfter I86d26e494924eec24e7b1fb32c424ac1284be478 the job is\nno longer instantiated on submission, only upon execution,\nso deduplication flags and dedup info are no longer available\nto kafka queue.\n\nBug: T204761\nDepends-On: Ieb2604e65177736606aed351c6658b7df748dcee\nChange-Id: Ibf95638a2ad218a83347db6749e2e7c9e8dbe0db\n",
"bugs": [
"T204761"
],
"subject": "Return deduplication to CategoryMembershipJob",
"hash": "a738dd647a3d4fdcf1e217c811172f81dba0f673",
"date": "2019-10-29T06:10:22"
},
{
"message": "jobqueue: Remove 'title' and 'namespace' from JobSpecification dedup info\n\nAfter recent refactors of the jobs, the job params will contain\nthe title information if it's relevant. So, the getDeduplicationInfo\nmethod of teh job class no longer includes page namespace/title\nexplicitly, but it was never removed from the JobSpecification\nclass.\n\nSee fc5d51f12936 (I9c9d0726d4066bb0a).\n\nBug: T204761\nChange-Id: Ieb2604e65177736606aed351c6658b7df748dcee\n",
"bugs": [
"T204761"
],
"subject": "jobqueue: Remove 'title' and 'namespace' from JobSpecification dedup info",
"hash": "ef51ecc6db2cb205c55032b43a9ee18020615691",
"date": "2019-10-02T18:08:26"
}
]
},
"includes/jobqueue/jobs/CategoryMembershipChangeJob.php": {
"File": "includes/jobqueue/jobs/CategoryMembershipChangeJob.php",
"TicketCount": 3,
"CommitCount": 2,
"Tickets": [
"T204761",
"T139012",
"T239772"
],
"Commits": [
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
},
{
"message": "Return deduplication to CategoryMembershipJob\n\nAfter I86d26e494924eec24e7b1fb32c424ac1284be478 the job is\nno longer instantiated on submission, only upon execution,\nso deduplication flags and dedup info are no longer available\nto kafka queue.\n\nBug: T204761\nDepends-On: Ieb2604e65177736606aed351c6658b7df748dcee\nChange-Id: Ibf95638a2ad218a83347db6749e2e7c9e8dbe0db\n",
"bugs": [
"T204761"
],
"subject": "Return deduplication to CategoryMembershipJob",
"hash": "a738dd647a3d4fdcf1e217c811172f81dba0f673",
"date": "2019-10-29T06:10:22"
}
]
},
"includes/specials/pagers/ImageListPager.php": {
"File": "includes/specials/pagers/ImageListPager.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T211774",
"T226102"
],
"Commits": [
{
"message": "Fix wfLocalFile() replacement\n\nThis bit of ImageListPager used to call wfLocalFile(), but was\naccidentally replaced with the replacement for wfFindFile() instead.\n\nBug: T226102\nChange-Id: Id50a5359fe2353ae88012d1f5a3331f570b73922\n",
"bugs": [
"T226102"
],
"subject": "Fix wfLocalFile() replacement",
"hash": "6ae93733c3376914098c82fc30c6714c26e2cd5c",
"date": "2019-06-20T10:01:03"
},
{
"message": "ImageListPager: Don't query by oi_user\n\nFor some reason we have indexes for `image` on `(img_user_text,img_timestamp)` and\n`(img_user,img_timestamp)`, but for `oldimage` we only have\n`(oi_user_text,oi_timestamp)`. Thus, when building the query in\nImageListPager, we have to be sure to avoid trying to use `oi_user`\nrather than `oi_user_text` in the WHERE part.\n\nBug: T211774\nChange-Id: Ibea058031f1cb3421e92e09f0a705ea00fb22008\n",
"bugs": [
"T211774"
],
"subject": "ImageListPager: Don't query by oi_user",
"hash": "c969d0c5650bc1805d1a0ef1da839cda2faafb0f",
"date": "2018-12-12T15:31:40"
}
]
},
"includes/Block.php": {
"File": "includes/Block.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": {
"0": "T208398",
"2": "T208472"
},
"Commits": [
{
"message": "Block: Clean up handling of non-User targets\n\nThe fix applied in d67121f6d took care of the immediate issue in\nT208398, but after further analysis it was not a correct fix.\n\n* Near line 770, the method shouldn't even be called unless the target\n is TYPE_USER.\n* Near line 1598, it isn't dealing with a target at all.\n* Near line 1813, you're not going to get a sensible result trying to\n call `$user->getTalkPage()` for a range or auto-block ID. What you\n would really need there to handle range and auto-blocks correctly is\n to pass in the User actually making the edit.\n\nBut after some pushback in code review about passing the User into\nBlock::preventsEdit() to make line 1813 work, we'll instead replace the\nmethod with Block::appliesToTitle() and put the check for user talk\npages back into User::isBlockedFrom().\n\nBug: T208398\nBug: T208472\nChange-Id: I23d3a3a1925e97f0cabe328c1cc74e978cb4d24a\n",
"bugs": [
"T208398",
"T208472"
],
"subject": "Block: Clean up handling of non-User targets",
"hash": "74ff87d291e6daddfd791270c6ee95ca587d3d46",
"date": "2018-11-02T16:33:57"
},
{
"message": "Follow-up d67121f6d: Blocks can apply to non-User objects too\n\nBug: T208398\nChange-Id: I1d39f4ff709f37e7047f49964101e83c97cda0e9\n",
"bugs": [
"T208398"
],
"subject": "Follow-up d67121f6d: Blocks can apply to non-User objects too",
"hash": "eab725e3f16df3319b3087b8c0363827ff1f72b2",
"date": "2018-10-31T15:50:46"
}
]
},
"includes/DevelopmentSettings.php": {
"File": "includes/DevelopmentSettings.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T225796"
],
"Commits": [
{
"message": "DevelopmentSettings: Document why wgRateLimits is disabled\n\nFollows-up 9c52f982d8.\n\nBug: T225796\nChange-Id: I6f7a75d58c61712233134a9d480ce68719d6cb6a\n",
"bugs": [
"T225796"
],
"subject": "DevelopmentSettings: Document why wgRateLimits is disabled",
"hash": "098463b77ad46f433bd3c987e137fe216a7963ac",
"date": "2019-06-18T00:02:24"
},
{
"message": "Disable rate limiting in Development Settings\n\nBug: T225796\nChange-Id: I2475a04066d4aaefeba372bd223ef68548a8cf18\n",
"bugs": [
"T225796"
],
"subject": "Disable rate limiting in Development Settings",
"hash": "9c52f982d8e4679ee50973d233a61cdc9653f1fd",
"date": "2019-06-17T09:11:39"
}
]
},
"includes/api/ApiComparePages.php": {
"File": "includes/api/ApiComparePages.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T203255",
"T208929"
],
"Commits": [
{
"message": "ApiComparePages: Don't try to find next/prev of a deleted revision\n\nRevisionStore::getPreviousRevision() and ::getNextRevision() don't\nhandle being passed a RevisionArchiveRecord very well. Even if they\nwould, it's not clear whether the user wants to be comparing with the\nnext/previous deleted revision or the next/previous revision even if not\ndeleted. So let's just make it an error, at least for now.\n\nBug: T208929\nChange-Id: I151019e336bda92aa4040ba4162eb2588c909652\n",
"bugs": [
"T208929"
],
"subject": "ApiComparePages: Don't try to find next/prev of a deleted revision",
"hash": "3c95d3bdd935768b246d0688eaaa258feeecd4e6",
"date": "2018-12-03T20:31:34"
},
{
"message": "ApiComparePages: Clean up handling of slot deletion\n\nWe can't allow the main slot to be deleted. DifferenceEngine assumes it\nexits.\n\nWe also shouldn't allow parameters such as `tosection-{role}` to be used\nwithout the corresponing `totext-{role}`. This will help prevent people\nfrom being confused into thinking that `tosection-{role}` will do\nanything in that situation (as opposed to `tosection`, which did).\n\nBug: T203255\nChange-Id: I58573bb2c1ee68e6907ef2e88385fe36e5184076\n",
"bugs": [
"T203255"
],
"subject": "ApiComparePages: Clean up handling of slot deletion",
"hash": "07530dfb63fe24c06ab4508c6cb8436896104e7c",
"date": "2018-08-31T15:26:07"
}
]
},
"includes/libs/filebackend/FileBackendStore.php": {
"File": "includes/libs/filebackend/FileBackendStore.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T205567",
"T204174"
],
"Commits": [
{
"message": "filebackend: avoiding computing file SHA-1 hashes unless needed\n\nFileBackendStore already supports stat info not returning SHA-1.\nBuild on that logic with a \"requireSHA1\" parameter to getFileStat()\nto move some logic from SwiftFileBackend to the parent class and\navoid computing missing SHA-1's for Swift when nothing actually\nrequested the SHA-1. Only getFileSha1Base36() needs to trigger this\nlazy-population logic.\n\nNote that thumbnails only use doQuickOperations(), which does not\nneed to examine SHA-1s, it only does regular getFileStat() calls.\n\nAlso renamed addMissingMetadata() to addMissingHashMetadata().\n\nBug: T204174\nChange-Id: I2a378cb2a34608a6da2f8abe604861ff391ffaa7\n",
"bugs": [
"T204174"
],
"subject": "filebackend: avoiding computing file SHA-1 hashes unless needed",
"hash": "e1497e3593154f579cf5f6597bbf8b88cf16dce6",
"date": "2018-12-10T22:51:26"
},
{
"message": "filebackend: Add normalization for stat errors\n\nBug: T205567\nChange-Id: I75f1eb6dc2cbff0ea0dc0706cca0ad79c54fc612\n",
"bugs": [
"T205567"
],
"subject": "filebackend: Add normalization for stat errors",
"hash": "8df0342eef3104dbb78f157d7a41965c4ce632df",
"date": "2018-10-04T23:00:48"
}
]
},
"includes/libs/objectcache/ReplicatedBagOStuff.php": {
"File": "includes/libs/objectcache/ReplicatedBagOStuff.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T198279",
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
},
{
"message": "objectcache: define makeKey()/makeGlobalKey() for ReplicatedBagOStuff\n\nProxy to the \"master\"/\"write\" cache object method. This is similar to the\napproach taken in MultiWriteBagOStuff\n\nBug: T198279\nChange-Id: If0933246b7ef4fc07ebeec4c3c9625b1137dbe05\n",
"bugs": [
"T198279"
],
"subject": "objectcache: define makeKey()/makeGlobalKey() for ReplicatedBagOStuff",
"hash": "61a7e1acd0af4a5386df03335733accfde179fa1",
"date": "2018-06-27T14:16:01"
}
]
},
"includes/media/FormatMetadata.php": {
"File": "includes/media/FormatMetadata.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T226751",
"T257497"
],
"Commits": [
{
"message": "Typehint FormatMetadata::collapseContactInfO()\n\nThe method expects array, but was given a string.\nSince there's only one caller, the caller is fixed\nand the method typehinted.\n\nAlso fix doc comment\n\nBug: T257497\nChange-Id: I67c337c4ee95ca30d968b89251dbbe077d2110e3\n",
"bugs": [
"T257497"
],
"subject": "Typehint FormatMetadata::collapseContactInfO()",
"hash": "a31b63824507f8194cd4f333375e30c54de75534",
"date": "2020-07-14T20:26:47"
},
{
"message": "media: Log and fail gracefully on invalid EXIF coordinates\n\nThe $coord value is a value extracted from the EXIF section of an\nimage file. We expect it to be a float, but there is no guarantee this\nis the case. It could, for example, be an empty string.\n\nI suggest this trivial fix. It does have the following effects:\n* Instead of logging a PHP notice when floor() hits something that is\n not a number, I try to log something that's more useful for later,\n more in-depth debugging. Note this log call isn't necessarily meant\n to stay, but to find an even better fix for this issue.\n* I return the string as it is. If it's \"foo\", the user will see \"foo\"\n instead of \"0° 0′ 0″ N\", which wasn't helpful.\n\nAlso note how wrong and misleading the PHPDoc block for this function\nwas.\n\nBug: T226751\nChange-Id: I1ca98728de4113ee1ae4362bd3e62b425d589388\n",
"bugs": [
"T226751"
],
"subject": "media: Log and fail gracefully on invalid EXIF coordinates",
"hash": "f6787ede2db29fcc2c1923e23eaa2e9bf86522a1",
"date": "2019-11-29T13:08:01"
}
]
},
"includes/changes/ChangesList.php": {
"File": "includes/changes/ChangesList.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T219114",
"T251386"
],
"Commits": [
{
"message": "ChangesList::insertRollback - Force rc_this_oldid to integer before use\n\nSame for the other fields that needed to be integers\n\nFollow up to 71b64e46e4b3c5f194597236018756dbab375302\n\nBug: T251386\nChange-Id: Idf4cc7d3cc54a98e46619c0996f8109bc1e88620\n",
"bugs": [
"T251386"
],
"subject": "ChangesList::insertRollback - Force rc_this_oldid to integer before use",
"hash": "3d4fb45f96b98d985759f87122b661ae7246b478",
"date": "2020-04-29T10:56:51"
},
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
}
]
},
"includes/externalstore/ExternalStore.php": {
"File": "includes/externalstore/ExternalStore.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
},
{
"message": "Add ExternalStoreMedium::isReadOnly() method\n\nUse this to abort out of store() calls early\n\nBug: T187942\nChange-Id: I9334d36e8bc3e4589775471eee03be4f4a3119a3\n",
"bugs": [
"T187942"
],
"subject": "Add ExternalStoreMedium::isReadOnly() method",
"hash": "656f60c15434afb69d60e511cfebf84fc4fbc2c2",
"date": "2018-02-22T16:15:20"
}
]
},
"includes/externalstore/ExternalStoreMedium.php": {
"File": "includes/externalstore/ExternalStoreMedium.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
},
{
"message": "Add ExternalStoreMedium::isReadOnly() method\n\nUse this to abort out of store() calls early\n\nBug: T187942\nChange-Id: I9334d36e8bc3e4589775471eee03be4f4a3119a3\n",
"bugs": [
"T187942"
],
"subject": "Add ExternalStoreMedium::isReadOnly() method",
"hash": "656f60c15434afb69d60e511cfebf84fc4fbc2c2",
"date": "2018-02-22T16:15:20"
}
]
},
"includes/page/PageArchive.php": {
"File": "includes/page/PageArchive.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T200072",
"T222402"
],
"Commits": [
{
"message": "Avoid gap locking in PageArchive::undeleteRevisions\n\nDo not run the FOR UPDATE when it may be not find a row.\nUse the fresh WikiPage object to check exists of the page and than run\nFOR UPDATE for WikiPage::updateRevisionOn later\n\nBug: T222402\nChange-Id: Ifa3f407d7e1d0da6b3b7902c315ba65538add6ec\n",
"bugs": [
"T222402"
],
"subject": "Avoid gap locking in PageArchive::undeleteRevisions",
"hash": "642abbe8b5061e73102cff177abf14bf470930c7",
"date": "2020-07-10T20:13:45"
},
{
"message": "PageArchive: Pass correct overrides to newRevisionFromArchiveRow()\n\nRevision::newFromArchiveRow took 'page' as an override for ar_page_id,\nwhile RevisionStore::newRevisionFromArchiveRow() needs 'page_id'.\n\nThanks to sanity checks elsewhere in RevisionStore, this mistaken\noverride causes an exception to be thrown rather than undeleted\nrevisions potentially pointing to the wrong page.\n\nBug: T200072\nChange-Id: I9d7543866c674f4d8aea9ec00fcc15cbf616ca66\n",
"bugs": [
"T200072"
],
"subject": "PageArchive: Pass correct overrides to newRevisionFromArchiveRow()",
"hash": "75e03c7860f54734eb7164b84cfa59d6cb9ab07e",
"date": "2018-07-23T12:44:02"
}
]
},
"includes/specialpage/ChangesListSpecialPage.php": {
"File": "includes/specialpage/ChangesListSpecialPage.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T229954",
"T253098"
],
"Commits": [
{
"message": "Filter invalid namespace on Special:RecentChanges and friend\n\nThis removes invalid values from the query and cast everything to int\nThe database field is known as integer and there is no need to pass\nstrings to the database\n\nIn case of all namespaces are invalid the parameter is ignored and all\nnamespaces are returned\n\nBug: T253098\nChange-Id: I166ef53aed669de61c20db42e3da75783ebc3970\n",
"bugs": [
"T253098"
],
"subject": "Filter invalid namespace on Special:RecentChanges and friend",
"hash": "faaee68c418990fa7a4f5ecb181f18e1a439598e",
"date": "2020-07-10T19:42:51"
},
{
"message": "ChangesListSpecialPage: skip associated for namespaces that don't have one.\n\nBug: T229954\nChange-Id: Ic69989c3ac60a86d58842ec34dc9e883988d785f\n",
"bugs": [
"T229954"
],
"subject": "ChangesListSpecialPage: skip associated for namespaces that don't have one.",
"hash": "e8608a5c8fb51024e88cfda6089e637db366139d",
"date": "2019-08-07T16:01:44"
}
]
},
"includes/jobqueue/jobs/AssembleUploadChunksJob.php": {
"File": "includes/jobqueue/jobs/AssembleUploadChunksJob.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T228749",
"T204881"
],
"Commits": [
{
"message": "concatenateChunks failures are not job failures\n\nconcatenateChunks can fail for many reasons, including\n- corrupt files\n- don't pass abusefilter\n- exceeds file limits\netc.\n\nWhen we're unable to concatenate the chunks, it shouldn't be\nconsidered a job failure. It just didn't pass whatever requirements,\nand the user should be informed about it, but the job actually\ndid what it had to do and it's not the software's fault that the\nfile isn't any good.\n\nBesides, this job is only triggered on async uploads.\nSynchronous uploads call concatenateChunks directly, and no exceptions\nare thrown there when it fails to concatenate: we simply inform the\nusers what happened - same as what's already happening for these async\nuploads as well.\n\nTL;DR: The job did what it has to do.\n\nBug: T204881\nChange-Id: Idf166d2fb509211080fb9e5f0e085467ef7d7ef2\n",
"bugs": [
"T204881"
],
"subject": "concatenateChunks failures are not job failures",
"hash": "1d191334eed4aae5ce0f593f583533b2cf7e78a7",
"date": "2020-07-01T14:23:19"
},
{
"message": "Don't try to store File objects to the upload session\n\nFile objects can contain closures which can't be serialized.\n\nInstead, add makeWarningsSerializable(), which converts the warnings\nto a serializable array. Make ApiUpload::transformWarnings() act on this\nserializable array instead. For consistency, ApiUpload::getApiWarnings()\nalso needs to convert the result of checkWarnings() before transforming\nit.\n\nBug: T228749\nChange-Id: I8236aaf3683f93a03a5505803f4638e022cf6d85\n",
"bugs": [
"T228749"
],
"subject": "Don't try to store File objects to the upload session",
"hash": "51e837f68f6df7fdc6cb35803e497bfc0532c861",
"date": "2019-07-26T06:15:30"
}
]
},
"includes/filerepo/FileRepo.php": {
"File": "includes/filerepo/FileRepo.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T253922",
"T263014"
],
"Commits": [
{
"message": "Hard deprecate File::userCan() with $user=null\n\nThe ArchivedFile::userCan and OldLocalFile::userCan() methods, along\nwith a number of other methods where the user parameter was optional,\nwere deprecated in 1.35, but this case was overlooked. This patch is\nintended for backport to 1.35, so that the $user parameter can be\nremoved in 1.36 in accordance with the deprecation policy.\n\nThis path is known to be used by LocalRepo::findFile(),\nFileRepo::findFile(), and FileRepo::findFileFromKey(), so hacky\nworkarounds have been added in this patch to avoid triggering\ndeprecation warnings in 1.35. T263033 has been filed to fix these\n'correctly' in 1.36.\n\nBug: T263014\nChange-Id: I17cab8ce043a5965aeab089392068b91c686025e\n",
"bugs": [
"T263014"
],
"subject": "Hard deprecate File::userCan() with $user=null",
"hash": "5e703cdf669f85e2924dee76c5a8831633259ce0",
"date": "2020-09-16T16:27:51"
},
{
"message": "Mark two FileRepo functions public\n\nFunctions references as callbacks are passed outside the class\n\nBug: T253922\nChange-Id: I301283aeb814b558d338e600390b8af5679d7f70\nFollows-Up: I44cd7ba39a898a27f0f66cf34238ab95370d2279\n",
"bugs": [
"T253922"
],
"subject": "Mark two FileRepo functions public",
"hash": "693df22570dfbd399a8d1bf4d822e01089d5829f",
"date": "2020-05-28T21:03:02"
}
]
},
"includes/SiteStats.php": {
"File": "includes/SiteStats.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T186947"
],
"Commits": [
{
"message": "Salvage site_stats row with negative values in miser mode\n\n* Instead of returning all zeroes, just use zero for the\n negative values in the row.\n* Allow large numbers since the fields are BIGINT.\n* Clean up the return types to truly be integers.\n* Respect the $groups argument in SiteStatsInit::getDB().\n\nBug: T186947\nChange-Id: I51fdc45124c12aba114540fc0ec66a3e63d61e09\n",
"bugs": [
"T186947"
],
"subject": "Salvage site_stats row with negative values in miser mode",
"hash": "6535091de23982b0a31642005d47f13b8e554b66",
"date": "2018-02-14T23:37:55"
},
{
"message": "Make SiteStatsInit::doPlaceholderInit() use 1 for ss_row_id\n\nThis makes it consistent with refresh() and avoids having two rows\non new wikis. Also make the SELECT explicitly look for row 1.\n\nBug: T186947\nChange-Id: I4f952888bf8fecc791366a9698e46d61a4ad4ff3\n",
"bugs": [
"T186947"
],
"subject": "Make SiteStatsInit::doPlaceholderInit() use 1 for ss_row_id",
"hash": "fcc7e1c0285944b947995fd5668723fc7e939f80",
"date": "2018-02-12T18:08:10"
}
]
},
"includes/page/Article.php": {
"File": "includes/page/Article.php",
"TicketCount": 3,
"CommitCount": 2,
"Tickets": [
"T198176",
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "Use job queue for deletion of pages with many revisions\n\nPages with many revisions experience transaction size exceptions,\ndue to archiving revisions. Use the job queue to split the work\ninto batches and avoid exceptions.\n\nBug: T198176\nChange-Id: Ie800fb5a46be837ac91b24b9402ee90b0355d6cd\n",
"bugs": [
"T198176"
],
"subject": "Use job queue for deletion of pages with many revisions",
"hash": "ca9f1dabf3719c579fd117e7b9826a3269783e7e",
"date": "2018-10-04T00:16:14"
}
]
},
"includes/context/RequestContext.php": {
"File": "includes/context/RequestContext.php",
"TicketCount": 3,
"CommitCount": 2,
"Tickets": [
"T198054",
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "specialpage: Fix login crash caused by unknown language via ?uselang\n\nPer Krinkle's comments here: T198054#4598447, it's exactly what is\nhappening. @Fomafix suggests we handle the exception that is been\nthrown by Language::factory() when there is an invalid language code\nprovided. The attempt is to fix this in the central way of ever request\nwhether POST or GET.\n\nWe're working within a particular context, and within this request\ncontext, we can create a language user's language object then generate\na language object. If uselang parameter is provided an invalid language\ncode, getLanguage in this request context will default to $wgLanguageCode\nthen use this code to create the user's language object. In addition,\ngetLanguage() invalidates cached user interface language.\n\nBug: T198054\nChange-Id: I825fdfa882a4243ffc63c9de0d7f482e2cfb9862\n",
"bugs": [
"T198054"
],
"subject": "specialpage: Fix login crash caused by unknown language via ?uselang",
"hash": "34f78a69ef058c19ecdac7290a174624ef8212ee",
"date": "2019-01-29T16:16:02"
}
]
},
"includes/Revision/RevisionStoreRecord.php": {
"File": "includes/Revision/RevisionStoreRecord.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T221763",
"T254210"
],
"Commits": [
{
"message": "Add more data to the rev_timestamp assertion error for debugging\n\nBug: T254210\nChange-Id: Icd4595fdddd68191382bb2651ba111b4cef49eeb\n",
"bugs": [
"T254210"
],
"subject": "Add more data to the rev_timestamp assertion error for debugging",
"hash": "0a2ab7c33c8ca4d28e6f116506db2d2239db732c",
"date": "2020-06-03T22:43:19"
},
{
"message": "RevisionStoreRecord: improve reporting of mismatching titles.\n\nWhen throwing an exception to report a Title mismatchign the value of rev_page while\nconstructing a RevisionStoreRecord, the error message should contain the\ntitle text in addition to the page ID, to make investigation easier.\n\nBug: T221763\nChange-Id: Ife79389e4cd09d7760f26dee505ece236db39fe2\n",
"bugs": [
"T221763"
],
"subject": "RevisionStoreRecord: improve reporting of mismatching titles.",
"hash": "6f68a7407445ebe84026758526c08567623bdbbe",
"date": "2020-01-07T19:58:31"
}
]
},
"includes/OutputPage.php": {
"File": "includes/OutputPage.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T248049",
"T254079"
],
"Commits": [
{
"message": "OutputPage: Temporary hack to avoid taint-check crash\n\nBug: T254079\nChange-Id: Ic4dbba20d5be43d49a39f6a09a734fab6722c44f\n",
"bugs": [
"T254079"
],
"subject": "OutputPage: Temporary hack to avoid taint-check crash",
"hash": "dad17c82da0dfba8348fb03ec8b58202a4860d5d",
"date": "2020-05-31T10:04:13"
},
{
"message": "OutputPage: Fix warning when setting wgUserNewMsgRevisionId\n\nFollow-up to e08e9609ffa0cc8b52a5752acf665861e3857603, which contained a\ntypo ($$ instead of $)\n\nBug: T248049\nChange-Id: I7206cb61a61cad528ece880cffcbdd7b4e04e935\n",
"bugs": [
"T248049"
],
"subject": "OutputPage: Fix warning when setting wgUserNewMsgRevisionId",
"hash": "f160f71823107d028efb94d57627e561d4ff15b7",
"date": "2020-03-19T03:27:14"
}
]
},
"includes/specials/pagers/DeletedContribsPager.php": {
"File": "includes/specials/pagers/DeletedContribsPager.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T252043",
"T252052"
],
"Commits": [
{
"message": "Complete required fields for RevisionFactory::newRevisionFromArchiveRow\n\nRevisionFactory::newRevisionFromArchiveRow requires 'ar_id' field and this will generate undefined property notice once I3baf3de is deployed. The error was previously masked because of mistaken use of RevisionFactory::newRevisionFromRow which does not require this field.\n\nBug: T252052\nChange-Id: I2b8449d117f1cd8ee9abce49ab5b0ebd163f29e9\n",
"bugs": [
"T252052"
],
"subject": "Complete required fields for RevisionFactory::newRevisionFromArchiveRow",
"hash": "820ca474d057c4f36d7cc65bc448e51c21980653",
"date": "2020-05-06T18:33:21"
},
{
"message": "DeletedContribsPager: Revision rows are from the archive table\n\nNeed to use newRevisionFromArchiveRow, not newRevisionFromRow\n\nBroken by 49707c59da2c4776b7752a5f2d5710f772c8d541\n\nBug: T252043\nChange-Id: I3baf3de96cf0ed4981950a8fd7c7944ae72d074b\n",
"bugs": [
"T252043"
],
"subject": "DeletedContribsPager: Revision rows are from the archive table",
"hash": "49a16709773a7d6159960c978682394edc79a1f7",
"date": "2020-05-06T16:29:47"
}
]
},
"includes/Revision/SlotRecord.php": {
"File": "includes/Revision/SlotRecord.php",
"TicketCount": 4,
"CommitCount": 2,
"Tickets": [
"T200653",
"T219816",
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
},
{
"message": "SlotRecord:compute sha1 if empty.\n\nThe SHA1 should be computed automatically if empty, not just if it is\nmissing entirely. This is needed since rev_sha1 may contain an empty\nstring for old revisions.\n\nBug: T200653\nBug: T219816\nChange-Id: Ia6870a828bc9661fb05085e36315a86483ec48c4\n",
"bugs": [
"T200653",
"T219816"
],
"subject": "SlotRecord:compute sha1 if empty.",
"hash": "440d9b84beef50eaefd209fc371ce315a242c923",
"date": "2019-07-04T08:57:17"
}
]
},
"includes/deferred/LinksDeletionUpdate.php": {
"File": "includes/deferred/LinksDeletionUpdate.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T195397",
"T199762"
],
"Commits": [
{
"message": "Reduce the rate of calls to Category::refreshCounts\n\nBug: T199762\nChange-Id: I23e2e1ebf187d21ea4bd22304aa622199a8b9c5b\n",
"bugs": [
"T199762"
],
"subject": "Reduce the rate of calls to Category::refreshCounts",
"hash": "70ed89ad436f9a5c9090e9927c40a701db6cf93f",
"date": "2018-07-17T23:46:38"
},
{
"message": "Reduce frequency of refreshCounts() calls in LinksDeletionUpdate\n\nBug: T195397\nChange-Id: I0a39c735ec516b70c43c7a40583c43289550b687\n",
"bugs": [
"T195397"
],
"subject": "Reduce frequency of refreshCounts() calls in LinksDeletionUpdate",
"hash": "9a2ba8e21d820478f96adead39b544d92d1d6306",
"date": "2018-06-12T17:50:52"
}
]
},
"includes/Rest/RequestFromGlobals.php": {
"File": "includes/Rest/RequestFromGlobals.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": {
"0": "T256831",
"2": "T261344"
},
"Commits": [
{
"message": "Rest: Use try/catch to handle URIs with embedded colon\n\nThis is a follow up to a previous fix in\n4079d328e7d4cd689f1d73e38f2b1584cec13d81 which used parse_url()==false\nas an indirect test to see if `new Uri()` would throw. Avoid the\nindirection and use a try/catch instead to be more robust against\nfixes in the Uri library and/or the parse_url() implementation.\n\nBug: T256831\nBug: T261344\nChange-Id: Ia52c5b2c77a4481afd82b468c2f7fb3c05996a91\n",
"bugs": [
"T256831",
"T261344"
],
"subject": "Rest: Use try/catch to handle URIs with embedded colon",
"hash": "4a1b4aeb2aaf22039ca8e0be242b8a2fa763165d",
"date": "2020-09-04T13:50:28"
},
{
"message": "Rest: Handle Uri constructor exception\n\nAll titles that contain a colon followed by a number cannot, currently,\nbe accessed via the Rest endpoint.\n\nFor example https://en.wikipedia.org/wiki/3:33 is a valid title/article\non English Wikipedia and can be accessed there the index/api.php entry\npoints. But the rest endpoint will fatal:\nhttps://en.wikipedia.org/w/rest.php/v1/page/3:33/history\n\nThe exception is thrown in Uri constructor of GuzzleHttp library\nif parse_url() failed to parse the request URL. But parse_url() has\nan open bug of failing to parse URLs that contain the above pattern.\nThe function returns false in such cases, (it previously raised warning\nsee I2715607);\n\nTo make our titles with this pattern accessible, we have to forestall\nthis exception.\n\nBug: T256831\nChange-Id: Ib829afc7b33419b01e69ababa147d33b30c0fbcb\n",
"bugs": [
"T256831"
],
"subject": "Rest: Handle Uri constructor exception",
"hash": "4079d328e7d4cd689f1d73e38f2b1584cec13d81",
"date": "2020-07-14T16:54:29"
}
]
},
"includes/exception/MWExceptionRenderer.php": {
"File": "includes/exception/MWExceptionRenderer.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T206283",
"T225657"
],
"Commits": [
{
"message": "exception: No longer try to send error page post-send on api.php\n\nFor other endpoints this was already fixed, as all MWExceptionRenderer\nlogic checks headers_sent() before outputting headers.\n\nFor the MW_API condition, it was calling wfHttpError(), which in\nturn unconditionally tried to send headers.\n\nFix this by removing use of wfHttpError(), and instead re-use the\nexisting logic for a minimal http error page. Do this by removing\nthe early condition and instead let if fall into the general\nrender methods, and then treat MW_API as a non-OutputPage scenario.\n\nBug: T225657\nChange-Id: I38bbf8007078c290a2576ef177b789fab1d2059f\n",
"bugs": [
"T225657"
],
"subject": "exception: No longer try to send error page post-send on api.php",
"hash": "36e0e638a8eecbc48dd26eb531062865c803346c",
"date": "2020-03-26T17:53:27"
},
{
"message": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown\n\nSet appropriate headers and flush the output as needed to avoid blocking\nthe client on post-send updates for the stock apache2 server scenario.\nSeveral cases have bits of header logic to avoid delay:\n\na) basic GET/POST requests that succeed (e.g. HTTP 2XX)\nb) requests that fail with errors (e.g. HTTP 500)\nc) If-Modified-Since requests (e.g. HTTP 304)\nd) HEAD requests\n\nThis last two still block on deferred updates, so schedulePostSendJobs()\ndoes not trigger on them as a form of mitigation. Slow deferred updates\nshould only trigger on POST anyway (inline and redirect responses are\nOK), so this should not be much of a problem.\n\nDeprecate triggerJobs() and implement post-send job runs as a deferred.\nThis makes it easy to check for the existence of post-send updates by\ncalling DeferredUpdates::pendingUpdatesCount() after the pre-send stage.\nAlso, avoid running jobs on requests that had exceptions. Relatedly,\nremove $mode option from restInPeace() and doPostOutputShutdown()\nOnly one caller was using the non-default options.\n\nBug: T206283\nChange-Id: I2dd2b71f1ced0f4ef8b16ff41ffb23bb5b4c7028\n",
"bugs": [
"T206283"
],
"subject": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown",
"hash": "4f11b614544be8cb6198fbbef36e90206ed311bf",
"date": "2019-09-30T22:59:59"
}
]
},
"includes/parser/CacheTime.php": {
"File": "includes/parser/CacheTime.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T205464",
"T264257"
],
"Commits": [
{
"message": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"\n\nThis reverts commit deacee9088948e074722af0148000ad9455b07df.\n\nBug: T264257\nChange-Id: Ie68d8081a42e7d8103e287b6d6857a30dc522f75\n",
"bugs": [
"T264257"
],
"subject": "Revert \"Revert \"Revert \"Hard deprecate all public properties in CacheTime and ParserOutput\"\"\"",
"hash": "3254e41a4cc0bcf85b5b0de3e3d237d4ebd7a987",
"date": "2020-10-01T18:03:41"
},
{
"message": "ParserOutput::getCacheTime should stay the same after the first call.\n\nPreviously, getCacheTime would default to the current time, potentially\ncausing the return value to change over subsequent calls. With this change,\nthe value is determined on the first call, and then remembered for subsequent\ncalls.\n\nBug: T205464\nChange-Id: If240161c71d523ad5b0d33b9378950e0bebceb6e\n",
"bugs": [
"T205464"
],
"subject": "ParserOutput::getCacheTime should stay the same after the first call.",
"hash": "414215ccac0f3288ccb1a40a5a8e29b9a70f0e89",
"date": "2018-10-04T11:08:56"
}
]
},
"includes/Storage/PageUpdater.php": {
"File": "includes/Storage/PageUpdater.php",
"TicketCount": 5,
"CommitCount": 2,
"Tickets": [
"T203583",
"T246720",
"T204793",
"T221763",
"T225366"
],
"Commits": [
{
"message": "RevisionStore and PageUpdater: handle stale page ID\n\nSometimes, an edit is done with a Title object that has gone\nout of sync with the database after a page move. In this case,\nwe should re-load the current page ID from the database,\ninstead of failing the update hard.\n\nBug: T246720\nBug: T204793\nBug: T221763\nBug: T225366\nChange-Id: If7701205ec2bf4d4495349d3e67cf53d32ee8357\n",
"bugs": [
"T246720",
"T204793",
"T221763",
"T225366"
],
"subject": "RevisionStore and PageUpdater: handle stale page ID",
"hash": "1fcd23878cfad4a40429aacec5381ad711d9bbe7",
"date": "2020-04-20T16:11:45"
},
{
"message": "Provide new, unsaved revision to PST to fix magic words.\n\nThis injects the new, unsaved RevisionRecord object into the Parser used\nfor Pre-Save Transform, and sets the user and timestamp on that revision,\nto allow {{subst:REVISIONUSER}} and {{subst:REVISIONTIMESTAMP}} to function.\n\nBug: T203583\nChange-Id: I31a97d0168ac22346b2dad6b88bf7f6f8a0dd9d0\n",
"bugs": [
"T203583"
],
"subject": "Provide new, unsaved revision to PST to fix magic words.",
"hash": "465954aa23cec76ca47e51a58ff342f46fbbdcab",
"date": "2018-09-06T16:33:44"
}
]
},
"includes/session/SessionManager.php": {
"File": "includes/session/SessionManager.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T264799"
],
"Commits": [
{
"message": "SessionManager: Always log IP/UA in session-ip\n\nBug: T264799\nChange-Id: I25257cde897db684a8438923487f80b09abe16c0\n",
"bugs": [
"T264799"
],
"subject": "SessionManager: Always log IP/UA in session-ip",
"hash": "56aacd44675c63ba93509d8f2e938cb492a7b496",
"date": "2020-10-09T16:45:22"
},
{
"message": "Log IP/device changes within the same session\n\nStore IP and device information in the session and log when\nit changes. The goal is to detect session leakage when the\nsession is accidentally sent to another user, which is a\nhypothetical cause of T264370. The log will be noisy since\nusers do change IP addresses for a number of reasons,\nbut we are mainly interested in the ability of correlating\nuser-reported incidents where we have a username to filter\nby, so that's OK.\n\nBased on I27468a3f6d58.\n\nBug: T264799\nChange-Id: Ifa14fa637c1b199159ea11e983a25212ae005565\n",
"bugs": [
"T264799"
],
"subject": "Log IP/device changes within the same session",
"hash": "d5d3c90152299abf73f7747d5c53984d9fb53ea1",
"date": "2020-10-08T20:13:25"
}
]
},
"includes/api/ApiWatchlistTrait.php": {
"File": "includes/api/ApiWatchlistTrait.php",
"TicketCount": 2,
"CommitCount": 2,
"Tickets": [
"T261030",
"T264200"
],
"Commits": [
{
"message": "Revert \"Revert \"ApiEditPage: Show existing watchlist expiry if status\nis not being changed.\"\"\n\nThis reverts commit 149e99f07230d041945871ddb6e0647ccc83dc21.\n\nIt's not necessary to change the constructor now, the module is already\nusing service locator to fetch RevisionLookup and ContentHandlerFactory.\n\nThe WatchedItemStore can also be gotten from there, voiding the need for\naltering the constructor now. As Daniel said in T259960#6380471 dependency\ninjection for API modules is good but not urgent.\n\nBug: T261030\nBug: T264200\nChange-Id: I16aa942cc800cd66a2cd538680a02b10cb0b1bfe\n",
"bugs": [
"T261030",
"T264200"
],
"subject": "Revert \"Revert \"ApiEditPage: Show existing watchlist expiry if status\nis not being changed.\"\"",
"hash": "30b947ad5f9ab9249b800f22873fe07a25ca147c",
"date": "2020-09-30T19:28:47"
},
{
"message": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"\n\nThis reverts commit 07e547f47cae761489a33e9ebb8a9b108298f34e.\n\nReason for revert: LiquidThreads extends the ApiEditPage class,\neven though it shouldn't, and thus fails when the dependencies\nare not injected.\n\nBug: T261030\nBug: T264200\nChange-Id: Ib14f8a04bb6c723aa502a47ef9ccde6fe96a0ac7\n",
"bugs": [
"T261030",
"T264200"
],
"subject": "Revert \"ApiEditPage: Show existing watchlist expiry if status is not being changed.\"",
"hash": "149e99f07230d041945871ddb6e0647ccc83dc21",
"date": "2020-09-30T15:29:59"
}
]
},
"includes/user/UserNameUtils.php": {
"File": "includes/user/UserNameUtils.php",
"TicketCount": 1,
"CommitCount": 2,
"Tickets": [
"T249045"
],
"Commits": [
{
"message": "UserNameUtils: use ITextFormatter instead of MessageLocalizer\n\nBug: T249045\nChange-Id: Ica1e1e4788d4b9f9dfcf9f8c8b4136147d92b32e\n",
"bugs": [
"T249045"
],
"subject": "UserNameUtils: use ITextFormatter instead of MessageLocalizer",
"hash": "7f643f2ab6ea5c8e3734adbcefbf6f4d01787de0",
"date": "2020-04-13T16:28:02"
},
{
"message": "Use wfMessage in UserNameUtils::isUsable for now\n\nUntil a MessageFactory can be properly created without the time pressure\nof holding the train\n\nBug: T249045\nChange-Id: I7474c4aaaaaa095abeb697b3a0c9a6bba2f67633\n",
"bugs": [
"T249045"
],
"subject": "Use wfMessage in UserNameUtils::isUsable for now",
"hash": "f7366e197954fdde1e9340907238922a748968ea",
"date": "2020-03-31T20:53:10"
}
]
},
"includes/libs/rdbms/exception/DBQueryError.php": {
"File": "includes/libs/rdbms/exception/DBQueryError.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T212284"
],
"Commits": [
{
"message": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql\n\nThis should give better error messages on failure.\n\nBug: T212284\nChange-Id: I55260c6e3db1770f01e3d6a6a363b917a57265be\n",
"bugs": [
"T212284"
],
"subject": "rdbms: use a direct \"USE\" query for doSelectDomain() for mysql",
"hash": "321640b117b775ba7feb26281922bfd7833b0618",
"date": "2019-03-26T18:50:28"
}
]
},
"includes/changetags/ChangeTagsLogItem.php": {
"File": "includes/changetags/ChangeTagsLogItem.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T222036"
],
"Commits": [
{
"message": "SECURITY: Add permission check for user is permitted to view the log type\n\nBug: T222036\nChange-Id: I7584ee8db23a8834bbab21e355cab9857a293f72\n",
"bugs": [
"T222036"
],
"subject": "SECURITY: Add permission check for user is permitted to view the log type",
"hash": "8a26fa0508e69f7cdc1680db57c4d8983a70de84",
"date": "2019-06-06T19:06:01"
}
]
},
"includes/logging/LogEntry.php": {
"File": "includes/logging/LogEntry.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T218940"
],
"Commits": [
{
"message": "Supress ChangeTags::addTags() exceptions for ManualLogEntry objects\n\nLooks like some parts of the code try to publish log event when\n$newId is 0 or null. This cause ChangeTags::addTags to throw\nan exception as at least one of rc_id, rev_id, or log_id must be\nspecified.\n\nWhen ChangeTags::addTags() fails (because both $rev_id and $log_id\nare not present), just ignore the exception and continue with\nthe execution.\n\nAlso, if one of those is set to 0, we need to pass null instead\n(do not insert 0's in to DB as both log_id and rev_id are foreign\nkeys).\n\nAdditionally log all places where ManualLogEntry::publish() is\ncalled with incorrect arguments so later we can fix all occurencie\nand remove that try{}catch around ChangeTags::addTags() call.\n\nBug: T218940\nChange-Id: I495f79f2b7a7ef1503d229a689babdc12deb353c\n",
"bugs": [
"T218940"
],
"subject": "Supress ChangeTags::addTags() exceptions for ManualLogEntry objects",
"hash": "2f30defc743b93316e2bd7cf46a548b6da9a6512",
"date": "2019-03-26T17:00:12"
}
]
},
"includes/api/ApiQueryRevisionsBase.php": {
"File": "includes/api/ApiQueryRevisionsBase.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T212428"
],
"Commits": [
{
"message": "Stop gap to shut up log spam due to T212428.\n\nBug: T212428\nChange-Id: I442bbe9837e73167cfdaabb0451bf974dc5039f3\n",
"bugs": [
"T212428"
],
"subject": "Stop gap to shut up log spam due to T212428.",
"hash": "87cd44a9c536974cd7a64666faddfd5aa625f6e6",
"date": "2019-03-26T01:43:46"
}
]
},
"includes/specials/pagers/UsersPager.php": {
"File": "includes/specials/pagers/UsersPager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T219114"
],
"Commits": [
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
}
]
},
"includes/externalstore/ExternalStoreMwstore.php": {
"File": "includes/externalstore/ExternalStoreMwstore.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Add ExternalStoreMedium::isReadOnly() method\n\nUse this to abort out of store() calls early\n\nBug: T187942\nChange-Id: I9334d36e8bc3e4589775471eee03be4f4a3119a3\n",
"bugs": [
"T187942"
],
"subject": "Add ExternalStoreMedium::isReadOnly() method",
"hash": "656f60c15434afb69d60e511cfebf84fc4fbc2c2",
"date": "2018-02-22T16:15:20"
}
]
},
"includes/externalstore/ExternalStoreDB.php": {
"File": "includes/externalstore/ExternalStoreDB.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Add ExternalStoreMedium::isReadOnly() method\n\nUse this to abort out of store() calls early\n\nBug: T187942\nChange-Id: I9334d36e8bc3e4589775471eee03be4f4a3119a3\n",
"bugs": [
"T187942"
],
"subject": "Add ExternalStoreMedium::isReadOnly() method",
"hash": "656f60c15434afb69d60e511cfebf84fc4fbc2c2",
"date": "2018-02-22T16:15:20"
}
]
},
"includes/jobqueue/jobs/ClearWatchlistNotificationsJob.php": {
"File": "includes/jobqueue/jobs/ClearWatchlistNotificationsJob.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T207941"
],
"Commits": [
{
"message": "WatchedItemStore: Use batching in setNotificationTimestampsForUser\n\nUpdate rows in batches, using the same logic as is used by\nremoveWatchBatchForUser().\n\nAlso remove the functionality for updating all rows, and move that to\nresetAllNotificationTimestampsForUser() instead. To that end, add a\ntimestamp parameter to that method and to the job it uses, and make\nsetNotificationTimestampsForUser() behave like a backwards-compatibility\nwrapper around resetAllNotificationTimestampsForUser() when no list of\ntitles is specified.\n\nBug: T207941\nChange-Id: I58342257395de6fcfb4c392b3945b12883ca1680\nFollows-Up: I2008ff89c95fe6f66a3fd789d2cef0e8fe52bd93\n",
"bugs": [
"T207941"
],
"subject": "WatchedItemStore: Use batching in setNotificationTimestampsForUser",
"hash": "1da7573bb77beff9e4466430b57551986e6be248",
"date": "2019-03-21T04:41:42"
}
]
},
"includes/specials/SpecialConfirmemail.php": {
"File": "includes/specials/SpecialConfirmemail.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T202149"
],
"Commits": [
{
"message": "Use READ_EXCLUSIVE in SpecialConfirmEmail::attemptConfirm\n\nBug: T202149\nChange-Id: I9abc9e653dcc0910a5eea1dad56b2432d33d3c44\n",
"bugs": [
"T202149"
],
"subject": "Use READ_EXCLUSIVE in SpecialConfirmEmail::attemptConfirm",
"hash": "9d07ee57403d7155c5102285b35cd553eb0497a4",
"date": "2019-03-17T05:35:43"
}
]
},
"includes/objectcache/SqlBagOStuff.php": {
"File": "includes/objectcache/SqlBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/MultiHttpClient.php": {
"File": "includes/libs/MultiHttpClient.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T216086"
],
"Commits": [
{
"message": "MultiHttpClient: Don't relay the end-of-headers line\n\nThe callback registered by CURLOPT_HEADERFUNCTION is called for the\nempty line that separates the headers from the body, as well as all the\nactual headers. In this case, the $header string will be \"\\r\\n\".\n\nIt turns out that HHVM ignores a call to header() when passed a string\nthat's empty after trimming whitespace, while Zend PHP only ignores the\ncall when the string is empty before trimming whitespace. This later\ncauses problems when headers_list() is used expecting all strings\nreturned to contain a colon.\n\nBug: T216086\nChange-Id: I07937b17beb06788166266fbb1ea1bbf456761e3\n",
"bugs": [
"T216086"
],
"subject": "MultiHttpClient: Don't relay the end-of-headers line",
"hash": "8b89f8f37538ba64f9835ee454be21e580495647",
"date": "2019-02-19T21:20:30"
}
]
},
"includes/externalstore/ExternalStoreHttp.php": {
"File": "includes/externalstore/ExternalStoreHttp.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Add ExternalStoreMedium::isReadOnly() method\n\nUse this to abort out of store() calls early\n\nBug: T187942\nChange-Id: I9334d36e8bc3e4589775471eee03be4f4a3119a3\n",
"bugs": [
"T187942"
],
"subject": "Add ExternalStoreMedium::isReadOnly() method",
"hash": "656f60c15434afb69d60e511cfebf84fc4fbc2c2",
"date": "2018-02-22T16:15:20"
}
]
},
"includes/deferred/MWCallableUpdate.php": {
"File": "includes/deferred/MWCallableUpdate.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T221577"
],
"Commits": [
{
"message": "Make sure that each DataUpdate still has outer transaction scope\n\nBug: T221577\nChange-Id: I620e461d791416ca37fa9ca4fca501e28d778cf5\n",
"bugs": [
"T221577"
],
"subject": "Make sure that each DataUpdate still has outer transaction scope",
"hash": "3496f0fca3debf932598087607dc5547075e2cba",
"date": "2019-05-30T20:53:18"
}
]
},
"includes/specials/SpecialEditTags.php": {
"File": "includes/specials/SpecialEditTags.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T222036"
],
"Commits": [
{
"message": "SECURITY: Add permission check for user is permitted to view the log type\n\nBug: T222036\nChange-Id: I7584ee8db23a8834bbab21e355cab9857a293f72\n",
"bugs": [
"T222036"
],
"subject": "SECURITY: Add permission check for user is permitted to view the log type",
"hash": "8a26fa0508e69f7cdc1680db57c4d8983a70de84",
"date": "2019-06-06T19:06:01"
}
]
},
"includes/HeaderCallback.php": {
"File": "includes/HeaderCallback.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T216086"
],
"Commits": [
{
"message": "Fix HeaderCallback failing on headers without a colon\n\nBug: T216086\nChange-Id: I3007a5bc238a5271cc3fe4da1844ff74efd58be0\n",
"bugs": [
"T216086"
],
"subject": "Fix HeaderCallback failing on headers without a colon",
"hash": "1aaa08bf370273bc354f08f624f2550c526d9703",
"date": "2019-02-19T17:25:14"
}
]
},
"includes/Storage/RevisionArchiveRecord.php": {
"File": "includes/Storage/RevisionArchiveRecord.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T184693",
"T184690"
],
"Commits": [
{
"message": "Handle failure to load content in Revision getSize, etc\n\nThe Revision class used to just return null if size or hsash were unknown\nand could nto be determined. This patch restores this behavior by\ncatching any RevisionAccessExceptions raised by RevisionRecord when\nfailing to load content.\n\nBug: T184693\nBug: T184690\nChange-Id: I393ea19b9fb48219583fc65ce81ea14d8d0a2357\n",
"bugs": [
"T184693",
"T184690"
],
"subject": "Handle failure to load content in Revision getSize, etc",
"hash": "04bac0dee1c919c2f5c63527c90412b0b8fac081",
"date": "2018-01-11T14:23:03"
}
]
},
"includes/rcfeed/FormattedRCFeed.php": {
"File": "includes/rcfeed/FormattedRCFeed.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T219114"
],
"Commits": [
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
}
]
},
"includes/http/HttpRequestFactory.php": {
"File": "includes/http/HttpRequestFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T222935"
],
"Commits": [
{
"message": "Return result from HttpRequestFactory get and post methods\n\nBug: T222935\nChange-Id: Idf1d00d04abbcf4e3391e3979bbab97e595916a5\n",
"bugs": [
"T222935"
],
"subject": "Return result from HttpRequestFactory get and post methods",
"hash": "ba2788f62fc19e2ccbaf9e77f0937e16547e941d",
"date": "2019-05-13T09:32:17"
}
]
},
"includes/jobqueue/jobs/ActivityUpdateJob.php": {
"File": "includes/jobqueue/jobs/ActivityUpdateJob.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T219114"
],
"Commits": [
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
}
]
},
"includes/jobqueue/Job.php": {
"File": "includes/jobqueue/Job.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T221368"
],
"Commits": [
{
"message": "jobqueue: Follow-up for fc5d51f12936ed (added GenericParameterJob)\n\n* Remove duplicate $params check from Job::factory done in Job::__construct.\n\n* In Job::factory(), restore use of a valid title as default for passing as\n constructor arg to old job classes. Their constructor may expect it to\n be valid.\n Keep the invalid dummy in Job::__construct, and document why.\n\n* tests: Update test case for failure mode when using Job::factory\n with a class that requires a title. It asserted getting an invalid\n title. This now restores the behaviour prior to fc5d51f12936ed,\n which is that job classes that require a title, get a valid one.\n\n* tests: Remove test case for testToString that used\n an explicitly passed but invalid params value. I've converted\n that to expect the exception we now throw instead.\n\n* tests: Update getMockJob(), also used by testToString, which was\n relying on undocumented behaviour that 'new Title' is public\n and gets namespace=0 and title=''. Before fc5d51f12936ed,\n title params weren't in toString() and it asserted outputting\n three spaces (delimiter, empty string from formatted title,\n delimiter).\n In fc5d51f12936ed, this changed to asserting \"Special:\" which\n seems unintentional as we didn't pass it the internally reserved\n NS_SPECIAL/'' value, and yet was caught by the dbkey=='' check.\n Given this test case doesn't deal with titles, omit it for now.\n\n A job can either have a $title and title/namespace in params,\n or neither. This test was asserting an in-memory scenario\n where $title can be an object, but title/namespace absent from\n params.\n\nBug: T221368\nDepends-On: I89f6ad6967d6f82d87a62c15c0dded901c51b714\nChange-Id: I2ec99a12ecc627359a2aae5153d5d7c54156ff46\n",
"bugs": [
"T221368"
],
"subject": "jobqueue: Follow-up for fc5d51f12936ed (added GenericParameterJob)",
"hash": "4dce6445966a1fdeb1e635eca534af91188b74bf",
"date": "2019-04-25T15:44:11"
}
]
},
"includes/search/SearchEngine.php": {
"File": "includes/search/SearchEngine.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T184934"
],
"Commits": [
{
"message": "Use getSize since SearchSuggestionSet does not implement Countable\n\nBug: T184934\nChange-Id: I39459352399e2023149b715b049084826df22935\n",
"bugs": [
"T184934"
],
"subject": "Use getSize since SearchSuggestionSet does not implement Countable",
"hash": "030da07b1834430b2720bdfcbfb63095380431a5",
"date": "2018-01-22T20:13:34"
}
]
},
"includes/ForkController.php": {
"File": "includes/ForkController.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T219114"
],
"Commits": [
{
"message": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass\n\nFix five instances of PhanPluginDuplicateConditionalNullCoalescing;\nescape the rest for now.\n\nBug: T219114\nChange-Id: Ic4bb30c43c5315ce6b878b37b432c6e219414f8b\n",
"bugs": [
"T219114"
],
"subject": "build: Upgrade mediawiki/mediawiki-phan-config from 0.5.0 to 0.6.0 and make pass",
"hash": "460bcf81e752ffd7eb3994fb331324e874f1ca15",
"date": "2019-05-13T14:57:07"
}
]
},
"includes/Storage/RevisionRecord.php": {
"File": "includes/Storage/RevisionRecord.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T184693",
"T184690"
],
"Commits": [
{
"message": "Handle failure to load content in Revision getSize, etc\n\nThe Revision class used to just return null if size or hsash were unknown\nand could nto be determined. This patch restores this behavior by\ncatching any RevisionAccessExceptions raised by RevisionRecord when\nfailing to load content.\n\nBug: T184693\nBug: T184690\nChange-Id: I393ea19b9fb48219583fc65ce81ea14d8d0a2357\n",
"bugs": [
"T184693",
"T184690"
],
"subject": "Handle failure to load content in Revision getSize, etc",
"hash": "04bac0dee1c919c2f5c63527c90412b0b8fac081",
"date": "2018-01-11T14:23:03"
}
]
},
"includes/Storage/RevisionStoreRecord.php": {
"File": "includes/Storage/RevisionStoreRecord.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T184693",
"T184690"
],
"Commits": [
{
"message": "Handle failure to load content in Revision getSize, etc\n\nThe Revision class used to just return null if size or hsash were unknown\nand could nto be determined. This patch restores this behavior by\ncatching any RevisionAccessExceptions raised by RevisionRecord when\nfailing to load content.\n\nBug: T184693\nBug: T184690\nChange-Id: I393ea19b9fb48219583fc65ce81ea14d8d0a2357\n",
"bugs": [
"T184693",
"T184690"
],
"subject": "Handle failure to load content in Revision getSize, etc",
"hash": "04bac0dee1c919c2f5c63527c90412b0b8fac081",
"date": "2018-01-11T14:23:03"
}
]
},
"includes/Revision/RenderedRevision.php": {
"File": "includes/Revision/RenderedRevision.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T220854"
],
"Commits": [
{
"message": "Add vary-revision-exist flag to handle {{REVISIONID}} and parser cache\n\nFollow-up to c537eb186862b3\n\nBug: T220854\nChange-Id: Idc19cc29764a38e3671ca1dea158bd5fb46eaf4d\n",
"bugs": [
"T220854"
],
"subject": "Add vary-revision-exist flag to handle {{REVISIONID}} and parser cache",
"hash": "5e6d9340cb81d0040cbeb2371a83e884db90cd68",
"date": "2019-04-13T00:20:50"
}
]
},
"includes/parser/DateFormatter.php": {
"File": "includes/parser/DateFormatter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T220563"
],
"Commits": [
{
"message": "Fix notices emitted from DateFormatter\n\nSome languages have date abbreviations that contain \".\", which allows\nthe non-ISO regexes to match an input string containing an invalid month\nname. Use preg_quote() to avoid this.\n\nAlso fix the error handling case of makeIsoMonth(). If the input date is\ninvalid, don't try to wrap it in a date span, since that's semantically\nincorrect and may also access unset members of $bits, causing a notice.\n\nBug: T220563\nChange-Id: Ib2b3fb315dc93b60de595d3c445637f6bcc78a1a\n",
"bugs": [
"T220563"
],
"subject": "Fix notices emitted from DateFormatter",
"hash": "2dca358e999c4255a8b3432b520d2316783f5621",
"date": "2019-04-10T03:07:12"
}
]
},
"includes/export/WikiExporter.php": {
"File": "includes/export/WikiExporter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T220257"
],
"Commits": [
{
"message": "for exports, make sure we compare page titles as strings only\n\n...and not as numbers!! Also added strict compare for the namespaces\nfield while we're in here.\n\nBug: T220257\nChange-Id: If68b79334188c2f3be5d254bea3c1e27d52c4a9f\n",
"bugs": [
"T220257"
],
"subject": "for exports, make sure we compare page titles as strings only",
"hash": "804b7f1f0faff301efd2aeff09e121609e77b96e",
"date": "2019-04-06T10:01:33"
}
]
},
"includes/SiteStatsInit.php": {
"File": "includes/SiteStatsInit.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T186947"
],
"Commits": [
{
"message": "Salvage site_stats row with negative values in miser mode\n\n* Instead of returning all zeroes, just use zero for the\n negative values in the row.\n* Allow large numbers since the fields are BIGINT.\n* Clean up the return types to truly be integers.\n* Respect the $groups argument in SiteStatsInit::getDB().\n\nBug: T186947\nChange-Id: I51fdc45124c12aba114540fc0ec66a3e63d61e09\n",
"bugs": [
"T186947"
],
"subject": "Salvage site_stats row with negative values in miser mode",
"hash": "6535091de23982b0a31642005d47f13b8e554b66",
"date": "2018-02-14T23:37:55"
}
]
},
"includes/diff/TextSlotDiffRenderer.php": {
"File": "includes/diff/TextSlotDiffRenderer.php",
"TicketCount": 3,
"CommitCount": 1,
"Tickets": [
"T220217",
"T203069",
"T194272"
],
"Commits": [
{
"message": "Remove warning for unnused 4th argument on wikidiff2\n\nSince we changed the signature back to 3 arguments in wikidiff2 verion\n1.8.0, the warning is invalid.\n\nBug: T220217\nBug: T203069\nBug: T194272\nChange-Id: Ia326c67de28a4e9b024466c62097b4e1e1096007\n",
"bugs": [
"T220217",
"T203069",
"T194272"
],
"subject": "Remove warning for unnused 4th argument on wikidiff2",
"hash": "cbe4ffe26d23784f1b672d5434a3f1a348b91623",
"date": "2019-04-05T16:27:15"
}
]
},
"includes/api/ApiQueryLogEvents.php": {
"File": "includes/api/ApiQueryLogEvents.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T220999",
"T221458"
],
"Commits": [
{
"message": "Add STRAIGHT_JOIN to ApiQueryLogEvents and LogPager to avoid planner oddness\n\nFor some unknown reason, when the `actor` table has few enough rows (or\nfew enough compared to `logging`) MariaDB 10.1.37 decides it makes more\nsense to fetch everything from `actor` + `logging` and filesort rather than\nfetching the limited number of rows from `logging`.\n\nWe can work around it by telling it to not reorder the query.\n\nBug: T220999\nBug: T221458\nChange-Id: I9da981c09f18ba72efeeb8279aad99eb21af699a\n",
"bugs": [
"T220999",
"T221458"
],
"subject": "Add STRAIGHT_JOIN to ApiQueryLogEvents and LogPager to avoid planner oddness",
"hash": "3d1fb0c0443ee92531ca8e1aafbcf9b403879e44",
"date": "2019-04-23T14:00:21"
}
]
},
"includes/libs/objectcache/serialized/SerializedValueContainer.php": {
"File": "includes/libs/objectcache/serialized/SerializedValueContainer.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/Storage/BlobStore.php": {
"File": "includes/Storage/BlobStore.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "Make LocalFile check early if the revision store is available\n\nThis reduces the odds of having files without corresponding\nwiki pages, given that the later is done in a deferred update.\n\nAlso made some documentation cleanups.\n\nBug: T187942\nChange-Id: Iff516669f535713d37e0011e2d7ed285c667f1c5\n",
"bugs": [
"T187942"
],
"subject": "Make LocalFile check early if the revision store is available",
"hash": "d9ba7cd0050d531c4f016fda285793568fa133c7",
"date": "2018-02-22T22:07:30"
}
]
},
"includes/resourceloader/ResourceLoaderLanguageDataModule.php": {
"File": "includes/resourceloader/ResourceLoaderLanguageDataModule.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T199941"
],
"Commits": [
{
"message": "Revert \"Ensure LanguageCode::bcp47() returns a valid BCP 47 language code\"\n\nThis reverts commit 8380f0173e79b66f0e2afd6c49cd88afb9f4f6f3.\n\nReason for revert: Caused T199941\n\nBug: T199941\nChange-Id: I93af756a2d70d6bc91f828fe6ac19bf10ca8788f\n",
"bugs": [
"T199941"
],
"subject": "Revert \"Ensure LanguageCode::bcp47() returns a valid BCP 47 language code\"",
"hash": "b302b0cd1cc7c3bc0b01f3e9c4e3fdf98064eb19",
"date": "2018-07-23T17:27:23"
}
]
},
"includes/filerepo/file/ForeignDBFile.php": {
"File": "includes/filerepo/file/ForeignDBFile.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T198279"
],
"Commits": [
{
"message": "filerepo: clean up remote description cache keys\n\nHash the file name portion and make the string constant portions\nmore relevant to what the keys are actually used for (e.g. there\nis no URL parameter in the key)\n\nBug: T198279\nChange-Id: Idf6f97db26f5be291cdd3a50a91346677fe9c3e6\n",
"bugs": [
"T198279"
],
"subject": "filerepo: clean up remote description cache keys",
"hash": "8804df2da5d0bc1b4cbd57696d770e77b76d4434",
"date": "2018-06-27T07:25:47"
}
]
},
"includes/deferred/MergeableUpdate.php": {
"File": "includes/deferred/MergeableUpdate.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T202715"
],
"Commits": [
{
"message": "Move user_editcount updates to a mergeable deferred update\n\nThis should reduce excess contention and lock timeouts.\nPreviously, it used a pre-commit hook which ran just before the\nend of the DB transaction round.\n\nAlso removed unused User::incEditCountImmediate() method.\n\nBug: T202715\nDepends-on: I6d239a5ea286afb10d9e317b2ee1436de60f7e4f\nDepends-on: I0ad3d17107efc7b0e59f1dd54d5733cd1572a2b7\nChange-Id: I0d6d7ddd91bbb21995142808248d162e05696d47\n",
"bugs": [
"T202715"
],
"subject": "Move user_editcount updates to a mergeable deferred update",
"hash": "390fce6db1e008c53580cedbdfe18dff3de9c766",
"date": "2018-10-25T22:32:18"
}
]
},
"includes/libs/rdbms/database/DatabaseDomain.php": {
"File": "includes/libs/rdbms/database/DatabaseDomain.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T193565"
],
"Commits": [
{
"message": "rdbms: re-add DB domain sanity checks to LoadBalancer\n\nAlso clean up empty schema handling in DatabaseDomain\n\nThis reverts commit f23ac02f4fcf156767df66a5df2fa407310fe1d2.\n\nBug: T193565\nChange-Id: I95fde5c069f180ca888a023fade25ec81b846d44\n",
"bugs": [
"T193565"
],
"subject": "rdbms: re-add DB domain sanity checks to LoadBalancer",
"hash": "b06f02021762b3640de6c5a7a592580d7bb7ed95",
"date": "2018-10-16T23:35:05"
}
]
},
"includes/Storage/NameTableStoreFactory.php": {
"File": "includes/Storage/NameTableStoreFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T193565"
],
"Commits": [
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/db/DatabaseOracle.php": {
"File": "includes/db/DatabaseOracle.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T193565"
],
"Commits": [
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/libs/rdbms/loadbalancer/LoadBalancerSingle.php": {
"File": "includes/libs/rdbms/loadbalancer/LoadBalancerSingle.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T193565"
],
"Commits": [
{
"message": "rdbms: Database::selectDB() update the domain and handle failure better\n\nLoadBalancer uses Database::getDomainId() for deciding which keys to use\nin the foreign connection handle arrays. This method should reflect any\nchanges made to the DB selection.\n\nIf the query fails, then do not change domain field. This is the sort of\napproach that LoadBalancer is expects in openForeignConnection(). Also,\nthrow an exception when selectDB() fails.\n\nThe db/schema/prefix fields of Database no longer exist in favor of just\nusing the newer currentDomain field.\n\nAlso:\n* Add IDatabase::selectDomain() method and made selectDB() wrap it.\n* Extract the DB name from sqlite files if not explicitly provided.\n* Fix inconsistent open() return values from Database subclasses.\n* Make a relationSchemaQualifier() method to handle the concern of\n omitting schema names in queries. The means that getDomainId() can\n still return the right value, rather than confusingly omitt the schema.\n* Make RevisionStore::checkDatabaseWikiId() account for the domain schema.\n Unlike d2a4d614fce09c, this does not incorrectly assume the storage is\n always for the current wiki domain. Also, LBFactorySingle sets the local\n domain so it is defined even in install.php.\n* Make RevisionStoreDbTestBase actually set the LoadBalancer local domain.\n* Make RevisionTest::testLoadFromTitle() account for the domain schema.\n\nBug: T193565\nChange-Id: I6e51cd54c6da78830b38906b8c46789c79498ab5\n",
"bugs": [
"T193565"
],
"subject": "rdbms: Database::selectDB() update the domain and handle failure better",
"hash": "fe0af6cad57821dc5b8741edfed77242d543ccaa",
"date": "2018-10-10T19:03:30"
}
]
},
"includes/filerepo/RepoGroup.php": {
"File": "includes/filerepo/RepoGroup.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200026"
],
"Commits": [
{
"message": "Avoid passing \"false\" as keys to MapCacheLRU in RepoGroup\n\nBug: T200026\nChange-Id: I40f6ad2a3d281d06c9b6eaf4f31d9796ea5e9e9e\n",
"bugs": [
"T200026"
],
"subject": "Avoid passing \"false\" as keys to MapCacheLRU in RepoGroup",
"hash": "88b69410f933bbda996cb3230e58e1c6cc85eecc",
"date": "2018-07-19T23:19:11"
}
]
},
"includes/jobqueue/jobs/DeletePageJob.php": {
"File": "includes/jobqueue/jobs/DeletePageJob.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T198176"
],
"Commits": [
{
"message": "Use job queue for deletion of pages with many revisions\n\nPages with many revisions experience transaction size exceptions,\ndue to archiving revisions. Use the job queue to split the work\ninto batches and avoid exceptions.\n\nBug: T198176\nChange-Id: Ie800fb5a46be837ac91b24b9402ee90b0355d6cd\n",
"bugs": [
"T198176"
],
"subject": "Use job queue for deletion of pages with many revisions",
"hash": "ca9f1dabf3719c579fd117e7b9826a3269783e7e",
"date": "2018-10-04T00:16:14"
}
]
},
"includes/api/ApiCSPReport.php": {
"File": "includes/api/ApiCSPReport.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T194899"
],
"Commits": [
{
"message": "ApiCSPReport: Fix undefined $userAgent variable\n\nBug: T194899\nChange-Id: Ia83f961da1db2d1245859ae584db883b7a11081c\n",
"bugs": [
"T194899"
],
"subject": "ApiCSPReport: Fix undefined $userAgent variable",
"hash": "bfb5cd8bb3d59185bfd313940fe4e9c7b60489b8",
"date": "2018-05-18T05:18:20"
}
]
},
"includes/specials/SpecialMovepage.php": {
"File": "includes/specials/SpecialMovepage.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T198176"
],
"Commits": [
{
"message": "Use job queue for deletion of pages with many revisions\n\nPages with many revisions experience transaction size exceptions,\ndue to archiving revisions. Use the job queue to split the work\ninto batches and avoid exceptions.\n\nBug: T198176\nChange-Id: Ie800fb5a46be837ac91b24b9402ee90b0355d6cd\n",
"bugs": [
"T198176"
],
"subject": "Use job queue for deletion of pages with many revisions",
"hash": "ca9f1dabf3719c579fd117e7b9826a3269783e7e",
"date": "2018-10-04T00:16:14"
}
]
},
"includes/api/ApiQuerySiteinfo.php": {
"File": "includes/api/ApiQuerySiteinfo.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T199941"
],
"Commits": [
{
"message": "Revert \"Ensure LanguageCode::bcp47() returns a valid BCP 47 language code\"\n\nThis reverts commit 8380f0173e79b66f0e2afd6c49cd88afb9f4f6f3.\n\nReason for revert: Caused T199941\n\nBug: T199941\nChange-Id: I93af756a2d70d6bc91f828fe6ac19bf10ca8788f\n",
"bugs": [
"T199941"
],
"subject": "Revert \"Ensure LanguageCode::bcp47() returns a valid BCP 47 language code\"",
"hash": "b302b0cd1cc7c3bc0b01f3e9c4e3fdf98064eb19",
"date": "2018-07-23T17:27:23"
}
]
},
"includes/cache/CacheHelper.php": {
"File": "includes/cache/CacheHelper.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200394"
],
"Commits": [
{
"message": "Make sure to not unpack an associative array into parameter list\n\nBug: T200394\nChange-Id: I9c28e1cadeb76275d24eb7725f1578bf5ba43ad0\n",
"bugs": [
"T200394"
],
"subject": "Make sure to not unpack an associative array into parameter list",
"hash": "1facc21e49541fe48945ad9a1e9486bfff32c3de",
"date": "2018-07-28T23:52:57"
}
]
},
"includes/Storage/RevisionStoreFactory.php": {
"File": "includes/Storage/RevisionStoreFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T201194"
],
"Commits": [
{
"message": "Add safeguard against loading content across wikis.\n\nThe new MCR schema enables cross-wiki loading of page content,\nbut this mechanism doesn't work as long as the new code is reading from\nthe old schema. This is what caused T201194.\n\nBug: T201194\nChange-Id: I58af7a9e02780c55cd8fab20f19be36a0fa804da\n",
"bugs": [
"T201194"
],
"subject": "Add safeguard against loading content across wikis.",
"hash": "25e9a28fd035bb5dccc72d2ebb9413f2d0fb5a39",
"date": "2018-08-06T13:46:24"
}
]
},
"includes/specials/SpecialMostlinkedcategories.php": {
"File": "includes/specials/SpecialMostlinkedcategories.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T205469"
],
"Commits": [
{
"message": "Fix double-wrapped HtmlArmor causing fatals\n\nFollow-up to a89ef9b3b9cc4702c0f23443ae51df4f3bed7ecb. I'm guessing\nthis is supposed to be htmlspecialchars() based on the changes to\nother files in that patch (e.g. SpecialAncientpages.php).\n\nBug: T205469\nChange-Id: I52e38a5754b339e1516498e6f0eb73fc8d8df59c\n",
"bugs": [
"T205469"
],
"subject": "Fix double-wrapped HtmlArmor causing fatals",
"hash": "3f8d6fa8f5ccd3d4bbb3ae5d437298491cb9452d",
"date": "2018-09-26T14:03:38"
}
]
},
"includes/tidy/RemexCompatMunger.php": {
"File": "includes/tidy/RemexCompatMunger.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200827"
],
"Commits": [
{
"message": "RemexCompatMunger: Don't call endTag() in case B/b\n\nThis was naïve, the linked bug documents a case where endTag() was\ncalled despite children of the p-wrap still being in TreeBuilder's\nstack. Instead, wait for the parent of the p-wrap to have endTag()\ncalled on it, I've submitted a patch which will clean up the node in\nthat case.\n\nBug: T200827\nChange-Id: I34694813eace9cadabf2db8f9ccca83d1368cfad\n",
"bugs": [
"T200827"
],
"subject": "RemexCompatMunger: Don't call endTag() in case B/b",
"hash": "10c8cfea305ec1d450b16ad54ebddb5f910016f4",
"date": "2018-08-07T04:07:31"
}
]
},
"includes/specials/SpecialUndelete.php": {
"File": "includes/specials/SpecialUndelete.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T201848",
"T202920"
],
"Commits": [
{
"message": "Support multi-content diffs on Special:Undelete\n\nBug: T201848\nBUg: T202920\nChange-Id: Ia9eedb457c1db6badfd4f81d0bc8516c4f5ccbf2\n",
"bugs": [
"T201848",
"T202920"
],
"subject": "Support multi-content diffs on Special:Undelete",
"hash": "05817c7361336b24466f60b2460a99595ce60e59",
"date": "2018-09-21T05:58:13"
}
]
},
"includes/libs/MapCacheLRU.php": {
"File": "includes/libs/MapCacheLRU.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T201893"
],
"Commits": [
{
"message": "Improve MapCacheLRU error message\n\nBug: T201893\nChange-Id: I74ef2cf31d83186f68e676da1b80c4c44ca28d69\n",
"bugs": [
"T201893"
],
"subject": "Improve MapCacheLRU error message",
"hash": "b4da5a8181e6576f6c44a9ac7ffe156daf8dacf8",
"date": "2018-08-14T05:52:22"
}
]
},
"includes/jobqueue/JobRunner.php": {
"File": "includes/jobqueue/JobRunner.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T207809"
],
"Commits": [
{
"message": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()\n\nThis assures that MergeableUpdate tasks that lazy push job will actually\nhave those jobs run instead of being added after the lone callback update\nto call JobQueueGroup::pushLazyJobs() already ran.\n\nThis also makes it more obvious that push will happen, since a mergeable\nupdate is added each time lazyPush() is called and a job is buffered,\nrather than rely on some magic callback enqueued into DeferredUpdates at\njust the right point in multiple entry points.\n\nBug: T207809\nChange-Id: I13382ef4a17a9ba0fd3f9964b8c62f564e47e42d\n",
"bugs": [
"T207809"
],
"subject": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()",
"hash": "6030e9cf2c1df7929e2319602aa4d37aa641de11",
"date": "2018-10-28T22:19:06"
}
]
},
"includes/deferred/JobQueueEnqueueUpdate.php": {
"File": "includes/deferred/JobQueueEnqueueUpdate.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T207809"
],
"Commits": [
{
"message": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()\n\nThis assures that MergeableUpdate tasks that lazy push job will actually\nhave those jobs run instead of being added after the lone callback update\nto call JobQueueGroup::pushLazyJobs() already ran.\n\nThis also makes it more obvious that push will happen, since a mergeable\nupdate is added each time lazyPush() is called and a job is buffered,\nrather than rely on some magic callback enqueued into DeferredUpdates at\njust the right point in multiple entry points.\n\nBug: T207809\nChange-Id: I13382ef4a17a9ba0fd3f9964b8c62f564e47e42d\n",
"bugs": [
"T207809"
],
"subject": "Create JobQueueEnqueueUpdate class to call JobQueueGroup::pushLazyJobs()",
"hash": "6030e9cf2c1df7929e2319602aa4d37aa641de11",
"date": "2018-10-28T22:19:06"
}
]
},
"includes/libs/rdbms/database/position/MySQLMasterPos.php": {
"File": "includes/libs/rdbms/database/position/MySQLMasterPos.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "rdbms: make DBMasterPos implement Serializable\n\nChronologyProtector uses these classes to briefly store positions\nand everytime the fields change then errors can happen when old\nvalues are unserialized and used. Use a simple two-element map\nformat for serialized positions. The fields are recomputed back\nfrom the data map.\n\nValues from before this change will issue the warning\n\"Erroneous data format for unserializing\". To avoid that, bump\nthe ChronologyProtector key version. Future field changes will\nnot require this.\n\nThis change should be deployed on all wikis at once.\n\nBug: T187942\nChange-Id: I71bbbc9b9d4c7e02ac02f1d8750b70bda08d4db1\n",
"bugs": [
"T187942"
],
"subject": "rdbms: make DBMasterPos implement Serializable",
"hash": "26d87a26fee1b6e66e221c4452a9f1d23cc003b6",
"date": "2018-02-23T20:46:28"
}
]
},
"includes/api/ApiQueryUserContributions.php": {
"File": "includes/api/ApiQueryUserContributions.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T190507"
],
"Commits": [
{
"message": "SECURITY: Fix variable usage in ApiQueryUserContributions\n\n$from was being used instead of $fromName in the handling for\nucuserprefix, causing broken SQL.\n\nBug: T190507\nChange-Id: I0759637ea5f35853271167ca0aaaabd3b7ab69f9\n",
"bugs": [
"T190507"
],
"subject": "SECURITY: Fix variable usage in ApiQueryUserContributions",
"hash": "ccfca1fe64ae4c231fdb1b7de9089b1496dfcb8f",
"date": "2018-03-23T14:27:43"
}
]
},
"includes/libs/rdbms/database/position/DBMasterPos.php": {
"File": "includes/libs/rdbms/database/position/DBMasterPos.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187942"
],
"Commits": [
{
"message": "rdbms: make DBMasterPos implement Serializable\n\nChronologyProtector uses these classes to briefly store positions\nand everytime the fields change then errors can happen when old\nvalues are unserialized and used. Use a simple two-element map\nformat for serialized positions. The fields are recomputed back\nfrom the data map.\n\nValues from before this change will issue the warning\n\"Erroneous data format for unserializing\". To avoid that, bump\nthe ChronologyProtector key version. Future field changes will\nnot require this.\n\nThis change should be deployed on all wikis at once.\n\nBug: T187942\nChange-Id: I71bbbc9b9d4c7e02ac02f1d8750b70bda08d4db1\n",
"bugs": [
"T187942"
],
"subject": "rdbms: make DBMasterPos implement Serializable",
"hash": "26d87a26fee1b6e66e221c4452a9f1d23cc003b6",
"date": "2018-02-23T20:46:28"
}
]
},
"includes/api/ApiQueryContributors.php": {
"File": "includes/api/ApiQueryContributors.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T188813"
],
"Commits": [
{
"message": "ApiQueryContributors: Use correct variable\n\nBug: T188813\nChange-Id: Ibc705d61d57cfe8867d1bde35781515c25b777c1\n",
"bugs": [
"T188813"
],
"subject": "ApiQueryContributors: Use correct variable",
"hash": "13619f3f301a4b552f20419a17270bfdd6b376b1",
"date": "2018-03-04T04:31:02"
}
]
},
"includes/parser/StripState.php": {
"File": "includes/parser/StripState.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T187833"
],
"Commits": [
{
"message": "Limit total expansion size in StripState and improve limit handling\n\n* Add a new limit to the parser which limits the size of the output\n generated by StripState. The relevant bug shows exponential blowup in\n output size.\n* Remove the $prefix parameter from the StripState constructor. Used by\n no Gerrit-hosted extensions, hard-deprecated since 1.26.\n* Convert the existing unstrip recursion depth limit to a normal parser\n limit with limit report row, warning and tracking category. Provide\n the same features in the new limit.\n* Add an optional $parser parameter to the StripState constructor so\n that warnings and tracking categories can be added.\n\nBug: T187833\nChange-Id: Ie5f6081177610dc7830de4a0a40705c0c8cb82f1\n",
"bugs": [
"T187833"
],
"subject": "Limit total expansion size in StripState and improve limit handling",
"hash": "3dfda8c1552a6d43eaf85e3e38427833114ddf06",
"date": "2018-03-05T05:16:04"
}
]
},
"includes/resourceloader/ResourceLoader.php": {
"File": "includes/resourceloader/ResourceLoader.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T201606"
],
"Commits": [
{
"message": "resourceloader: Increase minification cache version\n\nFollows-up 1f5f6fc204.\n\nBug: T201606\nChange-Id: I0b5af067d1d44880c8122343f8e4a1b47e998619\n",
"bugs": [
"T201606"
],
"subject": "resourceloader: Increase minification cache version",
"hash": "3d5edf6a09efedb0e445bbfb718fdc92091879e9",
"date": "2018-08-16T17:02:15"
}
]
},
"includes/specialpage/LoginSignupSpecialPage.php": {
"File": "includes/specialpage/LoginSignupSpecialPage.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T198054"
],
"Commits": [
{
"message": "specialpage: Fix login crash caused by unknown language via ?uselang\n\nPer Krinkle's comments here: T198054#4598447, it's exactly what is\nhappening. @Fomafix suggests we handle the exception that is been\nthrown by Language::factory() when there is an invalid language code\nprovided. The attempt is to fix this in the central way of ever request\nwhether POST or GET.\n\nWe're working within a particular context, and within this request\ncontext, we can create a language user's language object then generate\na language object. If uselang parameter is provided an invalid language\ncode, getLanguage in this request context will default to $wgLanguageCode\nthen use this code to create the user's language object. In addition,\ngetLanguage() invalidates cached user interface language.\n\nBug: T198054\nChange-Id: I825fdfa882a4243ffc63c9de0d7f482e2cfb9862\n",
"bugs": [
"T198054"
],
"subject": "specialpage: Fix login crash caused by unknown language via ?uselang",
"hash": "34f78a69ef058c19ecdac7290a174624ef8212ee",
"date": "2019-01-29T16:16:02"
}
]
},
"includes/http/MWHttpRequest.php": {
"File": "includes/http/MWHttpRequest.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T212005"
],
"Commits": [
{
"message": "Fix exception on certain http failures\n\nTask T202110 included a change to recognize an HTTP status code\nof 0 (zero) as an error, but it failed to set a status message,\nresulting in an exception. Changed to set a status message of\n'Error' so that required value is not empty.\n\nBug: T212005\nChange-Id: I5fb78555bfcaeccdd726432f4dfc70924a385c41\n",
"bugs": [
"T212005"
],
"subject": "Fix exception on certain http failures",
"hash": "9cdbf73c9c8c78f538d387521f55f8dc1a84451c",
"date": "2018-12-14T21:55:28"
}
]
},
"includes/specials/pagers/NewFilesPager.php": {
"File": "includes/specials/pagers/NewFilesPager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T189846"
],
"Commits": [
{
"message": "Fix variable name in NewFilesPager::getQueryInfo\n\nBug: T189846\nChange-Id: I11763ecbfd391deea0494386c0d7c1cb9861aa81\n",
"bugs": [
"T189846"
],
"subject": "Fix variable name in NewFilesPager::getQueryInfo",
"hash": "3577973107e642493a58d2a6580a99d3dff0137b",
"date": "2018-03-16T00:40:32"
}
]
},
"includes/libs/CSSMin.php": {
"File": "includes/libs/CSSMin.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T191237"
],
"Commits": [
{
"message": "CSSMin: Don't match empty string as remappable url\n\nThe empty string being matched causes an undefined array index\nnotice in production, seen from various random gadgets, but spiked\nafter a change in MonoBook from last week that introduced a\nbroken background-image rule with empty string as url.\n\nIn browsers, that is actually interpreted as valid and \"expands\"\nto the current url and re-fetches as Accept:image/*, silly, but\nstill broken. The broken icon was fixed in MonoBook, but we still\nneed to avoid trying to remap empty string as url.\n\nTwo changes:\n\n1. Fix regex used by remap() to not match empty string.\n This was already fixed for the 'url()' case without the\n optional quotes, but with quotes, it was being matched as\n non-empty. This is now fixed by using '+' instead of '*'.\n Added tests to confirm they produce output, and PHPUnit\n is configured to also assert no Notices are emitted (which\n it converts to fatal exceptions).\n\n2. Fix processUrlMatch() as sanity check to throw if the key\n is missing.\n\nBug: T191237\nChange-Id: I0ada337b0b4ab73c80236367ff79c31bcd13aa7d\n",
"bugs": [
"T191237"
],
"subject": "CSSMin: Don't match empty string as remappable url",
"hash": "5385f56e96bd76eb6aa2c46b6d4de6a8649aa85d",
"date": "2018-04-04T20:17:35"
}
]
},
"includes/db/MWLBFactory.php": {
"File": "includes/db/MWLBFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T192611"
],
"Commits": [
{
"message": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time\n\nOnce getMain() was called in setSchemaAliases(), the ChronologyProtector\nwas initialized and the setRequestInfo() call in Setup.php had no effect.\nOnly the request values read in LBFactory::__construct() were used, which\nreflect $_GET but not cookie values.\n\nUse the $wgDBtype variable to avoid this and add an exception when that\nsort of thing happens.\n\nFurther defer instantiation of ChronologyProtector so that methods like\nILBFactory::getMainLB() do not trigger construction.\n\nBug: T192611\nChange-Id: I735d3ade5cd12a5d609f4dae19ac88fec4b18b51\n",
"bugs": [
"T192611"
],
"subject": "rdbms: make sure cpPosIndex cookie is applied to LBFactory in time",
"hash": "628a3a9b267620914701a2a0a17bad8ab2e56498",
"date": "2018-04-23T15:44:02"
}
]
},
"includes/http/GuzzleHttpRequest.php": {
"File": "includes/http/GuzzleHttpRequest.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T211806"
],
"Commits": [
{
"message": "Fix guzzle InvalidArgumentException when body is passed as an array\n\nThe postBody option to GuzzleHttpRequest can be passed as an array\nor as a string. We were previously handling the array case incorrectly.\n\nBug: T211806\nChange-Id: I8f40b9de9d40a9361eb45103608bf3aaa943bf73\n",
"bugs": [
"T211806"
],
"subject": "Fix guzzle InvalidArgumentException when body is passed as an array",
"hash": "32d9c56c27d99831a56b10f5b1e69996ebe505cd",
"date": "2018-12-13T00:11:27"
}
]
},
"includes/site/Site.php": {
"File": "includes/site/Site.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T191634"
],
"Commits": [
{
"message": "Fix (MediaWiki)Site::normalizePageName return type\n\nI checked all callers of these methods and almost all of them expect the\nmethod to return false. It looks like this return type was known at some\npoint, but got lost. Let's add it back.\n\nBug: T191634\nChange-Id: I43484835b8f26e07ada6a2b1452a99ccc6d9b438\n",
"bugs": [
"T191634"
],
"subject": "Fix (MediaWiki)Site::normalizePageName return type",
"hash": "0df357b9137ac96ebb6d59bccb99ba9c4680bf46",
"date": "2018-04-08T09:36:19"
}
]
},
"includes/site/MediaWikiSite.php": {
"File": "includes/site/MediaWikiSite.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T191634"
],
"Commits": [
{
"message": "Fix (MediaWiki)Site::normalizePageName return type\n\nI checked all callers of these methods and almost all of them expect the\nmethod to return false. It looks like this return type was known at some\npoint, but got lost. Let's add it back.\n\nBug: T191634\nChange-Id: I43484835b8f26e07ada6a2b1452a99ccc6d9b438\n",
"bugs": [
"T191634"
],
"subject": "Fix (MediaWiki)Site::normalizePageName return type",
"hash": "0df357b9137ac96ebb6d59bccb99ba9c4680bf46",
"date": "2018-04-08T09:36:19"
}
]
},
"includes/revisiondelete/RevisionDeleteUser.php": {
"File": "includes/revisiondelete/RevisionDeleteUser.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T210628"
],
"Commits": [
{
"message": "Fix RevisionDeleteUser rev_actor query for MySQL\n\nMySQL uses an extremely bad plan for the update with subquery here.\nFor the time being let's split out the subquery.\n\nThis'll break if the user has too many revision rows, but on the other\nhand all the existing queries here will probably break in the same way\nso let's leave that for later.\n\nBug: T210628\nChange-Id: Ia3eb41b43b96c506349279e19743c41530969636\n",
"bugs": [
"T210628"
],
"subject": "Fix RevisionDeleteUser rev_actor query for MySQL",
"hash": "25f02d91dc843c78ad4b77b2fba7b46e1f4ce1eb",
"date": "2018-11-29T16:24:57"
}
]
},
"includes/api/ApiErrorFormatter.php": {
"File": "includes/api/ApiErrorFormatter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T208926"
],
"Commits": [
{
"message": "API: Validate API error codes\n\nValidate them in ApiMessageTrait when the message is created, and again\nin ApiMain before they're included in the header.\n\nThis also introduces an \"api-warning\" log channel, since \"api\" is too\nspammy for real use, and converts a few existing things to use it.\n\nBug: T208926\nChange-Id: Ib2d8bd4d4a5d58af76431835ba783c148de7792a\nDepends-On: Iced44f2602d57eea9a2d15aee5b8c9a50092b49c\nDepends-On: I5c2747f527c30ded7a614feb26f5777d901bd512\nDepends-On: I9c9bd8f5309518fcbab7179fb71d209c005e5e64\n",
"bugs": [
"T208926"
],
"subject": "API: Validate API error codes",
"hash": "4eace785e66d199cb8fe1ec224bdc49831949a6d",
"date": "2018-11-26T18:41:08"
}
]
},
"includes/api/ApiMessageTrait.php": {
"File": "includes/api/ApiMessageTrait.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T208926"
],
"Commits": [
{
"message": "API: Validate API error codes\n\nValidate them in ApiMessageTrait when the message is created, and again\nin ApiMain before they're included in the header.\n\nThis also introduces an \"api-warning\" log channel, since \"api\" is too\nspammy for real use, and converts a few existing things to use it.\n\nBug: T208926\nChange-Id: Ib2d8bd4d4a5d58af76431835ba783c148de7792a\nDepends-On: Iced44f2602d57eea9a2d15aee5b8c9a50092b49c\nDepends-On: I5c2747f527c30ded7a614feb26f5777d901bd512\nDepends-On: I9c9bd8f5309518fcbab7179fb71d209c005e5e64\n",
"bugs": [
"T208926"
],
"subject": "API: Validate API error codes",
"hash": "4eace785e66d199cb8fe1ec224bdc49831949a6d",
"date": "2018-11-26T18:41:08"
}
]
},
"includes/site/MediaWikiPageNameNormalizer.php": {
"File": "includes/site/MediaWikiPageNameNormalizer.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T191634"
],
"Commits": [
{
"message": "Fix (MediaWiki)Site::normalizePageName return type\n\nI checked all callers of these methods and almost all of them expect the\nmethod to return false. It looks like this return type was known at some\npoint, but got lost. Let's add it back.\n\nBug: T191634\nChange-Id: I43484835b8f26e07ada6a2b1452a99ccc6d9b438\n",
"bugs": [
"T191634"
],
"subject": "Fix (MediaWiki)Site::normalizePageName return type",
"hash": "0df357b9137ac96ebb6d59bccb99ba9c4680bf46",
"date": "2018-04-08T09:36:19"
}
]
},
"includes/jobqueue/JobQueue.php": {
"File": "includes/jobqueue/JobQueue.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T209429"
],
"Commits": [
{
"message": "JobQueue: Actually return the value from getRootJobCacheKey()\n\nI8d94a650e accidentally left out the 'return' keyword, so the function\nreturns null.\n\nBug: T209429\nChange-Id: Ie29c1ea5eab6ddedd0fe58010fc9cf8e3a6e2f12\n",
"bugs": [
"T209429"
],
"subject": "JobQueue: Actually return the value from getRootJobCacheKey()",
"hash": "0cc80f63c48e71e441e57e9edb6e417081baa86c",
"date": "2018-11-14T18:41:04"
}
]
},
"includes/Storage/BlobStoreFactory.php": {
"File": "includes/Storage/BlobStoreFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T202483"
],
"Commits": [
{
"message": "The BlobStoreFactory constructor needs an LBFactory\n\nBlobStoreFactory::newBlobStore() takes a wiki ID as a parameter, so it\nneeds an LBFactory to fetch the correct LoadBalancer from.\n\nBug: T202483\nChange-Id: I834cd95251d76cb862600362525faf60d4e170b9\n",
"bugs": [
"T202483"
],
"subject": "The BlobStoreFactory constructor needs an LBFactory",
"hash": "f2f82dcb948811ac5abc1beda4431822c99e76cf",
"date": "2018-08-22T06:47:04"
}
]
},
"includes/MergeHistory.php": {
"File": "includes/MergeHistory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T232464"
],
"Commits": [
{
"message": "MergeHistory: Update revactor_page too\n\nWhen using MergeHistory, we need to update the denormalized\nrevision_actor_temp.revactor_page to match the update we do for\nrevision.rev_page.\n\nAlso, we need a maintenance script to clean up the rows that were broken\nby our failure to do that before.\n\nBug: T232464\nChange-Id: Ib819a9d9fc978d75d7cc7e53f361483b69ab8020\n",
"bugs": [
"T232464"
],
"subject": "MergeHistory: Update revactor_page too",
"hash": "17909bfe0b93c4f5c2fb34eda8a0d2d76e531edd",
"date": "2019-09-17T07:17:38"
}
]
},
"includes/libs/objectcache/WinCacheBagOStuff.php": {
"File": "includes/libs/objectcache/WinCacheBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/deferred/CdnCacheUpdate.php": {
"File": "includes/deferred/CdnCacheUpdate.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T240083"
],
"Commits": [
{
"message": "CdnCacheUpdate: Accept Titles in addition to strings\n\nThe class was already documented as \"given a list of URLs or Title\ninstances\", this makes that work.\n\nTitle objects will have ->getCdnUrls() called when the update is\nresolved, which avoids problems like those encountered in T240083 where\nthat was being called too early.\n\nBug: T240083\nChange-Id: I30b29a7359a8f393fb19ffc199211a421d3ea4d9\n",
"bugs": [
"T240083"
],
"subject": "CdnCacheUpdate: Accept Titles in addition to strings",
"hash": "d83e00cb92f2c850dfe8ad7f31c65f8db78e47b7",
"date": "2020-03-19T13:56:19"
}
]
},
"includes/revisionlist/RevisionItem.php": {
"File": "includes/revisionlist/RevisionItem.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T252072"
],
"Commits": [
{
"message": "RevisionItem: Fix providing timestamp in getRevisionLink\n\nCaused by: 49707c59da2c4776b7752a5f2d5710f772c8d541\n\nBug: T252072\nChange-Id: I801c6c07283f220ad5bac828d8204e600ba1db1f\n",
"bugs": [
"T252072"
],
"subject": "RevisionItem: Fix providing timestamp in getRevisionLink",
"hash": "9753db190925363f68f570a5207656b16958c224",
"date": "2020-05-06T21:10:01"
}
]
},
"includes/specials/SpecialNewpages.php": {
"File": "includes/specials/SpecialNewpages.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T251950"
],
"Commits": [
{
"message": "SpecialNewpages::revisionFromRcResult - cast visibility as integer\n\nSame with user and actor ids just in case\n\nBug: T251950\nChange-Id: I2713e819153e85ce28f7451b659acb50c68df949\n",
"bugs": [
"T251950"
],
"subject": "SpecialNewpages::revisionFromRcResult - cast visibility as integer",
"hash": "4ca8c99864241133471c9cccdaa394adebec4dff",
"date": "2020-05-05T21:17:22"
}
]
},
"includes/externalstore/ExternalStoreAccess.php": {
"File": "includes/externalstore/ExternalStoreAccess.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T247429",
"T228088"
],
"Commits": [
{
"message": "ExternalStore: report cause of non-exception failure\n\nThis should help with investigating the cause of some errors we have\nbeen seeing sporadically.\n\nBug: T247429\nBug: T228088\nChange-Id: Ibdb48cac447315ed6f37a9cd0e7c05deefc76a28\n",
"bugs": [
"T247429",
"T228088"
],
"subject": "ExternalStore: report cause of non-exception failure",
"hash": "74aba39e34c9cb4d5d91f36e7666bd2d589c0be6",
"date": "2020-04-17T16:00:28"
}
]
},
"includes/Message/MessageFormatterFactory.php": {
"File": "includes/Message/MessageFormatterFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T249045"
],
"Commits": [
{
"message": "UserNameUtils: use ITextFormatter instead of MessageLocalizer\n\nBug: T249045\nChange-Id: Ica1e1e4788d4b9f9dfcf9f8c8b4136147d92b32e\n",
"bugs": [
"T249045"
],
"subject": "UserNameUtils: use ITextFormatter instead of MessageLocalizer",
"hash": "7f643f2ab6ea5c8e3734adbcefbf6f4d01787de0",
"date": "2020-04-13T16:28:02"
}
]
},
"includes/Message/TextFormatter.php": {
"File": "includes/Message/TextFormatter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T249045"
],
"Commits": [
{
"message": "UserNameUtils: use ITextFormatter instead of MessageLocalizer\n\nBug: T249045\nChange-Id: Ica1e1e4788d4b9f9dfcf9f8c8b4136147d92b32e\n",
"bugs": [
"T249045"
],
"subject": "UserNameUtils: use ITextFormatter instead of MessageLocalizer",
"hash": "7f643f2ab6ea5c8e3734adbcefbf6f4d01787de0",
"date": "2020-04-13T16:28:02"
}
]
},
"includes/actions/Action.php": {
"File": "includes/actions/Action.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T249162"
],
"Commits": [
{
"message": "Revert \"WikiPage/Article split. Rely on Article inside Action\"\n\nThis partialy reverts commit 07f57bd2715634c906065dce1bbd619b39633578.\n\nReason for revert: Article::newFromWikiPage causes an infinite loop\nif 'ArticleFromTitle' hook handler ends up calling Action::getActionName.\n\nThe revert preserves new getArticle and getWikiPage methods since there\nare a few extension changes merged dependent on these. Alternatively this partial revert, we could do a full revert with all the dependencies https://gerrit.wikimedia.org/r/c/mediawiki/core/+/585343\n\nBug: T249162\nChange-Id: Ifa642a631caa2d265ee097711dc8727f84435ef0\n",
"bugs": [
"T249162"
],
"subject": "Revert \"WikiPage/Article split. Rely on Article inside Action\"",
"hash": "64650aaba9aea9b44ab25c2f206cc7e2793eb661",
"date": "2020-04-01T22:30:51"
}
]
},
"includes/WebRequest.php": {
"File": "includes/WebRequest.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T235357"
],
"Commits": [
{
"message": "Stop using SCRIPT_NAME where possible, rely on statically configured routing\n\nIt has become apparent that $_SERVER['SCRIPT_NAME'] may contain the same\nthing as REQUEST_URI, for example in WMF production. PATH_INFO is not\nset, so there is no way to split the URL into SCRIPT_NAME and PATH_INFO\ncomponents apart from configuration.\n\n* Revert the fix for T34486, which added a route for SCRIPT_NAME to the\n PathRouter for the benefit of img_auth.php. In T235357, the route thus\n added contained $1, breaking everything.\n* Remove calls to WebRequest::getPathInfo() from everywhere other than\n index.php. Dynamic modification of $wgArticlePath in order to make\n PathRouter work was weird and broken anyway. All that is really needed\n is a suffix of REQUEST_URI, so I added a function which provides that.\n* Add $wgImgAuthPath, for use as a last resort workaround for T34486.\n* Avoid the use of $_SERVER['SCRIPT_NAME'] to detect the currently\n running script.\n* Deprecated wfGetScriptUrl(), a fairly simple wrapper for SCRIPT_NAME.\n Apparently no callers in core or extensions.\n\nBug: T235357\nChange-Id: If2b82759f3f4aecec79d6e2d88cd4330927fdeca\n",
"bugs": [
"T235357"
],
"subject": "Stop using SCRIPT_NAME where possible, rely on statically configured routing",
"hash": "507501d6ee29eb1b8df443192971fe2b6a6addb6",
"date": "2020-04-01T16:33:38"
}
]
},
"includes/actions/HistoryAction.php": {
"File": "includes/actions/HistoryAction.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T248672"
],
"Commits": [
{
"message": "Reintroduce accidentally dropped referenceness\n\nBug: T248672\nChange-Id: I6a82b2e2c1fda7ee362903641d59673bf75912f7\n",
"bugs": [
"T248672"
],
"subject": "Reintroduce accidentally dropped referenceness",
"hash": "fe257bd4e1c6cdfe325039504f6ee5d0c389332f",
"date": "2020-03-27T13:28:27"
}
]
},
"includes/exception/MWException.php": {
"File": "includes/exception/MWException.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T225657"
],
"Commits": [
{
"message": "exception: No longer try to send error page post-send on api.php\n\nFor other endpoints this was already fixed, as all MWExceptionRenderer\nlogic checks headers_sent() before outputting headers.\n\nFor the MW_API condition, it was calling wfHttpError(), which in\nturn unconditionally tried to send headers.\n\nFix this by removing use of wfHttpError(), and instead re-use the\nexisting logic for a minimal http error page. Do this by removing\nthe early condition and instead let if fall into the general\nrender methods, and then treat MW_API as a non-OutputPage scenario.\n\nBug: T225657\nChange-Id: I38bbf8007078c290a2576ef177b789fab1d2059f\n",
"bugs": [
"T225657"
],
"subject": "exception: No longer try to send error page post-send on api.php",
"hash": "36e0e638a8eecbc48dd26eb531062865c803346c",
"date": "2020-03-26T17:53:27"
}
]
},
"includes/TemplateParser.php": {
"File": "includes/TemplateParser.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T113095",
"T248010"
],
"Commits": [
{
"message": "TemplateParser: Include template dir in cache key\n\nTemplate names aren't expected to be globally unique. Template paths are\nby construction.\n\nInclude the template directory in the cache key in order to avoid the\ncache keys of ambiguosly-named templates - e.g. index.mustache -\noverriding one another.\n\nBug: T113095\nBug: T248010\nChange-Id: I3196967ec2a7a5cec409a0c7ce4471a7d8773978\n",
"bugs": [
"T113095",
"T248010"
],
"subject": "TemplateParser: Include template dir in cache key",
"hash": "d4be8b9c08b839f92530998fa67bacbd9ed007cd",
"date": "2020-03-18T21:57:34"
}
]
},
"includes/specials/SpecialChangeContentModel.php": {
"File": "includes/specials/SpecialChangeContentModel.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T252963"
],
"Commits": [
{
"message": "SpecialChangeContentModel: Only call spam checks on non-empty reasons\n\nThis fixes a fatal error thrown since I12120b51073c, when opening the\nSpecial:ChangeContentModel form, due to a strict string typehint\nfor SpamChecker.\n\nBug: T252963\nChange-Id: Ie29d7bf5cda4a86321a08a76fb18d747a055420f\n",
"bugs": [
"T252963"
],
"subject": "SpecialChangeContentModel: Only call spam checks on non-empty reasons",
"hash": "b1df01900971537ad04fd5a937e03ddd177377b1",
"date": "2020-05-25T00:12:04"
}
]
},
"includes/registration/ExtensionProcessor.php": {
"File": "includes/registration/ExtensionProcessor.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245629"
],
"Commits": [
{
"message": "ExtensionRegistry: Avoid losing 'defines' when loading lazy-loaded attributes\n\nWhen fetching lazy-loaded attributes in ExtensionRegistry to cache, we are saving all the values\nfrom extension.json a second time. In doing so, we are wrongfully omitting the values\npreviously being defined for the \"defines\" attribute\n(the attribute that is responsible for setting namespace constants).\nThis patch will make sure that we keep the value set in the \"defines\" attribute\nso that the constants are defined when we load from cache the second time.\n\nBug: T245629\nChange-Id: I4f151f88ece56cf718749b9de11fc8e204ccf29d\n",
"bugs": [
"T245629"
],
"subject": "ExtensionRegistry: Avoid losing 'defines' when loading lazy-loaded attributes",
"hash": "478f7e032da6cb7cadf07b40e8ca62eee06285a0",
"date": "2020-03-18T17:32:48"
}
]
},
"includes/objectcache/ObjectCache.php": {
"File": "includes/objectcache/ObjectCache.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T247562"
],
"Commits": [
{
"message": "objectcache: Restore 'keyspace' for LocalServerCache service\n\nFollows-up 746d67f5fcc5 which implicitly caused the APCUBagOStuff\nobject to no longer have a wiki-dependent keyspace. This meant\nthat all cache keys were shared across wikis, even if they used\nmakeKey() instead of makeGlobalKey() because the keyspace was\nnot defined (e.g. it was the string \"local\" for all wikis, instead\nof a string like \"enwiki\").\n\nBug: T247562\nChange-Id: I469e107a54aae91b8782a4dd9a2f1390ab9df2e5\n",
"bugs": [
"T247562"
],
"subject": "objectcache: Restore 'keyspace' for LocalServerCache service",
"hash": "ae6b99bf657698cb6ac60c9c5dbb7685e5ee5e8e",
"date": "2020-03-18T01:44:19"
}
]
},
"includes/deferred/TransactionRoundAwareUpdate.php": {
"File": "includes/deferred/TransactionRoundAwareUpdate.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T218456",
"T206283"
],
"Commits": [
{
"message": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater\n\nThis class implements EnqueueableDataUpdate and can be pushed as a\njob if it fails to run via DeferredUpdates.\n\nUnlike a1f7fd3adaa3, make RefreshSecondaryDataUpdate skip failing\nupdates in doUpdate(). Instead of throwing the first exception from\nany update, log any exceptions that occur and try all the other\nupdates. The first error will be re-thrown afterwards.\n\nAlso, make sure that each DataUpdate still has outer transaction\nscope. This property is documented at mediawiki.org and should not\nbe changed.\n\nAdd integration tests for RefreshSecondaryDataUpdateTest.\n\nBug: T218456\nBug: T206283\nChange-Id: I7c6554a4d4cd76dfe7cd2967afe30b3aa1069fcb\n",
"bugs": [
"T218456",
"T206283"
],
"subject": "Add RefreshSecondaryDataUpdate and use it in DerivedPageDataUpdater",
"hash": "1f4efc6c34aa363c9f5c4d8fd860a39faac4ae2d",
"date": "2020-03-11T07:42:48"
}
]
},
"includes/resourceloader/dependencystore/SqlModuleDependencyStore.php": {
"File": "includes/resourceloader/dependencystore/SqlModuleDependencyStore.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245570"
],
"Commits": [
{
"message": "resourceloader: fix SqlDependencyModuleStore::setMulti() to use upsert()\n\nFollow-up to 5282a02961913cceb8\n\nBug: T245570\nChange-Id: I16c45d389f10b237c649aff34e3a2eaf40464757\n",
"bugs": [
"T245570"
],
"subject": "resourceloader: fix SqlDependencyModuleStore::setMulti() to use upsert()",
"hash": "b6473c3efdf615438e1aa55459baa9e386d67f68",
"date": "2020-02-19T18:30:39"
}
]
},
"includes/libs/StatusValue.php": {
"File": "includes/libs/StatusValue.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245155"
],
"Commits": [
{
"message": "StatusValue: Fix __toString() to not choke on special parameters\n\ne.g. nested messages, Message::plaintextParam(), and so on.\n\nI'm not inclined to do too much here, since long term we should replace\nMessage with MessageValue and that will likely require reworking or\nreplacing StatusValue too.\n\nBug: T245155\nChange-Id: Ie727de19162467574815853d2584c472a9171240\n",
"bugs": [
"T245155"
],
"subject": "StatusValue: Fix __toString() to not choke on special parameters",
"hash": "c7e451ce95de8731d0054b8ae41f1e2fa3251892",
"date": "2020-02-14T21:51:31"
}
]
},
"includes/api/ApiLogin.php": {
"File": "includes/api/ApiLogin.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245280"
],
"Commits": [
{
"message": "Don't use 'message' as a logging key\n\nBug: T245280\nChange-Id: I5904fe7539322c4d923bc8b8c88fda272d7dff8b\n",
"bugs": [
"T245280"
],
"subject": "Don't use 'message' as a logging key",
"hash": "4f98a95aa1e5a88b09b0f186909022992d6e9be1",
"date": "2020-02-14T20:31:29"
}
]
},
"includes/auth/AuthManager.php": {
"File": "includes/auth/AuthManager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245280"
],
"Commits": [
{
"message": "Don't pass 'ip' through to logging\n\nBug: T245280\nChange-Id: I2a64b12647631c773099602b8c3264a3fa0f1f85\n",
"bugs": [
"T245280"
],
"subject": "Don't pass 'ip' through to logging",
"hash": "30ad44a1f39517f2d2da228e2da15092841961ac",
"date": "2020-02-14T17:48:55"
}
]
},
"includes/api/ApiRollback.php": {
"File": "includes/api/ApiRollback.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245159"
],
"Commits": [
{
"message": "ApiRollback: Properly deal with UserIdentity\n\nFollow-up I1462edc170127, which enabled the\nUserDef::PARAM_RETURN_OBJECT flag for the 'user' parameter,\nso that it returns UserIdentityValue objects instead of string\nvalues, but the internal use of that value was still expecting\na string\n\n\nBug: T245159\nChange-Id: I2f8d8c406ab81b6d5dc19a1fff389646af61001e\n",
"bugs": [
"T245159"
],
"subject": "ApiRollback: Properly deal with UserIdentity",
"hash": "7bf077baf5503e00902adc3505116d380266c0f5",
"date": "2020-02-13T18:11:04"
}
]
},
"includes/logging/LogPage.php": {
"File": "includes/logging/LogPage.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T245178"
],
"Commits": [
{
"message": "LogPage: Unstub $wgLang before use\n\nThe code using $wgLang is also moved inside a conditional, to avoid\nunstubbing $wgLang when unnecessary.\n\nA possible improvement is to use Skin::getLanguage() instead of $wgLang,\nbut that's not guaranteed to be safe, and we want to unblock the train\nwithout introducing new issues.\n\nBug: T245178\nChange-Id: Ifb06d9b890456092633e38b14674cc3ef48690e3\n",
"bugs": [
"T245178"
],
"subject": "LogPage: Unstub $wgLang before use",
"hash": "1967ba0ad6db90788fd86aafe34689d1b0173eb7",
"date": "2020-02-13T17:44:15"
}
]
},
"includes/pager/IndexPager.php": {
"File": "includes/pager/IndexPager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T244937"
],
"Commits": [
{
"message": "ImageHistoryPseudoPager: Update doQuery() for IndexPager changes\n\nThis class duplicates a bunch of code from IndexPager, and that code was\nchanged in 6786aa5d8e2e87f00a3b1f81d9684ff4eac676ef in a way that broke\nthis code. Update it to account for the fact that mFirstShown,\nmLastShown and mPastTheEndIndex are now arrays. Also update the\ndocumentation in IndexPager to reflect that.\n\nBug: T244937\nChange-Id: I51a50b6d3be1467f4ee399446d1d12cfed71a06c\n",
"bugs": [
"T244937"
],
"subject": "ImageHistoryPseudoPager: Update doQuery() for IndexPager changes",
"hash": "bf9ae0768d99a9e19533cc845a87dd182b1fb63f",
"date": "2020-02-11T23:36:28"
}
]
},
"includes/page/WikiFilePage.php": {
"File": "includes/page/WikiFilePage.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T250767"
],
"Commits": [
{
"message": "Typehint WikiFilePage::setFile\n\nBug: T250767\nChange-Id: I3ac4c4db6c630f5983c93e1c98363f5256ae7326\n",
"bugs": [
"T250767"
],
"subject": "Typehint WikiFilePage::setFile",
"hash": "6b7ab1e09292be72d4a5b94acfbb9e010cb3427e",
"date": "2020-05-11T10:46:53"
}
]
},
"includes/upload/UploadStash.php": {
"File": "includes/upload/UploadStash.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T254078"
],
"Commits": [
{
"message": "Stop throwing an exception in UploadStash::getExtensionForPath\n\nThe exception serves no purpose, and can only really be triggered via\na test. The API prevents no file extension at all, as does UW js.\n\nThis function (for whatever reason, probably a seperate bug) cannot\nget the extension from a stashed stl file (seems to work fine for\nother types).\n\nWith what/how it's actually used, it doens't really matter if\nwe can't get the extension, we get it by more robust methods later\non.\n\nThis partially reverts 0a82600a27. Before the changes in that commit,\nthe exception was unreachable.\n\nBug: T254078\nChange-Id: I0a7bd13fe8e08c7d4a75b4a3709661dbbf53d6cb\n",
"bugs": [
"T254078"
],
"subject": "Stop throwing an exception in UploadStash::getExtensionForPath",
"hash": "d0d4d903ccbdd2cf9a574735975e67b40835b639",
"date": "2020-05-31T02:02:35"
}
]
},
"includes/skins/SkinTemplate.php": {
"File": "includes/skins/SkinTemplate.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T244300"
],
"Commits": [
{
"message": "language: remove Language hints for type check as it breaks using of StubUserLang\n\nBug: T244300\nChange-Id: Iec1b5629617f1c171e8af507dc1dcebfef0666eb\n",
"bugs": [
"T244300"
],
"subject": "language: remove Language hints for type check as it breaks using of StubUserLang",
"hash": "ed18dba8f403377ebe6f6a69893eae098b77cf59",
"date": "2020-02-05T14:11:31"
}
]
},
"includes/filerepo/file/OldLocalFile.php": {
"File": "includes/filerepo/file/OldLocalFile.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T263014"
],
"Commits": [
{
"message": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"\n\nThis reverts commit 264d043d0400d4a0353df03eb5b431dcf25ad0ae.\n\nReason for revert: Parsoid is still using this.\n\nBug: T263014\nChange-Id: I9d6b65b319a45bbdbd479eda0d0580296ceb7f62\n",
"bugs": [
"T263014"
],
"subject": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"",
"hash": "a66e7a6b0c7c1cc46af9c938355e04d955d0d29a",
"date": "2020-09-16T10:47:49"
}
]
},
"includes/HookContainer/HookContainer.php": {
"File": "includes/HookContainer/HookContainer.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
}
]
},
"includes/Revision/RevisionRecord.php": {
"File": "includes/Revision/RevisionRecord.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
}
]
},
"includes/Revision/RevisionSlots.php": {
"File": "includes/Revision/RevisionSlots.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
}
]
},
"includes/context/ContextSource.php": {
"File": "includes/context/ContextSource.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
}
]
},
"includes/libs/NonSerializableTrait.php": {
"File": "includes/libs/NonSerializableTrait.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T187731",
"T259181"
],
"Commits": [
{
"message": "Introduce and apply NonSerializableTrait\n\nThe NonSerializableTrait prevents object serialization via php's native\nserialization mechanism. Most objects are not safe to serialize, and\nNonSerializableTrait provides a covenient and uniform way to protect\nagainst serialization attempts.\n\nThis patch applies the NonSerializableTrait to some key classes in\nMediaWiki.\n\nBug: T187731\nBug: T259181\nChange-Id: I0c3b558d97e3415413bbaa3d98f6ebd5312c4a67\n",
"bugs": [
"T187731",
"T259181"
],
"subject": "Introduce and apply NonSerializableTrait",
"hash": "dc436c3cff5f41bf8f22e0a3d19fa86a1c86b7dd",
"date": "2020-09-28T19:55:49"
}
]
},
"includes/TrackingCategories.php": {
"File": "includes/TrackingCategories.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T237467"
],
"Commits": [
{
"message": "Register 'nonnumeric-formatnum' in TrackingCategories::$coreTrackingCategories\n\nBug: T237467\nFollows-Up: Ib6c832df1f69aa4579402701fad1f77e548291ee\nChange-Id: I08df861ec6c93ee9b2b02843c667854a3e017270\n",
"bugs": [
"T237467"
],
"subject": "Register 'nonnumeric-formatnum' in TrackingCategories::$coreTrackingCategories",
"hash": "9ebd96acda20fe6cfd95b64dd04568b0b4757fec",
"date": "2020-09-27T15:00:05"
}
]
},
"includes/specials/SpecialUnblock.php": {
"File": "includes/specials/SpecialUnblock.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T263642"
],
"Commits": [
{
"message": "SpecialUnblock: Allow getTargetAndType to accept null $par\n\nAlso add documentation and type hinting for\nSpecialBlock::getTargetAndType\n\nBug: T263642\nChange-Id: If3e78537cd813d8aabbc6ba53c4d0be949ede9a1\n",
"bugs": [
"T263642"
],
"subject": "SpecialUnblock: Allow getTargetAndType to accept null $par",
"hash": "7432a7c7ad8e1ff077e9e20f19279057f139c40d",
"date": "2020-09-23T14:00:34"
}
]
},
"includes/xml/Xml.php": {
"File": "includes/xml/Xml.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T251351"
],
"Commits": [
{
"message": "Drop down lists: Do not use the value for 'other' as option group\n\nInstead use them as groupless reason list\n\nBug: T251351\nChange-Id: Id691d97e3e6f205615f639e051619f3d0f67c7ba\n",
"bugs": [
"T251351"
],
"subject": "Drop down lists: Do not use the value for 'other' as option group",
"hash": "e878341bfcd9960a238aacf478551de88ed98fc6",
"date": "2020-09-16T19:30:43"
}
]
},
"includes/filerepo/LocalRepo.php": {
"File": "includes/filerepo/LocalRepo.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T263014"
],
"Commits": [
{
"message": "Hard deprecate File::userCan() with $user=null\n\nThe ArchivedFile::userCan and OldLocalFile::userCan() methods, along\nwith a number of other methods where the user parameter was optional,\nwere deprecated in 1.35, but this case was overlooked. This patch is\nintended for backport to 1.35, so that the $user parameter can be\nremoved in 1.36 in accordance with the deprecation policy.\n\nThis path is known to be used by LocalRepo::findFile(),\nFileRepo::findFile(), and FileRepo::findFileFromKey(), so hacky\nworkarounds have been added in this patch to avoid triggering\ndeprecation warnings in 1.35. T263033 has been filed to fix these\n'correctly' in 1.36.\n\nBug: T263014\nChange-Id: I17cab8ce043a5965aeab089392068b91c686025e\n",
"bugs": [
"T263014"
],
"subject": "Hard deprecate File::userCan() with $user=null",
"hash": "5e703cdf669f85e2924dee76c5a8831633259ce0",
"date": "2020-09-16T16:27:51"
}
]
},
"includes/filerepo/file/ArchivedFile.php": {
"File": "includes/filerepo/file/ArchivedFile.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T263014"
],
"Commits": [
{
"message": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"\n\nThis reverts commit 264d043d0400d4a0353df03eb5b431dcf25ad0ae.\n\nReason for revert: Parsoid is still using this.\n\nBug: T263014\nChange-Id: I9d6b65b319a45bbdbd479eda0d0580296ceb7f62\n",
"bugs": [
"T263014"
],
"subject": "Revert \"Remove support for (Archived|OldLocal)File::userCan without a user\"",
"hash": "a66e7a6b0c7c1cc46af9c938355e04d955d0d29a",
"date": "2020-09-16T10:47:49"
}
]
},
"includes/specials/SpecialConfirmEmail.php": {
"File": "includes/specials/SpecialConfirmEmail.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T226337"
],
"Commits": [
{
"message": "Use User::getInstanceForUpdate to update user on SpecialConfirmEmail\n\nThis method is recommended when updating a user as specifically it\nloads fresh instance of the user from master with locking read.\nProbably its use in I182351a for Special:ChangeEmail helped to avoid\nthe issue there.\n\nBug: T226337\nChange-Id: I080faa341bdbce952df730ebfdf0512cce413b8d\n",
"bugs": [
"T226337"
],
"subject": "Use User::getInstanceForUpdate to update user on SpecialConfirmEmail",
"hash": "d09014b3748b2d90c60296f9f4fed9caed5adce0",
"date": "2020-09-09T19:50:30"
}
]
},
"includes/HookContainer/DeprecatedHooks.php": {
"File": "includes/HookContainer/DeprecatedHooks.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T255608"
],
"Commits": [
{
"message": "Revert \"Hard deprecate the `TitleMoveCompleting` hook\"\n\nThis reverts commit 8f7f133ccfaaaa45b7c00835d3d64884bde0c5c3.\n\nPer I8cdef229bf0a3, we still need this in Flow for now, to fix a UBN.\n\nBug: T255608\nChange-Id: Id414c98161b9f560b14d4eaec8aedeec4659df27\n",
"bugs": [
"T255608"
],
"subject": "Revert \"Hard deprecate the `TitleMoveCompleting` hook\"",
"hash": "6fc783ad2b3ef92d4a160dc97e7896fea3904247",
"date": "2020-06-17T04:09:34"
}
]
},
"includes/parser/Sanitizer.php": {
"File": "includes/parser/Sanitizer.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T251506"
],
"Commits": [
{
"message": "Sanitizer: Truncate IDs to a reasonable length; deprecate escapeIdReferenceList\n\nOverly-long anchors can cause OOMs later on during TOC processing, and\nare needless.\n\nThe method Sanitizer::escapeIdReferenceList() is also deprecated in\nthis patch, since it is a way to get around the ID length limit and\nappears to be unused outside the Sanitizer class. Since the use\nwithin Sanitizer (for ARIA attributes) appears safe, we'll just make\nthis private in a future release and avoid the potential that someone\nwill misuse this.\n\nBug: T251506\nChange-Id: Ifce057b0c436eabec310f812394e86ee7123e7c8\n",
"bugs": [
"T251506"
],
"subject": "Sanitizer: Truncate IDs to a reasonable length; deprecate escapeIdReferenceList",
"hash": "0de9d89e3727ab17f08158600428dad6c678ff83",
"date": "2020-08-13T15:33:16"
}
]
},
"includes/actions/CreditsAction.php": {
"File": "includes/actions/CreditsAction.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T259962"
],
"Commits": [
{
"message": "Revert \"Handle interwiki usernames in action=credits\"\n\nThis reverts commit 6fbddcec19913cdf3e55fcb667f25d6a1b9f44aa.\n\nReason for revert: Caused T259962\n\nBug: T259962\nChange-Id: Ida2c64d5d27b293f13c2f1f6c35672c6c0f7cb0b\n",
"bugs": [
"T259962"
],
"subject": "Revert \"Handle interwiki usernames in action=credits\"",
"hash": "c80e4a33757eaa6366ab6231bc63a8eb51e32422",
"date": "2020-08-08T11:07:14"
}
]
},
"includes/import/ImportableOldRevisionImporter.php": {
"File": "includes/import/ImportableOldRevisionImporter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T258666"
],
"Commits": [
{
"message": "Import: use master DB for loading slots.\n\nWe were using the master to load revision data from the revision\ntable, but not when loading slot data. We should use the master DB for\nboth, consistently.\n\nBug: T258666\nChange-Id: I800adb852ec690b63fa926f40428de3272d69584\n",
"bugs": [
"T258666"
],
"subject": "Import: use master DB for loading slots.",
"hash": "cd6d423eefe36bb88a68caf49ad02e006b3fd1f4",
"date": "2020-07-23T16:34:34"
}
]
},
"includes/media/ExifBitmapHandler.php": {
"File": "includes/media/ExifBitmapHandler.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T257497"
],
"Commits": [
{
"message": "Typehint FormatMetadata::collapseContactInfO()\n\nThe method expects array, but was given a string.\nSince there's only one caller, the caller is fixed\nand the method typehinted.\n\nAlso fix doc comment\n\nBug: T257497\nChange-Id: I67c337c4ee95ca30d968b89251dbbe077d2110e3\n",
"bugs": [
"T257497"
],
"subject": "Typehint FormatMetadata::collapseContactInfO()",
"hash": "a31b63824507f8194cd4f333375e30c54de75534",
"date": "2020-07-14T20:26:47"
}
]
},
"includes/specials/SpecialDiff.php": {
"File": "includes/specials/SpecialDiff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T239265"
],
"Commits": [
{
"message": "Allow to select next/prev/cur on Special:Diff\n\nValidate the used other field for unsigned int manually.\nThe widget does not support setting a type (Requested with T256425)\n\nBug: T239265\nChange-Id: I9b5fa299c370872bb9228d1ba1cdb8cae44cb04a\n",
"bugs": [
"T239265"
],
"subject": "Allow to select next/prev/cur on Special:Diff",
"hash": "057aab5c0c328d72f3176fb5ad63536a5e14a506",
"date": "2020-06-30T19:19:43"
}
]
},
"includes/htmlform/fields/HTMLMultiSelectField.php": {
"File": "includes/htmlform/fields/HTMLMultiSelectField.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T232616"
],
"Commits": [
{
"message": "Avoid undefined index 0 in HTMLMultiSelectField class\n\n$out has only items, when $optionsOouiSections has items, but when\n$options is empty, $out is also empty. In that case $hasSections is\nfalse.\n\nBug: T232616\nChange-Id: Id3959013b7b1db0d4faeecea9148bae97227abcf\n",
"bugs": [
"T232616"
],
"subject": "Avoid undefined index 0 in HTMLMultiSelectField class",
"hash": "169300346ed8fe787f153bd8cb54d98396388533",
"date": "2020-06-29T13:07:07"
}
]
},
"includes/Hook/PageMoveCompleteHook.php": {
"File": "includes/Hook/PageMoveCompleteHook.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T250023",
"T255608"
],
"Commits": [
{
"message": "Add PageMoveCompleting hook, to replace TitleMoveCompleting\n\nWe intially thought we wouldn't need this and would only need\nPageMoveComplete, but it turns out Flow does need it.\n\nBug: T250023\nBug: T255608\nChange-Id: I8e7308541d2fe6d02b9dad63e1c86c89f6e7cf53\n",
"bugs": [
"T250023",
"T255608"
],
"subject": "Add PageMoveCompleting hook, to replace TitleMoveCompleting",
"hash": "c8b9d849fc2ef302ec2adf5ec3bdb647d176c2ab",
"date": "2020-06-17T05:27:28"
}
]
},
"includes/Hook/PageMoveCompletingHook.php": {
"File": "includes/Hook/PageMoveCompletingHook.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T250023",
"T255608"
],
"Commits": [
{
"message": "Add PageMoveCompleting hook, to replace TitleMoveCompleting\n\nWe intially thought we wouldn't need this and would only need\nPageMoveComplete, but it turns out Flow does need it.\n\nBug: T250023\nBug: T255608\nChange-Id: I8e7308541d2fe6d02b9dad63e1c86c89f6e7cf53\n",
"bugs": [
"T250023",
"T255608"
],
"subject": "Add PageMoveCompleting hook, to replace TitleMoveCompleting",
"hash": "c8b9d849fc2ef302ec2adf5ec3bdb647d176c2ab",
"date": "2020-06-17T05:27:28"
}
]
},
"includes/Hook/TitleMoveCompletingHook.php": {
"File": "includes/Hook/TitleMoveCompletingHook.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T250023",
"T255608"
],
"Commits": [
{
"message": "Add PageMoveCompleting hook, to replace TitleMoveCompleting\n\nWe intially thought we wouldn't need this and would only need\nPageMoveComplete, but it turns out Flow does need it.\n\nBug: T250023\nBug: T255608\nChange-Id: I8e7308541d2fe6d02b9dad63e1c86c89f6e7cf53\n",
"bugs": [
"T250023",
"T255608"
],
"subject": "Add PageMoveCompleting hook, to replace TitleMoveCompleting",
"hash": "c8b9d849fc2ef302ec2adf5ec3bdb647d176c2ab",
"date": "2020-06-17T05:27:28"
}
]
},
"includes/HookContainer/HookRunner.php": {
"File": "includes/HookContainer/HookRunner.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T250023",
"T255608"
],
"Commits": [
{
"message": "Add PageMoveCompleting hook, to replace TitleMoveCompleting\n\nWe intially thought we wouldn't need this and would only need\nPageMoveComplete, but it turns out Flow does need it.\n\nBug: T250023\nBug: T255608\nChange-Id: I8e7308541d2fe6d02b9dad63e1c86c89f6e7cf53\n",
"bugs": [
"T250023",
"T255608"
],
"subject": "Add PageMoveCompleting hook, to replace TitleMoveCompleting",
"hash": "c8b9d849fc2ef302ec2adf5ec3bdb647d176c2ab",
"date": "2020-06-17T05:27:28"
}
]
},
"includes/Rest/Handler/PageHistoryHandler.php": {
"File": "includes/Rest/Handler/PageHistoryHandler.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T244184"
],
"Commits": [
{
"message": "PageHistoryHandler: Undefined index 'query'\n\nBug: T244184\nChange-Id: Ida350938d2a3cbc7be936763108253efb188c42c\n",
"bugs": [
"T244184"
],
"subject": "PageHistoryHandler: Undefined index 'query'",
"hash": "e79b72beebbe4b80ba5206196919f614a876caa1",
"date": "2020-02-10T16:21:08"
}
]
},
"includes/watcheditem/NoWriteWatchedItemStore.php": {
"File": "includes/watcheditem/NoWriteWatchedItemStore.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T243449"
],
"Commits": [
{
"message": "When clearing don't load the watchlist if we must clear through a job\n\nIt looks like this bug has existed in the special page since before the\ntime of the refactoring into WatchedItemStore although apparently\nit has only just surfaced now!\n\nThis adds a new method into WatchedItemStore that decides if the\nwatchlist can be cleared interactively or must use the job queue.\nThis can then be used in the special page instead of the old logic\nwhich would load the watchlist and then count the loaded items\n(inefficient if you know your clearing the list anyway)\n\nBug: T243449\nChange-Id: I810d89e3e1142a223430f7fc5f8598a493637a72\n",
"bugs": [
"T243449"
],
"subject": "When clearing don't load the watchlist if we must clear through a job",
"hash": "8d3f6b417f2baece555d365dc518ebaf9bcbd7bd",
"date": "2020-01-29T18:36:47"
}
]
},
"includes/libs/objectcache/RESTBagOStuff.php": {
"File": "includes/libs/objectcache/RESTBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/logging/BlockLogFormatter.php": {
"File": "includes/logging/BlockLogFormatter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T224811"
],
"Commits": [
{
"message": "Tolerate invalid titles in some ChangesFeed and LogFormatter code\n\nBug: T224811\nChange-Id: If134e20cc14d80f9186611606df0b860889bd2cf\n",
"bugs": [
"T224811"
],
"subject": "Tolerate invalid titles in some ChangesFeed and LogFormatter code",
"hash": "7271ac0dcd001b514dbd9753a3d5a461fc527a9d",
"date": "2019-06-24T23:15:06"
}
]
},
"includes/Revision/RevisionRenderer.php": {
"File": "includes/Revision/RevisionRenderer.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T229589"
],
"Commits": [
{
"message": "Fix bogus field use in RevisionRenderer::getSpeculativePageId\n\nThis field was renamed, causing a functional merge conflict\n\nFollow-up 5099ee9f7273\n\nBug: T229589\nChange-Id: I7a6bb68ff1fe320313276dc5a67c70de6715ccb6\n",
"bugs": [
"T229589"
],
"subject": "Fix bogus field use in RevisionRenderer::getSpeculativePageId",
"hash": "819a692635a313937b9e49b071b647812a66cb48",
"date": "2019-08-01T19:57:53"
}
]
},
"includes/api/ApiQueryRecentChanges.php": {
"File": "includes/api/ApiQueryRecentChanges.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T228425"
],
"Commits": [
{
"message": "API: Always select rc_user from database (regardless of rcprop=user)\n\nBug: T228425\nChange-Id: I1b6f684c8289282326da0e326b90fcf8ff87d71e\n",
"bugs": [
"T228425"
],
"subject": "API: Always select rc_user from database (regardless of rcprop=user)",
"hash": "5aa9ba58ead7cd9ad101e59357ff83d84ebec1a1",
"date": "2019-07-31T13:45:34"
}
]
},
"includes/specials/SpecialUnlinkAccounts.php": {
"File": "includes/specials/SpecialUnlinkAccounts.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T228717"
],
"Commits": [
{
"message": "Better handle \"no accounts to unlink\" case in Special:UnlinkAccounts\n\nWhen there are no accounts to unlink, say so rather than displaying a\nbutton that results in an error when clicked.\n\nBug: T228717\nChange-Id: I17f8aed213f114338c4b46e26ce369bc63e36a99\n",
"bugs": [
"T228717"
],
"subject": "Better handle \"no accounts to unlink\" case in Special:UnlinkAccounts",
"hash": "3622787eba642211c41bd758fa9a03118f22262f",
"date": "2019-07-29T03:48:54"
}
]
},
"includes/upload/UploadBase.php": {
"File": "includes/upload/UploadBase.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T228749"
],
"Commits": [
{
"message": "Don't try to store File objects to the upload session\n\nFile objects can contain closures which can't be serialized.\n\nInstead, add makeWarningsSerializable(), which converts the warnings\nto a serializable array. Make ApiUpload::transformWarnings() act on this\nserializable array instead. For consistency, ApiUpload::getApiWarnings()\nalso needs to convert the result of checkWarnings() before transforming\nit.\n\nBug: T228749\nChange-Id: I8236aaf3683f93a03a5505803f4638e022cf6d85\n",
"bugs": [
"T228749"
],
"subject": "Don't try to store File objects to the upload session",
"hash": "51e837f68f6df7fdc6cb35803e497bfc0532c861",
"date": "2019-07-26T06:15:30"
}
]
},
"includes/libs/mime/MimeAnalyzer.php": {
"File": "includes/libs/mime/MimeAnalyzer.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T223728"
],
"Commits": [
{
"message": "MimeAnalyzer: fix ZIP parsing failure\n\nunpack() actually returns an array with indexes starting from 1, not\nzero, so unpack(...)[0] gives a notice and always returns null. It is\nlucky that ZIPs normally have zero-length comments, so this would have\nhad little impact on file type detection aside from log spam.\n\nAlso, add a check to make sure the unpack() will not read beyond\nthe end of the file. Without this, unpack() could generate a warning.\n\nThe bug was introduced by me in f12db3804882272794b.\n\nAdd tests. The test files were generated by appending an EOCDR signature\nand some extra bytes to 1bit-png.png.\n\nBug: T223728\nChange-Id: I6fab63102d1d8eea92cdcce5ab6d1eb747a0a890\n",
"bugs": [
"T223728"
],
"subject": "MimeAnalyzer: fix ZIP parsing failure",
"hash": "d8e06a46a86d694a0d01238b04b51735b59a7846",
"date": "2019-07-25T03:40:18"
}
]
},
"includes/specials/SpecialGoToInterwiki.php": {
"File": "includes/specials/SpecialGoToInterwiki.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T227700"
],
"Commits": [
{
"message": "Fix and re-apply \"RedirectSpecialPage: handle interwiki redirects\"\n\nThis re-applies commit 41106688abbe6dfff61c5642924ced42af3f0d33\n(thereby reverting commit 6c57748aeee6e4f2a197d64785102306fbd4a297)\nand fixes it for local interwiki redirects by adding and using a\nforcing parameter in Special:GoToInterwiki to treat local redirects\nlike external ones.\n\nBug: T227700\nChange-Id: I4bc2ed998430fc2bac71baf850b8988fdb24c1ac\n",
"bugs": [
"T227700"
],
"subject": "Fix and re-apply \"RedirectSpecialPage: handle interwiki redirects\"",
"hash": "d1e7d5e3b2c60d2da2da3c516a94c24599bf3ecc",
"date": "2019-07-24T03:55:49"
}
]
},
"includes/specials/SpecialSearch.php": {
"File": "includes/specials/SpecialSearch.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T228171"
],
"Commits": [
{
"message": "Validate sort order in Special:Search\n\nProviding an invalid sort order to Special:Search could trigger an\nexception from the search engine when trying to apply it. Validate the\nsort order, much like API classes do, and let the user know that the\nsort they requested could not be applied.\n\nWe also have a unreported error for invalid profile requested, so\nadded that warning to the display while here.\n\nBug: T228171\nChange-Id: I79079eea8c03a90b5b65f1dad11c99e514de00e1\n",
"bugs": [
"T228171"
],
"subject": "Validate sort order in Special:Search",
"hash": "4d9d61460d9420cc184c22243b0da81855e8f3a1",
"date": "2019-07-23T17:18:34"
}
]
},
"includes/Permissions/PermissionManager.php": {
"File": "includes/Permissions/PermissionManager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T227772"
],
"Commits": [
{
"message": "Add mechanism for temporary user rights\n\nAdd a mechanism for adding temporary user rights that only exist\nfor the current request. This is occasionally needed to let normal\nusers act with a bot flag; traditionally the fact that User::$mRights\nwas public has been abused to do it, but I88992403 broke that.\n\nBug: T227772\nChange-Id: Ife8f9d8affa750701e4e5d646ed8cd153c1d867b\n",
"bugs": [
"T227772"
],
"subject": "Add mechanism for temporary user rights",
"hash": "659db7bddd607ae6e5877f31e54ef8496a7ad336",
"date": "2019-07-17T01:53:14"
}
]
},
"includes/changes/ChangesFeed.php": {
"File": "includes/changes/ChangesFeed.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T224811"
],
"Commits": [
{
"message": "Tolerate invalid titles in some ChangesFeed and LogFormatter code\n\nBug: T224811\nChange-Id: If134e20cc14d80f9186611606df0b860889bd2cf\n",
"bugs": [
"T224811"
],
"subject": "Tolerate invalid titles in some ChangesFeed and LogFormatter code",
"hash": "7271ac0dcd001b514dbd9753a3d5a461fc527a9d",
"date": "2019-06-24T23:15:06"
}
]
},
"includes/htmlform/fields/HTMLSelectAndOtherField.php": {
"File": "includes/htmlform/fields/HTMLSelectAndOtherField.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T222170",
"T225860"
],
"Commits": [
{
"message": "Fix some issues with HTMLSelectAndOtherField default and validation\n\nBring HTMLSelectAndOtherField in line with other HTMLFormFields by\nensuring that the default value is of the correct type and passes\nvalidation checks.\n\nAlso make sure HTMLSelectAndOtherField::validate checks for the\nfinal value, if the field has required set to true.\n\nBug: T222170\nBug: T225860\nChange-Id: I949ee3df2b1f597982cf522b149c54b8e79d59bc\n",
"bugs": [
"T222170",
"T225860"
],
"subject": "Fix some issues with HTMLSelectAndOtherField default and validation",
"hash": "db9ff28e3ebe87ed965561d4458841b7aad095c9",
"date": "2019-06-15T09:34:36"
}
]
},
"includes/language/LanguageNameUtils.php": {
"File": "includes/language/LanguageNameUtils.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T231200",
"T231198"
],
"Commits": [
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
}
]
},
"includes/logging/DeleteLogFormatter.php": {
"File": "includes/logging/DeleteLogFormatter.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T224815"
],
"Commits": [
{
"message": "DeleteLogFormatter: Handle missing ofield/nfield\n\nofield and nfield may be missing from old log entries. Take that into\naccount when processing.\n\nBug: T224815\nChange-Id: I06dda3106bab9980f6fa7d515542e94a91c17f64\n",
"bugs": [
"T224815"
],
"subject": "DeleteLogFormatter: Handle missing ofield/nfield",
"hash": "144ebd06c73cb85998f5d9e03a992f44c4adb234",
"date": "2019-06-12T23:33:45"
}
]
},
"includes/libs/objectcache/APCBagOStuff.php": {
"File": "includes/libs/objectcache/APCBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/APCUBagOStuff.php": {
"File": "includes/libs/objectcache/APCUBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/BagOStuff.php": {
"File": "includes/libs/objectcache/BagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/EmptyBagOStuff.php": {
"File": "includes/libs/objectcache/EmptyBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/HashBagOStuff.php": {
"File": "includes/libs/objectcache/HashBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/MemcachedBagOStuff.php": {
"File": "includes/libs/objectcache/MemcachedBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/MemcachedClient.php": {
"File": "includes/libs/objectcache/MemcachedClient.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/MemcachedPeclBagOStuff.php": {
"File": "includes/libs/objectcache/MemcachedPeclBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/libs/objectcache/MemcachedPhpBagOStuff.php": {
"File": "includes/libs/objectcache/MemcachedPhpBagOStuff.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T204742"
],
"Commits": [
{
"message": "objectcache: add object segmentation support to BagOStuff\n\nUse it for ApiStashEdit so that large PaserOutput can be stored.\n\nAdd flag to allow for value segmentation on set() in BagOStuff.\nAlso add a flag for immediate deletion of segments on delete().\n\nBagOStuff now has base serialize()/unserialize() methods.\n\nBug: T204742\nChange-Id: I0667a02612526d8ddfd91d5de48b6faa78bd1ab5\n",
"bugs": [
"T204742"
],
"subject": "objectcache: add object segmentation support to BagOStuff",
"hash": "b09b3980f991bb02a64cce0e462b97bada3b4776",
"date": "2019-06-11T15:14:17"
}
]
},
"includes/api/ApiFeedContributions.php": {
"File": "includes/api/ApiFeedContributions.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T230239"
],
"Commits": [
{
"message": "ApiFeedContributions: Throw if the username is invalid\n\nBug: T230239\nChange-Id: I4141047c8f1ff73665b79a27a7c5eb995c52ea88\n",
"bugs": [
"T230239"
],
"subject": "ApiFeedContributions: Throw if the username is invalid",
"hash": "07f5046ede9de09729e64d5c527de32f9f38e46b",
"date": "2019-08-13T17:35:49"
}
]
},
"includes/language/LanguageCode.php": {
"File": "includes/language/LanguageCode.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T231200",
"T231198"
],
"Commits": [
{
"message": "Revert \"Make LocalisationCache a service\"\n\nThis reverts commits:\n - 76a940350d36c323ebedb4ab45cc81ed1c6b6c92\n - b78b8804d076618e967c7b31ec15a1bd9e35d1d0\n - 2e52f48c2ed8dcf480843e2186f685a86810e2ac\n - e4468a1d6b6b9fdc5b64800febdc8748d21f213d\n\nBug: T231200\nBug: T231198\nChange-Id: I1a7e46a979ae5c9c8130dd3927f6663a216ba753\n",
"bugs": [
"T231200",
"T231198"
],
"subject": "Revert \"Make LocalisationCache a service\"",
"hash": "308e6427aef169a575a339e6a8e0558d29403a1d",
"date": "2019-08-26T16:28:26"
}
]
},
"includes/watcheditem/WatchedItemStoreInterface.php": {
"File": "includes/watcheditem/WatchedItemStoreInterface.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T243449"
],
"Commits": [
{
"message": "When clearing don't load the watchlist if we must clear through a job\n\nIt looks like this bug has existed in the special page since before the\ntime of the refactoring into WatchedItemStore although apparently\nit has only just surfaced now!\n\nThis adds a new method into WatchedItemStore that decides if the\nwatchlist can be cleared interactively or must use the job queue.\nThis can then be used in the special page instead of the old logic\nwhich would load the watchlist and then count the loaded items\n(inefficient if you know your clearing the list anyway)\n\nBug: T243449\nChange-Id: I810d89e3e1142a223430f7fc5f8598a493637a72\n",
"bugs": [
"T243449"
],
"subject": "When clearing don't load the watchlist if we must clear through a job",
"hash": "8d3f6b417f2baece555d365dc518ebaf9bcbd7bd",
"date": "2020-01-29T18:36:47"
}
]
},
"includes/title/TitleValue.php": {
"File": "includes/title/TitleValue.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200055"
],
"Commits": [
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
}
]
},
"includes/block/DatabaseBlock.php": {
"File": "includes/block/DatabaseBlock.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T192964"
],
"Commits": [
{
"message": "block: Log some details to debug DatabaseBlock::setBlocker\n\nBug: T192964\nChange-Id: I309eb85e364366eeb2f2a383f1008b4b42d83481\n",
"bugs": [
"T192964"
],
"subject": "block: Log some details to debug DatabaseBlock::setBlocker",
"hash": "52e648f8a3131c81f317773b090d3f4e4c633722",
"date": "2019-12-20T14:45:04"
}
]
},
"includes/api/ApiQueryInfo.php": {
"File": "includes/api/ApiQueryInfo.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T239451"
],
"Commits": [
{
"message": "Avoid master connections for prop=info and intestactionsdetail=full\n\nThere is no security issue using RIGOR_FULL here, because the\ninformation is not used to take an action. It is used for information to\nthe caller and the caller may not allow the action.\nBut even the caller allows the action, the action api code would check\npermission with RIGOR_SECURE before doing the action.\n\nAlso use the constant to make clear which string is from api and which\none is from the PermissionManager\n\nBug: T239451\nChange-Id: If182f0e967187704ba3fdd14592a0badff097571\n",
"bugs": [
"T239451"
],
"subject": "Avoid master connections for prop=info and intestactionsdetail=full",
"hash": "7286cf1c96082fa0ffdffc90a7fd3562d61eae5c",
"date": "2019-12-06T20:29:15"
}
]
},
"includes/ActorMigration.php": {
"File": "includes/ActorMigration.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T239772",
"T207217"
],
"Commits": [
{
"message": "ActorMigration: Improve getWhere() handling of $users\n\nSome callers, when provided an invalid user name, will wind up passing\nnull or false. This raises a PHP warning and winds up treating it as the\nempty array. In this case, it seems best to DWIM and continue that\nbehavior without the warning.\n\nAt the same time, let's more explicitly reject other values for $users.\n\nBug: T239772\nBug: T207217\nChange-Id: I6027481f6cad222369911d5053fecc06c92b36ea\n",
"bugs": [
"T239772",
"T207217"
],
"subject": "ActorMigration: Improve getWhere() handling of $users",
"hash": "531399f5c74803d52f552eacd05f5e22cfc9db87",
"date": "2019-12-04T21:00:02"
}
]
},
"includes/api/ApiQueryUserContribs.php": {
"File": "includes/api/ApiQueryUserContribs.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T139012",
"T239772"
],
"Commits": [
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
}
]
},
"includes/revisiondelete/RevDelRevisionItem.php": {
"File": "includes/revisiondelete/RevDelRevisionItem.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T139012",
"T239772"
],
"Commits": [
{
"message": "Remove hacks for lack of index on rc_this_oldid\n\nIn several places, we're including rc_timestamp or other fields in a\nquery selecting on rc_this_oldid because there was historically no index\non the column.\n\nThe needed index was created by I0ccfd26d and deployed by T202167, so\nlet's remove the hacks.\n\nBug: T139012\nBug: T239772\nChange-Id: Ic99760075bde6603c9f2ab3ee262f5a2878205c7\n",
"bugs": [
"T139012",
"T239772"
],
"subject": "Remove hacks for lack of index on rc_this_oldid",
"hash": "152376376e6ef60c7169e31582db2be78194b0d4",
"date": "2019-12-04T21:00:02"
}
]
},
"includes/Status.php": {
"File": "includes/Status.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T237559"
],
"Commits": [
{
"message": "Do not run wfEscapeWikiText on array in Status class\n\nMessage params could be more complex than a string\nFor example by use of Message::numParam()\n\nBug: T237559\nChange-Id: I1be2ad3f73f189f69f955d1c4e1da75652e5e8ff\n",
"bugs": [
"T237559"
],
"subject": "Do not run wfEscapeWikiText on array in Status class",
"hash": "c4d4d81ae4b376c8865eda18970701675ac1a3fc",
"date": "2019-12-02T12:51:11"
}
]
},
"includes/specials/SpecialChangeEmail.php": {
"File": "includes/specials/SpecialChangeEmail.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T226337"
],
"Commits": [
{
"message": "Use User::getInstanceForUpdate on SpecialChangeEmail\n\nThe special page changes the user settings and should use an fresh user\ninstance for it to make sure the changes are based on the last changes\nin the db.\n\nBug: T226337\nChange-Id: I182351abce58e756e93e12d49490e770b832a2fb\n",
"bugs": [
"T226337"
],
"subject": "Use User::getInstanceForUpdate on SpecialChangeEmail",
"hash": "e5c9b2a8a64d813bb0879375b076df6e9cd5393b",
"date": "2019-11-28T20:30:34"
}
]
},
"includes/api/ApiQueryBase.php": {
"File": "includes/api/ApiQueryBase.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200055"
],
"Commits": [
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
}
]
},
"includes/api/ApiQueryWatchlistRaw.php": {
"File": "includes/api/ApiQueryWatchlistRaw.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200055"
],
"Commits": [
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
}
]
},
"includes/cache/LinkBatch.php": {
"File": "includes/cache/LinkBatch.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T200055"
],
"Commits": [
{
"message": "Don't fail hard on bad titles in the database.\n\nThis updates some code that has been constructing TitleValue directly\nto use TitleValue::tryNew or TitleParser::makeTitleValueSafe.\n\nBug: T200055\nChange-Id: If781fe62213413c8fb847fd9e90f079e2f9ffc9d\n",
"bugs": [
"T200055"
],
"subject": "Don't fail hard on bad titles in the database.",
"hash": "e98094956ab61baa73d0753d00b10f782b62e73e",
"date": "2019-11-25T21:15:38"
}
]
},
"includes/specials/SpecialContributions.php": {
"File": "includes/specials/SpecialContributions.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T234450"
],
"Commits": [
{
"message": "SpecialContributions: Use PoolCounter to limit concurrency\n\nAllow using PoolCounter to limit the number of times a user or IP can\nconcurrently load Special:Contributions.\n\nBy default no limitation is applied. Key 'SpecialContributions' in\n$wgPoolCounterConf must be set to configure the concurrency.\n\nBug: T234450\nChange-Id: Ie769fa170093bfb6d281c651d3857545d139e009\n",
"bugs": [
"T234450"
],
"subject": "SpecialContributions: Use PoolCounter to limit concurrency",
"hash": "f3819b6e2ecb3c1f19f71cc6b19b49988e582a84",
"date": "2019-11-19T19:36:35"
}
]
},
"includes/preferences/DefaultPreferencesFactory.php": {
"File": "includes/preferences/DefaultPreferencesFactory.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T231029"
],
"Commits": [
{
"message": "Add more information to exception thrown\n\nIt's not just about the preference name, because current user's properties\nare used for validation too.\n\nBug: T231029\nChange-Id: I268b959017bb0dce2b4295d5302a544bfa3513eb\n",
"bugs": [
"T231029"
],
"subject": "Add more information to exception thrown",
"hash": "d06baad826f3a6420ace88f7c06fdc1e30e26ffb",
"date": "2019-08-27T02:03:09"
}
]
},
"includes/libs/objectcache/wancache/WANObjectCache.php": {
"File": "includes/libs/objectcache/wancache/WANObjectCache.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T238197"
],
"Commits": [
{
"message": "Add more logging to getWithSetCallback()\n\nTo help isolate the referenced bug, which is a sporadic test failure\nin testGetWithSetCallback().\n\nBug: T238197\nChange-Id: If35d60340c804b6bfe1e9ddfcf53c76373c794b1\n",
"bugs": [
"T238197"
],
"subject": "Add more logging to getWithSetCallback()",
"hash": "24812e409670e31d048314f0f3551538fba8c234",
"date": "2019-11-18T05:32:47"
}
]
},
"includes/api/ApiQueryBacklinksprop.php": {
"File": "includes/api/ApiQueryBacklinksprop.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T235316"
],
"Commits": [
{
"message": "Run executeGenderCacheFromResultWrapper with titles in prop=linkshere\n\nThe gender information are only needed when outputting titles,\nin other cases (when only pageids are selected) it results in a\nundefined property\n\nBug: T235316\nFollow-Up: I911dcb160a7b169091b9e8f66fb3908d0f2a1ba4\nChange-Id: I5c9a501919914afd38343551c755126c98d457e6\n",
"bugs": [
"T235316"
],
"subject": "Run executeGenderCacheFromResultWrapper with titles in prop=linkshere",
"hash": "f5d0ecce94eb90165242aa6c489f89da880ffabb",
"date": "2019-10-12T08:40:22"
}
]
},
"includes/search/RevisionSearchResultTrait.php": {
"File": "includes/search/RevisionSearchResultTrait.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T233763"
],
"Commits": [
{
"message": "searchResult: Fix docs for mTitle\n\nIt's documented as Title, but there's plenty of checks for null.\n\nBug: T233763\nChange-Id: Id3b89caaa857000a42534f164eecad8a6e791de4\n",
"bugs": [
"T233763"
],
"subject": "searchResult: Fix docs for mTitle",
"hash": "d40c7e24f8e895430c08d14f9c437b6e18087efe",
"date": "2019-10-07T11:10:34"
}
]
},
"includes/Rest/EntryPoint.php": {
"File": "includes/Rest/EntryPoint.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T206283"
],
"Commits": [
{
"message": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown\n\nSet appropriate headers and flush the output as needed to avoid blocking\nthe client on post-send updates for the stock apache2 server scenario.\nSeveral cases have bits of header logic to avoid delay:\n\na) basic GET/POST requests that succeed (e.g. HTTP 2XX)\nb) requests that fail with errors (e.g. HTTP 500)\nc) If-Modified-Since requests (e.g. HTTP 304)\nd) HEAD requests\n\nThis last two still block on deferred updates, so schedulePostSendJobs()\ndoes not trigger on them as a form of mitigation. Slow deferred updates\nshould only trigger on POST anyway (inline and redirect responses are\nOK), so this should not be much of a problem.\n\nDeprecate triggerJobs() and implement post-send job runs as a deferred.\nThis makes it easy to check for the existence of post-send updates by\ncalling DeferredUpdates::pendingUpdatesCount() after the pre-send stage.\nAlso, avoid running jobs on requests that had exceptions. Relatedly,\nremove $mode option from restInPeace() and doPostOutputShutdown()\nOnly one caller was using the non-default options.\n\nBug: T206283\nChange-Id: I2dd2b71f1ced0f4ef8b16ff41ffb23bb5b4c7028\n",
"bugs": [
"T206283"
],
"subject": "Avoid using \"enqueue\" mode for deferred updates in doPostOutputShutdown",
"hash": "4f11b614544be8cb6198fbbef36e90206ed311bf",
"date": "2019-09-30T22:59:59"
}
]
},
"includes/libs/http/MultiHttpClient.php": {
"File": "includes/libs/http/MultiHttpClient.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T232487"
],
"Commits": [
{
"message": "Revert \"Improve MultiHttpClient connection concurrency and reuse\"\n\nThis reverts commit 46531d62852239f620f7b7c0af1e5747a9006228.\n\nBug: T232487\nChange-Id: I8b1b829197f0f5758a85cb1547e13448d425aed2\n",
"bugs": [
"T232487"
],
"subject": "Revert \"Improve MultiHttpClient connection concurrency and reuse\"",
"hash": "134e933e610b712089feab62736ddb2f88ea4f86",
"date": "2019-09-10T14:56:38"
}
]
},
"includes/installer/CliInstaller.php": {
"File": "includes/installer/CliInstaller.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T231876"
],
"Commits": [
{
"message": "Revert \"Modify -—with-extensions to throw extension dependency errors\"\n\nThis reverts commit d9eec3c9124d87fd44e6917d5b1512b78352afb3.\n\nReason for revert: Breaking most of CI\n\nBug: T231876\nChange-Id: I9b64a2bb770ee2e7ee717669070843814f37e81e\n",
"bugs": [
"T231876"
],
"subject": "Revert \"Modify -—with-extensions to throw extension dependency errors\"",
"hash": "cc7ec36a577161bba06159abd15efddc8a4f745d",
"date": "2019-09-03T16:35:48"
}
]
},
"includes/installer/WebInstallerOptions.php": {
"File": "includes/installer/WebInstallerOptions.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T231876"
],
"Commits": [
{
"message": "Revert \"Modify -—with-extensions to throw extension dependency errors\"\n\nThis reverts commit d9eec3c9124d87fd44e6917d5b1512b78352afb3.\n\nReason for revert: Breaking most of CI\n\nBug: T231876\nChange-Id: I9b64a2bb770ee2e7ee717669070843814f37e81e\n",
"bugs": [
"T231876"
],
"subject": "Revert \"Modify -—with-extensions to throw extension dependency errors\"",
"hash": "cc7ec36a577161bba06159abd15efddc8a4f745d",
"date": "2019-09-03T16:35:48"
}
]
},
"includes/upload/UploadFromChunks.php": {
"File": "includes/upload/UploadFromChunks.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T231488"
],
"Commits": [
{
"message": "Fix call to non-existing TempFSFileFactory::getTempFSFile()\n\nShould be TempFSFileFactory::newTempFSFile()\n\nBug: T231488\nChange-Id: I9fbf7d993773f55965268ac10b347110148671c9\n",
"bugs": [
"T231488"
],
"subject": "Fix call to non-existing TempFSFileFactory::getTempFSFile()",
"hash": "60d5b8022d03fe13c8157c0dd08e404891bd2744",
"date": "2019-08-28T19:08:58"
}
]
},
"includes/gallery/ImageGalleryBase.php": {
"File": "includes/gallery/ImageGalleryBase.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T231340",
"T231353"
],
"Commits": [
{
"message": "BadFileLookup::isBadFile() expects null, not false\n\nThis deviation in behavior from wfIsBadImage() is accounted for in that\nfunction, but I didn't account for it when changing callers to use the\nservice.\n\nBug: T231340\nBug: T231353\nChange-Id: Iddf177770fb1763ed295d694ed6bab441ea9ab73\n",
"bugs": [
"T231340",
"T231353"
],
"subject": "BadFileLookup::isBadFile() expects null, not false",
"hash": "bc0405d52ff578338aa319c948011e973bfc57c2",
"date": "2019-08-27T17:50:36"
}
]
},
"includes/pager/TablePager.php": {
"File": "includes/pager/TablePager.php",
"TicketCount": 1,
"CommitCount": 1,
"Tickets": [
"T231261"
],
"Commits": [
{
"message": "TablePager: put parent construct call back at end\n\nRestores position of parent constructor call, changed in commit\nI082152b64141f1a.\n\nThe parent constructor calls getIndexField(), which depends on the\nmSort value already being set.\n\nBug: T231261\nChange-Id: If07f10075a51fbbe9de24464cb6844faaad94780\n",
"bugs": [
"T231261"
],
"subject": "TablePager: put parent construct call back at end",
"hash": "28a221825fbcf02f20ad1b228c2f00b70d6d9a92",
"date": "2019-08-27T04:43:01"
}
]
},
"includes/Storage/RevisionLookup.php": {
"File": "includes/Storage/RevisionLookup.php",
"TicketCount": 2,
"CommitCount": 1,
"Tickets": [
"T184559",
"T183548"
],
"Commits": [
{
"message": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"\n\nThis is a partial revert of a revert that reverted a fix believed to\nhave had its underlying issue fixed in:\nhttps://gerrit.wikimedia.org/r/#/c/400577/\n\nThe compat layer (Revision), now passes a Title object into the\nRevisionStore, and this title is used to construct the Record and\nalso any new Revision objects.\n\nBug: T184559\nBug: T183548\nChange-Id: Id073265c173f60aa8c456550fdb4bb5196013be8\n",
"bugs": [
"T184559",
"T183548"
],
"subject": "Revert \"Revert \"[MCR] Add and use $title param to RevisionStoregetPrevious/Next\"\"",
"hash": "3e2fdb71ed8dab35934ce289d5e559153326028c",
"date": "2018-01-10T17:05:53"
}
]
}
}