Reopening for 3rd party migration and MW core cleanup.
Tue, Apr 9
Mon, Apr 8
There were a few duplicate key errors while the change was half-deployed, the last at 23:23:00.
Sun, Apr 7
Thanks for the report @labster. I can accept a Gerrit patch along these lines. It looks like you do have an account in Gerrit already.
Fri, Apr 5
I couldn't reproduce this, with PHP 8.2.15, xdebug disabled, and excimer locally compiled from the current git master.
Thu, Apr 4
CriticalSectionScope is not meant to remain alive until the end of the process. That's the whole point of it: to have scope lifetime.
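For context, a minimal sketch of the intended usage pattern, based on the wikimedia/request-timeout library (treat the exact method signature as illustrative):

use Wikimedia\RequestTimeout\CriticalSectionProvider;

function updateSomething( CriticalSectionProvider $csProvider ) {
    // Enter the critical section; the returned CriticalSectionScope exits it
    // automatically when it is destructed at the end of this function.
    $scope = $csProvider->scopedEnter( __METHOD__ );

    // ... the short, non-interruptible work goes here ...

    // Storing $scope in a long-lived object, or otherwise never letting it go
    // out of scope, keeps the critical section open and defeats the purpose.
}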
Wed, Apr 3
This should only happen if your IP address is in a /24 subnet (or /64 for IPv6) that hasn't been used for login in the past 80 days. Can you comment on whether that is likely to be the case?
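For reference, roughly how the bucketing described above can be computed (an illustrative sketch using Wikimedia\IPUtils; the real LoginNotify code may differ):

use Wikimedia\IPUtils;

// Two addresses are treated as "known" if they fall in the same /24 (IPv4)
// or /64 (IPv6) range. Illustrative only.
function subnetBucket( string $ip ): string {
    $bits = IPUtils::isIPv6( $ip ) ? 64 : 24;
    return IPUtils::sanitizeRange( "$ip/$bits" );
}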
Tue, Apr 2
Note that the other classes extended by GlobalPreferences (DefaultPreferencesFactory, PreferencesFormOOUI and SpecialPreferences) are also not marked stable to extend.
MaxSem refactored ApiOptions to allow GlobalPreferences to extend it. He just didn't add @stable to extend because his work predated the introduction of those annotations.
In T323076#9678101, @tstarling wrote: As such, it seems to me that the apiwarn-globally-overridden warning should have been an error.
In T323076#8394346, @Jdlrobson wrote: (unless we resort to hacky string indexOf checks on the warning).
Shortcuts for zoom will be provided to avoid clicking and repositioning: shift+scroll and the "+" and "-" keys will increase/decrease the zoom level, centering the zoom area on the current mouse position.
On T198913 we had multiple engineers arguing that users should be informed of global preference updates or overrides, so the default behaviour of action=options, where there is a non-overridden global preference and the extension has not suitably informed the user, should be to fail. As such, it seems to me that the apiwarn-globally-overridden warning should have been an error.
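To illustrate the distinction (a hypothetical sketch only; apierror-globally-overridden is an invented message key, not an existing one):

// Current behaviour: a non-fatal warning that API clients can easily miss.
$this->addWarning( [ 'apiwarn-globally-overridden', $key ] );

// Behaviour argued for above: refuse the update outright.
$this->dieWithError( [ 'apierror-globally-overridden', $key ], 'globally-overridden' );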
Sat, Mar 30
I found 1.20.6 on a random website. I think the only one we're missing is 1.20.7.
Thu, Mar 28
It's unlikely the Internet Archive or any crawler would have files that were generated in September 2013 and reported missing in December 2013. Maybe community members would have them, but the right time to ask was December 2013.
OK, well if they were already missing in 2014, I'm not going to find them in a 2018 archive.
In T349462#9667721, @Pppery wrote: Anything left to do here?
Legoktm uploaded all MediaWiki tarballs from releases.wikimedia.org to the Internet Archive in 2018. I should be able to recover the remaining missing tarballs from there.
Wed, Mar 27
I can set up a wall time limit, but it seems abusive to queue unlimited Transkribus jobs without any plans to check their responses.
I uploaded the following release tarballs from my personal archives, which mostly derive from a copy I made of the SourceForge files section in 2009. I retroactively designated the dated snapshots of 2003 as "1.0" for clarity when navigating the top-level directory. There was no other 1.0 and they immediately preceded 1.1 in the release notes. For files which already existed on releases.wikimedia.org, I confirmed that the MD5 hash was the same before removing them from the following list.
I investigated this, but the cause was not obvious from the logs. It wasn't out of memory. If it happens again, I would suggest getting the following information before restarting apache:
We (Brad Jorsch and I) didn't want random numbers in Scribunto because it encourages an inefficient implementation of things like "spotlight" templates that show a random featured article from a list of such articles. We want to cache the output from Scribunto but then people will see the same random selection for months at a time, so users will inevitably try to defeat caching or incentivize purge requests.
Mar 26 2024
PhpRedis is getting behind KeyDB with #2466 and I encouraged them along that path with a small PR of my own. I think all we need to do for now in MediaWiki is update our documentation to say that KeyDB is supported.
Mar 25 2024
Testing my fix for this, it's interesting that the case of searching for all namespaces with a specified title part is not reachable by submitting the form. The browser always submits an empty string for wpNamespaceRestrictions which is interpreted as namespace 0.
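A minimal illustration of why that happens (not the actual form-handling code): splitting the submitted empty string and casting to integer yields namespace 0 rather than "all namespaces".

$raw = '';                                        // what the browser submits
$ids = array_map( 'intval', explode( "\n", $raw ) );
// explode() on an empty string returns [ '' ], and intval( '' ) is 0,
// so the restriction becomes [ 0 ] (NS_MAIN) instead of "no restriction".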
EXPLAIN says:
I gather that you're trying to show all linter rows, not all pages, in which case the join should not be a left join.
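Roughly what that looks like with the query builder (table and field names assumed for illustration, not taken from the actual Linter code):

$rows = $dbr->newSelectQueryBuilder()
    ->select( [ 'linter_id', 'page_title' ] )
    ->from( 'linter' )
    // An inner join drops pages with no linter rows; leftJoin() would keep them.
    ->join( 'page', null, 'page_id = linter_page' )
    ->caller( __METHOD__ )
    ->fetchResultSet();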
This is Special:BlockList with the "Hide single IP blocks" and "Hide range blocks" boxes both checked. In this case we only want user blocks. A simpler condition filtering for user blocks appears to solve the issue.
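In query-builder terms the simpler condition is on the order of the following (bt_user is from the new block_target schema; treat this as a sketch, not the actual patch):

// With both "hide single IP blocks" and "hide range blocks" checked, only
// user blocks remain, so select them directly rather than excluding the rest.
$queryBuilder->andWhere( 'bt_user IS NOT NULL' );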
Another slow query: T360864.
Thanks @dom_walden. I ran those three queries in production on enwiki, and I got times of 149ms, 479ms and 7ms, the last presumably due to a warm cache. Running the third query on a different server took 73ms.
Thanks @dom_walden. Looks like performance should be acceptable. Let's deploy it again and see how it goes.
Mar 24 2024
I would just like an explicit, maximally integrated regression test for this bug. By maximally integrated, I mean testing as many layers as possible while still voting on MediaWiki core. Like this...
Mar 22 2024
The logs show that there have been no more instances of "Failed opening required '...FormatJson.php'". Other ConfirmEdit-related issues can be discussed elsewhere.
Mar 21 2024
I think the likely cause is https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1008569 . Hidden form fields are weird -- getInputHTML() returns an empty string, and instead getDiv() or getTableRow() adds an item to the form's mHiddenFields. The patch added HTMLFormField::getCodex() which calls getInputCodex() which calls getInputHTML(), and none of this does the special hidden field side-effect.
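A simplified sketch of the pattern described above (not the actual HTMLForm code): the hidden field type renders nothing inline and instead registers itself on the parent form, so a new rendering path that only calls getInputHTML() silently drops it.

class HTMLHiddenField extends HTMLFormField {
    public function getTableRow( $value ) {
        // Side effect: the field is emitted later from the form's mHiddenFields.
        $this->mParent->addHiddenField( $this->mName, $value );
        return '';
    }
    public function getInputHTML( $value ) {
        // Nothing to render inline, so getCodex()/getInputCodex() produce nothing
        // unless they replicate the addHiddenField() side effect.
        return '';
    }
}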
I can reproduce this locally so I suggest rolling back the train. We're not gaining anything by having this be deployed.
There was no captchaId field in the HTML of the form. There's meant to be a hidden field identifying the captcha, but it was missing.
I loaded the account creation page on testwiki, got the captcha ID, dumped the stored info with eval.php, then submitted the form with XWD verbose logging. In the post request debug log, the ID it used in the memcached fetch did not match what I saw on the form.
Were there any user reports? What happened when you tried to do a captcha-protected action? Did the image load? Or was the answer supposedly incorrect?
Mar 19 2024
I thought there was no cross-DC replication of thumbnails. T299125#8221206 seems to support that. So it's expected that a bad file created by T344233 would only affect one swift DC.
Mar 17 2024
Mar 15 2024
I think the only other potentially affected query is the one in DatabaseBlockStore::newLoad().
We used that subquery in 9 different places, so ideally I'd like to save it.
MariaDB [enwiki]> explain SELECT 1=0 AS `hu_deleted`,user_name,user_id FROM `user` WHERE (user_name LIKE 'M%' ESCAPE '`' ) AND (NOT EXISTS (SELECT 1 FROM `block_target` `hu_block_target` JOIN `block` ON ((bl_target=hu_block_target.bt_id)) WHERE (hu_block_target.bt_user=user_id) AND bl_deleted = 1 )) ORDER BY user_name LIMIT 11;
+------+--------------+-----------------+-------+-----------------+-----------+---------+------------------------------+---------+--------------------------+
| id   | select_type  | table           | type  | possible_keys   | key       | key_len | ref                          | rows    | Extra                    |
+------+--------------+-----------------+-------+-----------------+-----------+---------+------------------------------+---------+--------------------------+
|    1 | PRIMARY      | user            | range | user_name       | user_name | 257     | NULL                         | 7763136 | Using where; Using index |
|    2 | MATERIALIZED | hu_block_target | range | PRIMARY,bt_user | bt_user   | 5       | NULL                         |  810518 | Using where; Using index |
|    2 | MATERIALIZED | block           | ref   | bl_target       | bl_target | 4       | enwiki.hu_block_target.bt_id |       1 | Using where              |
+------+--------------+-----------------+-------+-----------------+-----------+---------+------------------------------+---------+--------------------------+
3 rows in set (0.004 sec)
Mar 14 2024
I reverted the deployment of read-new mode due to slow queries. I analysed the slow query logs and found four categories of slow query errors, and I filed tasks for each: T360088, T360160, T360163, T360165. I will fix those bugs, and when the fixes reach production, we can continue with the deployment.
I reviewed all the patches linked from T346293, and I think the only other potentially affected query is the one in DatabaseBlockStore::newLoad().
At first glance, looking at the row counts, I thought it was the subquery. But in fact the problem is the OR. You can take the subquery out, and it's still slow, and note the choice of bl_timestamp when it's just trying to find a few targets. If you take out the OR, then it's fast. I tried forcing bl_target instead of bl_timestamp but it was still slow.
In T359032#9628853, @tstarling wrote: As I just wrote on T307816, it's not an opcache bug. The file handle limit is exceeded, then it tries to report an error without closing any of the file handles, so error reporting also fails.
I tried to reproduce this bug by following the procedure in the task description. I made a source tree with MW 1.35.14 and copied 1.37.2 over the top of it, producing a mix of 1.35 and 1.37 files. I compiled PHP 8.1.2 from source and tried to reproduce or model the error in a few different ways. But I didn't get anything particularly close to what is described.
Mar 13 2024
As I just wrote on T307816, it's not an opcache bug. The file handle limit is exceeded, then it tries to report an error without closing any of the file handles, so error reporting also fails.
[edit -- remove wrong comment]
I ran migrateBlocks.php again. Now the only mismatches are:
Mar 12 2024
The issue with logs not being rotated is apparently fixed. If there are some remaining issues with mwlog02, they should probably be discussed on a task that is not "UBN" priority.
Mar 11 2024
I ran @dom_walden's SQL script P58700. It's apparent from the results that the script failed to complete on 34 wikis, exiting with a duplicate key error, which I missed because I didn't capture stderr from the script. The errors can be found in logstash.
We're not declining it; we're fixing it as part of the schema change migration script. It didn't magically fix itself: we had to add a special case to the migration script in order to fix it.
Mar 10 2024
From my personal archives, dated 2004-07-15.