The HTTPS connection is terminated at the proxy server so the app server sees the traffic coming in on port 80. Presumably the proxy will set X-Forwarded-Proto so you can use something like RewriteCond %{HTTP:X-Forwarded-Proto} http instead.
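A minimal sketch of that rule in Apache config (assuming the proxy reliably sets X-Forwarded-Proto on every request; the redirect target and flags are illustrative, not from any particular setup):

```apache
# Redirect plain-HTTP requests to HTTPS behind a TLS-terminating proxy.
# The app server itself only sees port-80 traffic, so %{HTTPS} is useless here;
# the proxy-supplied header is the only signal.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```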
Jul 2 2019
Jul 1 2019
Dashboard: https://logstash-beta.wmflabs.org/app/kibana#/dashboard/AWukuPDnAbPxH_YmoOuJ
- tags (which contain lots of useful information such as MediaWiki version, language, skin...) are handled extremely poorly by Kibana in their current format. They should be converted into top-level properties (preferably with field names matching those in PHP errors, so that Kibana visualisations / saved searches can be reused). Not sure what the right place for that is: the client, EventGate, or a Logstash filter? (The last has its own domain language, so it would be the most painful to maintain.)
- the huge ResourceLoader URLs make stack traces visually noisy and the culprit (the name of the script where the error occurred) useless. That will be fixed by source maps at some far-future date, but for now it would be nice to have function-name-only stack traces, and to display the function name from the top stack frame as the culprit. I see no way to do that in Kibana (via the GUI, at least), so again that would require some preprocessing.
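For illustration, the two preprocessing steps above could look roughly like this; the field names follow the sample event elsewhere on this page, but the function itself is hypothetical (where it would run — client, EventGate, or a Logstash filter — is exactly the open question):

```python
# Hypothetical preprocessing sketch; "original_tags", "stacktrace" etc. are
# field names from the sample client-error event, the function is mine.

def preprocess(event):
    # Lift [key, value] tag pairs to top-level fields; skip bare string tags
    # such as "kafka", and don't overwrite fields that already exist.
    for tag in event.get("original_tags", []):
        if isinstance(tag, list) and len(tag) == 2:
            key, value = tag
            event.setdefault(key, value)

    # Replace the ResourceLoader-URL culprit with the function name from the
    # top stack frame (the last frame in the sample event), when it has one.
    frames = event.get("stacktrace", {}).get("frames", [])
    if frames and frames[-1].get("function", "?") != "?":
        event["culprit"] = frames[-1]["function"]
    return event
```

Applied to the sample event, this would add skin, language, version, etc. as top-level fields and set the culprit to sortDependencies.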
Discourse uses markdown so you could try one of the existing markdown -> wikitext converters, like Pandoc.
Probably not worth putting much effort into, now that the video player is about to be replaced.
On a closer look, the patch and the task description seem to be about different issues - the patch removes the (dysfunctional) View Source button, while the task talks about Edit Source.
Jun 30 2019
Thanks @Jamesmontalvo3 for originally reporting the issue and @MarkAHershberger for tracking down the exact trigger!
As for granular enabling, I'd imagine we want to use channels like 'parsoid.trace.tsp' and make the channel configuration fall back to 'parsoid.trace' if unspecified. That would require some changes to Wikimedia config, but doesn't seem too difficult.
The main difference is that Parsoid supports lazy evaluation via closures, while MediaWiki logging doesn't (neither the abstraction layer, PSR-3, nor the specific implementation used in production, Monolog). It's easy to write a Monolog processor for evaluating closures, so that seems like the most likely path forward.
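The closure-evaluation idea can be illustrated language-neutrally; this Python sketch (names are mine, not Monolog's) does what such a Monolog processor would do in PHP: walk the record's context and replace any zero-argument callable with its return value, so the expensive computation only runs for records that actually get logged.

```python
# Illustration only; a real Monolog processor would be a PHP callable doing
# the equivalent transformation on the record's context array.

def evaluate_closures(record):
    record["context"] = {
        key: (value() if callable(value) else value)
        for key, value in record.get("context", {}).items()
    }
    return record

# The expensive dump is only built when this processor runs, i.e. when the
# record actually reaches a handler.
record = {"message": "tracing", "context": {"dump": lambda: "big-serialized-tree"}}
```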
It shouldn't be <a class="external" href="-{R|https://www.mediawiki.org/wiki/Help:OAuth/Errors#E009}-">E009</a>
Note that deleting a thread and purging it from storage are different things. The latter will be needed here for compliance with the data retention policy.
Jun 29 2019
@Halfak says the ores Python package can be used to produce it.
@Halfak can I help move this along? If you can specify the conditions and the expected format, I can produce a dataset.
Jun 28 2019
{
  "_index": "logstash-2019.06.28",
  "_type": "clienterror",
  "_id": "AWuePijpAbPxH_Ymd-b9",
  "_score": 1,
  "_source": {
    "exception": { "type": "Error", "value": "Unknown module: jquery.wikibase.sitelinkview" },
    "request": {
      "headers": { "User-Agent": "*SNIP*" },
      "url": "https://wikidata.beta.wmflabs.org/wiki/Q15905"
    },
    "$schema": "/client/error/1.0.0",
    "level": "ERROR",
    "logger": "javascript",
    "project": "4",
    "message": "Unknown module: jquery.wikibase.sitelinkview",
    "type": "clienterror",
    "platform": "javascript",
    "normalized_message": "Unknown module: jquery.wikibase.sitelinkview",
    "tags": [ "es", "normalized_message_untrimmed" ],
    "culprit": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector",
    "@timestamp": "2019-06-28T13:20:02.266Z",
    "stacktrace": {
      "frames": [
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "97", "colno": "386", "function": "?" },
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "97", "colno": "190", "function": "?" },
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "21", "colno": "126", "function": "load" },
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "9", "colno": "656", "function": "resolveStubbornly" },
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "9", "colno": "306", "function": "sortDependencies" },
        { "in_app": "true", "filename": "https://wikidata.beta.wmflabs.org/w/load.php?lang=en&modules=startup&only=scripts&raw=1&skin=vector", "lineno": "8", "colno": "792", "function": "sortDependencies" }
      ]
    },
    "meta": { "stream": "client.error" },
    "original_tags": [ [ "debug", "false" ], [ "ns", "0" ], [ "page_name", "Q15905" ], [ "skin", "vector" ], [ "action", "view" ], [ "language", "en" ], [ "source", "resolve" ], [ "version", "1.34.0-alpha" ], [ "user_groups", [ "*" ] ], "input-clienterror-eqiad", "kafka", "truncated_by_filter_truncate" ],
    "@version": "1"
  },
  "fields": { "@timestamp": [ 1561728002266 ] }
}
See T226640: ReadingLists CI broken. Should be fixed with the next MediaWiki deploy.
Jun 27 2019
In T226766#5290449, @Reedy wrote: 1.27 isn't supported ;)
composer.json@1b990608 (which is currently pinned in MW 1.31) and composer.json@2018.1.2 are identical apart from dev requirements, so probably the simplest fix for 1.31 is replacing "jetbrains/phpstorm-stubs": "dev-master#1b9906084d6635456fcf3f3a01f0d7d5b99a578a" with "jetbrains/phpstorm-stubs": "2018.1.2#1b9906084d6635456fcf3f3a01f0d7d5b99a578a".
Per codesearch Translatewiki is the only code currently affected (and that's not really publicly released software; still, heads-up, @Nikerabbit). Not sure how to check which past releases are affected.
@Majavah just started working on site requests as a new volunteer, maybe they have some insight on what needs to be better documented.
Per IRC discussion, beta Logstash needs to be configured to ingest from the eqiad.client.error Kafka topic.
Presumably this refers to the founding principles, which say "almost anyone [should be able] to edit (most) articles without registration". I don't think that automatically extends to discussion spaces (most of which already require an account of some sort - e.g. you need to register an email address to participate on the mailing lists). Articles are seen as the point of entry for new contributors, so keeping the barrier to entry as low as possible helps (and you don't need any familiarity with the community or movement to spot a spelling error). If the Wikimedia Space is similarly seen as a contact surface with people completely new to the movement, keeping the barrier to entry as low as possible is important. If it's mainly seen as a communication space for people who are already in the movement, then SUL login should suffice. Those both seem like potentially valid approaches to me.
Jun 26 2019
Tested with https://en.wikipedia.beta.wmflabs.org/wiki/User:Tgr/common.js , the errors are sent to https://eventgate-logging.wmflabs.org/v1/events and result in a 201. I don't see anything in https://logstash-beta.wmflabs.org , maybe there's some processing delay?
Apparently rMWd9f688698ce0: rdbms: clean up and refactor ResultWrapper classes has changed the indexing of ResultWrapper from 1-based to 0-based, so query continuation in Reading Lists was broken for almost a month :(
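A hypothetical illustration of the failure mode (this is not the actual ReadingLists code; names and data are made up): paging logic written for 1-based result indexing returns the wrong slice once the wrapper becomes 0-based.

```python
# limit + 1 rows are fetched; the extra row signals that continuation is needed.

def page_one_based(rows, limit):
    wrapper = dict(enumerate(rows, start=1))   # old 1-based ResultWrapper behaviour
    return [wrapper[i] for i in range(1, limit + 1)]

def page_zero_based(rows, limit):
    wrapper = dict(enumerate(rows, start=0))   # behaviour after the refactor
    # The same paging code now drops the first row and leaks the extra
    # continuation row into the result.
    return [wrapper[i] for i in range(1, limit + 1)]

rows = ["a", "b", "c"]   # limit = 2, so 3 rows fetched
# page_one_based(rows, 2) == ["a", "b"]
# page_zero_based(rows, 2) == ["b", "c"]
```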
In T114384#5285151, @Xover wrote: If there is a good central backlog listing the planned breaking changes accompanied by analysis of what needs fixing (which scripts call a deprecated API method, say), some guidance on how to fix it, and developer support when community volunteers get in over their head. Most technically minded community members I know of (those working on scripts and bots, say) are perfectly happy to help in this way.
We had no issues for the last 20 days. I imagine if anything broke we would know by now.
Probably some permission check somewhere that's not included in the grant? Can't find it at a glance, though.
As for the main error: an UPDATE has no effect even though the row it is guaranteed to affect has just been write-locked... no idea what could be going on there.
In T226593#5284764, @Krinkle wrote: The same requests also have the following error emitted, shortly before it reaches the fatal:
PHP Notice: Undefined property: stdClass::$rle_id
Jun 25 2019
Does it also happen with JavaScript disabled?
In T226448#5282478, @Tgr wrote: That does explain it, actually. The patch replaces wfFindFile (which proxied to LocalRepo::findFile) with $localRepo->findFile (despite the name, $localRepo is RepoGroup, not LocalRepo). RepoGroup::findFile calls Repo::findFile but has an extra layer of in-process cache which LocalFileMoveBatch does not invalidate. So probably something somewhere earlier did a RepoGroup::findFile call for the move target, false got cached, and it gets loaded from the cache in the lines before DeferredUpdates::addUpdate.
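A sketch of that caching failure in miniature (class and method names loosely echo the MediaWiki ones, but this is not the actual code): a process-level cache that stores negative lookups returns stale misses when the operation that creates the file never purges the entry.

```python
class RepoGroupStub:
    def __init__(self, storage):
        self.storage = storage   # ground truth: title -> file object
        self.cache = {}          # in-process cache; caches misses (False) too

    def find_file(self, title):
        if title not in self.cache:
            self.cache[title] = self.storage.get(title, False)
        return self.cache[title]

    def move(self, old, new):
        self.storage[new] = self.storage.pop(old)
        # BUG: the in-process cache is not invalidated for either title.

repo = RepoGroupStub({"Old.jpg": "file-object"})
repo.find_file("New.jpg")    # something looks up the move target: caches False
repo.move("Old.jpg", "New.jpg")
repo.find_file("New.jpg")    # still False: the stale negative entry wins
```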
Reverting rMW21e2d71560cb: Replace some uses of deprecated wfFindFile() and wfLocalFile() in full does make the error go away.
Hm, fixing that did not fix the error. (And reproducing it locally means it's not replag-related.) So I guess that leaves the cache in LocalFile::load (which also does not seem to be reset during a move); not sure what triggered the change in behavior, though.
In T226448#5281861, @Tgr wrote: They aren't equivalent, the first returns a repo, the second a repo group.
Train blockers are UBN. It's also a nuisance to Commons users per T226473: Caching issues when moving files.
In T226448#5281778, @Reedy wrote: Seems a bit race-y
tgr@stat1006:~$ analytics-mysql enwiki --use-x1
Jun 24 2019
This would be a lot easier on top of T226428: Convert stdClass-cast objects to classes wherever possible and use associative arrays elsewhere as far as possible - isset/empty is usually needed because it is hard to guarantee that array/stdClass fields are always set, a problem classes do not have.
Seems like the WHATWG is winning this one.
Jun 23 2019
Also, even in the gate pipeline build, the syntax error was only caught in the unit tests, which suggests there is no HHVM linting whatsoever (presumably that would happen before phpunit), and that if a class has no tests, syntax errors aren't caught at all until they reach production. That seems bad.
Jun 22 2019
The POST equivalent seems flaky as well:
https://integration.wikimedia.org/ci/job/mediawiki-quibble-vendor-postgres-php72-docker/1011/console
21:10:35 1) Rollback without confirmation should perform rollback via POST request without asking the user to confirm:
21:10:35 Expected rollback page to appear.
21:10:35 running chrome
21:10:35 Error: Expected rollback page to appear.
21:10:35     at elementIdText("0.1626589557410456-1") - getText.js:35:50
(patch is a no-op)
Not sure if this makes sense in light of T96384: Integrate file revisions with description page history.
Jun 21 2019
See T182266: Error "TransportException 404 Not Found" in Jenkins jobs using composer for similar issues in the past. Sometimes it was due to upstream problems, sometimes not.
https://wikimedia.org/.well-known/matrix/server works correctly. https://wikimedia.org/.well-known/matrix/client is loaded via AJAX, though, and complains about the lack of CORS headers.
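If the file were served by Apache, a sketch of the missing headers might look like this (the path matches the URL above, but the block and the exact header set are assumptions about the setup, not its actual config; requires mod_headers):

```apache
# Browser-based Matrix clients fetch this cross-origin, so the spec expects
# permissive CORS headers on the client well-known file.
<Location "/.well-known/matrix/client">
    Header set Access-Control-Allow-Origin "*"
    Header set Access-Control-Allow-Methods "GET, HEAD, OPTIONS"
    Header set Content-Type "application/json"
</Location>
```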
Jun 20 2019
Users with 5-24 edits are up too, that's encouraging (unless that stat somehow includes anons): https://stats.wikimedia.org/v2/#/hu.wikipedia.org/contributing/editors/normal|line|all|activity_level~5..24-edits|monthly
(Note to self because someone asked and I had to search for it: the original enabling was in ~~2009 June~~ 2008 Nov: T17568#198689.)
In T223835#5271300, @Joe wrote: @Tgr just to be sure, you just want the url https://wikimedia.org/.well_known to be served from a static file?
Not limited to dewiki / Germany (unsurprisingly); there have been a bunch of reports from huwiki / Hungary as well.
There have been complaints on huwiki as well (and partial / failed page loads, presumably due to some kind of timeout; I have seen a few myself). Don't know if it's related. It has definitely started sooner than the group1 deployment, though.
Jun 19 2019
In T225628#5267723, @hashar wrote: php7.0 went EOL in January 2019 and we released MediaWiki 1.32 the same month claiming to support php7.0. Most probably php7.0 should have been formally dropped at that point.
Jun 18 2019
Jun 17 2019
Gadget support: developers of on-wiki code (templates, modules and gadgets) get little support. We have a team for supporting external tool developers, at least one for supporting developers of production code, and something similar for third-party code, but the gadget development environment is still the Stone Age, with no testing, CI, or code review, and barely any logging. There isn't even a central platform to store the code, so every wiki has to reinvent the wheel. There were several attempts to improve things but they all stalled due to lack of proper resourcing. Meanwhile, the importance of this kind of development is hard to overstate - templates/modules are a core component of all nontrivial wiki workflows and of the reader UI, and gadgets are probably the most used of the volunteer-maintained tools. It would be nice to see some improvements there.
Modern release management: complex web applications these days usually try to control the stack they run on, via some manner of containerization. MediaWiki, in contrast, tries to support a huge range of potential systems and services, and mostly fails (in theory we support five DB engines but few extensions actually work on more than two; key features like WYSIWYG editing are impossible to install on the overwhelming majority of MediaWiki installations). Not being able to assume anything about the system degrades the default user experience (out-of-the-box search is poor, out-of-the-box logging is poor, documentation is a confusing mess of trying to explain how to perform the same tasks on dozens of different systems), and we are barred from various potentially valuable technology choices like isomorphic rendering.
Code review: @TJones already mentioned this, but it's probably our largest problem today. Code review for code that another team or volunteer is not actively working on tends to take months at best. There are no incentives for staff members to do code review (not even for code review of other staff, due to the restricted annual review format, much less for volunteers), while spending the same time on e.g. a side project will much more likely result in that work being celebrated. There is (at least in theory) 10% of paid engineering staff time set aside for experimentation, but 0% for supporting volunteers who write code. We use an older version of Gerrit which makes reviews of complex patches challenging to follow.
Some more potential topics:
Also, the cache expiration is 30 days, so a request involving lots of cache misses would not be that unusual. The API should probably be modified to limit the number of uncached extmetadata lookups and force continuation when the limit is reached.
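One possible shape for such a limit, as a hedged sketch (the function name, budget, and continuation convention are all made up, not the imageinfo API's): stop once a budget of expensive cache misses is spent and tell the client where to continue.

```python
def fetch_extmetadata(titles, cache, compute, max_misses=2):
    """Serve cached titles freely, but compute at most max_misses uncached
    ones per request; return the remaining batch as a continuation index."""
    results = {}
    misses = 0
    for index, title in enumerate(titles):
        if title in cache:
            results[title] = cache[title]
            continue
        if misses >= max_misses:
            return results, index    # continue-from index for the next request
        cache[title] = compute(title)
        results[title] = cache[title]
        misses += 1
    return results, None             # whole batch served, no continuation
```

Cached titles never count against the budget, so a fully warm batch completes in one request regardless of size.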
Reasons I can think of off the top of my head:
- Some other part of the imageinfo API is slow. (extmetadata is cached via FormatMetadata::fetchExtendedMetadata() but the API call itself is not.)
- FormatMetadata::fetchExtendedMetadata() itself is slow. It has dynamic cache invalidation (even if it is a cache hit, the ValidateExtendedMetadataCache hook gets invoked) so while unlikely it is not impossible.
- Some broken ValidateExtendedMetadataCache hook. (A bug in this recent patch, for example.)
- The cache (correctly) getting invalidated all the time due to frequent edits coming from SDC. (Theoretically, a change in the structured data shouldn't invalidate it, but this is pre-MCR code and not slot-aware.)
- Some bug affecting the caching logic (e.g. File::getDescriptionTouched() broke).
In T127640#5260781, @Aklapper wrote: In the discussion about potential other tools, which options have been proposed and discussed? For example, has Zulip been investigated? Could the communication requirements which Stewards have be shared somehow/somewhere?
In T225628#5262875, @Jdforrester-WMF wrote: No, that's specifically what Antoine is proposing in this task. Dropping formal MW support for PHP 7.0 is T216165.
Jun 16 2019
In T225628#5254616, @hashar wrote: php 7.0 is end of life, and if we were to support it for MediaWiki 1.31, that means we get to port php 7.0 until January 2021. Or 3 years after that PHP version got EOL. At that point, I don't think those tests would be any helpful since nobody would still be running PHP 7.0. In facts, nobody should be running php 7.0 already in June 2019.
In T225871#5260937, @MaxSem wrote: I would actually object to this: imagine your change has caused multiple test failures that you weren't able to predict in your dev environment (because you didn't have all extensions installed or your environment is otherwise different from our CI). You'll have to amend your PR with one fix at a time and push it just to see what explodes next.
In T224920#5259766, @JoeWalsh wrote: If there's no significant performance penalty on the MW API for getting additional fields, I'd say it's worth just returning everything