Given the data we have, I think we can use:
Verified the fix on enwiki front page using Opera 11.64
I've just tested the current stable Opera out of curiosity, and it does (unsurprisingly) render our webps correctly.
Indeed. I've installed 11.64 and even the lossy ones don't work. And it does advertise webp support in request headers:
I guess this means that these older Opera versions send request headers stating that they accept webp when they're in fact incapable of rendering them?
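That failure mode is easy to picture with a sketch of header-based negotiation. This is a hypothetical Python illustration, not our actual Varnish logic: any client whose Accept header lists image/webp gets served the webp variant, which is exactly how a browser that advertises support it doesn't actually have (like Opera 11.64) ends up with broken images. The Accept header string below is an approximation of what old Opera sends.

```python
def prefers_webp(accept_header: str) -> bool:
    """Return True if the Accept header advertises image/webp."""
    return any(
        part.split(";")[0].strip().lower() == "image/webp"
        for part in accept_header.split(",")
    )

# Approximation of an Opera 11.x Accept header: it lists image/webp,
# so purely header-based negotiation would hand it a webp it cannot decode.
opera_accept = ("text/html, application/xml;q=0.9, image/png, "
                "image/webp, image/jpeg, */*;q=0.1")
print(prefers_webp(opera_accept))  # True, despite broken rendering
```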
No concerns have been raised by Legal or Analytics since I emailed them a week ago about origin trials; I think this confirms that we can use them.
While Feature Policy was implemented in Chrome some time ago, and the reporting feature has been added as well, the "report only" mode hasn't been implemented yet:
Fri, Nov 16
I think this is the likely culprit:
This all looks very reasonable to me, thanks.
Thu, Nov 15
Have you diffed the output coming from HHVM and PHP7, to ensure that they're generating the same HTML for these pages?
Wed, Nov 14
I've briefly checked ~100,000 image requests on an esams Varnish server; with the current threshold, only 0.035% of image responses are image/webp.
In the first example, what's the gzipped change for the whole page, not just the metadata? Make sure that you're using gzip settings similar to what we use in production, as compression effectiveness may vary depending on the settings. It's also important that the made-up article name in the metadata matches the actual article you're testing this on; otherwise you're introducing text differences that won't compress well and that never happen in the real world. Similarly, an image path that long isn't actually possible: image paths are based on the corresponding file page title, which is limited to 255 bytes: https://www.mediawiki.org/wiki/Page_title_size_limitations
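The measurement being asked for can be sketched in a few lines: gzip the whole page with and without the extra metadata and compare sizes. Level 6 is a common default; the actual production compression level is an assumption here and should be matched to whatever the frontend caches use. The HTML and metadata below are made-up stand-ins.

```python
import gzip

def gzipped_size(html: str, level: int = 6) -> int:
    """Size of the page after gzip at the given compression level."""
    return len(gzip.compress(html.encode("utf-8"), compresslevel=level))

# Made-up page: the metadata deliberately repeats a string from the body,
# mirroring the point that matching text compresses away almost entirely.
base_html = ("<html><body>"
             + "<p>Some repetitive article text.</p>" * 200
             + "</body></html>")
metadata = '<meta property="og:title" content="Some repetitive article text.">'
with_meta = base_html.replace("<body>", "<body>" + metadata, 1)

delta = gzipped_size(with_meta) - gzipped_size(base_html)
print(f"gzipped cost of the metadata: {delta} bytes")
```

Because the metadata repeats body text, the gzipped delta comes out far smaller than the raw byte count of the tag.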
Mon, Nov 12
I've clarified one last point, which is how they determine that a feature is used (counting towards the quota). It depends on the feature, but for JS APIs, mere existence checks won't count as usage. E.g.
Fri, Nov 9
Thu, Nov 8
Header name + data is 55 bytes long. We have 1.2 billion thumbnails in Swift. That's 66 GB of data, which represents 0.02% of our Swift storage space. Not earth-shattering savings, I'll grant you that, but I think we should get rid of data we don't need. It also affects speed a little when fetching thumbnails from Swift.
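The back-of-the-envelope numbers above check out, keeping in mind that 66 is the decimal-GB figure (it's about 61.5 GiB in binary units):

```python
header_bytes = 55
thumbnails = 1_200_000_000

total = header_bytes * thumbnails
print(total / 10**9)             # 66.0 (decimal GB)
print(round(total / 2**30, 1))   # 61.5 (binary GiB)

# If 66 GB is 0.02% of Swift storage, total Swift capacity is roughly:
swift_total = total / 0.0002
print(round(swift_total / 10**12))  # 330 (TB)
```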
This seems intentional, and I finally understand why: you need to keep recording the non-oversampled hits, otherwise you lower your recording rate of non-oversampled pageviews.
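In other words, the oversampled stream is recorded in addition to the base sampling, never instead of it. A minimal sketch of that logic, with made-up sampling rates (both constants are assumptions for illustration):

```python
import random

BASE_RATE = 0.01        # assumed: record 1 in 100 pageviews normally
OVERSAMPLE_RATE = 0.5   # assumed: oversample the target population at 1 in 2

def record_decisions(in_target_population: bool, rng: random.Random):
    """Decide independently whether to record this pageview in the normal
    stream and/or as an oversampled hit. The normal-stream decision is
    made for every pageview, so its rate is never diluted."""
    record_normal = rng.random() < BASE_RATE
    record_oversampled = in_target_population and rng.random() < OVERSAMPLE_RATE
    return record_normal, record_oversampled
```

Oversampled events would carry a flag so they can be excluded when computing overall rates.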
Looking at how it works, it seems safe for privacy. We would expose a fixed token generated for the trial we want to perform, which automatically expires after 6 weeks. The token can be served as a response header or a meta tag. This means that we would be serving the same token to all visitors.
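A sketch of those two delivery mechanisms, with a placeholder token (real tokens are opaque base64 strings issued by Chrome's origin trial console):

```python
# Placeholder, not a real origin-trial token.
TRIAL_TOKEN = "ExampleTokenNotARealOne=="

def with_origin_trial(headers):
    """Return response headers with the static Origin-Trial header appended.
    The same token goes to every visitor, so nothing user-specific leaks."""
    return headers + [("Origin-Trial", TRIAL_TOKEN)]

# Equivalent meta-tag form, for pages where setting headers is awkward:
meta_tag = f'<meta http-equiv="origin-trial" content="{TRIAL_TOKEN}">'
print(meta_tag)
```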
The basic functionality is there. If we want to iterate on that, it should be the subject of a new task.
Looking at Grafana, it appears that Chrome 69 is prone to sending extremely high values for metrics like firstPaint:
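One way to keep those bogus values from skewing aggregates is to discard anything above a sanity ceiling before computing percentiles. The one-hour cutoff below is an arbitrary assumption for illustration, not a production value:

```python
SANITY_CEILING_MS = 3_600_000  # one hour; assumed cutoff, no real paint takes this long

def plausible_paint_times(values_ms):
    """Drop negative or absurdly large firstPaint samples before aggregating."""
    return [v for v in values_ms if 0 <= v <= SANITY_CEILING_MS]

# The third sample mimics the kind of extreme value Chrome 69 sends.
samples = [850, 1200, 2_147_483_647, 430]
print(plausible_paint_times(samples))  # [850, 1200, 430]
```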
Wed, Nov 7
Tue, Nov 6
1.31.0-wmf.6 was deployed on 2017-11-01 to group 1 and on 2017-11-02 to group 2
This 2017-11-01 SAL entry seems noteworthy:
I've found something very interesting. If you plot both navtiming and navtiming2, navtiming2 for mobile directly continues the old trend, without any regression (it's even improving over time!):
Mon, Nov 5
Exactly, the way the webp support works is that it restarts the Varnish transaction after rewriting the request URL. This is a Varnish feature that hasn't been used much in production before. Sorry for the mess it created in the Kafka pipeline; I hadn't anticipated that it could cause something like this. We can easily turn the feature off if you have some fixing to do, and turn it back on later.
Sun, Nov 4
This is really bizarre. Second time it happens, and the previous affected file didn't seem to have special characters besides a dash (there could be more than one bug involved, though): https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/PL_Jean_de_La_Fontaine_-_Bajki.djvu/page657-1024px-PL_Jean_de_La_Fontaine_-_Bajki.djvu.jpg
Mon, Oct 29
Sun, Oct 28
Fri, Oct 26
SELECT COUNT(*) FROM event.quicksurveyinitiation WHERE year = 2018 AND month = 10 AND day = 26 AND event.pageviewToken IS NOT NULL;
Some browsers target 60fps, which leaves around 16ms between paint frames to execute work. I don't know whether those 60fps targets could somehow be synced to round clock times, which would explain an 8/16ms cycle; the spikes would then be clients that are up to date with NTP and experiencing smooth execution, while the others have a clock skew, are currently experiencing jank, or come from browsers with different behaviour.
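The frame-budget arithmetic behind those figures, plus a toy version of the quantisation idea (if paint timestamps snap to 60fps frame boundaries, deltas cluster at multiples of ~16.7ms):

```python
import math

FPS = 60
FRAME_MS = 1000 / FPS  # ~16.67 ms between paints at 60fps

def next_frame_boundary(t_ms: float) -> float:
    """Snap a timestamp forward to the next 60fps frame boundary."""
    return math.ceil(t_ms / FRAME_MS) * FRAME_MS

print(round(FRAME_MS, 2))                 # 16.67 ms frame budget
print(round(FRAME_MS / 2, 2))             # 8.33 ms half-cycle
print(round(next_frame_boundary(20), 2))  # 33.33, the second frame boundary
```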
Wed, Oct 24
Tue, Oct 23
Now that we have bucketed RUM data in Turnilo, comparing Chrome 69 and Chrome 70 is easy. The visualisations are limited, but by opening 2 tabs I can easily switch back and forth to see the differences.
Looks great! I'm already finding interesting facts about Chrome 69 vs Chrome 70.
In fact there are already other Wikimedia logos in there: https://github.com/wikimedia/operations-mediawiki-config/tree/master/static/images
https://www.wikidata.org/extensions/Wikibase/client/assets/wikimedia.png This is a very unusual location for a static image. Was this vetted by Traffic? This image being consumed by bots/crawlers means a long-term commitment to that URL working. I would have expected it to be housed in /static/ where all the logos live, including Wikidata's own, i.e. something like https://www.wikidata.org/static/images/project-logos/wikimedia.png or https://www.wikidata.org/static/images/wikimedia.png
It seems like this extra content would likely repeat strings present elsewhere in the HTML, which means it should compress well. Could you look at how much extra weight it adds to the page when gzipped, on a couple of articles (big and small)? That should help put the cost into perspective.
Mon, Oct 22
This seems to be an issue with Varnish purging. Purging that file with debugging turned on, I can clearly see MediaWiki issuing the order to purge those files, including the problematic thumbnails that remain old no matter what: https://logstash.wikimedia.org/app/kibana#/doc/logstash-*/logstash-2018.10.22/mediawiki?id=AWab6v-X00on8STvlYvw&_g=h@44136fa
No, I think it's impossible to reproduce the exact conditions that occurred during upload and caused this. It seems like the various expiry mechanisms ultimately rectified that thumbnail; thanks for the update.
Oct 19 2018
@Aklapper who should I assign the second step to?
I've verified that the scores are being collected correctly and the values make sense when compared to device type on Android.
Recent entries have a null surveyInstanceToken and no pageviewToken field in Hive. Triggering the survey manually and looking at the beacon call, the schema version is correct, but neither a surveyInstanceToken nor a pageviewToken parameter is passed.