Fri, May 25
We've gotten a trickle of responses since the survey started a bit less than 24 hours ago: about 2% of survey impressions result in a response, which is a much lower response ratio than I expected. I'm going to increase the rate for cawiki and enwikivoyage shortly, when I enable the survey for frwiki.
Thu, May 24
The survey was launched on cawiki and enwikivoyage earlier today. Looking at the data, I discovered that QuickSurveys actually works on mobile, so the survey is being displayed on the mobile sites of these wikis as well.
We can check survey impressions with this (whether people respond or not): https://grafana.wikimedia.org/dashboard/db/eventlogging-schema?orgId=1&refresh=1m&from=now-24h&to=now&var-schema=QuickSurveyInitiation
Wed, May 23
I'm going to target roughly 100 survey impressions per day on cawiki, which according to Pivot sees a bit more than 1 million pageviews per day. That's 0.01% of pageviews getting the survey.
@Whatamidoing-WMF are we good to go to start the survey on cawiki and enwikivoyage tomorrow as planned?
A Swift COPY is possible, but would require exposing sharding information of all wikis to a given wiki's config. Right now each wiki only gets information about its own sharding. It also requires significant refactoring or trickery in the SwiftFileBackend family of classes, which is currently architected around dealing with one wiki-specific "FileBackend" at a time.
Tue, May 22
Trying this again on a Galaxy S6:
Have you reported the issue to SauceLabs?
Mon, May 21
Sat, May 19
The updated Catalan screenshot:
Fri, May 18
Tue, May 15
Failed to compile Chrome 22, complaining about missing files in the source.
Can't manage to build Chromium on macOS for now:
Looking back around the 15th, however, it seems the agent only picked up Chrome 66 on April 18th, so that browser upgrade doesn't explain the start of the spike on the 15th.
They haven't. CentralNotice isn't to blame for this.
Mon, May 14
Looking at https://www.w3.org/TR/navigation-timing-2/, the spec doesn't really make any recommendation about the gaps between these:
Hi @stjn yes, absolutely, we can run this study on more projects! Thank you for your help, I've added your translations to the WikimediaMessages extension. This is what the survey looks like now in Russian when I test it locally:
Fri, May 11
Haha, didn't spot that. That shows how outdated the MediaWiki translations are, since that particular message has been in there for years.
Here's an updated screenshot for Catalan:
Thu, May 10
I don't have an Android phone
Fri, May 4
Why wouldn't splitting by country be feasible? Even on a cold browser cache, Varnish gets the GeoIP cookie on the very first load.php request, since it's the first pageload that sets the cookie.
As expected, the metrics are back to their old values:
Thu, May 3
As discussed during our meeting, I tried a few consecutive runs on the same device.
It's hard to say yet whether the backported "fix" had an effect, since it was deployed around the lowest traffic point of the day. But so far, at this time of day, the figures are equivalent to yesterday's, which are a third lower than 7 days ago.
Fetching the Barack Obama article on the mobile site on Chrome with a Galaxy S6 Edge, I get the following firstPaint values: 1737.87, 1519.47, 1833.615, 1545.96, 1919.135
Right off the bat, setup is incredibly simple. Just pick a device on their dashboard:
Wed, May 2
I agree that it's an unlikely cause, but worth ruling out. Have any changes on the backend/ingestion pipeline happened around that time?
Is that what made this fail: https://integration.wikimedia.org/ci/job/quibble-vendor-mysql-hhvm-docker/690/console ?
/wiki/Sp%C3%A9cial:Version
InvalidArgumentException from line 58 of /srv/mediawiki/php-1.32.0-wmf.1/extensions/QuickSurveys/includes/SurveyFactory.php: The "perceived-performance-survey" survey doesn't have a coverage.
gilles@terbium:~$ mwscript namespaceDupes.php --wiki=euwikisource
0 pages to fix, 0 were resolvable.
gilles@terbium:~$ mwscript namespaceDupes.php --wiki=euwikisource --dry-run
id=1424 ns=0 dbk=Author:Agustin_Kardaberaz -> Egilea:Agustin_Kardaberaz (no conflict) DRY RUN ONLY
id=1425 ns=0 dbk=Author:Bilintx -> Egilea:Bilintx (no conflict) DRY RUN ONLY
id=1426 ns=0 dbk=Author:Bitor_Garitaonandia -> Egilea:Bitor_Garitaonandia (no conflict) DRY RUN ONLY
id=1427 ns=0 dbk=Author:Bruno_Etxenike -> Egilea:Bruno_Etxenike (no conflict) DRY RUN ONLY
id=1428 ns=0 dbk=Author:Joan_Etxamendi -> Egilea:Joan_Etxamendi (no conflict) DRY RUN ONLY
id=1648 ns=0 dbk=Author:Joan_Piarres_Duvoisin -> Egilea:Joan_Piarres_Duvoisin (no conflict) DRY RUN ONLY
id=1429 ns=0 dbk=Author:Joanes_Leizarraga -> Egilea:Joanes_Leizarraga (no conflict) DRY RUN ONLY
id=1430 ns=0 dbk=Author:Jon_Mirande -> Egilea:Jon_Mirande (no conflict) DRY RUN ONLY
id=1431 ns=0 dbk=Author:Jose_Bizente_Etxagarai -> Egilea:Jose_Bizente_Etxagarai (no conflict) DRY RUN ONLY
id=1432 ns=0 dbk=Author:Jose_Manterola -> Egilea:Jose_Manterola (no conflict) DRY RUN ONLY
id=1433 ns=0 dbk=Author:Jose_Maria_Iparragirre -> Egilea:Jose_Maria_Iparragirre (no conflict) DRY RUN ONLY
id=1434 ns=0 dbk=Author:Lauaxeta -> Egilea:Lauaxeta (no conflict) DRY RUN ONLY
id=1435 ns=0 dbk=Author:Manuel_de_Larramendi -> Egilea:Manuel_de_Larramendi (no conflict) DRY RUN ONLY
id=1436 ns=0 dbk=Author:Pedro_Agerre_Axular -> Egilea:Pedro_Agerre_Axular (no conflict) DRY RUN ONLY
id=1437 ns=0 dbk=Author:Pedro_Mari_Otaño -> Egilea:Pedro_Mari_Otaño (no conflict) DRY RUN ONLY
id=1438 ns=0 dbk=Author:Pello_Errota -> Egilea:Pello_Errota (no conflict) DRY RUN ONLY
id=1439 ns=0 dbk=Author:Pepe_Artola -> Egilea:Pepe_Artola (no conflict) DRY RUN ONLY
id=1440 ns=0 dbk=Author:Pierre_Urte -> Egilea:Pierre_Urte (no conflict) DRY RUN ONLY
id=1441 ns=0 dbk=Author:Pierre_Urteren -> Egilea:Pierre_Urteren (no conflict) DRY RUN ONLY
id=1442 ns=0 dbk=Author:Ramon_Artola -> Egilea:Ramon_Artola (no conflict) DRY RUN ONLY
id=1443 ns=0 dbk=Author:Ramos_Azkarate -> Egilea:Ramos_Azkarate (no conflict) DRY RUN ONLY
id=1444 ns=0 dbk=Author:Sabin_Arana -> Egilea:Sabin_Arana (no conflict) DRY RUN ONLY
id=1445 ns=0 dbk=Author:Sebastian_Mendiburu -> Egilea:Sebastian_Mendiburu (no conflict) DRY RUN ONLY
id=1446 ns=0 dbk=Author:Silvain_Pouvreau -> Egilea:Silvain_Pouvreau (no conflict) DRY RUN ONLY
id=1447 ns=0 dbk=Author:Toribio_Alzaga -> Egilea:Toribio_Alzaga (no conflict) DRY RUN ONLY
id=1448 ns=0 dbk=Author:Txirrita -> Egilea:Txirrita (no conflict) DRY RUN ONLY
id=1449 ns=0 dbk=Author:Xabier_Lizardi -> Egilea:Xabier_Lizardi (no conflict) DRY RUN ONLY
id=1450 ns=0 dbk=Author:Xenpelar -> Egilea:Xenpelar (no conflict) DRY RUN ONLY
28 pages to fix, 28 were resolvable.
Works now, thanks!
reprepro copy stretch-wikimedia jessie-wikimedia python-logstash
Sure thing, I wasn't sure if that was the case or where it was.
Of these changes, https://gerrit.wikimedia.org/r/#/c/428551/ seems like the prime suspect to me. While the old code was quite aggressive, it didn't trust that the load event would fire on the document. The new code does trust that the document's load event will fire when document.readyState != 'complete'. It also trusts that loadEventEnd will be set by the time the timeout runs, in both situations.
On late April 26th, group 2 wikis moved to 1.32.0-wmf.1.
Barring any new campaigns added until then, the largeBannerLimit module should stop loading after May 8th and legacySupport should stop after May 14th.
Tue, May 1
It can start whenever we want. Basically we can say 2 weeks after you post the notes, since I seem to recall you said that was the ideal timeframe.
Mon, Apr 30
We are now unblocked by legal, here's the next wording below the survey:
Apr 27 2018
Took me a while to come up with the CLI syntax to schedule a test run, where I can specify the network conditions (which isn't possible in the GUI):
Apr 26 2018
A few runs targeting a specific type of Android device, with the default connectivity profile (10MB up/down, no latency), gave these FirstPaint values: 4046.215, 2772.145, 3923.1, 3153.21. Not very encouraging in terms of stability... but then again, maybe I was hitting different devices.
One thing I'll note right away is that it's horribly slow to start a test. It might be because I'm waiting in line for the device. But even once I get the device, as you can see in the video, there's an awful amount of setup time before the test actually runs. Looking at the billed minutes in their UI, a test that just loads one page can take 4+ billed minutes!
Actually, I can't find a way to tie the network profile to my run. It might be something available only for dedicated devices, or only programmatically. I might have to write code driving AWS Device Farm to find out.