Has anyone noticed that most of OOUI's fallback PNGs are broken anyway?
Help icon:
Edit pencil:
Eyeball (for VE):
@AlexisJazz - I've tested it on test.wikipedia.org under both an admin account and a regular user account and it works as advertised. The change will go out to Commons tomorrow and shouldn't have any noticeable effect. If nothing breaks, we can change the user rights on Commons after tomorrow.
We can test it right now on https://test.wikimedia.beta.wmflabs.org/ (I already tested as a regular user, correctly limited to 4) but on https://test.wikipedia.org/ the patch isn't active yet. I assume it'll take effect on April 29th there as well?
Note that this bug causes a lot of extra work for the Commons community and should be fairly easy to fix.
@mmodell - To be clear, we need https://phabricator.wikimedia.org/source/tool-wikisource-ocr/ to mirror https://gerrit.wikimedia.org/r/#/admin/projects/labs/tools/wikisource-ocr. Hope you can help us with that.
@AlexisJazz - I'm talking about https://test.wikipedia.org/, which has about 500 administrators, including myself.
@AlexisJazz - By coincidence, test.wikipedia is already set up with all users having upload_by_url. In addition, admins there have mass-upload. So we should be able to test with an admin account and a regular user account to see the difference.
@fdans - Yes, the 90 day limit works fine for me.
@AlexisJazz - The change should take effect on Commons on April 29th, if there are no problems on test.wikipedia.
@mmodell - Sure, if it's possible to change the name at the same time. It's no longer specific to Google, so it would be nice to rename it to something like tool-wikisource-ocr (similar to the new gerrit repo).
hmm, who else is a Phabricator admin? @Aklapper?
Oh yes, now I understand what you mean. That's very interesting. I wonder if unsuccessful move attempts trigger watchlist updates but not log entries.
@mmodell - Looks like MarcoAurelio is MIA. Any chance you could delete the https://phabricator.wikimedia.org/source/tool-ws-google-ocr/ repo for us?
@DannyS712 - I can't parse what you're saying. What do watchlist entries have to do with these page moves? And why are you saying that https://commons.wikimedia.org/w/index.php?title=File%3A%C9%A1obyounohasi.jpg&redirect=no never occurred? I'm probably misunderstanding you, but your first sentence is very hard to understand.
@Volker_E - Could you explain how SVG background icons or position:fixed fallbacks are necessary for preserving core functionality in MediaWiki (i.e. reading, searching, editing) on Android 2? In other words, what specific feature(s) would be broken? I'm asking because I'm wondering if we could just remove these 2 fallbacks anyway (regardless of Grade C support).
Regardless of the issue of removing the .m. subdomain, would anyone object to us doing the first step of this task: "DeviceDetection.php moved from MobileFrontend to core". It used to be in core and that's really where it belongs. All skins (especially non-WMF skins) should have easy access to this data regardless of whether they are running MobileFrontend or not. Also the WikiEditor extension needs this data for proper eventlogging (T249944).
FYI, MobileFrontend has code to detect phones and tablets in UADeviceDetector.php. It seems like that code should really live in core rather than MobileFrontend, otherwise, it's going to be complicated to detect people using a phone or tablet to edit with the Wikitext editor on the desktop site.
... we'd probably still want to differentiate between MobileFrontend and WikiEditor. As it is, that desktop/phone split in the platform field is the only way to tell them apart, I think.
Yeah, this seems like an oversight in the current schema. I agree that adding something like mobile-page to the integration field would probably be the best solution. We can probably split that off as a separate bug though, as this bug is just about the platform field.
@Mayakp.wiki - Never mind, I figured it out. It looks like the edits_hourly dashboard relies on revision_tags in the database.
To answer the question above, looks like the data in the edits_hourly dashboard comes from the database and mostly relies on revision_tags.
Looking great so far. Would it be possible to add a description for this dashboard in Turnilo (similar to the other dashboards), something like: "Sampled eventlogging of the non-API editing interfaces". That way people can tell the difference between it and the edits_hourly dashboard. Speaking of, does anyone know where the data for the edits_hourly Turnilo dashboard comes from?
@Jdlrobson - Now that T231925 is fixed, what code should PageTriage be using to create the link? I was thinking that it would get fixed automatically by T231925, but it still looks broken as in the description screenshot.
@Mayakp.wiki - How does Turnilo split between "Mobile web" editing and "Other" editing in the edits_hourly dashboard? Is it relying on the EditAttemptStep schema or doing something else?
@Samwilson - I think the last steps needed are to delete the existing Phabricator repo and create a new mirror of the Gerrit repo (with the new name). I don't have adequate permissions to delete the existing repo, but I think you do.
@Doc_James - If the assumption is that the data is in Wikitext (and not JSON or something else), it seems like the best solution to this problem would be to generate the entire table from a single template, and have a Lua module calculate the totals based on the parameters passed to the template for each country. The big downside to this solution is that editors would no longer be able to use the VisualEditor table editor to edit the country data. And like Ed and Gergo mention above, dealing with number formatting is going to be a problem for any potential solution.
@Tchanders - If you end up implementing the suggested solution in the description (which is the least hacky solution), let me know and I can update all the tables on-wiki.
I updated the remote repo in the Toolforge tool and pulled the update from the new repo. It works great!
@Daimona - $cfg['scalar_implicit_cast'] = false; didn't work either, FYI.
@ppelberg - Well, it's after the holidays, but probably an even worse time to bring this up. Regardless, we need this data to move forward with our no-JS guidelines for engineering. From David's analysis, it sounds like this would be a relatively small task (maybe a few days for one engineer). Is there any chance the Editing team could do this in Q4?
To answer my own question, it looks like we could limit it to cases where editor_interface = wikitext and integration = page (to make sure we exclude app edits) for the no-JS number.
So the thing is, InnoDB DOES support indexes longer than 767 bytes. By default, on recent versions of MariaDB (10.2 and up) and MySQL (8.0 and up), it supports up to 3072 bytes, and on supported versions before those, it supports that if using the Barracuda file format with innodb_file_per_table and innodb_large_prefix = ON.
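For reference, on those older versions (MariaDB before 10.2, MySQL before 8.0) the relevant server settings would look something like the following. This is a minimal my.cnf sketch of the standard InnoDB variables mentioned above; exact defaults vary by version:

```ini
[mysqld]
# Barracuda enables the DYNAMIC/COMPRESSED row formats,
# which are required for index prefixes up to 3072 bytes
innodb_file_format    = Barracuda
innodb_file_per_table = ON
innodb_large_prefix   = ON
```

On MariaDB 10.2+/MySQL 8.0+ these are effectively the defaults (and the variables were later removed entirely), which is why the longer limit just works there.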
@jcrespo - That's good to know. One thing I don't really understand is what is the actual severity of this bug? For example, if we want to enable PageAssessments on Russian Wikipedia (T184967), which doesn't have these tables yet, would this bug block it? Or are we using the config options you mention above? In other words, does this bug only affect 3rd party users, or also WMF?
@Ramsey-WMF - I hope y'all are planning to move the blacklist on-wiki. It would make life easier for everyone.
@Ramsey-WMF - That's awesome. Thanks for the info!
This would be using saveSuccess as a way to limit it to sessions that resulted in successful edits. If either having-JS or not-having-JS makes saving substantially harder (lack of tools / bugs), our numbers would be misleading.
I don't actually think this would be misleading, as we want to find out how many actual edits are made with no-JS (i.e. how many edits would we lose by disabling no-JS editing support).
This would exclude VisualEditor users, depressing the overall JS numbers. This would be easy to compensate for by showing a "number of successful edits from VE" figure in the same time period.
Sounds like a good plan.
Bots would probably still be included. Depending on the bot's methodology, it could potentially be classed as JS or no-JS, or bypass this editor entirely and use the API to make its edits.
@DLynch - What we're specifically looking for is no-JS edits made through any editing interface besides the API or mobile apps (regardless of whether they are by bots or not). Is there a way to exclude API edits from the totals? Basically we just need to justify with actual data whether or not we should continue to maintain a no-JS editor (as part of a broader evaluation of all of our no-JS support). I imagine our no-JS editor is used a fair bit, but we need data rather than speculation. The rationale for having an editing API is separate and doesn't need further justification.
@kzimmerman - Since JK is out for a while, I'll chime in here. This data is basically needed for any future editing-related features, as we need to decide whether or not to continue building no-JS fallbacks for those features. For example, the Editing team is currently working on Discussion Tools, which is a group of JavaScript editing features for talk pages. Since we don't have a good idea of how much editing is done on no-JS browsers, we don't really know what the impact will be of not providing a no-JS fallback and whether that may impact some communities more than others.
It looks like you just need to change line 90 in Ocr.php from
$this->gcv->addFeatureTextDetection();
to
$this->gcv->addFeatureDocumentTextDetection();
assuming that the structure of the response is the same (which should be checked).
It seems like PNG support should be completely unnecessary in any kind of JS application given we don't run JS here. If we do want to continue supporting background-size on IE8 as a compromise we could limit this fallback to modules added via addModuleStyles.
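If we went that route, the split would roughly follow how a module is attached to the page. A hypothetical sketch (the module names here are made up; addModuleStyles() and addModules() are the existing OutputPage methods):

```php
// Styles-only modules attached via addModuleStyles() could keep the
// IE8 background-size PNG fallback, since they must work without JS...
$out->addModuleStyles( [ 'example.styles.icons' ] );

// ...while modules attached via addModules() only load when JS runs,
// so they could safely drop the PNG fallback and ship SVG only.
$out->addModules( [ 'example.scripts.ui' ] );
```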
@dom_walden - Is this still an issue or should we close it?
@Tchanders - Bingo! I can actually reproduce the bug now. Yay! So I wonder if those boxes just have some sub-pixel-width stroke or border being applied for some reason. And is it worth the trouble of trying to track it down?
@Tchanders - Totally agree that the data for the map in T118783#6012706 was junk. Thanks for solving that. I thought it might be a useful clue for solving Spage's example in the description though. I can't reproduce the original problem at all, even when activating the map and zooming in. If it only happens to you when being zoomed in then maybe the bug isn't worth worrying about. Have you tried on Firefox in Ubuntu?
I fixed it by suppressing that particular error type:
$cfg['suppress_issue_types'] = array_merge( $cfg['suppress_issue_types'], [
	// This test seems to be buggy or overly strict (T249738)
	'PhanTypeMismatchArgumentNullableInternal',
] );
I decided to suppress it repo-wide since there are other parts of that codebase with very similar code that will likely trigger the same phan glitch if they are modified.
@Daimona - FYI, setting null_casts_as_any_type to false in the Phan config didn't work (https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/PageAssessments/+/587399/). How do you find out which Phan rule triggers a particular error message? It seems like that would be nice to include in the output.
@Daimona - Thanks, I'll suppress it inline, but I hope there's a way to prevent this in the future, as it seems like a needless waste of time.
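For the record, inline suppression would look something like this. The annotation syntax is Phan's standard @phan-suppress-next-line; the surrounding code is just a hypothetical sketch of the pattern that triggers the warning:

```php
// @phan-suppress-next-line PhanTypeMismatchArgumentNullableInternal
$length = strlen( $ns );
```

Unlike the repo-wide suppress_issue_types entry, this only silences the one call site, so new occurrences of the issue elsewhere would still be reported.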
@Daimona - Why would $ns !== 'all' make it think that $ns can be null? (Removing that part of the code fixes the error.) That doesn't make any sense. And even if it did, it's perfectly fine to pass a null variable to strlen().
Sorting by date would be difficult since the table doesn't currently store the timestamp, only the revision ID of the page at the point of assessment. Even if you forced the user to limit it to a single WikiProject, WikiProject Biography has over 1.7 million pages assigned to it, so it would likely timeout without denormalizing the timestamp data into the page_assessments table. Sorting by class or importance would probably be more doable, but only when limited to a single WikiProject.
FYI, it looks like this bug affects at least 33 other extensions: https://phabricator.wikimedia.org/search/query/.toXYVsB2ZdB/. Some have already added schema changes to work around the problem.
@jcrespo - Any update on this? Should we go ahead and fix it with a schema change?
@Reedy - How do we temporarily add "forge committer identity" and "push" rights to the labs-tools-wikisource-ocr group? Does that require a Gerrit administrator?
I went ahead and created the gerrit repo and the owner group.
@thcipriani - Does this look right for creating the new repo?
ssh -p 29418 gerrit.wikimedia.org gerrit create-project --require-change-id --owner=labs-tools-wikisource-ocr --parent=labs/tools --description="'Toolforge tool for handling Wikisource OCR requests'" labs/tools/wikisource-ocr
@Ramsey-WMF - I still imagine there will be a use case for removing images without adding a tag. What happens currently if someone else tags one of the images in my personal queue before I do? Does it remain in my queue or get removed? If it remains, that would be a good example of a case where there may be no need to add more tags.
Hmm. Why not just make the "Skip" button remove it from your queue? It's not like you can't add more claims later manually. What's the use case for people repeatedly skipping the same images? For example, I've skipped this image's tag suggestions at least 20 times now:
@Ramsey-WMF - Maybe, but there seems to be a related bug... If I click "Skip" for an image it doesn't seem to get removed from my personal queue. So my personal queue is now made up mostly of images that I either don't have anything to do with or images for which Google doesn't have any helpful suggestions (which is common). In other words, my personal queue is slowly turning into garbage. If the "Skip" button is fixed, I think this bug will cease to be a problem though.