I am new to the MediaWiki ecosystem! Currently running the Star Citizen Wiki with a bunch of awesome folks.
I'm a product designer by trade. In my spare time I make pixels fancier or uglier, with code if needed. Here's some of my projects with MW:
For wikis with Semantic MediaWiki enabled, the incorrect timestamp is caused by SMW resetting the parser cache timestamp to invalidate it. Disabling that behavior does fix the timestamp: https://github.com/SemanticMediaWiki/SemanticMediaWiki/commit/d7b6198382264389078fbc067e2d433e2518416b
Thanks for adding me to the task. I will patch it within Citizen as well.
While the current/latest revision timestamp naming is confusing, it is being tracked in https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1126143 as cscott mentioned, and it is not the issue being raised here.
The retrieved timestamp does not match any existing revision; it is some arbitrary timestamp that is more recent than the latest revision.
The 'original' image. I'm not certain how Google deals with originals vs thumbnails and how it groups these things together.
As per my comment and test above, I suspect that the crawlers never got to the original image because it is only linked from the file page.
The URL of the file page is problematic because it contains a file extension suffix, but it is not an image.
That might be why file pages are not indexed on MediaWiki wikis, which in turn prevents the original image from being indexed.
Yeah that sounds more sensible and consistent compared to what we have now. Redirect pages already take additional effort to get to, so we need to consider what is the user's goal when they are clicking these tabs.
Backport patch submitted.
If it is not accepted, you can also put the following code in LocalSettings.php:

/**
 * @see https://www.mediawiki.org/wiki/Manual:Hooks/BeforePageDisplay
 */
$wgHooks['BeforePageDisplay'][] = function ( $out, $skin ) {
	// Don't index VE edit pages (https://phabricator.wikimedia.org/T319124)
	if ( $out->getRequest()->getVal( 'veaction' ) ) {
		$out->setRobotPolicy( 'noindex,nofollow' );
	}
};
In T376559#10208796, @Jdlrobson wrote: I've suggested a tag since all phab tickets should have an associated project.
I actually think it would be good to replace minerva-animations-ready and vector-animations-ready with a generic skin-animations-ready and move this code into core (resources/src/mediawiki.page.ready/ready.js) so skins like Timeless, Monobook etc can benefit from this change.
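As a rough sketch of what such a generic core helper could look like (the function name, class name, and the two-frame approach here are assumptions for illustration, not the actual core or skin code):

```javascript
// Hypothetical sketch of a generic "skin-animations-ready" helper for core.
// Waits two animation frames so the initial styles have been applied, then
// adds a class that any skin's CSS can key its transitions off.
// All names here are illustrative.
function skinAnimationsReady( root, requestFrame ) {
	requestFrame( function () {
		requestFrame( function () {
			root.classList.add( 'skin-animations-ready' );
		} );
	} );
}
```

In the browser this would be called as skinAnimationsReady( document.documentElement, requestAnimationFrame ); passing the frame scheduler in as a parameter just makes the helper easy to test.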
On the current MediaWiki master branch, I have been testing using mediawiki.skin.variables.less to override Codex tokens, but extensions that use Codex modules (e.g. MultimediaViewer) do not pick up the override from the skin at all. Is this intended, or is there something wrong with my implementation?
In T333394#9912251, @Jdlrobson wrote: NOTE: At time of writing, I seem to be running into https://issues.chromium.org/issues/41152783 when testing the patch for this fix, which is a little worrying in Chrome.
- Click hide on the table of contents
- Click the table of contents icon. See: the "Aw, Snap!" crash page
Recently I ran into the same issue with onSkinEditSectionLinks and VisualEditor. My use case is adding icons to the edit section links by adding HTML classes to them via the onSkinEditSectionLinks hook.
Sorry that I missed the comment in the patch. What version of MW do you need to test against?
It is the same for us. We run a 1.39 wiki and like many third party wikis, we are sticking to LTS versions. This is a much needed fix and it should benefit a lot of third party wikis :)
In T327588#9593598, @Bawolff wrote: object-src is probably not too useful in modern browsers now that <object> is just a glorified iframe. It probably makes sense to just always have that set to 'none'.
TL;DR: Google Images is able to pick up source images with an invisible anchor tag linked to the source image URL.
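A minimal sketch of that trick, assuming markup is simply emitted next to each thumbnail (the class name is illustrative, not from any extension): a visually hidden anchor whose href is the original image URL, so crawlers can discover the full-resolution file.

```javascript
// Sketch: build a visually hidden anchor pointing at the original
// (full-resolution) image URL so crawlers can discover it. The class
// name is an assumption; real code would hide the anchor via CSS.
function hiddenSourceAnchor( originalUrl ) {
	return '<a class="mw-hidden-image-source" href="' + originalUrl +
		'" aria-hidden="true"></a>';
}
```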
In T54647#9311769, @TheDJ wrote: In T54647#9311456, @alistair3149 wrote: Our wiki has a significant amount of images and a decent amount of organic traffic, but the original resolution images and file description pages are almost never indexed.
URLs with index.php can be indexed properly by Google Images. Well in that case... you could just try adapting the image Linker.php class and swapping links to images from the shortened form to the index.php form?
If they only filter out urls pre-accessing them (which I suspect is what they are doing), then that might just be enough.
In T54647#1586766, @Ciencia_Al_Poder wrote: This issue has been brought up again in the support desk: Images only indexed as thumbnails by search engines
Well, something needs to be done here, so I'll at least start by proposing an idea. An RFC may be needed.
- Default link for embedded images should be the original version (high res)
- Use the [[ http://www.w3.org/TR/html-longdesc/ | longdesc ]] attribute to point to the file description page
- With JavaScript, place a small icon over the image (only visible when hovering over it); clicking the icon opens the file description page, while clicking the image opens the original image.
- Next to the image, add a link (hidden by default, visible only in text browsers) to the file description page. For images embedded with the frame or thumb options it won't be rendered (maybe add the normal link on the container box for frame, same as for thumb).
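The first bullet depends on being able to resolve the original URL for a thumbnail. On a default install with hashed uploads, that mapping can be sketched roughly like this (an illustration assuming the standard /images/thumb/ hashed layout, not the actual Linker code):

```javascript
// Sketch: map a default MediaWiki thumbnail URL back to the original image
// URL, assuming the standard hashed upload layout:
//   /images/thumb/<h>/<hh>/<File.ext>/<width>px-<File.ext>
//   -> /images/<h>/<hh>/<File.ext>
// Returns the input unchanged if it doesn't look like a thumbnail URL.
function thumbToOriginal( thumbUrl ) {
	var m = thumbUrl.match( /^(.*)\/thumb\/([0-9a-f]\/[0-9a-f]{2}\/[^/]+)\/[^/]+$/ );
	return m ? m[ 1 ] + '/' + m[ 2 ] : thumbUrl;
}
```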
Our wiki has a significant amount of images and a decent amount of organic traffic, but the original resolution images and file description pages are almost never indexed.
Hoping to bring this to attention for more people and continue the discussion.
Not within a few weeks unfortunately, I don't have a setup that can test it at the moment.
It shouldn't affect any styling since the <picture> element does not carry any styles of its own. We tested with VE on the wiki linked above, and no regressions were found. As for MobileFrontend, it would require some additional testing.
Would it work if RelatedArticlesUseCirrusSearchApiUrl defaulted to null?
Would this be backported to 1.39 since it is arguably a regression with the ToC?
In T282500#8532625, @Tinss wrote: @alistair3149, thanks for the patch. Will there be a way to disable the default PWA behavior?
All patches are now merged.
Thank you for the invitation! I'll think about it but I'm unsure since I am not as active in WMF projects and it's pretty far.
In T282500#8471899, @Jdlrobson wrote: @alistair3149 would you be interested in working on this ticket to move the API to core with me reviewing your patches? I am going to be taking a break over Christmas, but would be happy to work with you towards that in January if you feel inclined.
In T321708#8461912, @Saklad5 wrote: In T321708#8461868, @alistair3149 wrote: Just to confirm, Passkey as 2FA works fine on desktop, right?
It did in October, yes.
I switched my account to TOTP upon encountering the issue. Let me know if you want me to switch it back temporarily to see if it still works.
As far as I know, all versions of Safari share the same functional implementation. My best guess is that MediaWiki is violating the specification somewhere, and it only works on macOS due to undefined behavior.
I have a working theory that WebAuthn does not work on MobileFrontend because the ResourceLoader module is not loaded at all.
WebAuthn requires JavaScript to complete the verification process.
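To illustrate why script availability matters: if the module never loads, even the basic capability check the client code must perform can never run, so verification can only fail. A sketch of such a check (illustrative only, not the extension's actual code):

```javascript
// Sketch: the kind of capability check WebAuthn client code must perform
// before starting verification. Illustrative only; the real extension code
// differs. Takes the window object as a parameter so it can be exercised
// outside a browser.
function webAuthnAvailable( win ) {
	return !!( win.PublicKeyCredential && win.navigator && win.navigator.credentials );
}
```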
In T321708#8355310, @Saklad5 wrote: OK, I've tested WebAuthn 2FA with Wikipedia, and found an unusual issue: I can successfully create and use a passkey on macOS 13.0 and Safari 16.1. However, when attempting to use it to login on iOS 16.1 and Safari 16.1, Wikipedia's login flow doesn't seem to prompt for a passkey at all.
Instead, it simply says "Please touch your verification device or follow the instructions from the browser". It has a single button, "Continue login", which causes the verification process to fail when pressed. My iPhone definitely has the WebAuthn/passkey credential I registered on my Mac: it just isn't getting asked for it like the latter is.
Is it possible that there is some sort of mobile-specific bug with the WebAuthn implementation?
I can work on a patch to merge hCaptcha and reCaptcha in their current state, but I wanted to gather some feedback on the topic before I proceed.
In T270437#8399835, @Lectrician1 wrote: Shouldn't a Dockerfile be modified to run chmod o+rwx cache/sqlite so the user doesn't run into this error every time and then be required to run it?
I have fixed up the lingering issues on the patch above. Is there anything else that is needed to move this forward?
I think it makes sense to move the manifest API from MobileFrontend to core, it'll reduce maintenance and benefit other configurations.
- While getting the author is reasonably straightforward (although Commons lacks machine-readable markup in some edge cases, like T68606: Media viewer fails to give credit to all people in specific circumstances or T89692: CommonsMetadata cannot differentiate between license of the image and other licenses ), schema.org wants the value to be a structured object (an Organization or a Person), not text. I guess we could produce the name and sometimes the url fields at least.
Currently my workaround is just asking people to load the skin last, since that ensures the hook runs at the end :-/
I tried to avoid tampering with the HTML in PHP as much as possible. Hopefully T315015 will address the problem down the line; it also makes much more sense that way for other uses.
Thanks for the reply.