Search Open Tasks
    • Task
An important piece of information about speakers is their level of proficiency in the language they record in. Currently, on Wiktionary entries in the Pronunciation section, we only display the speaker's place of residence next to their recording. I think it would be valuable to also display their level of proficiency (native, good level, average level, beginner). This is possible, since every speaker has to indicate their level before recording.
    • Task
The "Number of results per page" filter is broken.
Steps to Reproduce:
Actual Results: {F34127339}
Expected Results:
    • Task
Steps to Reproduce:
* navigate to the combine search page and scroll to the data table section
Actual Results: {F34127300}
Expected Results: {F34127303}
[ ] fix the order of colors
[ ] the female percentage column should be positioned before the bar visualization column
    • Task
**For Percentages:**
- When comparing group means or percentages in tables, rounding should not blur the differences between them.
- If the range is 10% or more, use whole numbers; if it is less than 1%, use two decimal places; otherwise use one. In practice, percentages are usually given along with their corresponding frequencies, so precision is less critical, as the exact values can be calculated. - [[ https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4483789/ | src ]]
- `percentage = false`; optional parameter to apply percentage formatting to numbers
- if less than 1%: `precision = 2`; round to 2 decimal places
- if greater than 1%: `precision = 1`; round to 1 decimal place
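A minimal sketch of the range-based rounding rule above, with function names invented purely for illustration (the task's `percentage`/`precision` parameters presumably belong to a template or module not shown here):
```python
def percentage_precision(values):
    """Pick decimal places for a set of percentages, per the rule above:
    a spread of 10 points or more -> whole numbers, a spread under 1 point
    -> two decimal places, otherwise one."""
    spread = max(values) - min(values)
    if spread >= 10:
        return 0
    if spread < 1:
        return 2
    return 1

def format_percentages(values):
    precision = percentage_precision(values)
    return [f"{value:.{precision}f}%" for value in values]

# A 0.41-point spread falls under the "< 1%" rule, so two decimal places:
print(format_percentages([12.34, 12.75]))  # ['12.34%', '12.75%']
```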
    • Task
Rationale: One of the assumptions of Wikisource is that image and OCR will come as a single file, in either PDF or DjVu format. This is not the case in several scenarios:
1) The image file and text are provided as separate files.
2) The original image file may be damaged or of inferior quality.
3) A proofread text with page numbers may exist at an external site, such as Project Gutenberg.
Solution: Develop a tool that allows the OCR or image to be loaded from a separate file.
Requirements:
1) Allow users to select one file for images and another for OCR. The tool should match the individual images to the text files in a visual layout similar to Book2Scroll.
a. In the simplest case, the text files would be a series of sequentially numbered files corresponding exactly to the image files, e.g. 1.txt … n.txt; 1.png … n.png
b. In a more complex case, users will need to set custom ranges to match the images to the OCR, similar to the Pages tool on the Index page for a book. <pagelist 1to2=skip 3="1" 4to8=skip 9="2" 415to420="skip" 416="400" /> In this case, images 1.png and 2.png have no corresponding text files; image 3.png corresponds to 1.txt; images 4.png to 8.png have no corresponding text files; image 9 corresponds to 2.txt and begins a sequence that runs until the next change; the text for image 416.img is 400.txt. (A sketch of this matching follows after this task.)
c. The most complex case would be an HTML file with page numbers. The parser would need to split the HTML into separate txt files, convert the HTML to wikicode, and then run step b. See http://www.gutenberg.org/files/64649/64649-h/64649-h.htm as an example. There are about 41,000 files on Project Gutenberg with the page numbers marked with class="pagenum" id="Page_19".
2) The tool should also use the same code to allow users to either replace the image files or add a second set of image files (while keeping the existing text). This can help when a higher-quality set of images becomes available or multiple versions are needed due to damage or illegible text. A special case would be an option to import the original files from IA when they are needed to extract images or illustrations. This is part of this proposal because it will reuse much of the same code.
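A minimal sketch of the range matching in requirement 1b, using a simplified list of (image number, value) rules rather than the actual `<pagelist>` syntax (all names here are invented for illustration):
```python
def build_page_map(rules, total_images):
    """Map image numbers to text file numbers from pagelist-like rules.

    rules: list of (image_number, value) pairs, where value is "skip" or
    the starting text number; each rule runs until the next rule starts.
    """
    mapping = {}
    rules = sorted(rules)
    for i, (start, value) in enumerate(rules):
        end = rules[i + 1][0] - 1 if i + 1 < len(rules) else total_images
        for offset, image in enumerate(range(start, end + 1)):
            mapping[image] = None if value == "skip" else value + offset
    return mapping

# Images 1-2 skipped, 3 -> 1.txt, 4-8 skipped, 9 -> 2.txt onward.
page_map = build_page_map([(1, "skip"), (3, 1), (4, "skip"), (9, 2)], 12)
print(page_map[3], page_map[9], page_map[12])  # 1 2 5
```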
    • Task
Perhaps make the fill have some other status, e.g. `active=false`, until computations have completed.
    • Task
@Aklapper's feedback on the tool's Commons talk page: https://commons.wikimedia.org/wiki/Commons_talk:VideoCutTool#Tool_logos > Looking at the source, there is a logo file in /src/logo.svg, and there is also a line in src/components/Header.jsx which says const logo = "https://upload.wikimedia.org/wikipedia/commons/5/57/JeremyNguyenGCI_-_Video_Cut_Tool_Logo.svg"; (which is a different logo). Is this intentional? Also, things probably shouldn't be loaded on the fly (via an URL) but could be part of the codebase itself and deployed on the same server. (If I understand correctly, haven't looked much at the code.) --AKlapper (WMF) (talk) 12:16, 4 February 2021 (UTC)
    • Task
@Psubhashish's feedback via email: > I was trying to upload a video (mp4, 32 Mb) from my computer to see how the tool works. I tried to trim the video and download after processing. I was checking every 5 mins or so the progress and gave up after around 30 mins. :) It was ready when I checked after an hour or two. I was using Brave (built on Chromium, mine is updated whatever the latest version is) for this.
    • Task
It may be interesting (and beneficial?) to study the feasibility and perks of running [[https://meta.wikimedia.org/wiki/User:Lingua_Libre_Bot|Lingua Libre Bot]] on [[https://wikitech.wikimedia.org/wiki/Portal:Toolforge | Toolforge ]] (instead of Wikimédia France servers).
    • Task
**Problem:** We are tracking the number of editors who make at least 1/5/100 edits in the past 30 days at https://grafana.wikimedia.org/d/000000162/wikidata-site-stats?orgId=1. This is for all edits across all namespaces. It'd be useful to also keep this statistic split by namespace. The main namespace, the Property namespace and the Lexeme namespace are of special interest, but we will want to look at the others as well.
**Acceptance criteria:**
[ ] new graph exists on https://grafana.wikimedia.org/d/000000162/wikidata-site-stats?orgId=1 that shows the number of editors per namespace over time
[ ] graph is split by activity level (at least 1/5/100 edits in the past 30 days)
**Notes:**
* We count only the edits an editor makes in a specific namespace. If they also edit in another namespace, those edits are not counted into the number of edits for that namespace.
    • Task
As an editor I want to get a quick explanation of the score and the scoring in order to be able to understand the result.
**Problem:** We should not expect the user of the tool to understand what the score means. Additionally, we need to make it clear that we are judging the quality of the Item and not the quality of the entity described in the Item. This is especially important for Items about people: we are judging the data about the person, not the person. We need to provide a short explanation for this.
**Screenshots/mockups:** TODO
**BDD** GIVEN AND WHEN AND THEN AND
**Acceptance criteria:**
[ ] quick intro text exists for what the score means and what goes into it
    • Task
    **Problem:** @abian tested the tool and found that at least these two Item IDs do not work: Q19434274 Q19424300 The tool doesn't seem able to get a score for them and stays on the first page when clicking "assess Item quality".
    • Task
As an editor I want to see the list of results immediately in order to save clicks for something I always do.
**Problem:** When viewing the results page, the most important actionable information is hidden. We need to fix this.
**Screenshots/mockups:** TODO
**BDD** GIVEN AND WHEN AND THEN AND
**Acceptance criteria:** *
    • Task
    Proper analysis means extracting a list of specifiers from the format strings, and removing taintedness from safe specifiers (e.g. integers). Phan uses PrintfCheckerPlugin for its checks, but it's not meant for external use. For our use case, the best option might be to have astNodeToPrimitive() and dependencies factored out of the plugin.
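A minimal sketch of the specifier-extraction idea, in Python rather than the PHP/Phan internals the task refers to (regex and names are illustrative; positional `%1$s`-style specifiers are not handled):
```python
import re

# printf-style conversion specifiers: flags, width, precision, conversion.
SPECIFIER_RE = re.compile(r"%[-+ 0#]*\d*(?:\.\d+)?([bcdeEfFgGosuxX])")

# Conversions that coerce their argument to a number, so the formatted
# output cannot carry an injection payload; only %s passes strings through.
SAFE_CONVERSIONS = set("bcdeEfFgGouxX")

def tainted_argument_positions(format_string):
    """Return the argument positions whose specifiers keep taintedness."""
    return [
        index
        for index, match in enumerate(SPECIFIER_RE.finditer(format_string))
        if match.group(1) not in SAFE_CONVERSIONS
    ]

print(tainted_argument_positions("id=%d name=%s"))  # [1]
```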
    • Task
As an editor I want to access additional information about the tool.
**Problem:** We need to link to the source code for contributions and to Phabricator for bug reports, potentially others. We should also indicate that it is a tool developed with <3 by WMDE. This could happen in a header or footer.
**Example:**
* https://interaction-timeline.toolforge.org/
* https://query-builder-test.toolforge.org/
**Screenshots/mockups:** TODO
**BDD** GIVEN AND WHEN AND THEN AND
**Acceptance criteria:**
[ ] further links to source code, bug reporting, etc. are available on the tool
    • Task
**Problem:** We want to make the tool available in the usual space and deploy it for use by the community. To do this it will be deployed on Toolforge. The URL should be item-quality-evaluator.toolforge.org.
**Acceptance criteria:**
[ ] tool is accessible via item-quality-evaluator.toolforge.org
[ ] access is shared with the team
[ ] tool is listed on https://hay.toolforge.org/directory/
    • Task
**Problem:** The tool should be clearly identifiable as a Wikidata tool. We should add the Wikidata logo to the start page.
**Screenshots/mockups:** TODO
**Acceptance criteria:**
[ ] WD logo is on first page of tool
**Notes:**
* Logo file is available at https://commons.wikimedia.org/wiki/File:Wikidata-logo-en.svg
    • Task
    **Problem:** When referring to the Wikidata entity type "Item" we should capitalize it consistently for the user to ensure it is not mistaken for the general word "item". **Acceptance criteria:** [ ] UI always capitalizes "Item"
    • Task
When investigating T228771 and T228701 I discovered a lot of issues with how UploadWizard handled its config. In short – it's a mess.
The default settings were defined in `UploadWizard.config.php`, which besides returning the config array also does a bunch of static calls, relies on the global context, and is generally awful. This means that any UW-related code needs the main request context, which in turn prohibits any kind of config reading from the `load.php` endpoint, which is a requirement to get T228701 to work. The config is then accessed through the `UploadWizardConfig` static class, which has some logic for merging the default config with settings specified locally.
Configs for campaigns are handled by `UploadWizardCampaign`, which retrieves the campaign config, merges it with the global config, parses some fields of it, and caches the result. The cache varies by language only, which breaks any `{{GENDER: }}` tags. It is also supposed to handle campaign start and end dates, but the feature is horribly buggy and doesn't work. The main "global" config is never parsed on its own, which is just... wat.
So, I attempted to resolve all that and make it all unit-testable. I ended up with an uhhh... slightly complicated setup:
* `ConfigBase` is an abstract class representing any kind of MediaUploader config. It exposes `getConfig` and `getSetting` methods along with protected utilities for merging config arrays. All `*Config` classes inherit from it.
* `RawConfig` represents the unparsed global config. It only relies on configuration settings and does not require a RequestContext to function. To slightly improve performance, the global raw config is also "cached" in a static variable.
* `RequestConfig` represents the unparsed global config, but in the context of the request. This allows us to modify the config at this stage for the current language. This class is only intended to be used internally by `GlobalParsedConfig` and `CampaignParsedConfig`.
* `ParsedConfig` is an abstract extension of `ConfigBase`. It represents a parsed config, global or for a campaign. It provides its children with some convenience methods for caching parsed configs.
* `GlobalParsedConfig` represents the global parsed config. It automatically handles config parsing and caching, varying by language and the user's grammatical gender. The cache is invalidated on the basis of the hashed value of the raw config.
* `CampaignParsedConfig` represents a campaign's parsed config. It similarly handles parsing and caching. The cache is invalidated on the basis of the last campaign revision timestamp.
Additionally:
* `ConfigFactory` constructs the `GlobalParsedConfig` and `CampaignParsedConfig`.
* `ConfigParser` parses configuration arrays.
* `ConfigParserFactory` constructs the above.
That's 9 classes in total, whoo. I've got this more-or-less implemented and mostly covered by unit tests. I've decided to do the refactor as one huge commit, as MediaUploader is still unreleased anyway, and untangling this one class at a time would take significantly more effort. The only thing left now is introducing the new classes to the rest of the code and testing. (A rough sketch of the hierarchy follows below.)
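For orientation, a rough sketch of the described class relationships, rendered in Python for brevity (the real implementation is PHP; bodies are placeholders, only the structure mirrors the list above):
```python
from abc import ABC

class ConfigBase(ABC):
    """Any kind of MediaUploader config; merge utilities live here."""
    def get_config(self) -> dict: ...
    def get_setting(self, key: str): ...

class RawConfig(ConfigBase):
    """Unparsed global config; needs no RequestContext, cached statically."""

class RequestConfig(ConfigBase):
    """Unparsed global config adjusted for the current request's language;
    internal to the two parsed-config classes below."""

class ParsedConfig(ConfigBase, ABC):
    """A parsed config, global or campaign, with caching helpers."""

class GlobalParsedConfig(ParsedConfig):
    """Cache varies by language and grammatical gender; invalidated by
    the hash of the raw config."""

class CampaignParsedConfig(ParsedConfig):
    """Cache invalidated by the last campaign revision timestamp."""

class ConfigFactory:
    """Constructs GlobalParsedConfig and CampaignParsedConfig."""

class ConfigParser:
    """Parses configuration arrays."""

class ConfigParserFactory:
    """Constructs ConfigParser instances."""
```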
    • Task
Currently both the 2010 wikitext editor and the VE/2017 wikitext editor use core's [[https://www.mediawiki.org/wiki/Upload_dialog | upload dialog]], which lets users upload files in the editor and immediately start using them. It looks like this: {F34126524}
It would be nice if we could substitute this rather simplistic dialog with the full MediaUploader. This would require a few things:
# Ensure the uploader can fit in the window – resolve T274894 first.
# Move all HTML generation from PHP to JS so the form can be dynamically included.
# Resolve any potential issues with dependencies in the special page code. For example, we would probably need a new way of retrieving the config, perhaps through a specialized API endpoint – investigate.
# Look into a reasonable way to replace the upload dialog when MediaUploader is loaded. Ideally, extensions like VisualEditor would not have to know about MediaUploader. Instead, core should present an abstract uploader interface that would be implemented by both us and the built-in upload dialog.
    • Task
Splitting from old T34316:
1. Go to https://fa.wikipedia.org/wiki/کاربر:Reza1615/pdf3 and see how it looks in a browser.
2. Go to https://fa.wikipedia.org/w/index.php?title=کاربر:Reza1615/pdf3&action=edit and look at its source code.
3. Select "Download as PDF" in the sidebar.
4. Compare the PDF result to the HTML. {F34126459}
    • Task
    Is this translation correct? When X is replaced, it becomes: “You don't have permission to Allows to edit every user pages” ---- **URL**: [[https://translatewiki.net/wiki/MediaWiki:Action-editall/tr]]
    • Task
Steps to Reproduce: (see [[ https://sandbox.semantic-mediawiki.org/wiki/SubobjectDisplayTitle | SMW Sandbox]])
- create an anonymous subobject: `{{#subobject: |Display title of=DisplayTitle}}`
- create a named subobject: `{{#subobject: RealTitle |Display title of=DisplayTitle}}`
- link to or query both: `{{#ask: [[-has subobject::{{FULLPAGENAME}}]] |?Display title of }}`
Actual Results: the anonymous subobject creates the link text "DisplayTitle" (as expected); the named subobject creates the link text "DisplayTitle#RealTitle" instead.
Expected Results: both subobjects should create the same link text ("DisplayTitle").
    • Task
I've seen an upload failure while sending a request to https://en.wikipedia.beta.wmflabs.org/w/api.php. I understand transient backend failures are expected on these testing sites, so that's fine. The problem is what I got in the JSON response:
```
{
  "upload": {
    "result": "Warning",
    "warnings": {
      "duplicate": [
        "Test-Image.jpg",
        "Test_image_00g7dsjh.jpg",
        "Test_image_00ldp6it.jpg",
        "Test_image_00mifpcy.jpg",
        // ...
      ],
      "duplicate-archive": "Test_image_r0lfued7.jpg"
    },
    "stasherrors": [
      {
        "message": "uploadstash-exception",  // <--- HERE
        "params": [
          "UploadStashFileException",
          "An unknown error occurred in storage backend \"local-swift-eqiad\"."
        ],
        "code": "uploadstash-exception",
        "type": "error"
      }
    ]
  }
}
```
Note that the value of `.upload.stasherrors[0].message` is exactly the same as `.upload.stasherrors[0].code`, and I'm not sure whether this is intended. I expect the `message` to be something like this:
```
Could not store upload in the stash (UploadStashFileException): "An unknown error occurred in storage backend "local-swift-eqiad".
```
    • Task
I started looking into custom extension content models, and I found that the `JsonContent` core class is not safe to extend. Is that intended? Am I supposed to extend `TextContent` and re-implement the pure JSON part? Or is this just an oversight?
    • Task
    Search is currently broken on Beta Wikimedia Commons: > https://commons.wikimedia.beta.wmflabs.org/w/index.php?search=Test > An error has occurred while searching: We could not complete your search due to a temporary problem. Please try again later. {F34125684} It’s been broken since at least 20:48 UTC today ([failed ACDC browser test run](https://github.com/lucaswerkmeister/ACDC/actions/runs/606283359)); it was still working at 20:47 yesterday ([successful ACDC browser test run](https://github.com/lucaswerkmeister/ACDC/actions/runs/603939308)).
    • Task
    T215470 added a custom copyright footer to Wikimedia Commons which mentions the CC0 nature of structured data. Its [current version](https://gerrit.wikimedia.org/g/mediawiki/extensions/WikimediaMessages/+/4002b48ab9867dadfcc1442f2bcd70dec099ed4d/i18n/wikimedia/en.json#221) reads as follows (emphasis added): > Files are available under licenses specified on their description page. All structured data from the file **and property** namespaces is available under the [Creative Commons CC0 License](https://creativecommons.org/publicdomain/zero/1.0/); all unstructured text is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/3.0/); additional terms may apply. By using this site, you agree to the [Terms of Use](https://foundation.wikimedia.org/wiki/Terms_of_Use) and the [Privacy Policy](https://foundation.wikimedia.org/wiki/Privacy_policy). There is no property namespace on Wikimedia Commons, so I don’t think it makes sense to mention it here. I suggest to remove it.
    • Task
We suddenly discovered that several members were tied to a bot that was never on the mentor list, although its name is on the page (https://ru.wikipedia.org/wiki/Проект:Помощь_начинающим/Наставники). Could it be that **QBA-II-bot** was included in the mentor list because its username was on the page? {F34125612}
    • Task
    This Toolhub message appears on translatewiki: https://translatewiki.net/wiki/Wikimedia:Toolhub-42060f-,_%5C/en It just says `, \`. Its documentation links to https://gerrit.wikimedia.org/r/plugins/gitiles/wikimedia/toolhub/+/refs/heads/main/toolhub/apps/search/schema.py#55 I'm not sure what it is, but it looks like it shouldn't be translatable.
    • Task
According to https://wikitech.wikimedia.org/wiki/Help:Cloud_VPS_Instances#Puppet_Configuration_for_Cloud_VPS_instances, Horizon should show a list of available roles for Puppet; it doesn't appear to do so anymore.
    • Task
This tool creates a **Download** button; when we click on it, we get three options: EPUB, Mobi and PDF. But when I click any of them, it downloads only the main page content, not the subpages. FYI, we have a subpage link on the main page. Check here: https://bn.wikisource.org/wiki/শকুন্তলা_(সিগনেট_প্রেস_সংস্করণ)
    • Task
    Currently the channel info pages aren't showing any watched pages for rc feeds, and nothing happens when you do @rc+ or @rc-. I have tried restarting the bot with @restart.
    • Task
Although AWB 6.1.0.2 SVN 12432 reports that "archive-format" is an invalid citation parameter of {{cite web}}, it is valid (e.g. [[Guy Crescent]]). This parameter is documented in [[Template:Cite web#URL]].
    • Task
Steps to Reproduce:
# Navigate to [[ https://meta.wikimedia.org/wiki/Special:CentralAuth/Hejhul%C3%A1k | Special:CentralAuth/Hejhulák ]] on Meta or [[ https://cs.wikipedia.org/wiki/Speci%C3%A1ln%C3%AD:Centr%C3%A1ln%C3%AD_ov%C4%9B%C5%99en%C3%AD/Hejhul%C3%A1k | Speciální:Centrální ověření/Hejhulák ]] on cswiki.
# Check block information for cs.wikipedia.org.
# Click "28 namespaces" (the "Blocked" column).
Actual Results:
- The [[ https://cs.wikipedia.org/wiki/Special:BlockList/Hejhul%25C3%25A1k | link ]] takes you to **Bad title** - The requested page title contains invalid characters: "`%C3`".
Expected Results:
- The link takes you to the valid page [[ https://cs.wikipedia.org/wiki/Special:BlockList/Hejhulák | Special:BlockList/Hejhulák ]].
    • Task
Steps to Reproduce: Run the bot to add recordings to Wikidata.
Actual Results: https://www.wikidata.org/wiki/Q748#P443 -> the reference URL is wrong: `https://lingualibre.org/wiki//Q442478`
Expected Results: A correct reference URL, such as `https://lingualibre.org/wiki/Q442478`
We'll also have to figure out a way to fix the existing recordings that were added to Wikidata.
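A minimal sketch of the cleanup idea for existing reference URLs (the function name and the batch-fix framing are assumptions, not from the task):
```python
import re
from urllib.parse import urlsplit, urlunsplit

def normalize_reference_url(url: str) -> str:
    """Collapse duplicate slashes in the path, leaving the scheme intact."""
    parts = urlsplit(url)
    clean_path = re.sub(r"/{2,}", "/", parts.path)
    return urlunsplit(parts._replace(path=clean_path))

assert normalize_reference_url("https://lingualibre.org/wiki//Q442478") == \
    "https://lingualibre.org/wiki/Q442478"
```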
    • Task
The website https://www.wikimedia.it/ is currently offline. We are contacting the [[ https://wiki.wikimedia.it/wiki/Fornitori/np | upstream service provider (codename np) ]] to fix the website ASAP and provide an explanation for the outage. Thank you to our @Nemo_bis, who is always the first to detect and report these outages.
    • Task
It seems that no one asked for interlanguage links in T138332, even though they are one of the main reasons to have a Wikidata connection. Please display interlanguage links based on Wikisource sitelinks on multilingual Wikisource (i.e. the English interlanguage link on multilingual Wikisource should point to `enwikisource`, the German interlanguage link should point to `dewikisource`, and so on).
    • Task
I don't know why this "error" happens, but it does. When you record words and start uploading them, you often have two or three recordings that fail due to the "badtoken" error. Since this error does not happen again when you click on the "upload failed recordings" button, we could easily avoid annoying the user. What we should do is automatically retry uploading a recording (once or maybe twice, to prevent infinite loops) if the upload failed with a "badtoken" error.
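A minimal sketch of the retry idea, assuming a hypothetical `upload(recording)` callable; the exception name is invented for illustration:
```python
class BadTokenError(Exception):
    """Stand-in for the API's 'badtoken' error (name is illustrative)."""

def upload_with_retry(recording, upload, max_retries=2):
    """Retry an upload that fails with 'badtoken', a bounded number of times."""
    for attempt in range(max_retries + 1):
        try:
            return upload(recording)
        except BadTokenError:
            # The error does not recur on a manual retry, so an automatic
            # retry should succeed; give up after max_retries to avoid loops.
            if attempt == max_retries:
                raise
```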
    • Task
Once T275952 is resolved, translating wikispwiki and its components to English will be pending. @Jamez42, @Hasley or @Hispano76 can take on this task. T275954 depends on this task.
    • Task
Once T275952 is resolved, translating wikispwiki and its components to German will be pending. After T275955 is solved, @DerFussi can take this task.
    • Task
Following the growth in WikiSP membership, the workspace needs to be adapted for a multi-language environment. This task serves as tracking for the other tasks that must be carried out.
    • Task
With the arrival of new members speaking different languages, it makes sense to prepare the wiki to be multilingual. Meta-Wiki is being used temporarily, but the idea is to use our own resources if possible. https://www.mediawiki.org/wiki/MediaWiki_Language_Extension_Bundle
    • Task
    [[ https://en.wikipedia.org/w/index.php?oldid=772743896#Future_of_magic_links | RfC to replace the links prior to disabling ]]
    • Task
T257391#6865400 makes it possible for the text editing toolbar to be offered within the Reply and New Discussion Tools' `source` modes. This task is about introducing/exposing a setting that enables people to turn the text editing toolbar within the Reply and New Discussion Tools' `source` modes on/off.
=== Requirements
- When the setting is "enabled," people should see, within the Reply and New Discussion Tools' `source` modes, the same text editing tools/toolbar available in their `visual` modes.
- When the setting is "disabled," people should NOT see any text editing tools/toolbar within the Reply and New Discussion Tools' `source` modes.
- By default, the setting for enabling/disabling the text editing tools/toolbar within the Reply and New Discussion Tools' `source` modes should be configured as follows:
--- [ ] #TODO; see `Open questions --> "2."`
=== Open questions
- [ ] 1. Where should people go to turn the text editing toolbar within #discussiontools's `source` mode on/off?
- [ ] 2. How should this setting be configured by default?
-- //Note to self: revisit the logic we implemented in T250523.//
    • Task
This task is about deciding on and implementing the logic for how topic subscription notifications will get "bundled"/"grouped" within Echo's notice drawer/tray. [i]
//Thank you to @Pelagic for raising [this issue](https://www.mediawiki.org/w/index.php?title=Topic:W3c084ihivgkm9vt&topic_showPostId=w3ln4r29qgs2yy2w#flow-post-w3ln4r29qgs2yy2w).//
=== Stories
//In progress...//
As someone who is subscribed to a discussion in which many new comments are being posted within a "short" time period:
1. I want to be able to see notifications for those comments en masse, so they do not overwhelm other notifications and cause me to miss notifications about other activity.
2. I want the notifications for these comments to be grouped in such a way that I feel confident taking an action (e.g. marking them as read, visiting the discussion to which these comments have been posted) on that bundle.
3. I want to be able to triage those notifications (e.g. mark them as read) en masse, so that I don't need to spend unnecessary time and effort taking the same action, with the same impact, multiple times.
=== Requirements
//Requirements will be populated once the `Open questions` below have been answered.//
=== Open questions
- [ ] 1. What are the criteria for when notifications are bundled together? In other words, what does the bundle string look like?
- [ ] 2. What text is shown for bundled notifications? This is usually different from a normal notification, e.g. "3 people thanked you for your edit on XYZ" instead of "Ed thanked you for your edit on XYZ", and usually focuses on what the bundled notifications have in common.
- [ ] 3. What text is shown for the individual notifications in a bundle? This usually focuses on what is different, e.g. "Ed thanked you", "Jess thanked you".
- [ ] 4. When the user clicks the bundle, where should they go? Clicking the "Expand" link expands the bundle, but clicking elsewhere usually takes you somewhere, typically to the thing that the notifications have in common. For example, clicking a single notification that says "Ed posted on your user talk page" would take you to Ed's post, but clicking a bundle that says "3 people posted on your user talk page" would just take you to your user talk page, not to a specific post/section.
//Thank you to @Catrope for sharing the above questions with the #editing-team and helping us to understand, more broadly, how notification grouping/bundling works in Echo.//
=== Done
- [ ] All `Open questions` have been answered
- [ ] The `Requirements` have been implemented
---
|i. Echo's notice drawer/tray
|---
|{F34124547}
|//source: [File:Screenshot_of_Echo_notification_extension](https://www.mediawiki.org/wiki/Extension:Echo#/media/File:Screenshot_of_Echo_notification_extension.png)//
    • Task
    Our top error message at time of writing. Looks easy to fix - the problem appears to be that the module makes use of mw.cookie but does not import the appropriate dependency. https://logstash.wikimedia.org/app/dashboards#/doc/logstash-*/logstash-2021.02.27?id=04jo4HcBGiM4niWI0Bg2
    • Task
    When all revisions of a page A are merged into another page B using MergeHistory, page A will become a redirect to B, but the redirect will not be properly recorded in the pagelinks table, and so one would have to do a null edit on page A. Instead, MergeHistory should actually delete all page links sourced from A, and then insert a single new link from A to B.
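A minimal sketch of the intended pagelinks update, written against a hypothetical database handle (`dbw` and its methods are stand-ins; field names follow MediaWiki's pagelinks schema, but this is an illustration, not MergeHistory's actual code):
```python
def fix_pagelinks_after_merge(dbw, page_a_id, page_b_namespace, page_b_title):
    """After merging A into B, make A's pagelinks rows reflect the redirect."""
    # Drop every link previously recorded as originating from page A...
    dbw.delete("pagelinks", {"pl_from": page_a_id})
    # ...and record the single link created by the new redirect A -> B.
    dbw.insert("pagelinks", {
        "pl_from": page_a_id,
        "pl_namespace": page_b_namespace,
        "pl_title": page_b_title,
    })
```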
    • Task
There's a weird ref for the AutoCreateCategoryPages extension https://gerrit.wikimedia.org/g/mediawiki/extensions/AutoCreateCategoryPages/+/refs/master that's almost certainly a mistake. Pushing with --delete or pushing an empty ref with --force both fail:
```name=problem,lines=10,counterexample
badgit (main #) (•́‸•̀)❥ git push origin --delete refs/master
2021-02-26 17:29:58,763 sshecret SSH_AUTH_SOCK=/run/user/1000/194f906110c5681bc2ac8c5c340ca5dd.sock
Host key fingerprint is SHA256:j7HQoQ6fIuEgDHjONjI2CZ+2Iwxqgo2Ur5LbPqBgxOU
+---[RSA 1024]----+
|                 |
|. .              |
|= + .            |
|+BoE o .         |
|BBX . o S        |
|@@o+ + o =       |
|X=*.. + o .      |
|*ooo .           |
|o+o.             |
+----[SHA256]-----+
remote: Processing changes: done
remote: error: internal error
To ssh://gerrit.wikimedia.org:29418/mediawiki/extensions/AutoCreateCategoryPages
 ! [remote rejected] refs/master (internal error)
error: failed to push some refs to 'ssh://gerrit.wikimedia.org:29418/mediawiki/extensions/AutoCreateCategoryPages'
```
Stack trace complains about noops:
```name=stacktrace,lines=10
[2021-02-27T00:34:47.343+0000] [SSH git-receive-pack /mediawiki/extensions/AutoCreateCategoryPages (thcipriani)] WARN com.google.gerrit.server.git.MultiProgressMonitor : MultiProgressMonitor worker did not call end() before returning
[2021-02-27T00:34:47.344+0000] [SSH git-receive-pack /mediawiki/extensions/AutoCreateCategoryPages (thcipriani)] ERROR com.google.gerrit.server.git.receive.AsyncReceiveCommits : error while processing push
java.util.concurrent.ExecutionException: com.google.gerrit.exceptions.StorageException: com.google.gerrit.server.update.UpdateException: java.lang.IllegalArgumentException: ref update is a no-op: DELETE: 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 refs/master
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at com.google.gerrit.server.git.receive.AsyncReceiveCommits.preReceive(AsyncReceiveCommits.java:387)
    at com.google.gerrit.server.git.receive.AsyncReceiveCommits.lambda$asHook$1(AsyncReceiveCommits.java:332)
    at org.eclipse.jgit.transport.ReceivePack.service(ReceivePack.java:2206)
    at org.eclipse.jgit.transport.ReceivePack.receive(ReceivePack.java:2120)
    at com.google.gerrit.sshd.commands.Receive.runImpl(Receive.java:98)
    at com.google.gerrit.sshd.AbstractGitCommand.service(AbstractGitCommand.java:108)
    at com.google.gerrit.sshd.AbstractGitCommand.access$000(AbstractGitCommand.java:32)
    at com.google.gerrit.sshd.AbstractGitCommand$1.run(AbstractGitCommand.java:73)
    at com.google.gerrit.sshd.BaseCommand$TaskThunk.run(BaseCommand.java:488)
    at com.google.gerrit.server.logging.LoggingContextAwareRunnable.run(LoggingContextAwareRunnable.java:110)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:610)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.google.gerrit.exceptions.StorageException: com.google.gerrit.server.update.UpdateException: java.lang.IllegalArgumentException: ref update is a no-op: DELETE: 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 refs/master
    at com.google.gerrit.server.git.receive.ReceiveCommits.handleRegularCommands(ReceiveCommits.java:734)
    at com.google.gerrit.server.git.receive.ReceiveCommits.processCommandsUnsafe(ReceiveCommits.java:647)
    at com.google.gerrit.server.git.receive.ReceiveCommits.processCommands(ReceiveCommits.java:598)
    at com.google.gerrit.server.git.receive.AsyncReceiveCommits.lambda$preReceive$3(AsyncReceiveCommits.java:370)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at com.google.gerrit.server.util.RequestScopePropagator.lambda$cleanup$1(RequestScopePropagator.java:182)
    at com.google.gerrit.server.util.RequestScopePropagator.lambda$context$0(RequestScopePropagator.java:170)
    at com.google.gerrit.server.util.ThreadLocalRequestScopePropagator.lambda$wrapImpl$0(ThreadLocalRequestScopePropagator.java:45)
    at com.google.gerrit.server.util.RequestScopePropagator$1.call(RequestScopePropagator.java:85)
    at com.google.gerrit.server.util.RequestScopePropagator$2.run(RequestScopePropagator.java:116)
    ... 9 more
Caused by: com.google.gerrit.server.update.UpdateException: java.lang.IllegalArgumentException: ref update is a no-op: DELETE: 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 refs/master
    at com.google.gerrit.server.update.BatchUpdate.executeUpdateRepo(BatchUpdate.java:515)
    at com.google.gerrit.server.update.BatchUpdate.execute(BatchUpdate.java:138)
    at com.google.gerrit.server.update.BatchUpdate.execute(BatchUpdate.java:386)
    at com.google.gerrit.server.update.BatchUpdate.execute(BatchUpdate.java:390)
    at com.google.gerrit.server.git.receive.ReceiveCommits.handleRegularCommands(ReceiveCommits.java:732)
    ... 19 more
Caused by: java.lang.IllegalArgumentException: ref update is a no-op: DELETE: 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 refs/master
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)
    at com.google.gerrit.server.update.ChainedReceiveCommands.add(ChainedReceiveCommands.java:66)
    at com.google.gerrit.server.update.BatchUpdate$RepoContextImpl.addRefUpdate(BatchUpdate.java:257)
    at com.google.gerrit.server.git.receive.ReceiveCommits$UpdateOneRefOp.updateRepo(ReceiveCommits.java:3071)
    at com.google.gerrit.server.update.BatchUpdate.executeUpdateRepo(BatchUpdate.java:502)
    ... 23 more
```
The message seems to be saying that this ref doesn't exist, but that's not the case on disk:
```
gerrit2@gerrit1001:/srv/gerrit/git/mediawiki/extensions/AutoCreateCategoryPages.git$ git for-each-ref | grep refs/master
2a5f5e7ce5292d9d6d033311a025a8be3f52b069 commit refs/master
```
I can delete the ref on disk, but it's probably worth filing an upstream task for this, too (this is a note for myself to do this after this weekend).
    • Task
As they are currently implemented, topic subscriptions (T263820) enable people to subscribe to all of the new activity that happens within an `H2`-level section. This task is about enabling people to have more control over the new activity they are notified about by enabling them to subscribe to specific subsections. [i]
=== Open questions
- [ ] 1. If you subscribe to a section, should a new comment in a subsection of that section trigger a notification? If yes, this makes it impossible to get notified about comments only in the first chunk of the section (its content before the first subsection).
- [ ] 2. What happens if you reply in a subsection (`=== ===`) with the Reply Tool and "Subscribe to section" is checked: do you subscribe to the subsection or the parent section?
- [ ] 3. In the same situation, what if you're subscribed to a section and want to unsubscribe? If you answered "subsection" to the previous question, you won't be able to do it from the comment form.
- [ ] 4. And if you want to subscribe to the subsection instead, there would be a duplication which may seem unnecessary. Also, if you're subscribed both to a section and its subsection and then unsubscribe from the subsection, the user should get some alert that they still watch the subsection via the section. Probably the other way around too.
//Thank you to @Jack_who_built_the_house, who inspired us to file this task and shared the questions above in the comment they posted on mediawiki.org here: https://w.wiki/32qu.//
---
i. https://en.wikipedia.org/wiki/Help:Section#Subsections
    • Task
A level of protection that allows editing the metadata of an entity (name, description, etc.), but not the actual values. This functionality might also be useful for Wikidata.
    • Task
In this phase we will allow the use of generic types. Phase completion condition: head on a list uses generics for proper validation, i.e. if(head([true]), false, true) validates but if(head(["text"]), false, true) does not. (This condition might be simplified.)
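A minimal sketch of the kind of generic check the completion condition describes, with names (`head_type`, `check_if`) invented purely for illustration:
```python
def head_type(list_type: str) -> str:
    """head : List[T] -> T, so the result type is the element type."""
    assert list_type.startswith("List[") and list_type.endswith("]")
    return list_type[len("List["):-1]

def check_if(condition_type: str) -> None:
    """if(condition, then, else) requires condition : Boolean."""
    if condition_type != "Boolean":
        raise TypeError(f"if() condition must be Boolean, got {condition_type}")

# if(head([true]), false, true) validates: head(List[Boolean]) is Boolean.
check_if(head_type("List[Boolean]"))
# if(head(["text"]), false, true) must not: head(List[String]) is String.
try:
    check_if(head_type("List[String]"))
except TypeError as error:
    print(error)  # if() condition must be Boolean, got String
```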
    • Task
In Wikifunctions there will be different kinds of edits, with different effects:
- adding a label (either top level or on a key or argument)
- changing a label (which, depending on the way we implement function calls from client wikis, might or might not be VERY important)
- structural changes (like changing an implementation)
- changing the documentation
We should be able to route the different changes and check them against different user rights.
    • Task
**Background**
The API the #platform_engineering team is building will initially only pull in articles recommended by Miriam's algorithm. Roughly after the first build of the API, an update will be made to include MediaSearch. The Android team's MVP needs to filter out certain kinds of articles (disambiguation pages, years, and lists). This filtering will be handled upstream by the API for all articles except the MediaSearch results. For this reason, Android will need to exclude MediaSearch results when the PET team updates the API. In future versions of the API, there are going to be filters that make it optional to include/exclude those types of articles for both algorithms.
**Purpose of this ticket**
This ticket is to initiate turning on filtering in Android when the API is updated to include MediaSearch, so we can continue to exclude disambiguation pages, years and lists.
    • Task
    Comments signed with the {{Undated}} template don't get reply links. Example: https://en.wikipedia.org/wiki/Talk:2020–21_UEFA_Nations_League?dtenable=1 {F34124386} The same issue could affect other situations where the timestamp has some formatting applied.
    • Task
It takes LibUp about 20 hours to go through all the repos, which gives it about 4 hours of downtime before a new run starts. The main problem is that if a run goes slowly (because, say, it has upgrades to make!), then more jobs just get queued, and it ends up with a backlog it probably won't ever be able to clear until I do it manually. Now that most of the branch repos are up to date, I suggest that we only queue non-master jobs on days 3 (Wed) & 6 (Sat) of the week. This still gives us decent security coverage but cuts down on the amount of processing time.
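A minimal sketch of the proposed scheduling check (Python's `isoweekday()` counts Monday as 1, so Wednesday is 3 and Saturday is 6, matching the task; the function name is invented):
```python
from datetime import date
from typing import Optional

NON_MASTER_DAYS = {3, 6}  # ISO weekdays: Wednesday and Saturday

def should_queue(branch: str, today: Optional[date] = None) -> bool:
    """Queue master jobs every day; non-master jobs only on Wed/Sat."""
    today = today or date.today()
    return branch == "master" or today.isoweekday() in NON_MASTER_DAYS

print(should_queue("REL1_35", date(2021, 3, 3)))  # a Wednesday -> True
```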
    • Task
Echo renders OOUI icons at 30x30, which is 1.5 scale. This is usually fine, as the icons are SVGs, but in email it deliberately uses rasterized versions (presumably for better email client support), which results in blurriness. The icons should be scaled to 30x30 before rasterizing. Current: {F34124349,size=full} Expected: {F34124353,size=full}
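A minimal sketch of rasterizing at the target size, using cairosvg as a stand-in for whatever rasterizer Echo actually uses (file names are hypothetical):
```python
import cairosvg  # assumption: any SVG rasterizer with an output-size option works

# Rasterize directly at the 30x30 display size (1.5x of the 20px icon)
# instead of scaling a smaller raster up, which causes the blurriness.
cairosvg.svg2png(
    url="bell.svg",          # hypothetical icon file
    write_to="bell-30.png",
    output_width=30,
    output_height=30,
)
```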
    • Task
Due to T275931, MediaWiki message delivery accounts aren't showing up on https://meta.wikimedia.org/wiki/Special:CentralAuth/MediaWiki_message_delivery. The last account attachment was in 2016, whereas we've created tons of new wikis since then. Not having its accounts attached prevents local projects from getting #globaluserpage descriptions on those system users too. Thank you.
    • Task
In T240640, we decided that comments with multiple signatures should only get one [reply] link at the end. It seems like this is not working perfectly, and some comments get duplicate [reply] links:
* https://en.wikipedia.org/w/index.php?title=Wikipedia:Requests_for_comment/User_names&oldid=1002108637&dtenable=1 {F34124337}
* https://en.wikipedia.org/w/index.php?title=Wikipedia:Articles_for_deletion/Equal_Education&oldid=1009060583&dtenable=1 {F34124339}
(I actually found these pages in the error logs related to T275066. Apparently the second [reply] link is generated for a comment that consists of nothing at all, which causes issues elsewhere.)
    • Task
After re-testing {T266076}, the following new issues have been found. Testing was done in Saucelabs - IE11, Windows 10, screen resolution 1440x900.
(1) Special:MediaSearch doesn't display a grid view. The vertical scrolling is present. {F34124319}
(2) The filter drop-down menus are persistent. The steps to reproduce the issue:
- on Special:MediaSearch, click somewhere around the filters area
- click on one filter and then on another filter
- several filter drop-downs will be displayed simultaneously
|{F34124323}|{F34124328}
//**Note:**// This issue is not new - here is a screenshot that was taken before the License filters were added. {F34124341}
    • Task
When a small graph is drawn (for example, several marks laid out horizontally), a spinning loader is visible at the edges. `.mw-graph` has `background-image: url(/w/extensions/Graph/includes/ajax-loader.gif?b3af8);` and `border: 1px solid transparent;`. When there is something short on the graph and #tag:graph has a "height" of less than 47, the loader.gif shows through the transparent border, and the border flashes with the gif animation.
{F34124306} one mark, with the spinner flashing inside the border around the mark
{F34124330} "height":40, with the top and bottom of the spinner remaining
{F34124335} the spinner itself
This is not visible in Special:GraphSandbox, since there is no `.mw-graph`, but it is visible on pages and in Special:ExpandTemplates. The spinner was definitely not visible in 2018. `.mw-graph { background-clip: padding-box; }` (or `content-box`) can fix the border, or the border can be replaced with margin/padding.
    • Task
Tested on 6.8.0 (1795). Reported via TestFlight screenshots as well.
Steps:
1. Navigate to an article without an article description (Example: Sophrosyne)
2. Add a description and select publish
Expected Result: Description is published
Actual Result: The description is not published and an error appears. The updated description does not appear after a refresh.
Frequency - 3.3
{F34124303}
    • Task
    I'd like to request a user project, as I have quite a few tasks relating to tools and maintenance that need tracking, and have various dependencies. Thanks in advance.
    • Task
This task will track the #decommission of server mwmaint2001.codfw.wmnet. With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.
mwmaint2001.codfw.wmnet
**Steps for service owner:**
[] - all system services confirmed offline from production use
[] - set all icinga checks to maint mode/disabled while reclaim/decommission takes place (likely done by script)
[] - remove system from all lvs/pybal active configuration
[] - any service group puppet/hiera/dsh config removed
[] - remove site.pp, replace with role(spare::system); recommended to ensure services are offline, but not 100% required as long as the decom script is IMMEDIATELY run below
[] - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and run homer.
[] - remove all remaining puppet references and all host entries in the puppet repo
[] - reassign task from service owner to DC ops team member depending on site of server
**End service owner steps / Begin DC-Ops team steps:**
[] - system disks removed (by onsite)
[] - determine system age; under 5 years are reclaimed to spare, over 5 years are decommissioned
[] - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
[] - IF DECOM: mgmt dns entries removed
[] - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
    • Task
    As noted in T260095 we want to shift the mirror we are referencing for mariadb.
    • Task
    From OTRS - https://ticket.wikimedia.org/otrs/index.pl?Action=AgentTicketZoom;TicketID=11713304 >I was reading an article in the iOS app and encountered a couple of small issues scattered throughout. However, I found I was unable to edit the entire article at once in the app; instead I could only edit individual sections one at a time. I didn't see this limitation mentioned in the Wikimedia Apps/iOS FAQ or Wikipedia:Editing on mobile devices articles, so I assume this is a bug. If this is not a bug, can this email be considered a feature request? Or, if it is a purposeful limitation, can that be stated?
    • Task
From OTRS - https://ticket.wikimedia.org/otrs/index.pl?Action=AgentTicketZoom;TicketID=11718334
Tested on 6.8.0 (1795)
Steps:
1. Search for the term Ortho Phthalate (redirects to Phthalate)
2. Tap on the Phthalate article
Expected result: Article loads
Actual result: Error occurs
Note - I'm not sure if there is anything special/different about this article. I tried other redirect articles without incident.
    • Task
From OTRS - https://ticket.wikimedia.org/otrs/index.pl?Action=AgentTicketZoom;TicketID=11718208
Tested on 6.8.0 (1795)
Steps to Reproduce:
1. Navigate to an article and tap an edit pencil
2. Tap the Find/Replace icon
3. Type in a search term
4. Tap into the article body after finding the search term
Expected result: The cursor is placed where the user taps
Actual Result: The cursor jumps to the end of the section
Frequency - 5/5
    • Task
    From OTRS - https://ticket.wikimedia.org/otrs/index.pl?Action=AgentTicketZoom;TicketID=11714633 > Love the app. Would it be possible to have an option for an auto daytime preference of sepia (instead of light) during the day and black at night? Just slightly more granular as the “matches” option doesn’t use the sepia tones. Thank you!
    • Task
    About half the time I try to ack something in icinga I get a permission denied error. That is because for some reason I have two icinga accounts, one is "Andrew Bogott" and one is "andrew bogott". "Andrew Bogott" is the account name I use on every other wmf website but that account does not have the access rights to do anything in icinga. "andrew bogott" has the rights I need on icinga but I never remember to log in that way. Also, icinga sessions are long-lived and icinga doesn't present a 'log out' button so once I've logged in as Andrew Bogott I have to live with it for weeks or do some kind of fancy thing to clear my cookies. I know I am not the only person suffering from this issue and it is extremely annoying and disruptive, causing many mistaken pages and delayed acks.
    • Task
    The it.wiki designer @Bruce_The_Deus, together with the help of @Galessandroni, produced a new amazing logo for the [[ https://meta.wikimedia.org/wiki/Wikimedia_Italia/LimeSurvey | WMI-LimeSurvey ]] tool, after a challenge started in the [[ https://it.wikipedia.org/wiki/Progetto:Laboratorio_grafico | it.wiki graphic laboratory ]] ([[ https://it.wikipedia.org/wiki/Special:PermaLink/118913153#Nuovo_logo_per_una_"wiki_LimeSurvey" | permalink ]]). This task is about applying this logo in WMI-LimeSurvey: {F34124153} Bonus-point: Generate an `SVG` and adopt that in the Meta-wiki page of the project, here: https://meta.wikimedia.org/wiki/Wikimedia_Italia/LimeSurvey Actually I really don't know how to do it but I have to do it ASAP, because this logo is 100% //adorable// and it really deserves to be online ASAP. asd
    • Task
Normal spaces (U+0020) before `! ? : ; »` and after `«` are automatically converted by the parser into non-breaking spaces (U+00A0) on French-speaking wikis. However, since a few hours ago, non-breaking spaces seem to be missing or misplaced when there are `<span></span>` tags.
For example, on https://fr.wikipedia.org/w/index.php?title=Kiev&oldid=180348745
```
en <a href="/wiki/Ukrainien" title="Ukrainien">ukrainien</a> :&nbsp;<span class="lang-uk" lang="uk">Київ</span>, <i>Kyiv</i>
```
`&nbsp;` should be before the semi-colon, not after. On https://fr.wikipedia.org/w/index.php?title=Visual_novel&oldid=179960076
```
(abréviation de «&nbsp;<span class="lang-en" lang="en"><i>novel</i></span> »), qui consistent essentiellement en une narration et comportent très peu d'éléments interactifs et les jeux dits «&nbsp;AVG&nbsp;» ou «&nbsp;ADV&nbsp;» (respectivement «&nbsp;<span class="lang-en" lang="en"><i>adventure game</i></span> » et «&nbsp;<span class="lang-en" lang="en"><i>adventure</i></span> »)
```
`&nbsp;` is missing before the closing guillemet (`»`).
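A minimal sketch of the spacing rule itself, to make the expected behaviour concrete (this is an illustration, not the parser's actual implementation, which also has to skip HTML tags such as the spans above):
```python
import re

NBSP = "\u00a0"

def french_spacing(text: str) -> str:
    """Replace normal spaces around French punctuation with U+00A0."""
    # Space before ! ? : ; » becomes non-breaking...
    text = re.sub(r" ([!?:;»])", NBSP + r"\1", text)
    # ...and so does the space after an opening guillemet «.
    return re.sub(r"« ", "«" + NBSP, text)

print(french_spacing("« novel »") == f"«{NBSP}novel{NBSP}»")  # True
```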
    • Task
As a user of the Android app on mobile, I would like to easily access archived Talk Pages.
**Source:** Android Email
    • Task
Follow-up to T271353. There is a distribution file, `commonjs2/wvui-icons.commonjs2.js`, that could be used if it were provided in core. This was discussed on https://gerrit.wikimedia.org/r/c/mediawiki/core/+/641052, see https://gerrit.wikimedia.org/r/c/mediawiki/core/+/641052/19/resources/src/wvui/wvui.js
Use case: avoid needing to copy the icon code from the wvui library to places that want to use it.
* #sdaw-mediasearch has a copy of the wvui icon component (which should probably be addressed by just using the one available from core...) and has documentation saying "To use a new icon, find the icon in `src/themes/icons.ts` in the WVUI library, copy the icon data, and paste it into `lib/icons.js` in this extension." That lib file currently has 20 copied icons, and has documentation saying "This file can be removed when Media Search uses the Icon component from the WVUI library, where icons are included." But the Icon component does *not* include the actual icon codes; those are separate.
* #WikibaseMediaInfo has a similar copy of the wvui icon component, and the same 20 icons copied from wvui.
* #Wikilambda has another copy of the icon component and the same 20 icons, copied from #wikibasemediainfo.
* #mediawiki-extensions-globalwatchlist, which has been updated to use the wvui components from ResourceLoader rather than needing to make a copy, still has to copy the 7 icons it needs.
It seems that, of the first three extensions listed, #sdaw-mediasearch copied the wvui component and the icons it needed from wvui directly, #wikibasemediainfo copied them from #sdaw-mediasearch, and #wikilambda copied them from #wikibasemediainfo. We should avoid needing to copy these all over the place.
Of note, the distribution lib file currently in core, `lib/wvui/wvui.commonjs2.js`, **already includes all of these icons**, but they are just not exported, so most are defined and never used. See ["CONCATENATED MODULE: ./src/themes/icons.ts"](https://gerrit.wikimedia.org/g/mediawiki/core/+/db16fcb6acf9f981e730dd6de6af12aa12e36223/resources/lib/wvui/wvui.commonjs2.js#594)
    • Task
Related to T118062. On simple.wikipedia.org and sco.wikipedia.org: create a citation with {{efn-ua}}, such as {{efn-ua|Message}}, and reference it in the notes with {{notelist-ua}}. The notelist will correctly use the letter A, but on simple.wikipedia.org and sco.wikipedia.org the reference in the body will say [upper-alpha 1] instead of [A]. This syntax works correctly on en.wikipedia.org, where the notelist has A and the citation shows [A]. Example pages include https://simple.wikipedia.org/wiki/International_auxiliary_language and https://sco.wikipedia.org/wiki/Seleucus_I_Nicator. I've duplicated this on Windows 10 using Edge, Firefox, Chrome and Opera. I've also duplicated it on my iPhone SE, iOS 14.4, in Safari.
    • Task
==== Current behavior
Currently SecurePoll allows election admins to make changes to an election they are admin for by modifying the URL (/edit/<election-id>). They are able to save changes to an election by doing this after the election ends, including changing the election start date. This is problematic because it can falsify data for scrutineers who analyze election activity. The expectation is that no changes can be made to an election after the end of an election.
==== Expected behavior
After the election ends, the election edit page becomes inaccessible and no changes can be made to the election settings.
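A minimal sketch of the expected guard (SecurePoll itself is PHP; `election.end_date` and the function name are hypothetical):
```python
from datetime import datetime, timezone

def assert_election_editable(election) -> None:
    """Refuse edits once the election's end date has passed."""
    if election.end_date <= datetime.now(timezone.utc):
        # Matches the expected behaviour above: after the election ends,
        # the edit page becomes inaccessible and settings are frozen.
        raise PermissionError("This election has ended and can no longer be edited.")
```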
    • Task
* beta cswiki search for `hastemplate:Wikifikovat`: https://cs.wikipedia.beta.wmflabs.org/w/index.php?search=hastemplate%3AWikifikovat - 34 results
* beta cswiki search for `hastemplate:"Wikifikovat"`: https://cs.wikipedia.beta.wmflabs.org/w/index.php?search=hastemplate%3A%22Wikifikovat%22 - 0 results
HasTemplateFeature takes a slightly different code path when the value is quoted. The intent is to do a case-sensitive match for the template name, but it seems like there's more going on - the [[https://cs.wikipedia.beta.wmflabs.org/wiki/%C5%A0ablona:Wikifikovat|Wikifikovat]] template exists, matches the casing, and the way it's invoked in e.g. https://cs.wikipedia.beta.wmflabs.org/w/index.php?title=Energetick%C3%A1_%C3%BA%C4%8Dinnost&oldid=1687&action=raw also matches the casing.
    • Task
In {T250235}, we developed the ability for questions coming from the help panel to go to mentors instead of central help desks. In that task and in {T272753}, we validated in our pilot wikis that the results from this change are acceptable. Therefore, we want this configuration to be the default for all wikis with the Growth features. We would also like to retain the ability for this to be configurable, so that in the future some wikis could elect to send questions to help desks instead.
This may involve these lines of work:
* Make this an option that wikis can configure via T274031.
* Convert all Growth wikis to the new configuration.
* Use this new configuration as the default for new wikis going forward.
* Announce this change in the next newsletter.
    • Task
    We are still months away from a production deployment, but we should start working on the basic scaffolding for end user help documentation. We can and should put short tips into the UI as translatable strings, but more full featured help documentation will be easier to maintain on metawiki. Using the wiki will allow Extension:Translate usage for making the documentation more accessible. It will also allow the user community to contribute to the documentation much more easily than in-app help storage would.
    • Task
    If the cursor is to the right of the trigger character, it should probably not re-insert it. >>! In T257391#6835475, @Earwig wrote: > Cool! One (likely) bug: if I press the reply button multiple times, it adds multiple `@`s.
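A minimal sketch of the check, assuming a plain-text cursor model (names invented for illustration; the real tool works on a rich-text surface):
```python
def insert_mention_trigger(text: str, cursor: int, trigger: str = "@") -> str:
    """Insert the trigger character unless it already precedes the cursor."""
    if text[:cursor].endswith(trigger):
        return text  # cursor is right after an existing trigger; do nothing
    return text[:cursor] + trigger + text[cursor:]

# Pressing the mention button twice should not produce "@@".
text = insert_mention_trigger("Hello ", 6)
text = insert_mention_trigger(text, 7)
print(text)  # Hello @
```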
    • Task
T274170 introduced new hardware as mwmaint2002. T267607 is about upgrading mwmaint servers to buster; we just upgraded mwmaint2001 to buster a couple of days ago. But mwmaint2002 has the "insetup" role now, and we need to actually start using it and replace the old hardware. This ticket is for that part, where we take it over from dcops after racking.
    • Task
Presently, wikifunctions.org is registered with Google and redirects to the Abstract Wikipedia page on Meta. This task is for migrating it to Wikimedia. We can split this into separate tickets if necessary; here are some items:
[ ] Transfer domain
[ ] Set up DNS ( https://gerrit.wikimedia.org/r/plugins/gitiles/operations/dns/+/refs/heads/master/templates/ ) and a temporary redirect
[ ] After the transfer is done, establish Google Webmaster Central domain ownership at https://www.google.com/webmasters/verification/details?hl=en&domain=wikidata.org
[ ] Request TLS cert and CAA records
[ ] Establish owner delegates
[ ] Make a landing page with good keyword targeting and an email sign-up form on the web root while we build the wiki and prepare for launch?
    • Task
Previews can at times seem to... flicker? Mostly it only really stands out when actually saving them and the whole theme goes //away//, though. It seems we need to make sure applying and saving the theme does so in a way that tells MediaWiki/the browser to refresh the CSS?
    • Task
Bug reported via Chorewheel through email:
**Context**
When you go from the main page to any article, a message appears: "We are sorry, but there was an error in the Wikipedia application, as a result of which it was closed." Application version 2.7.50341-r-2021-02-02. My phone is a Xiaomi Redmi 4, Android 6.0.1. I repeat: for many years the application worked very well on this particular phone. The user did try reinstalling the app.
Steps to Reproduce:
- Go to an article from the main page on a Xiaomi Redmi 4, Android 6.0.1.
Actual Results: the error message "We are sorry, but there was an error in the Wikipedia application, as a result of which it was closed" appears.
Expected Results: the article is displayed.
Once the bug is fixed, email Анатолий Шлешкин.
    • Task
I have recently encountered this strange phenomenon: whenever I am looking at search results (Special:Search, but only the results page), I cannot access the notifications by clicking/tapping on the bell icon. Nothing at all happens, no matter how often I try, with or without refreshing the page. It is not a general Minerva problem, as it only happens on the mobile domain, and it works fine on the Special:Search page before entering a search term. I have tried it both in Chrome and in Safari, with the same behaviour. I also tried it on dewiki and enwiki, with and without advanced mobile contributions; always the same result.
    • Task
As an end user, when searching MediaSearch, I'd like to know whether there are any search results for my query in the tabs not currently displayed, so that I know whether to click through to see those results. From User:Mike Peel on the [[ https://commons.wikimedia.org/w/index.php?title=Commons_talk:Structured_data/Media_search | MediaSearch discussion page ]]:
> could you put a number next to the links to the other options, to clearly demonstrate that they have relevant results as well.

Design: TBD
Acceptance Criteria: TBD
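A rough sketch of how per-tab counts could be fetched, assuming the standard search API's `totalhits` is an acceptable approximation; the tab-to-`filetype:` mapping below is a simplified assumption, not MediaSearch's actual internal profile (e.g. the real images tab also includes drawings):

```
import requests

API = "https://commons.wikimedia.org/w/api.php"

# Assumed mapping from MediaSearch tabs to CirrusSearch filetype keywords.
TABS = {
    "images": "filetype:bitmap",
    "audio": "filetype:audio",
    "video": "filetype:video",
}

def tab_hit_counts(term: str) -> dict:
    """Return an approximate result count per MediaSearch tab."""
    counts = {}
    for tab, filetype in TABS.items():
        params = {
            "action": "query",
            "list": "search",
            "srsearch": f"{term} {filetype}",
            "srnamespace": 6,  # File:
            "srlimit": 1,
            "srinfo": "totalhits",
            "format": "json",
        }
        resp = requests.get(API, params=params, timeout=10)
        resp.raise_for_status()
        counts[tab] = resp.json()["query"]["searchinfo"]["totalhits"]
    return counts

print(tab_hit_counts("sunset"))
```

Note that `totalhits` is an estimate for large result sets, which should be acceptable for a "there are results here" indicator.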
    • Task
==== Motivation

We've added a voter data access and admin log for SecurePoll. To ease access to the respective logs for the user, we should add a link to the election log page from the main elections table on Special:SecurePoll.

User story: As a user of SecurePoll, I would like to have a link to the election log from the main election page so that I can easily access the log for any given election.

==== Acceptance criteria

{F34123886}
* On the Special:SecurePoll page (screenshot above), add a "Log" link to the right of the "Tally" link, pointing to Special:SecurePollLog.
* The SecurePollLog form is pre-filtered, via the "Election (title)" field, to the election whose link the user clicked, and filtered to "all securepoll logs" by default.
* The link opens in the same tab.
    • Task
=== How many times were you able to reproduce it?
1/10

=== Steps to reproduce
# Scroll down an explore view for a while; try one with multiple languages.
# Tap the save button in the Featured and Random cells as you go.

=== Expected results
The button should always reflect the correct save state.

=== Actual results
Sometimes the article saves (the banner pops up) but the button doesn't reflect the correct save state. Scrolling the cell off screen and back again fixes it.

=== Screenshots
{F34123906}

=== Environments observed
**App version:** 6.8.0 (1795)
**OS versions:** 14.4
**Device model:** iPhone 12 Pro Max
**Device language:** EN
    • Task
Sidebar/content widths appear to be percentage-based; footer and tab container widths are fixed. Normal-ish resolution: {F34123462} Extreme zoom: {F34123874}
    • Task
Today I reimaged an-worker1096, one of the Hadoop workers with a GPU, deploying ROCm 3.8 on top (the other stretch nodes run 3.3 with the DKMS package, while on Buster we prefer to rely on the 5.x kernel's drivers). Puppet complained that `rocm-dev` could not be installed, because `rocm-gdb` requires `libpython-38`. We have ROCm 3.8 on stat100[5,8] too, both on Buster, so why doesn't it work on an-worker1096?

```
elukey@stat1005:~$ sudo apt-cache policy rocm-gdb
rocm-gdb:
  Installed: 9.2-rocm-rel-3.7-20
  Candidate: 9.2-rocm-rel-3.8-30
  Version table:
     9.2-rocm-rel-3.8-30 1001
        1001 http://apt.wikimedia.org/wikimedia buster-wikimedia/thirdparty/amd-rocm38 amd64 Packages
 *** 9.2-rocm-rel-3.7-20 100
        100 /var/lib/dpkg/status
```

The version of the package contains the ROCm release, 3.7 or 3.8: the former wants libpython3.7, the latter libpython3.8. On stat1005 we probably never cleaned up the 3.7 version, so it kept working. To unblock an-worker1096 I copied the 3.7 deb from stat1005 via transfer.py (I know it is horrible). We have libpython3.8 from the pyall component, but it is a virtual package, and apt complains that it cannot be installed.

This is only one example of the problems we'll have in the future with ROCm; we should decide what to do, and also establish a better procedure to wipe/upgrade a host (to avoid the stat1005 situation again). More info in https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/AMD_GPU
    • Task
The following has been reported before, as part of other bug reports, but hasn't been resolved yet. The Dutch Wikipedia has a template "Citeer web", comparable to the English Wikipedia's "Cite web". One of its parameters is "dodeurl", to indicate that a URL can no longer be reached. However, the bot keeps using the incorrect parameter "dodelink". Also, the bot keeps flagging a URL as dead while it is very much alive; see diff 58383825 on the Dutch Wikipedia. **Owler.com is not a dead website**.
    • Task
[[ https://en.wikipedia.org/wiki/Does_exactly_what_it_says_on_the_tin | Ronseal ]]. Readers Web are increasing the scope of the UniversalLanguageSelector instrument as part of {T275766} and {T275762}. In order to keep the UniversalLanguageSelector extension focussed, we (@nshahquinn-wmf, @Nikerabbit, and Readers Web) agreed that the instrument should be moved to the WikimediaEvents extension.

== AC
[] The UniversalLanguageSelector instrument is moved to the WikimediaEvents extension
[] The instrument is loaded when the skin isn't `minerva`

== developer notes
The ULS code that needs to be ported is here: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/UniversalLanguageSelector/+/663008
    • Task
Steps to Reproduce:
1. Try to change the content model of "User:AnYiLin/userpage.css" with https://zh.wikipedia.org/wiki/Special:ChangeContentModel.

Actual Results:
1. [[https://zh.wikipedia.org/w/index.php?title=Topic:W4673729ezq60xfg|For]] a non-administrator or an interface administrator alone, the Special:ChangeContentModel page says that an administrator is required to execute the change.
2. For an administrator only, the Special:ChangeContentModel page says that an interface administrator is required.
3. For users who are both administrator and interface administrator, the operation is [[https://zh.wikipedia.org/w/index.php?title=Special:%E6%97%A5%E5%BF%97&page=User%3AAnYiLin%2Fuserpage.css|feasible]].
4. This is likely because the operation requires permissions like "editusercss" and "editcontentmodel", which are split across two [[https://zh.wikipedia.org/wiki/Special:%E7%BE%A4%E7%BB%84%E6%9D%83%E9%99%90 | user groups]] on zhwiki.

Expected Results:
The permissions error should give more precise details about which user group combinations or permissions are required.
    • Task
Doing a full fleet reboot/upgrade manually takes some time, so it would be worth automating it in a cookbook (see the sketch after this list). Process:
* Make sure the cluster is healthy (`sudo ceph status` -> HEALTH_OK)
* Set the cluster noout+norebalance policies:
** `sudo ceph osd set noout`
** `sudo ceph osd set norebalance`
* Downtime the 'Ceph OSDs Down' check on the icinga alert1001 host
* Host by host (including control nodes; those don't need `noout`/`norebalance`, but it does not hurt):
** `sudo cookbook sre.hosts.upgrade-and-reboot --depool-cmd 'true' --repool-cmd 'true' <host-fqdn>`
** Wait until the ceph cluster is healthy again
*** If there are any PGs stuck in an 'undersized+remapped..' or similar state, unset norebalance for a bit and set it again after:
**** `sudo ceph osd unset norebalance`
**** Wait until the cluster is healthy
**** `sudo ceph osd set norebalance`
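A minimal sketch of how the cookbook's main loop could look, using plain ssh as a stand-in for the real cookbook framework's remote-execution and Icinga-downtime helpers (`run_on`, the control host, and the host list are placeholders, not actual spicerack APIs):

```
import subprocess
import time

def run_on(host: str, cmd: str) -> str:
    """Run a command on a remote host via ssh; stand-in for the cookbook
    framework's remote-execution helper."""
    return subprocess.run(
        ["ssh", host, cmd], check=True, capture_output=True, text=True
    ).stdout

def cluster_healthy(ctl: str) -> bool:
    # `ceph health` prints HEALTH_OK when the cluster is fully healthy.
    return run_on(ctl, "sudo ceph health").strip().startswith("HEALTH_OK")

def wait_until_healthy(ctl: str, poll_secs: int = 60) -> None:
    while not cluster_healthy(ctl):
        time.sleep(poll_secs)

def reboot_fleet(ctl: str, hosts: list) -> None:
    assert cluster_healthy(ctl), "cluster must be healthy before starting"
    run_on(ctl, "sudo ceph osd set noout")
    run_on(ctl, "sudo ceph osd set norebalance")
    # (downtiming the 'Ceph OSDs Down' check on alert1001 would go here)
    for fqdn in hosts:
        run_on(ctl, "sudo cookbook sre.hosts.upgrade-and-reboot "
                    f"--depool-cmd 'true' --repool-cmd 'true' {fqdn}")
        # Stuck 'undersized+remapped' PGs would additionally need the
        # norebalance unset/wait/set dance described in the list above.
        wait_until_healthy(ctl)
    run_on(ctl, "sudo ceph osd unset noout")
    run_on(ctl, "sudo ceph osd unset norebalance")
```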
    • Task
I was trying to upload [[ https://commons.wikimedia.org/w/index.php?title=Special:Upload&wpDestFile=Science_Summary_for_January_2021.webm&wpForReUpload=1 | a new version of this file (a Science Summary CC BY audiovisual summary for a month) here ]].

**Problem 1**: I could not upload the full-size render because UploadWizard reported an error, without any meaningful description, after it finished uploading, even though the file was smaller than [[ https://commons.wikimedia.org/wiki/File:Science_Summary_for_December_2020.webm | this video ]] that I previously uploaded successfully.

**Problem 2**: In the "Source file" step, when uploading the file from the local computer, there is a "Maximum file size: 100 MB" limit, which is smaller than the original file. (Relevant notes: although videos should arguably be embedded in pages, or even publicly hosted, as smaller rendered versions, with only the largest renderings uploaded to Commons, I tried uploading a far lower-quality rendering as a short-term workaround until this gets fixed, but even that was 104 MB, and there are some problems with the melt renderings/Kdenlive that make producing these problematic.)

**Problem 3 (this issue)**: Because of Problem 2, I uploaded the .webm video to archive.org [[ https://archive.org/download/science-summary-2021-jan/ScienceSummary_2020_Jan_28.webm | here ]] and tried uploading via this URL. Next to the upload via Source URL it says "Maximum file size: 4 GB" (the file on archive.org is smaller than that, and not even the full-size render). It then loads for a while, after which it displays an error page with the error "Error: 413, Request Entity Too Large".

Once Problem 3 is solved, separate tasks for Problems 1 and 2 could be created and linked from here.
    • Task
As described in the parent task, we are working on an API that will allow bot writers to automatically add highly relevant images to specific articles. The goal is to have bots running on a few trial wikis in the experimental stage at the end of March 2021, so we would like to begin gathering data then, and be able to answer these questions in April 2021. There are two categories of metrics we would like to collect (a sketch for one of them follows the list):

# Metrics about the health of the project, which will help us understand how and whether to continue to move forward, or whether we need to make major changes
## How many edits are made by bots to add images, per wiki?
## What proportion of those edits are reverted within 48 hours (aka “unconstructive edits”)? Does this change by wiki?
## How many images are added to an article in each edit? Does the number of images added per edit relate to the revert rate? Does this differ by wiki?
## Are there certain topic areas where images added by bots are more likely to be reverted?
# Larger analytical questions or hypotheses to answer
## Do page views increase when an article has more images?
## Do revert rates relate to page views?
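For the 48-hour revert-rate question, a sketch of the per-wiki computation, assuming a per-edit dataframe of bot image edits has already been extracted (column names here are placeholders, not the actual mediawiki_history schema):

```
import pandas as pd

def revert_rate_48h(edits: pd.DataFrame) -> pd.Series:
    """Share of bot image edits reverted within 48 hours, per wiki.

    Expects (hypothetical) columns: wiki, rev_timestamp, and
    revert_timestamp (NaT when the edit was never reverted; NaT
    comparisons evaluate to False, so unreverted edits count as kept).
    """
    window = pd.Timedelta(hours=48)
    reverted = (edits["revert_timestamp"] - edits["rev_timestamp"]) <= window
    return reverted.groupby(edits["wiki"]).mean()
```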