Open Tasks
    • Task
    On a local machine:

1. Special:SpecialPages lists a Special:CommunityConfigurationExample page {F48536404}
2. Clicking on the link gives "Internal error" on http://localhost:8080/wiki/Special:CommunityConfigurationExample

```
[19bafc3c9f15f8c343b0e188] /wiki/Special:CommunityConfigurationExample MediaWiki\Config\ConfigException: Key CCExampleBackgroundColor was not found.
Backtrace:
from /var/www/html/w/extensions/CommunityConfiguration/src/Provider/WikiPageConfigProvider.php(38)
#0 /var/www/html/w/extensions/CommunityConfiguration/src/Access/WikiPageConfigReader.php(128): MediaWiki\Extension\CommunityConfiguration\Provider\WikiPageConfigProvider->get()
#1 /var/www/html/w/extensions/CommunityConfigurationExample/src/Specials/SpecialCommunityConfigurationExample.php(37): MediaWiki\Extension\CommunityConfiguration\Access\WikiPageConfigReader->get()
#2 /var/www/html/w/includes/specialpage/SpecialPage.php(718): CommunityConfigurationExample\Specials\SpecialCommunityConfigurationExample->execute()
#3 /var/www/html/w/includes/specialpage/SpecialPageFactory.php(1672): MediaWiki\SpecialPage\SpecialPage->run()
#4 /var/www/html/w/includes/actions/ActionEntryPoint.php(504): MediaWiki\SpecialPage\SpecialPageFactory->executePath()
#5 /var/www/html/w/includes/actions/ActionEntryPoint.php(145): MediaWiki\Actions\ActionEntryPoint->performRequest()
#6 /var/www/html/w/includes/MediaWikiEntryPoint.php(199): MediaWiki\Actions\ActionEntryPoint->execute()
#7 /var/www/html/w/index.php(58): MediaWiki\MediaWikiEntryPoint->run()
#8 {main}
```

I checked on `eswiki beta`: no internal error and no broken special pages there.
    • Task
    Register some pre-commit hooks to do formatting and style checks. Suggested tools to install and set up as commit hooks:

- ruff
- mypy
- a tool to make sure requirements.txt stays properly in sync

This is a step towards better CI/CD.
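A minimal sketch of what the hook configuration could look like (the revs are placeholders to pin to current releases, and the requirements-sync check is left open since no single canonical hook exists for it):

```lang=yaml
# .pre-commit-config.yaml -- sketch only; pin revs before adopting.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.1          # placeholder version
    hooks:
      - id: ruff         # lint
      - id: ruff-format  # format
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.9.0          # placeholder version
    hooks:
      - id: mypy
  # A requirements.txt sync check (e.g. pip-tools' pip-compile hook)
  # would be added here once a tool is chosen.
```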
    • Task
    We have just [[ https://gerrit.wikimedia.org/r/c/operations/puppet/+/983951 | deployed the CDN config ]] for the new community-crm.wikimedia.org service. This is a CiviCRM instance running Drupal and is using role::crm on crm2001. The login page displays fine; however, when I attempt to log in I am presented with the following error:

```
Request from XXX.XXX.XXX.XXX via cp4042 cp4042, Varnish XID 928498219
Upstream caches: cp4042 int
Error: 405, method not allowed at Wed, 24 Apr 2024 23:03:00 GMT
```

I have a browser session with a cached authentication cookie from testing over ssh tunnels, and with it I am able to see test data. Could you help us diagnose what is causing this error and how we can fix it? Thanks!
    • Task
    **Steps to replicate the issue** (include links if applicable):

* Set up a local interwiki whose base URL is the same as the local wiki's.
* Attempt to visit an interwiki redirect page using this local interwiki (e.g. `http://mywiki.example.com/wiki/iwprefix:Test`).

**What happens?**: The title is treated as invalid.

**What should have happened instead?**: The URL should have redirected to the appropriate interwiki page.

**Software version**: 1.39
    • Task
    #### Background

The Edit Patrol feature allows users with rollback rights to patrol a feed of recent edits. During initial scoping of the feature, we did not include a blocking flow, and there is no blocking flow within the app today. Now that we have a patrolling tool within the Android app, it makes sense to add an entry point for blocking users for admins who are patrolling from the app.

#### User stories

As an admin on a smaller-language Wiki, I want to be able to block a vandal that I've warned multiple times using Edit Patrol, who is still creating damage.

#### Requirements

- Entry point: within user Contributions, in the overflow menu that appears after tapping a username {F48517916}
- Should only be shown to users with admin rights on their primary-language Wikipedia within the app
- "Block" brings the user to a web view of the Block form for that user
- After the block form is submitted, the user is returned to user Contributions within the app
- The Block option should not appear when viewing your own user contributions

Nice to have:

- The "Block" menu option becomes "Unblock" after a block was successful
- Menu option for access to a web view of [[ https://en.wikipedia.org/wiki/Help:User_contributions#What_normally_does_not_appear | Special:DeletedContributions ]] to see revisions that have not been restored.
    • Task
    From the WE5.3 draft:

> Draft hypothesis idea: On template edits, if we can implement an algorithm in Parsoid to reuse the HTML of a page that depends on the edited template without processing the page from scratch, and demonstrate a 1.5x or higher processing speedup, we will have a potential incremental-parsing solution for efficient page updates on template edits.
>
> NOTE: We are only planning to implement this in the Parsoid library and test it on the command line. The actual integration with the processing pipeline will be follow-up work and will be more involved. In this prototype, we will start with templates that produce well-balanced DOM fragments.
    • Task
    Currently there is some field renaming in the `result_map` mapping, and some entries also enforce a number type on the field. This could be simplified (and maybe optimized) by using the actual (definitive) field name in the grok pattern and using type hints.
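For illustration, Logstash-style grok patterns can name the definitive field and cast it in one step, instead of capturing into a temporary name and renaming/casting afterwards (the `response_time` field name here is hypothetical):

```
%{NUMBER:response_time:int}
```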
    • Task
    #### Background

Conduct an analysis at 30 days after the last release to all Wikis.

- Release date: TBD
- 30 days: X

#### The Task

[] Compare results to the baseline data that was collected
[] Visualize and present the data in a way that is easily understandable to the team

#### Requirements

- The data should be based on the metrics in the Epic

#### At 30 days

**Validate:**

- Key Indicator 1: 65% of target mature audiences that use the tool say they find it helpful for maintaining the quality of wikis and would recommend it to other patrollers
- Key Indicator 2: Edits made by mature audiences increase by 5%
- Key Indicator 3: 10% of target mature audiences engage with the filter for preferences
- Key Indicator 4: 65% of target mature audiences engage with the tool at least three times in a thirty-day window

**Guardrails:**

- Experienced users without rollback rights, and users who have and have not used alternative patrolling tools, equally understand the workflow
- We do not receive reports of the tool being used to negatively target underrepresented content or contributors, based on in-app reporting mechanisms

**Curiosities:**

- How does use of our tool compare to other patrolling tools when looking at MediaWiki tags (SWViewer, Huggle, and Twinkle)?
- Do we see an increase in Undo/Rollback/Thank events?
- How popular is this task with our target audience relative to other Suggested Edits tasks?
- What actions are most popular in the feature?
- For Saved messages:
  - How often do users create messages from scratch instead of using an example message?
  - How often do users modify an example message before sending?
  - How often do users modify an example message and save that version to "Your Messages"?
  - How often do users click on each of the 10 example messages?
- For templates:
  - How often are users using a template while posting talk page messages in Edit Patrol?
  - How often are they saving a message to "Your Messages" that contains a template?

#### Target Quant Languages

- Indonesian
- French
- Chinese
- Spanish
- Igbo
- English
    • Task
    **Steps to replicate the issue** (include links if applicable):

* Visit https://meta.wikimedia.org/wiki/Special:ReadingLists

**What happens?**: There are duplicate tabs saying "Your lists" {F48504922}

**What should have happened instead?**: {F48504860} Only the second tab should be visible. We should hide the first tab or not output it in the first place.

**Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):

**Other information** (browser name/version, screenshots, etc.):
    • Task
    Seen while working on {T363028} via https://gitlab.wikimedia.org/toolforge-repos/bridgebot

```name=Procfile,lang=yaml
bot: bridgebot
test-bot: bridgebot -conf etc/testing.toml
```

```lang=shell-session,counterexample
$ test-bot --help
bridgebot -conf etc/testing.toml: line 1: bridgebot -conf etc/testing.toml: No such file or directory
$ bridgebot --help
ERROR: failed to launch: direct exec: argument list too long
```

```name=Procfile,lang=yaml
bot: /layers/heroku_go/go_target/bin/bridgebot -conf /app/etc/bridgebot.toml
testbot: /layers/heroku_go/go_target/bin/bridgebot -conf /app/etc/testing.toml
```

```lang=shell-session,counterexample
$ testbot --help
/layers/heroku_go/go_target/bin/bridgebot -conf /app/etc/testing.toml: line 1: /layers/heroku_go/go_target/bin/bridgebot -conf /app/etc/testing.toml: No such file or directory
$ time bridgebot --help
ERROR: failed to launch: direct exec: argument list too long

real	0m16.379s
user	0m6.879s
sys	0m9.129s
```

Calling the Go-built binary by its full path, without the assistance of the Procfile or `launcher`, works:

```lang=shell-session
$ webservice buildservice shell --mount none -m 2G -c 1
$ /layers/heroku_go/go_target/bin/bridgebot -conf /app/etc/testing.toml
[0000] INFO router: (/layers/heroku_go/go_deps/cache/gitlab.wikimedia.org/toolforge-repos/bridgebot-matterbridge@v0.0.0-20240424042617-38c64944bf1d/gateway/router.go:66: github.com/42wim/matterbridge/gateway.(*Router).Start) Parsing gateway testing-irc-telegram
[0000] INFO router: (/layers/heroku_go/go_deps/cache/gitlab.wikimedia.org/toolforge-repos/bridgebot-matterbridge@v0.0.0-20240424042617-38c64944bf1d/gateway/router.go:75: github.com/42wim/matterbridge/gateway.(*Router).Start) Starting bridge: irc.testing
...
```
    • Task
    ### Background

A common use case across products is a checkbox or radio that, when checked, displays a text input or text area to capture further user input. This is included in our [[ https://doc.wikimedia.org/codex/main/style-guide/constructing-forms.html#conditional-and-nested-fields | form guidelines ]]: {F48502679}

Given the prevalence of this use case, and that we want to standardize its design, we should build this into the Checkbox and Radio components. Note that we may want this for other components in the future as well (e.g. for "select or other" fields, something that exists in HTMLForm and is being used on Special:Block).

### Implementation

We will want to add a few new props to Checkbox and Radio (prop names are open for debate):

- `showOther`: a boolean prop that, when true, will add a TextInput or TextArea
- `otherComponent`: a prop that can be either 'text' or 'textarea', defaulting to 'text' (the name of these strings is also up for debate; this could also be a boolean prop like `useTextArea` or something)
- `otherValue`: the value of that input, bound with `v-model:otherValue`

The Checkbox and Radio templates will need to be updated to conditionally show the TextInput or TextArea, and appropriate styles will be added. Finally, we'll need tests and demos.

---

### Acceptance criteria

- [] A TextInput or TextArea can be conditionally added below a Checkbox or Radio
- [] The TextInput or TextArea is styled in line with the Figma designs
- [] The new functionality is covered by unit tests
- [] A new demo is added to each component's demo page showing the new functionality
    • Task
    The puppet role `deployment_server` should have support for bullseye, for usage at least in Cloud VPS projects and also generally to get rid of deployment hosts on buster in testing and production. Currently, when trying it on a bullseye VM, the issues include:

* E: Unable to locate package python-redis
* E: Unable to locate package python-gitdb
* E: Package 'python-git' has no installation candidate

My subteam would like this for the Cloud VPS devtools project to resolve T360964, but I assume the production deployment servers should also be upgraded. They are also still on buster, so we couldn't copy production. I am currently unsure if this ticket should only be about adding support to the puppet role or if it should also include the actual upgrade of the production machines using it. serviceops, any opinion?
    • Task
    In https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GrowthExperiments/+/1017256, @sgs accidentally did not declare JSON schema defaults for the newly-added Help panel schema (T360472). This resulted in an ugly error:

```
MediaWiki internal error.

Original exception: [7a2bd9f7260ee9b7668ef1ce] /wiki/Special:CommunityConfiguration/HelpPanel MediaWiki\Config\ConfigException: Key GEHelpPanelExcludedNamespaces was not found.
Backtrace:
from /var/www/html/w/extensions/CommunityConfiguration/src/Provider/WikiPageConfigProvider.php(38)
#0 /var/www/html/w/extensions/CommunityConfiguration/src/Access/WikiPageConfigReader.php(128): MediaWiki\Extension\CommunityConfiguration\Provider\WikiPageConfigProvider->get()
#1 /var/www/html/w/extensions/GrowthExperiments/includes/HelpPanel.php(131): MediaWiki\Extension\CommunityConfiguration\Access\WikiPageConfigReader->get()
#2 /var/www/html/w/extensions/GrowthExperiments/includes/HelpPanelHooks.php(159): GrowthExperiments\HelpPanel::shouldShowHelpPanel()
#3 /var/www/html/w/includes/HookContainer/HookContainer.php(159): GrowthExperiments\HelpPanelHooks->onBeforePageDisplay()
#4 /var/www/html/w/includes/HookContainer/HookRunner.php(945): MediaWiki\HookContainer\HookContainer->run()
#5 /var/www/html/w/includes/Output/OutputPage.php(2998): MediaWiki\HookContainer\HookRunner->onBeforePageDisplay()
#6 /var/www/html/w/includes/actions/ActionEntryPoint.php(162): MediaWiki\Output\OutputPage->output()
#7 /var/www/html/w/includes/MediaWikiEntryPoint.php(199): MediaWiki\Actions\ActionEntryPoint->execute()
#8 /var/www/html/w/index.php(58): MediaWiki\MediaWikiEntryPoint->run()
#9 {main}
```

This error was even uglier in that it only appeared after clearing the wiki's internal caches (without that, everything appeared to work correctly). Such an error should definitely be caught by some test case. It is questionable whether it belongs to #growthexperiments or #communityconfiguration (tagging with both momentarily), but detecting the issue should not rely on manual testing, as it is the equivalent of taking the site fully down. Without such a test, this has the potential to bite us in the future.
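As a starting point, a hedged sketch of such a guard test, assuming schemas keep declaring properties as class constants carrying `JsonSchema::DEFAULT` keys (the schema class name, the skip condition, and the assertion message are illustrative, not the extension's actual test infrastructure):

```lang=php
public function testEverySchemaPropertyDeclaresADefault(): void {
	// HelpPanelSchema stands in for any CommunityConfiguration schema class.
	$reflection = new ReflectionClass( HelpPanelSchema::class );
	foreach ( $reflection->getConstants() as $name => $spec ) {
		if ( !is_array( $spec ) ) {
			// Skip non-property constants (e.g. version markers).
			continue;
		}
		$this->assertArrayHasKey(
			JsonSchema::DEFAULT,
			$spec,
			"Schema property $name must declare a default; otherwise reading " .
			"it after a cache purge throws ConfigException"
		);
	}
}
```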
    • Task
    Something to do with a follow-up edit to the page. Doing it on a fresh page works just fine {F48502700}
    • Task
    #communityconfiguration has support for `dynamicDefault`, which allows creating a default for a schema using a callback, rather than as a PHP constant. This is occasionally needed to work around PHP language limitations. For example, it is impossible to set an empty object as a default using the standard `JsonSchema::DEFAULT` approach, because:

```lang=php
self::DEFAULT => [
	(object)[],
]
```

is not valid PHP code (`Constant expression contains invalid operations`). To be able to use empty objects as JSON defaults, we would need support for dynamic defaults (a sketch follows after the acceptance criteria). Dynamic defaults would also be useful for conditional fallbacks.

==== Acceptance Criteria

[ ] When a JSON schema specifies a dynamic default using a format understood by `ReflectionSchemaSource` (the `dynamicDefault` keyword), `JsonSchemaBuilder` evaluates the dynamic default.
[ ] When both a dynamic and a static default are specified, the dynamic default takes precedence.
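A sketch of how a schema might declare such a default, loosely following MediaWiki core's `ReflectionSchemaSource` conventions (the property name and callback are hypothetical):

```lang=php
public const ExampleObjectList = [
	self::TYPE => self::TYPE_ARRAY,
	// Evaluated by JsonSchemaBuilder at build time, instead of being
	// restricted to what a PHP constant expression can express:
	'dynamicDefault' => [ 'callback' => [ self::class, 'getDefaultExampleObjectList' ] ],
];

public static function getDefaultExampleObjectList(): array {
	// Legal at runtime, illegal in a constant expression:
	return [ (object)[] ];
}
```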
    • Task
    For {T348388}, we want to move login to a central login wiki: when you click log in / sign up, you get redirected (in the case of Wikimedia wikis) to login.wikimedia.org, go through the user interactions there, and get redirected back at the end. That means the login wiki needs to be able to simulate wiki-specific aspects of login/signup, most notably the `AuthManager::canCreateAccount()` checks, as we want to apply local username policies, blocks etc. That will probably involve the CentralAuth authentication provider proxying the calls to ApiQueryUsers on the initial wiki (as otherwise running code in the context of a different wiki is a hard problem).
    • Task
    == Common information

* **dashboard**: https://grafana.wikimedia.org/d/ZA1I-IB4z/ipmi-sensor-state?orgId=1&var-Sensor=Power%20Supply&var-server=an-druid1004
* **runbook**: https://wikitech.wikimedia.org/wiki/Dc-operations/Hardware_Troubleshooting_Runbook#Power_Supply_Failures
* **alertname**: PowerSupplyFailure
* **cluster**: druid_analytics
* **instance**: an-druid1004:9290
* **job**: ipmi
* **prometheus**: ops
* **severity**: task
* **site**: eqiad
* **source**: prometheus
* **team**: dcops
* **type**: Power Supply

== Firing alerts

---

* **dashboard**: https://grafana.wikimedia.org/d/ZA1I-IB4z/ipmi-sensor-state?orgId=1&var-Sensor=Power%20Supply&var-server=an-druid1004
* **description**: Power Supply - PS Redundancy - issue on an-druid1004:9290
* **runbook**: https://wikitech.wikimedia.org/wiki/Dc-operations/Hardware_Troubleshooting_Runbook#Power_Supply_Failures
* **summary**: Power Supply - PS Redundancy - issue on an-druid1004:9290
* **alertname**: PowerSupplyFailure
* **cluster**: druid_analytics
* **id**: 191
* **instance**: an-druid1004:9290
* **job**: ipmi
* **name**: PS Redundancy
* **prometheus**: ops
* **severity**: task
* **site**: eqiad
* **source**: prometheus
* **team**: dcops
* **type**: Power Supply
* [Source](https://prometheus-eqiad.wikimedia.org/ops/graph?g0.expr=ipmi_sensor_state%7Btype%3D%22Power+Supply%22%7D+%3E+0&g0.tab=1)

---

* **dashboard**: https://grafana.wikimedia.org/d/ZA1I-IB4z/ipmi-sensor-state?orgId=1&var-Sensor=Power%20Supply&var-server=an-druid1004
* **description**: Power Supply - Status - issue on an-druid1004:9290
* **runbook**: https://wikitech.wikimedia.org/wiki/Dc-operations/Hardware_Troubleshooting_Runbook#Power_Supply_Failures
* **summary**: Power Supply - Status - issue on an-druid1004:9290
* **alertname**: PowerSupplyFailure
* **cluster**: druid_analytics
* **id**: 74
* **instance**: an-druid1004:9290
* **job**: ipmi
* **name**: Status
* **prometheus**: ops
* **severity**: task
* **site**: eqiad
* **source**: prometheus
* **team**: dcops
* **type**: Power Supply
* [Source](https://prometheus-eqiad.wikimedia.org/ops/graph?g0.expr=ipmi_sensor_state%7Btype%3D%22Power+Supply%22%7D+%3E+0&g0.tab=1)
    • Task
    ==== Error ====

* service.version: 1.43.0-wmf.2
* trace.id: 4dcbe67b-ffa3-46bf-984c-15056fcd5da9
* [[ https://logstash.wikimedia.org/app/dashboards#/view/AXFV7JE83bOlOASGccsT?_g=(time:(from:'2024-04-23T19:41:10.642Z',to:'2024-04-24T19:59:17.209Z'))&_a=(query:(query_string:(query:'reqId:%224dcbe67b-ffa3-46bf-984c-15056fcd5da9%22'))) | Find trace.id in Logstash ]]

```name=labels.normalized_message,lines=10
[{reqId}] {exception_url} ApiUsageException: Search is currently too busy. Please try again later.
```

```name=error.stack_trace,lines=10
from /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiBase.php(1633)
#0 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiQuerySearch.php(139): ApiBase->dieStatus(MediaWiki\Status\Status)
#1 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiQuerySearch.php(64): ApiQuerySearch->run(ApiPageSet)
#2 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiPageSet.php(279): ApiQuerySearch->executeGenerator(ApiPageSet)
#3 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiPageSet.php(242): ApiPageSet->executeInternal(boolean)
#4 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiQuery.php(685): ApiPageSet->execute()
#5 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiMain.php(1948): ApiQuery->execute()
#6 /srv/mediawiki/php-1.43.0-wmf.2/includes/api/ApiMain.php(893): ApiMain->executeAction()
#7 /srv/mediawiki/php-1.43.0-wmf.2/extensions/MediaSearch/src/Special/SpecialMediaSearch.php(563): ApiMain->execute()
#8 /srv/mediawiki/php-1.43.0-wmf.2/extensions/MediaSearch/src/Special/SpecialMediaSearch.php(235): MediaWiki\Extension\MediaSearch\Special\SpecialMediaSearch->search(string, string, array, integer, string, string)
#9 /srv/mediawiki/php-1.43.0-wmf.2/includes/specialpage/SpecialPage.php(718): MediaWiki\Extension\MediaSearch\Special\SpecialMediaSearch->execute(NULL)
#10 /srv/mediawiki/php-1.43.0-wmf.2/includes/specialpage/SpecialPageFactory.php(1672): MediaWiki\SpecialPage\SpecialPage->run(NULL)
#11 /srv/mediawiki/php-1.43.0-wmf.2/includes/actions/ActionEntryPoint.php(504): MediaWiki\SpecialPage\SpecialPageFactory->executePath(string, MediaWiki\Context\RequestContext)
#12 /srv/mediawiki/php-1.43.0-wmf.2/includes/actions/ActionEntryPoint.php(145): MediaWiki\Actions\ActionEntryPoint->performRequest()
#13 /srv/mediawiki/php-1.43.0-wmf.2/includes/MediaWikiEntryPoint.php(199): MediaWiki\Actions\ActionEntryPoint->execute()
#14 /srv/mediawiki/php-1.43.0-wmf.2/index.php(58): MediaWiki\MediaWikiEntryPoint->run()
#15 /srv/mediawiki/w/index.php(3): require(string)
#16 {main}
```

==== Impact ====

Probably minimal.

==== Notes ====

These happen at a low rate. Last 4 weeks: {F48495651}

That said, it seems like this should properly be surfaced to the caller somehow rather than landing as an exception in error logs.
    • Task
    Currently, the only service Jaeger shows on trace.wikimedia.org is `OTLPResourceNoServiceName`. This is because the OTLP exporter extension packaged with our version of Envoy doesn't actually export a service name at all -- but one is required by the OTLP spec, and so a placeholder value is filled in. //(Note that despite a lot of digging, I haven't actually found where this happens yet.)//

This omission was corrected with Envoy [[ https://github.com/envoyproxy/envoy/pull/22472 | pull request #22472 ]]. That makes the service name easily configurable by name on each tracer stanza, matching how other tracer implementations work in Envoy. The [[ https://github.com/envoyproxy/envoy/commit/40d2bd404c6ead78c782f93fb6203d99242aa61b | merge commit of such ]], in August 2022, became part of v1.24.0 -- but not of any earlier version. We currently run v1.23.12, and so don't have this fix.

A service-wide Envoy upgrade is a so-called "heavy lift"; it involves at the very least a redeployment of every service we run. Fortunately, the OTel Collector has various data-rewriting abilities -- many thanks to @Clement_Goubert for the original suggestion to run that as our collector everywhere.

=== Proposal

Use the OpenTelemetry Collector's [[ https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor | transformprocessor ]] to rewrite the service.name Resource according to the following steps (a rough OTTL sketch is included at the end of this task):

* On k8s:
  1. If a valid service.name is already set, use that and do nothing else.
     * //Ensures forwards compatibility.//
  2. If a span's upstream_cluster.name is set to something other than `local_service`, use that as the service name.
     * //Allows easy overriding in existing infrastructure (see below re: mesh).//
  3. Otherwise, take the piece of the span's `node_id` value before the first period, and use that as the service name (on k8s this is the pod name, for example `mw-debug.eqiad.pinkunicorn-5bbd65ff7c-ws289`).
     * //Provides a sensible default without redeploying anything -- node_id is set automatically already.//
  4. Otherwise, use our own "unknown" value.
* On bare metal:
  1. If a valid service.name is already set, use that and do nothing else.
     * //Ensures forwards compatibility.//
  2. If a span's upstream_cluster.name is set to something not matching `^local_(port|path)_.*`, use that as the service name.
     * //Ensures forwards compatibility.//
  3. Otherwise, add an optional hiera key to the role for a service name; use that if present, and if not, use our own "unknown" value.

Additionally, for k8s, I propose a new minor version of the mesh module that:

* allows specifying a service name for tracing as part of its configuration, which, if set, will override the `local_service` cluster name
* and where that name will default to `{{ .Release.Namespace }}` if not set

=== Alternatives considered

===== Other OTel processors 🚫

In this case, since we need to rewrite `service.name`, which is defined as a [[ https://opentelemetry.io/docs/specs/semconv/resource/ | Resource ]] in the spec, we can't use either the [[ https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/attributesprocessor | attributesprocessor ]] or the [[ https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/spanprocessor | spanprocessor ]].

Since the source of the data we wish to write into the service.name Resource is in the span attributes, and not already in the Resources section, we can't use the [[ https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor | resourceprocessor ]] either.

===== Upgrade Envoy 👎

Too much work and too much risk for this quarter. However, the implementation described above allows for a graceful migration to defining the service name directly in Envoy when we do upgrade in the future.
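For the k8s rules, a hedged sketch of step 2 in OTTL, the transformprocessor's statement language (untested; the attribute name `upstream_cluster.name` mirrors the proposal above, and the real config would chain all four steps):

```lang=yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Step 2: adopt upstream_cluster.name unless it is the default.
          - set(resource.attributes["service.name"], attributes["upstream_cluster.name"]) where attributes["upstream_cluster.name"] != "local_service"
```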
    • Task
    Adapt existing types that use identity, such as Z4, Z8, Z40, etc., to use the new identity key. Also audit all existing types for any existing keys that should use it.
    • Task
    ## Description

When eagerly evaluating, the logic depends on a bespoke list of keys like `Z40K1`, `Z8K5`, etc., which are known to be identities. This is not scalable to user-defined types. Likewise, when inferring the identity of a function, `Z8K5` is directly consulted. In both cases, the upcoming `Z3K4` key will standardize the usage of identities and scale them to user-defined types.

**Desired behavior/Acceptance criteria (returned value, expected error, performance expectations, etc.)**

* `eagerlyEvaluate` respects the `Z3K4` `IsIdentity` field (by ceasing to expand)
* `findIdentity` in `function-schemata` respects the fact that `Z8K5` is now an identity

---

## Completion checklist

* [ ] Before closing this task, review one by one the checklist available here: https://www.mediawiki.org/wiki/Abstract_Wikipedia_team/Definition_of_Done#Back-end_Task/Bug_completion_checklist
    • Task
    https://en.wikipedia.org/wiki/Wikipedia:List_of_companies_engaged_in_the_self-publishing_business
    • Task
    https://en.wikipedia.org/wiki/User:JzG/Predatory
    • Task
    This task is to track the service implementation of serviceops host(s) listed in the task description. Once the linked racking task has been resolved, this task can be implemented. This sub-task creation/update is per the request of #serviceops; this task is assigned at creation to the 'Sub-team Technical Contact' provided in the initial ordering task.
    • Task
    @Michael pointed out in [r1022047](https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CommunityConfiguration/+/1022047) that our typehints for config are not clear. In `IValidator`, we say:

```
 * @param mixed $config Associative array representing config that's going to be validated
```

which is incorrect on two levels:

* it is only `mixed` in theory (the top-level JSON document might not be an object, or it might be an object represented by something other than a `stdClass`, such as an associative array); in practice, however, it is virtually always a `stdClass` instance and nothing else,
* it is never an associative array (this caused issues with validation, cf. {T360148}).

In other areas of the code, we claim something slightly different. For example, `IConfigurationProvider::storeValidConfiguration` claims:

```
 * @param mixed $newConfig The configuration value to store. Can be any JSON serializable type
```

This is mostly true (the store should be able to store anything JSON serializable), but in practice, storing a JSON document that is not an object at the top level will not work.

To resolve these confusions, we should clearly document what the datatype for config is. I propose to:

* codify that all schemas have to be an object at the top level,
* codify that objects are always represented with a `stdClass`.

Introducing those two rules will then allow us to improve typehints and expect that a `stdClass` is passed in places where a top-level object is expected or returned, such as in `IConfigurationProvider::storeValidConfiguration`, `IConfigurationProvider::loadValidConfiguration` or `IValidator::validate`. We would still need to work with `mixed` in certain places (where a top-level object cannot be expected), such as `WikiPageConfigProvider::get`. (The snippet after the acceptance criteria illustrates the `stdClass` vs. associative-array distinction.)

==== Acceptance Criteria

[ ] The [CommunityConfiguration technical documentation](https://www.mediawiki.org/wiki/Extension:CommunityConfiguration/Technical_documentation) should clearly define our datatype expectations.
[ ] We use `stdClass` anywhere a top-level configuration object is expected, across the whole CommunityConfiguration extension.
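The distinction the proposal codifies is PHP's own: `json_decode()` yields `stdClass` for JSON objects unless explicitly asked for associative arrays.

```lang=php
$config = json_decode( '{"GEHelpPanelExcludedNamespaces": []}' );
var_dump( $config instanceof stdClass ); // bool(true)

// Only with the $associative flag does the same document become an array,
// which is the representation the old typehint wrongly promised:
$config = json_decode( '{"GEHelpPanelExcludedNamespaces": []}', true );
var_dump( is_array( $config ) ); // bool(true)
```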
    • Task
    https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Academic_Journals/Journals_cited_by_Wikipedia/Questionable1
    • Task
    This task will track the racking, setup, and OS installation of parsoidtest1001.

== Hostname / Racking / Installation Details ==

**Hostnames:** parsoidtest1001
**Racking Proposal:** Any row/rack.
**Networking Setup:** 1G, private, yes to AAAA records.
**Partitioning/Raid:** No hardware raid, recipes:
- partman/standard.cfg
- partman/raid1-2dev.cfg
**OS Distro:** bookworm
**Sub-team Technical Contact:** Alexandros Kosiaris

== Per host setup checklist ==

Each host should have its own setup checklist copied and pasted into the list below.

==== parsoidtest1001:

[] Receive in system on #procurement task T361364 & in Coupa
[] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned)
[] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step
[] **Immediately** run the `sre.dns.netbox` cookbook
[] **Immediately** run the `sre.hosts.provision` cookbook
[] Run the `sre.hardware.upgrade-firmware` cookbook
[] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations
[] Run the `sre.hosts.reimage` cookbook
    • Task
    https://en.wikipedia.org/wiki/Wikipedia:New_page_patrol_source_guide
    • Task
    ## Description

**Steps to reproduce (step by step instructions, with links, commands and necessary data to reproduce the error)**

# try to run JS or Python code
# see that it is broken

**Observed behavior**

* JS and Python code is broken

**Expected behavior/Acceptance criteria (returned value, expected error, performance expectations, etc.)**

The issue is (apparently) that the resource-limit flags don't work with the uncompiled WASM binary. To fix this, we should add an ENABLE_WASMEDGE_RESOURCE_LIMITS flag to control whether to use the wasmedge CLI security measures:

* only add `--gas-limit`, etc. flags when `ENABLE_WASMEDGE_RESOURCE_LIMITS` is true
* add the `ENABLE_WASMEDGE_RESOURCE_LIMITS` flag in production
* very much do not add that flag in the Beta cluster

---

## Completion checklist

* [ ] Before closing this task, review one by one the checklist available here: https://www.mediawiki.org/wiki/Abstract_Wikipedia_team/Definition_of_Done#Back-end_Task/Bug_completion_checklist
    • Task
    ## Description

Metadata maps may now be nested multiple layers deep. Frontend display needs to handle nested maps, including i18n of nested keys.

**Desired behavior/Acceptance criteria**

[ ] nested metadata map keys can be internationalized
[ ] nested metadata maps can be displayed

## Completion checklist

* [ ] Before closing this task, review one by one the checklist available here: https://www.mediawiki.org/wiki/Abstract_Wikipedia_team/Definition_of_Done#Front-end_Task/Bug_Completion_Checklist
    • Task
    If a domain is on the "caution" or "warn" list, that says something about the quality of the source. "Inspect" could be made available as a neutral third option, for instance, for domains that are ambiguous like doi.org.
    • Task
    At the moment the citation watchlist script works entirely in the browser, loading two revisions for each page and diffing them. If you have a lot of pages on your watchlist, or a lot of recent pages loaded, this can be a lot of work, and it's redundant. What would make this lower-bandwidth on the client's part would be if a central server did this diffing and analysis work, and clients simply communicated with the server. This would require the user to consent to sending watchlist screening data to this server.
    • Task
    In order to support access to lexicographic data from Wikidata, we need to be able to support enumerations. We use identity widely in the Wikifunctions data model, for example in Z4K1, Z8K5, and Z40K1. It would make sense to use it in some of our existing types where we currently do not, e.g. Z14, Z60, and Z61. Planned types will also require identity, particularly enumerations such as grammatical features. This epic captures the work to support enumerations in Wikifunctions, which will unlock the integration with Wikidata from the perspective of types.
    • Task
    This is a placeholder epic for the Automation testing work for Q4. Todo: identify which of the top four recognized sub-epics we will concentrate on this quarter.
    • Task
    A global gadget is one that is deployed to MediaWiki.org under a "Global-" prefix that is then referenced from individual wiki deployments. https://www.mediawiki.org/wiki/Global_gadgets
    • Task
    Production monitoring improvements so we know when something is wrong. Right now we only know whether it's up, what the request rate is, and the hardware resource consumption. This is currently preventing us from debugging render/parser slowness.

https://wikitech.wikimedia.org/wiki/Wikifunctions/Performance_observability

--------

**Definition of Done**

[ ] we are able to pinpoint the source of performance issues in our features, in various countries
[ ] we are alerted when the success rate of requests drops (this is not currently connected to alerting, end-to-end testing in production, or individual tests for Python and JS)
[ ] we are alerted if a different edge case comes up that's not covered by the manual checklist
    • Task
    Function call metadata provides valuable info to function contributors and users. When the metadata shown is incomplete, it deprives them of the info they need to understand and debug function behavior.

------

**Definition of Done**

[ ] metadata contains information about nested function calls within a composition (not just the top-level function call)
[ ] the display layer can handle potentially nested metadata map elements
    • Task
    This is a placeholder epic for the Multilingual work for Q4. Todo: identify which of the top four recognized sub-epics we will concentrate on this quarter.
    • Task
    When a domain is shown as matching a list, the list should be included in the tooltip. This probably needs to wait for the upgraded tooltip (T363381). This should also include a link to the lists in the report.
    • Task
    At the moment, you only get "Warn: [domains]" or "Caution: [domains]". We would like to include more information, like which list a given match is on. The current built-in browser tooltip leaves us with limited options, so we need something more like a popup-on-hover.
    • Task
    If you are using the upgraded watchlist, and you either have live updates on or push the button to update the feed, the refresh overwrites the changes made by the script. Ideally, the data generated by this script would be persisted somewhere, so that when these refreshes happen, the script can just re-apply the changes.
    • Task
    - A custom list is defined in the list of lists
- The user has a configuration page that determines which lists are turned off/on as an override to the default
- This list should be editable directly within the watchlist. The user should never have to interact with their configuration page
    • Task
    **Feature summary** (what you would like to be able to do and where):

JavaScript and CSS code should be able to reliably detect when Parsoid HTML is being used and which version is being used, so it can adapt accordingly to significant breaking changes while still being backwards compatible. The solution should meet the following criteria:

[] Should be possible to target styles at Parsoid HTML or legacy parser HTML via a single CSS selector, e.g. `[data-parsoid] a`
[] Should be possible to target styles at a specific version of Parsoid
[] Should be possible to check whether Parsoid HTML is being returned in JavaScript
[] Should be possible to get the Parsoid version by reading the HTML in JavaScript

**Use case(s)** (list the steps that you performed to discover that problem, and describe the actual underlying problem which you want to solve. Do not describe only a solution):

[] Handling breaking changes to the HTML
[] Migrating section collapsing code in MobileFrontend

**Benefits** (why should this be implemented?):

[] Easier to roll back changes safely when issues occur
[] Easier to prepare for changes prior to issues occurring.
    • Task
    == Requestor provided information and prerequisites ==

**Complete ALL items below as the individual person who is requesting access:**

* Wikimedia developer account username: `Jsn.sherman`
* Email address: `jsherman@wikimedia.org`
* SSH public key (must be a separate key from Wikimedia cloud SSH access): `ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdUcVkD6BtqZJlTW5QPf63PekD3uSUg+5L8x7el71xf jsherman@wikimedia.org`
* Requested group membership: `deployment`
* Reason for access: to reduce deployer burden as Moderator tools works towards deploying & maintaining a new extension; I'm starting by working through the checklist for [[ https://wikitech.wikimedia.org/wiki/Backport_windows#New_Backport_Team_member_check-list | becoming a backporter ]]
* Name of approving party (manager for WMF/WMDE staff): [[ @DMburugu | Dennis Mburugu ]]
* Ensure you have signed the L3 Wikimedia Server Access Responsibilities document: signed 2024-04-24
* Please coordinate obtaining a comment of approval on this task from the approving party.

== SRE Clinic Duty Confirmation Checklist for Access Requests ==

This checklist should be used on all access requests to ensure that all steps are covered, including expansion to existing access. Please double-check the step has been completed before checking it off. **This section is to be confirmed and completed by a member of the #SRE team.**

[] - User has signed the L3 Acknowledgement of Wikimedia Server Access Responsibilities Document.
[] - User has a valid NDA on file with WMF legal. (All WMF Staff/Contractor hiring are covered by NDA. Other users can be validated via the NDA tracking sheet)
[] - User has provided the following: developer account username, email address, and full reasoning for access (including what commands and/or tasks they expect to perform)
[] - User has provided a public SSH key. This ssh key pair should only be used for WMF cluster access, and not shared with any other service (this includes not sharing with WMCS access, no shared keys.)
[] - The provided SSH key has been confirmed out of band and is verified not being used in WMCS.
[] - Access request (or expansion) has sign-off of WMF sponsor/manager (sponsor for volunteers, manager for WMF staff)
[] - Access request (or expansion) has sign-off of group approver indicated by the approval field in data.yaml

For additional details regarding access request requirements, please see https://wikitech.wikimedia.org/wiki/Requesting_shell_access
    • Task
    There exists code in `AbstractCheckUserPager` that combines the rows from the three result tables, groups these rows, sorts them, and then truncates to the limit. This code would otherwise be duplicated in `Special:Investigate`-related code, so moving it to a service will help reduce duplication and make testing easier. It should be possible to move the code in `AbstractCheckUserPager::groupResultsByIndexField` and some of `AbstractCheckUserPager::reallyDoQuery` into the service for use by `Special:Investigate` in T347102. However, this code needs to be updated to support grouping by multiple fields, so this will involve more than just moving existing code to a service. Doing this will be useful to make T360712 easier to achieve. (A rough sketch of the service surface follows below.)
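A hedged sketch of what the extracted service could look like (all names are hypothetical; the real logic would come from `AbstractCheckUserPager::groupResultsByIndexField` and `reallyDoQuery`, extended to multiple grouping fields):

```lang=php
class CheckUserResultAggregator {

	/**
	 * Combine rows from the three result tables, group them by the given
	 * index fields, sort newest-first, and truncate to the limit.
	 *
	 * @param stdClass[] $rows Rows merged from cu_changes, cu_log_event
	 *   and cu_private_event
	 * @param string[] $indexFields Fields to group by (plural, per this task)
	 * @param int $limit
	 * @return stdClass[]
	 */
	public function aggregate( array $rows, array $indexFields, int $limit ): array {
		$grouped = [];
		foreach ( $rows as $row ) {
			$key = implode( '|', array_map(
				static fn ( string $field ) => $row->$field ?? '',
				$indexFields
			) );
			// Keep one representative row per group.
			$grouped[$key] ??= $row;
		}
		// Sort by the first index field, descending.
		usort( $grouped, static fn ( stdClass $a, stdClass $b ) =>
			( $b->{$indexFields[0]} ?? '' ) <=> ( $a->{$indexFields[0]} ?? '' ) );
		return array_slice( $grouped, 0, $limit );
	}
}
```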
    • Task
    ## Background

We need to implement the Popover component in Codex. The need for this component stems from having an existing [[ https://doc.wikimedia.org/oojs-ui/master/demos/?page=widgets&theme=wikimediaui&direction=ltr&platform=desktop#PopupButtonWidget-quiet-with-popup-head-with-icon-align-force-left | Popover in OOUI ]] and conversation from the [[ https://phabricator.wikimedia.org/T340456 | Tooltip task ]].

### Description

The Popover component provides a localized container, based on a trigger, for long-form information, layouts, and interactive elements.

### User stories

- As a user, I need additional space to provide information or interactivity based on an existing element on the page.

### Potential use cases

| {F37670383} | [[ https://query.wikidata.org/querybuilder/?uselang=en | Wikidata Query Builder ]] uses the Popover component from the WiKit design system (in deprecation). |
| {F34408572} {F34408574} | Tooltips in OOUI, seen on Special:Preferences > Notifications (desktop and mobile) |
| {F37114658} | Growth implementation in Vue (see related task T340199) |
| {F37121012} | Popover in [[ https://wmde.github.io/wikit/?path=/story/vue_popover--basic | WMDE Wikit ]] |
| {F41713385} {F41713386} {F41713387} | Popups with different content in Wikipedia. |

## Acceptance criteria

### Minimum viable product

This task covers the minimum viable product (MVP) version of this component. MVP includes basic layout, default states, and most important functionality.

**MVP scope**

- [] ???

**Design**

- [] Implement the component in Figma

**Code**

- [] Implement the Vue component in Codex
- [] Implement the CSS-only component in Codex (optional -- TBD as part of refinement)
    • Task
    This task documents on a high level how all the pieces of the Future of the Wishlist project tie together. This includes the intake form, voting, the bot, and on-wiki templates and pages. This is all subject to debate.

=== Technical goals

* Write our code so that it could easily be moved to an extension (if/when that happens)
* The survey should work on any MediaWiki installation, especially local environments
* We should be able to develop and test without deploying as a gadget, i.e. wishlist-intake is essentially loaded like a user script.
* Use the new `package` feature of #mediawiki-extensions-gadgets so that production has separate JS pages for each component. This should not complicate local development in any way.
* "Installation" of the survey should be as simple as using Special:Import with an XML file (or running a script) to create the various pages, installing the gadget with the deploy script, and starting the bot
** On local envs, this can all happen with a single command
* The bot should be written in Node (so it can live in the same repo as wishlist-intake) and contributors with a username other than MusikAnimal should also be able to contribute to it (no more Ruby / MusikBot framework) (T361067)
* Try to get everything that is needed into the same git repo
* Everything should be highly configurable, i.e. setting the wishlist root to "Community Wishlist", and defining the "focus areas" (formerly known as categories) should live as a JSON config file; TBD if this lives on-wiki or in the repo (n.b. T363229)
* The bot goes off this same config, and will live in the same repo as a script so that you can run it during development on your local MW installation
* All the wishlist pages, templates, and other on-wiki content should be in version control, and the deploy script handles shipping them (with CLI flags to control what gets shipped)
** This is mainly for local envs and new wishlist installations. Production wiki pages can be edited by anyone, so after initial deployment we likely won't touch normal wikitext pages again.

**Nice to haves**

* Make the wishlist database-driven, build an API for it, and let the gadget generate dynamic content instead of the bot editing those pages.

=== Workflow

* User browses to the survey home (T363241)
* User clicks the button to create a proposal
* User is redirected to Special:CreateProposal where the gadget will create a full-page form (no dialogs or anything!)
* Upon submission (T362761, T363223), the user is redirected to the proposal page that was created, i.e. `Community Wishlist/My_proposal`
** Note we will no longer put proposals as a subpage of a category
* If/when desired, use the #cws_manager gadget to set up the proposal with #mediawiki-extensions-translate
* Staff will add the proposal to a focus area (formerly known as proposal categories)
** TBD how -- maybe a field in the intake form shown only to staff (?)
* Users can edit a proposal using the normal edit links, which will be intercepted and the user redirected to Special:EditProposal/My_proposal using the same intake form
* Users can browse to various other pages such as focus areas (T363240)

Meanwhile:

* The bot (T361067) picks up that a new proposal was created and updates the corresponding lists and [[ https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Staff_instructions#Automated_counts | count pages ]], just like in older surveys
** It also processes changes to any existing proposals, and updates counts accordingly
** //If time allows:// the bot should instead write the data to a database
* Pages that list wishes or have dynamic content (T363241, T363236, T363237, maybe T363240) are managed by the bot
** //If time allows:// the content can instead be populated by the gadget, pulling data from an API that reads from the bot's database (note we will get complaints about CSP warnings for external API calls)

Voting works just like it has in the past. Focus areas will also be vote-able.

=== Roadmap

The durations shown are conservative estimates of implementation time for a single engineer.

[x] Implement wishlist intake dev environment (~1 week)
[] Finish up intake form, with or without submission (3-5 weeks)
[] Implement module to parse proposals -- to be used by the bot and (possibly) the gadget (1-3 days)
[] T362809 Create and package survey pages as XML dump and/or script (~1 week)
[] T361067 Rewrite bot to work on new survey structure (1-2 weeks)
[] Update #cws_manager as needed (~1 week)
[] Rework voting gadget as needed, or merge into intake gadget (1-3 days)
[] Rework [[ https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Staff_instructions#AbuseFilters | AbuseFilters ]] (1-2 days)

From here we go one of two routes:

**Bot-powered approach** //(the planned route for now)//

[] Make the bot populate the dynamic content on focus areas and index pages (3-5 days)
[x] No changes needed to the gadget for this

**Database-driven approach** //(if we have time)//

[] Expand bot to write to database (1-3 days)
[] Write micro app with REST API to serve the data (~1 week)
[] Write modules to populate dynamic content on focus area and index pages (1-2 weeks)

=== In the future

The //future// of the Future of the Wishlist:

* Any production wiki should be able to install the survey software (again with minimal effort), and all proposals etc. still go to Meta, i.e. we'd make use of [[ https://doc.wikimedia.org/mediawiki-core/master/js/module-mediawiki.ForeignApi.html#.ForeignApi | mw.ForeignApi ]].
* Survey dashboard with fancy visualizations and such.
* Possibly move everything to a proper MediaWiki extension.
    • Task
    * [[ https://phabricator.wikimedia.org/T363370 | Verify 1.43.0-wmf.3 ]]
* Verify 1.43.0-wmf.4
* [[ https://phabricator.wikimedia.org/T363372 | Verify 1.43.0-wmf.5 ]]

-----

Deployment blockers task: T361398 (Add any blockers as subtasks to this ticket)

# TODO

[] Run QA in production for the following tickets: https://phabricator.wikimedia.org/maniphest/query/TkPnjkdbHEQ0/#R
[] Add #Verified tag to each ticket when done
[] Summarize the results of the QA as a comment in this ticket

# Sign off steps

[] Create tickets for any issues that were detected during deployment verification.

== QA Results - Prod

✅ ⬜❌

|**Verified**|**Task**|**Title**|**Test Results/Comments**|
|--|--|--|--|
| .. | .. | .. | .. |
    • Task
    * [[ https://phabricator.wikimedia.org/T363373 | Verify 1.43.0-wmf.4 ]]
* Verify 1.43.0-wmf.5
* [[ https://phabricator.wikimedia.org/TBC | Verify 1.43.0-wmf.6 ]]

-----

Deployment blockers task: T361399 (Add any blockers as subtasks to this ticket)

# TODO

[] Run QA in production for the following tickets: https://phabricator.wikimedia.org/maniphest/query/acyz2VtGn7fX/#R
[] Add #Verified tag to each ticket when done
[] Summarize the results of the QA as a comment in this ticket

# Sign off steps

[] Create tickets for any issues that were detected during deployment verification.

== QA Results - Prod

✅ ⬜❌

|**Verified**|**Task**|**Title**|**Test Results/Comments**|
|--|--|--|--|
| .. | .. | .. | .. |
    • Task
    It would be nice if we had the ability to make metrics/alerts for future audiences projects. Doing this seems like a few-day project, though. See https://wikimedia.slack.com/archives/CTFK3B423/p1713981855060969?thread_ts=1713980097.238969&cid=CTFK3B423 for details from the Observability team.

The main open questions from this are:

- what metrics we would even want (at least latency / traffic / tokens consumed)
- how we update k8s configs for our project, given we are not directly setting k8s options
- where we make changes to `modules/profile/manifests/prometheus/ops.pp`
    • Task
    * [[ https://phabricator.wikimedia.org/T363369 | Verify 1.43.0-wmf.2 ]]
* Verify 1.43.0-wmf.3
* [[ https://phabricator.wikimedia.org/T363373 | Verify 1.43.0-wmf.4 ]]

-----

Deployment blockers task: T361397 (Add any blockers as subtasks to this ticket)

# TODO

[] Run QA in production for the following tickets: https://phabricator.wikimedia.org/maniphest/query/C4mQw9O3j_fl/#R
[] Add #Verified tag to each ticket when done
[] Summarize the results of the QA as a comment in this ticket

# Sign off steps

[] Create tickets for any issues that were detected during deployment verification.

== QA Results - Prod

✅ ⬜❌

|**Verified**|**Task**|**Title**|**Test Results/Comments**|
|--|--|--|--|
| .. | .. | .. | .. |
    • Task
    * [[ https://phabricator.wikimedia.org/T361456 | Verify 1.43.0-wmf.1 ]]
* Verify 1.43.0-wmf.2
* [[ https://phabricator.wikimedia.org/T363370 | Verify 1.43.0-wmf.3 ]]

-----

Deployment blockers task: T361395 (Add any blockers as subtasks to this ticket)

# TODO

[] Run QA in production for the following tickets: https://phabricator.wikimedia.org/maniphest/query/oktqvUwwFylh/#R (TBC)
[] Add #Verified tag to each ticket when done
[] Summarize the results of the QA as a comment in this ticket

# Sign off steps

[] Create tickets for any issues that were detected during deployment verification.

== QA Results - Prod

✅ ⬜❌

|**Verified**|**Task**|**Title**|**Test Results/Comments**|
|--|--|--|--|
| .. | .. | .. | .. |
    • Task
    **Steps to replicate the issue** (include links if applicable):

* Go to the official Grafana metrics board for the Wikidata Query Builder: https://grafana.wikimedia.org/d/RA1j2T0Mk/wikidata-query-builder
* Look at the section "Query-related metrics"

**What happens?**: Most of the panels show no data at all.

**What should have happened instead?**: Probably one of the following:

* there should be data for basically every day that a query is run (that is: every day)
* if no longer tracking that data was an intentional decision, the panels should be removed
* there should be a note explaining that tracking has stopped but the panels are kept for historic reasons (or specifying the actual reasons for keeping the panels)
    • Task
    It would be good to know where we are burning tokens in production. Adding the tokens used (input and output) to the structured logs would be helpful.
    • Task
    Although the scraper is still running on all wikis, we could add the results from the three runs on dewiki to a spreadsheet in a similar format as in [[ https://docs.google.com/spreadsheets/d/1q71Swzxpf2U4shhSJl8fry-CHg1RauJXBlXN_PHSlk4/edit#gid=611791940 | 📄 References scraper incomplete results 2023-06-01 ]]. It seems best to add new sheets to that document. For the three runs on de.wiki we could also experiment with showing trends in the numbers.
    • Task
    Similarly to T363357, show deleted contributions on Special:IPContributions. This might involve making a separate pager and having a "deleted" mode for the special page, rather than combining the two lists into one.
    • Task
    Ahead of T363358, increase test coverage of these classes to help prevent breakages.
    • Task
    == Requestor provided information and prerequisites ==

**Complete ALL items below as the individual person who is requesting access:**

* Wikimedia developer account username: hghani
* Email address: hghani-ctr@wikimedia.org
* SSH public key (must be a separate key from Wikimedia cloud SSH access): already on file
* Requested group membership: airflow-analytics-product-admins
* Reason for access: Deploy Airflow DAGs owned by the [Movement Insights team](https://meta.wikimedia.org/wiki/Movement_Insights)
* Name of approving party (manager for WMF/WMDE staff): @OSefu-WMF as manager, @mpopov as group approver
* Ensure you have signed the L3 Wikimedia Server Access Responsibilities document: already done (T322145)
* Please coordinate obtaining a comment of approval on this task from the approving party.

== SRE Clinic Duty Confirmation Checklist for Access Requests ==

This checklist should be used on all access requests to ensure that all steps are covered, including expansion to existing access. Please double-check the step has been completed before checking it off. **This section is to be confirmed and completed by a member of the #SRE team.**

[x] - User has signed the L3 Acknowledgement of Wikimedia Server Access Responsibilities Document.
[x] - User has a valid NDA on file with WMF legal. (All WMF Staff/Contractor hiring are covered by NDA. Other users can be validated via the NDA tracking sheet)
[x] - User has provided the following: developer account username, email address, and full reasoning for access (including what commands and/or tasks they expect to perform)
[x] - User has provided a public SSH key. This ssh key pair should only be used for WMF cluster access, and not shared with any other service (this includes not sharing with WMCS access, no shared keys.)
[x] - The provided SSH key has been confirmed out of band and is verified not being used in WMCS.
[x] - Access request (or expansion) has sign-off of WMF sponsor/manager (sponsor for volunteers, manager for WMF staff)
[x] - Access request (or expansion) has sign-off of group approver indicated by the approval field in data.yaml

For additional details regarding access request requirements, please see https://wikitech.wikimedia.org/wiki/Requesting_shell_access
    • Task
    Hi all! The web team will be at the Wikimedia Hackathon 2024, and we are interested in collaborating with attendees to get projects ready for dark mode. If you are a template editor on any wiki, or have the ability to push a Gerrit patch, we'd love to chat!

# How you can help

## Talk to us and attend our session

We'd love to talk to you about any concerns you have about the upcoming dark mode, and to tell you more about it. Please consider attending our workshop on day 1: T362816!

Who will be there:

- Our designer @JScherer-WMF
- Engineers @jdlrobson and @KSarabia-WMF

## Get dark mode enabled

Come talk to us and we'll get you set up with dark mode on your phone/desktop.

## Contributing code

We have various open bugs across MediaWiki core and Wikimedia extensions: https://phabricator.wikimedia.org/project/board/6717/?filter=aYMQUAzU7wtw

We'll be around to provide support helping you get set up with MediaWiki, to discuss solutions, and to code review if needed.

# Fixing templates

Dark mode requires changes on certain wikis, and we need help from template editors to fix some of these. Fixes will expedite the release for your favorite project. Some guidance is provided at https://www.mediawiki.org/wiki/Recommendations_for_night_mode_compatibility_on_Wikimedia_wikis and tooling at https://night-mode-checker.wmcloud.org/

Talk to us and we'll help you get your wiki ready!
    • Task
    Ahead of T363357, split out a parent class `ContributionsSpecialPage` from `SpecialContributions`, so that the form and display can be reused. Similarly split an abstract parent class from `ContribsPager`.
    • Task
    Create a new special page, Special:IPContributions, that works like Special:Contributions but shows contributions from temporary accounts using a particular IP address. * Access is limited in the same way as IP reveal * Access is logged * Valid targets are an IP address or range (within the same limits as for Special:Contributions)
    • Task
In T361123 a new provider config `skipDashboardListing` was introduced to allow providers to opt out of the dashboard listing. Since T362203 requires introducing another config to set a help link, it would be useful to have an array of options passed to the constructor instead of individual arguments; a sketch of the pattern is shown below.
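For illustration only, here is a minimal sketch of the options-object pattern in Go (the extension itself is PHP, and every name here is hypothetical, not the extension's actual API):

```
package main

import "fmt"

// ProviderOptions gathers what were previously individual constructor
// arguments, so a new setting (like the help link) can be added without
// changing the constructor signature again.
type ProviderOptions struct {
	SkipDashboardListing bool
	HelpURL              string
}

type Provider struct {
	opts ProviderOptions
}

// NewProvider takes a single options value instead of a growing list of
// positional arguments.
func NewProvider(opts ProviderOptions) *Provider {
	return &Provider{opts: opts}
}

func main() {
	p := NewProvider(ProviderOptions{
		SkipDashboardListing: true,
		HelpURL:              "https://example.org/help",
	})
	fmt.Println(p.opts.SkipDashboardListing, p.opts.HelpURL)
}
```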
    • Task
    There are multiple pages of documentation that provide conceptual and reference information about the concept of a "session" and how it is handled/measured, both generally and in specific WMF data sources. These documentation pages should be combined or deduplicated, and a single source of truth page should be linked to from the related Glossary entries in DataHub. Currently, https://wikitech.wikimedia.org/wiki/Analytics/Sessions is linked to from the DataHub glossary entries, but the wiki page hasn't had a meaningful content edit since 2023, and it isn't easily discoverable from related documentation in the Data_Lake/Traffic subpages nor from the related documentation on Meta-Wiki. - https://wikitech.wikimedia.org/wiki/Analytics/Sessions - https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/SessionLength - https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/mobile_apps_session_metrics - https://meta.wikimedia.org/wiki/Research:Activity_session - https://meta.wikimedia.org/wiki/Research:Estimating_session_duration - DataHub glossary entry: [[ https://datahub.wikimedia.org/glossaryTerm/urn:li:glossaryTerm:0f6b8770-8db9-4eb5-b205-4225daf5bb1b/Documentation?is_lineage_mode=false | Activity Session ]] - DataHub glossary entry: [[ https://datahub.wikimedia.org/glossaryTerm/urn:li:glossaryTerm:1921bcda-850a-42b4-aa57-9c3cd093d069/Documentation?is_lineage_mode=false | Browser session ]]
    • Task
**Background** The Edit Patrol feature was scaled over the past 6 months - to Indonesian Wikipedia around November 3, 2023 - to French and Spanish Wikipedias around March 21, 2024 - to Igbo and Chinese Wikipedias around April 24, 2024 **The task** - Share preliminary results for the indicators below: **Validation** - Key Indicator 1: 65% of target mature audiences that use the tool say they find it helpful for maintaining the quality of wikis and would recommend it to other patrollers - Key Indicator 2: Edits made by mature audiences increase by 5% - Key Indicator 3: 10% of target mature audiences engage with filter for preferences - Key Indicator 4: 65% of target mature audiences engage with the tool at least three times in a thirty-day window **Guardrails:** - Experienced users without rollback rights, users that have and have not used alternative patrolling tools equally understand the workflow - We do not receive reports of the tool being used to negatively target underrepresented content or contributors, based on in-app reporting mechanisms **Curiosities:** - How does use of our tool compare to other patrolling tools when looking at MediaWiki tags (SWViewer, Huggle, and Twinkle)? - Do we see an increase in Undo/Rollback/Thank events? - How popular is this task with our target audience relative to other Suggested Edits tasks? - What actions are most popular in the feature? - For Saved messages: - How often do users create messages from scratch instead of using an example message? - How often do users modify an example message before sending? - How often do users modify an example message and save that version to "Your Messages"? - How often do users click on each of the 10 example messages? - For templates: - How often are users using a template while posting talk page messages in Edit Patrol? - How often are they saving a message to "Your Messages" that contains a template? (30-day analysis to be conducted after release to all Wikis)
    • Task
###Background Product teams across our organization have been implementing single-page application (SPA) tools and other pages outside of the MediaWiki domain. These pages are not influenced by skins, and their designs were freely tailored to their projects and users' needs. As a result, the interfaces of these solutions are not consistent with each other or the rest of Wikimedia core pages. Some examples of these tools are: - [[ https://en.wikipedia.org/wiki/Special:ContentTranslation#suggestions | Special:ContentTranslation ]] (Actually part of the MediaWiki domain, but overriding the default skins) - [[ https://mismatch-finder.toolforge.org/ | Wikidata Mismatch Finder ]] - [[ https://query.wikidata.org/querybuilder/?uselang=en | Wikidata Query Builder ]] - [[ https://item-quality-evaluator.toolforge.org/ | Wikidata Item Quality evaluator ]] ###Problem There's ambiguity regarding whether SPAs and other non-MediaWiki pages should conform, and to what degree, to Codex/Wikimedia design style guidelines involving layout and grid design, font styles, etc. The ambiguity increases in cases where the tools need to transition to utilizing Codex components and styles. Without a clear direction, future design efforts may continue to face challenges in deciding whether and when to maintain consistency and alignment with Wikimedia standards. ###Solution We should decide on the expected level of standardization of SPAs and document whether it is acceptable for them to maintain the current level of freedom or if, instead, these pages should adhere more closely to Codex and/or Wikimedia design style guidelines. ###Potential approaches We should evaluate the advantages and disadvantages (implications for scalability, consistency and usability) of the following options: 1. The design of non-MediaWiki pages should adhere to Codex design guidelines: We should define the necessary design requirements (layout, font styles, etc.) that these tools should follow. 2. Non-MediaWiki pages should be aligned with the rest of Wikimedia core pages (e.g. emulate the default Vector 2022 skin). 3. Maintain the current level of freedom. ###Considerations - We should evaluate which (other) key stakeholders should be involved in this decision. ###Acceptance criteria [] We have defined and documented a unified approach to designing non-MediaWiki pages
    • Task
== Background Once we've switched from the migration build of Vue 3 to the regular build of Vue 3, the compatConfig settings in component definitions will be ignored. We should remove them before that happens to avoid unexpected problems. == User story As an engineer I don't want unexpected issues with my code when the DST switches away from the Vue 3 migration build. == Acceptance criteria # QuickSurveys [] https://gerrit.wikimedia.org/g/mediawiki/extensions/QuickSurveys/+/9795ea285151cafc5f56cea8e145090c4b870fe7/resources/ext.quicksurveys.lib/vue/QuickSurvey.vue [] https://gerrit.wikimedia.org/g/mediawiki/extensions/QuickSurveys/+/9795ea285151cafc5f56cea8e145090c4b870fe7/resources/ext.quicksurveys.lib/vue/render.js # Vector [] https://gerrit.wikimedia.org/g/mediawiki/skins/Vector/+/6531a7bcbae1c744e3e909ac8f916b412f84b55d/resources/skins.vector.search/App.vue # ReadingList [] https://gerrit.wikimedia.org/g/mediawiki/extensions/ReadingLists/+/b34df73650bbcd31d6ccec85281dce15f473d82f/resources/readinglist.scripts/views/IntermediateState.vue#11 [] https://gerrit.wikimedia.org/g/mediawiki/extensions/ReadingLists/+/b34df73650bbcd31d6ccec85281dce15f473d82f/resources/readinglist.scripts/views/ReadingListPage.vue#98 # NearbyPages [] https://gerrit.wikimedia.org/g/mediawiki/extensions/NearbyPages/+/97f816117431ade9cdae759d57357079e0e6dd42/resources/ext.nearby.scripts/App.vue#99 [] https://gerrit.wikimedia.org/g/mediawiki/extensions/NearbyPages/+/97f816117431ade9cdae759d57357079e0e6dd42/resources/ext.nearby.scripts/PageList.vue#34
    • Task
    ==== Background Temporary accounts will first be deployed to testwiki and loginwiki, but not metawiki. The reason for not deploying to metawiki to begin with is to reduce the capacity for vandalism from this new type of user during our testing phase. Normally when a new user is created, an account for them is created on metawiki via `CentralAuthCreateLocalAccountJob`. This will fail before temporary accounts are deployed on metawiki. Once temporary accounts are deployed on metawiki, we will then need to make sure that a script is run to make an account for any temporary accounts that predate the metawiki deployment. This might involve running `createLocalAccount.php` or creating a new script specifically for doing the same thing for all existing temporary accounts. ==== What needs doing [] Before testwiki deployment: ensure that `CentralAuthCreateLocalAccountJob` fails gracefully after the testwiki deployment [] Just after metawiki deployment: run a maintenance script to ensure local accounts are made there for existing temp users.
    • Task
The research for {T362872} and {T362233} was made considerably more complex by the fact that PSP admission was silently disabled in lima-kilo. A healthy kube-apiserver must have this in the command args: ` - --enable-admission-plugins=PodSecurityPolicy,EventRateLimit,NodeRestriction ` In lima-kilo, it has: ` - --enable-admission-plugins=NodeRestriction` This is despite the kind template having this: ``` kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 featureGates: TTLAfterFinished: true kubeadmConfigPatches: - | apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration metadata: name: config apiServer: extraArgs: enable-admission-plugins: NodeRestriction,PodSecurityPolicy ``` which, [[ https://kind.sigs.k8s.io/docs/user/configuration/#kubeadm-config-patches | per the docs ]], is all it takes.
    • Task
Otherwise, once we have per-user authentication, we would not be able to know which tool the user is acting on (today we authenticate with the tool certificate, so the authenticated user is the tool itself).
    • Task
    ==== Timing This should be done after deployment of temporary accounts to testwiki and loginwiki, but before deployment to further wikis. ==== Background A request from the Stewards: > Wikimedia Stewards make use of login.wikimedia.org to see what wikis is an IP address or range active at, and make use of this decision while deciding on what IP blocks to implement. Therefore, it is required for temporary accounts to be visible at login.wikimedia.org as well. ==== What needs doing We expect this just to work, since the equivalent is possible on beta loginwiki. [] Confirm that temporary account creations are visible via https://login.wikimedia.org/wiki/Special:CheckUser
    • Task
This task will track the racking, setup, and OS installation of cloudcephosd10[35-38] == Hostname / Racking / Installation Details == **Hostnames:** cloudcephosd10[35-38].eqiad.wmnet **Racking Proposal:** Two hosts in F4, one each in C8 and D5 **Networking Setup:** 2 x 10G interfaces per server, connected to cloudswitches. Check other hosts (e.g. cloudcephosd1012) for switch config specifics. **Partitioning/Raid:** SW raid mirror for the two smaller OS drives, other drives can be left unpartitioned for ceph management. **OS Distro:** Bullseye (default unless otherwise specified) **Sub-team Technical Contact:** David Caro == Per host setup checklist == Each host should have its own setup checklist copied and pasted into the list below. ==== cloudcephosd1035 [] Receive in system on #procurement task T351332 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== cloudcephosd1036 [] Receive in system on #procurement task T351332 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== cloudcephosd1037 [] Receive in system on #procurement task T351332 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== cloudcephosd1038 [] Receive in system on #procurement task T351332 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook
    • Task
I happened to be looking at the live logs for some mw-on-k8s pods recently and I noticed several messages like this: ``` panic: runtime error: index out of range [1300] with length 1300 goroutine 1 [running]: main.removeControlChars({0xc0002da000, 0x514, 0x2000}) /go/glogger/main.go:124 +0x258 main.(*Glogger).Run(0xc00011af58) /go/glogger/main.go:157 +0x152 main.main() /go/glogger/main.go:182 +0xf3 AH00106: piped log program '/usr/bin/glogger -d -S 16384 -n 127.0.0.1 -P 10200' failed unexpectedly ``` (This was from pod mw-web.codfw.main-789949d94b-p4jvc container mediawiki-main-httpd). Logstash report: https://logstash.wikimedia.org/goto/fe6f5e097453f003eec24979eadb3a3f (Thanks @Clement_Goubert)
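For context, a hedged sketch of the likely bug class, in Go. This is a hypothetical reconstruction, not glogger's actual code: the escape-skipping logic and input are assumptions chosen only to show how a scan can index one past the end of its buffer, matching the panic's "index out of range [1300] with length 1300" shape.

```
package main

import "fmt"

// removeControlChars is a bounds-checked sketch (NOT glogger's source) of the
// failure class above: a scan that looks ahead in the buffer, e.g. to skip
// the byte after an escape character, will read buf[len(buf)] whenever the
// sequence ends exactly at the buffer boundary, unless the look-ahead index
// is re-checked against len(buf).
func removeControlChars(buf []byte) []byte {
	out := make([]byte, 0, len(buf))
	for i := 0; i < len(buf); i++ {
		b := buf[i]
		if b == 0x1b && i+1 < len(buf) { // guard: i+1 may equal len(buf)
			i++ // skip the byte following the escape character
			continue
		}
		if b >= 0x20 || b == '\n' || b == '\t' {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	// A trailing escape byte is the edge case that would panic without the bound check.
	fmt.Printf("%q\n", removeControlChars([]byte("hello\x1b[7mworld\x1b")))
}
```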
    • Task
This task will track the racking, setup, and OS installation of cloudcephosd10[39-41] == Hostname / Racking / Installation Details == **Hostnames:** cloudcephosd10[39-41].eqiad.wmnet **Racking Proposal:** All three in E4. If we need to remove older osds to make room first, coordinate with andrew or dcaro **Networking Setup:** 2 x 10G interfaces per server, connected to cloudswitches. Check other hosts (e.g. cloudcephosd1012) for switch config specifics. **Partitioning/Raid:** SW raid mirror for the two smaller OS drives, other drives can be left unpartitioned for ceph management. **OS Distro:** Bullseye (default unless otherwise specified) **Sub-team Technical Contact:** David Caro == Per host setup checklist == Each host should have its own setup checklist copied and pasted into the list below. ==== cloudcephosd1039 [] Receive in system on #procurement task T361366 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== cloudcephosd1040 [] Receive in system on #procurement task T361366 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== cloudcephosd1041 [] Receive in system on #procurement task T361366 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook
    • Task
Name: Runjini Murthy Email address: rmurthy@wikimedia.org Department: Advancement Your team: FR-Analytics Request title: Pageview Data for Fundraising Countries Goals: We'd like to augment the current fundraising analysis we do (for example, performance of banner campaigns out of impressions serviced and email campaigns out of the list sizes we send to) to include pageview data for a given country. This would help give us another data point to see if, for example, declines in a given market are due to overall pageview declines we may see in the region. Provide details about your request here: We can currently access pageview data from the Product Analytics instance. (Our saved links to the queries have been removed with the upgrade, but this should serve as an example: https://superset.wikimedia.org/superset/explore/p/lj9Q8zOb67K/) Our FR-Online colleagues would ideally like to see this integrated into our dashboards in the FR-Analytics instance of Superset, but at a minimum, it would be helpful for us to see this data at a glance on the Product instance of Superset. My teammate Joseph Mando and I pulled some data ourselves manually by country in the past; here is an example: https://docs.google.com/spreadsheets/d/10xL6alqJNNdBbOij_lq1_1vupvTnehFbMiFhYy3FAzI/edit#gid=1770797619 Could we work with your team to: 1.) Best case scenario, integrate this data into our campaign dashboards (https://analytics.frdev.wikimedia.org/superset/dashboard/campaign_overview_dashboard/) 2.) Or, set up a report that allows our Online partners to get this data on a weekly/real-time basis to monitor page view trends during the course of a campaign? Thank you for your help! Runjini Should any other stakeholders be involved or informed in this request?: For now it can be myself and my colleague, Joseph Mando (jmando@wikimedia.org) Is there a deadline or date we need to be aware of?: May 31, 2024
    • Task
Building the CI image for Gerrit fails due to openjdk 11: ``` update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode update-alternatives: error: error creating symbolic link '/usr/share/binfmts/jar.dpkg-tmp': No such file or directory ``` More log: ``` Setting up openjdk-8-jre-headless:amd64 (8u402-ga-2~deb10u1) ... (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/jjs to provide /usr/bin/jjs (jjs) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode (image.py:210) Setting up openjdk-11-jre-headless:amd64 (11.0.23+9-1~deb10u1) ... (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jjs to provide /usr/bin/jjs (jjs) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode (image.py:210) update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode update-alternatives: error: error creating symbolic link '/usr/share/binfmts/jar.dpkg-tmp': No such file or directory (image.py:210) dpkg: error processing package openjdk-11-jre-headless:amd64 (--configure): post-installation script subprocess returned error exit status 2 (image.py:210) dpkg: dependency problems prevent configuration of openjdk-11-jdk-headless:amd64: openjdk-11-jdk-headless:amd64 depends on openjdk-11-jre-headless (= 11.0.23+9-1~deb10u1); however: Package openjdk-11-jre-headless:amd64 is not configured yet. 
dpkg: error processing package openjdk-11-jdk-headless:amd64 (--configure): dependency problems - leaving unconfigured (image.py:210) ``` ``` Errors were encountered while processing: openjdk-11-jre-headless:amd64 openjdk-11-jdk-headless:amd64 (image.py:210) ```
    • Task
As part of MinT for Wikipedia Readers MVP (T359072), this ticket proposes to provide an entry point from Wikipedia article pages on mobile web. In this way, users reading an article in their language, especially when it contains little content, will have an easy way to access more content from other languages. This new entry point will be placed in the article footer: after the article contents and the last edited information, and right before the current "Related articles" section, which provides a list of articles on similar topics. The new footer section will follow a similar style to the "Related articles" one, since they support a similar "explore more" function. The proposed change is illustrated below (note that the new element will be shown in the target language, but English is used in the mockup for illustration purposes only). |Current|Proposed| |---|---| |{F48447652, size=full}|{F48447659, size=full}| # Design details {F48447784, size=full} The proposal includes the following elements: - **Footer section title.** The "Automatic translation" label will act as a title for this new area, and will follow the same style and spacing as the current "Related articles" one. - **Translation card.** A [Card Codex Component](https://doc.wikimedia.org/codex/latest/components/demos/card.html) provides access to the content from other languages. It includes the following elements: - **Title and visual element.** The article title and thumbnail help to indicate that the contents are about the current topic (in contrast to the "suggested articles" below and following the same style). For articles with no images, the same placeholder approach as for "suggested articles" is followed (we need to check whether the placeholder is skipped or a static placeholder is used in this case). - **Subtitle.** A label indicating the number of sections from other languages ("12 more sections in other languages"). We can consider simplified ways to calculate this or alternatives if it becomes technically complex. - **Supporting text.** The "Robot" icon and the "Read automatic translation" text signal to users that they will access machine-translated content and help associate the robot with the idea of automatically-generated content. ## When to show this entry point We will show this entry point on pages that meet the following criteria (all points): - Part of the main namespace. - Available in other languages.
    • Task
cf. {T133541} or {T819} (both not exactly the same but closely related). Folks changing a task's Priority value (often without bad intentions, but "the field was visible so I thought I could") regularly come up as an issue faced by teams whose workflows rely on the Priority field. Upstream Phorge (Phabricator) code itself has no further differentiation in the Maniphest (=tasks) application settings: the default Maniphest Edit Policy covers all fields of a task - the Priority field and every other field alike. To some extent this was mitigated by the introduction of [forms](https://www.mediawiki.org/wiki/Phabricator/Forms), which allow disabling or hiding specific UI fields. However, this still does not solve the situation, as a (Create or Edit) form configuration cannot be bound to a specific user group - once a task has been created via a form, the form configuration affects either all users or no users. I was quite reluctant in the past to restrict editing the Priority field, but after having seen vandalism (though that's only weakly related here) and repeated tension over the years, I've started to consider this an option. It would slightly reduce UI complexity for Phab newcomers unaware of its social conventions, and it would reduce workflow collisions; see e.g. `https://phabricator.wikimedia.org/T362986#9730145` (a random recent example).
    • Task
Even if I change the language, the change isn't applied and the interface remains in English. This happens for every language. To reproduce the issue, please visit the following pages: *https://global-search.toolforge.org/?uselang=ja *https://global-search.toolforge.org/?uselang=ast
    • Task
While running the httpbb tests in production I found an issue with the following test: ``` https://enwiki-articletopic.revscoring-articletopic.wikimedia.org/v1/models/enwiki-articletopic:predict (/srv/deployment/httpbb-tests/liftwing/test_liftwing_production.yaml:114) Status code: expected 200, got 500. Body: expected to contain 'probability', got '{"error":"An error happened while fetching feature'... (133 characters total). ```
    • Task
As an engineer I want to trace calls made by S3API, Redis, Enforcer and httputils. See the S3API tracing code as a guide; a sketch of the wrapping pattern is shown below. Todo: [] Add tracing methods for S3API, Redis, Enforcer and httputils. [] Update unit tests
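As a rough illustration of the wrapping pattern (not the repository's actual tracing library; the tracer, span, and attribute names here are made up), each client method can start a child span around the underlying call using the OpenTelemetry Go API:

```
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// traceGetObject wraps a client call in a child span: start the span from
// the incoming context, attach identifying attributes, record any error,
// and end the span when the call returns. fetchObject is a hypothetical
// stand-in for the real S3API method being traced.
func traceGetObject(ctx context.Context, bucket, key string) error {
	ctx, span := otel.Tracer("s3api").Start(ctx, "S3API.GetObject")
	defer span.End()
	span.SetAttributes(
		attribute.String("s3.bucket", bucket),
		attribute.String("s3.key", key),
	)
	if err := fetchObject(ctx, bucket, key); err != nil {
		span.RecordError(err)
		return err
	}
	return nil
}

func fetchObject(ctx context.Context, bucket, key string) error { return nil }

func main() {
	_ = traceGetObject(context.Background(), "my-bucket", "my-key")
}
```

The same shape applies to the Redis, Enforcer and httputils wrappers; only the span names and attributes change.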
    • Task
I found an issue with the "Image Recommendations" feature: - I only received the tutorial the second time I accessed the tool. - It's also inconvenient that, while writing the caption or alternative text, the image appears very small at the side; if the user wants to see the image again to be sure about what they are going to add, they have to click back (←).
    • Task
As an engineer I would like to trace calls in the auth service. Todo: [] Add a gin middleware handler using the tracing library (see the sketch below) [] Modify unit tests
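A minimal sketch of what such a middleware could look like, assuming the stock OpenTelemetry gin instrumentation (otelgin) rather than the team's in-house tracing library; the service name "auth-service" and the route are assumptions for illustration:

```
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
)

func main() {
	r := gin.New()
	// otelgin.Middleware opens a server span for every incoming request,
	// using the globally registered TracerProvider.
	r.Use(otelgin.Middleware("auth-service"))
	r.GET("/healthz", func(c *gin.Context) {
		c.Status(http.StatusOK)
	})
	_ = r.Run(":8080")
}
```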
    • Task
    * Open the standalone demo and switch to "Complex" * Enable input debugging * Observe an exception is thrown: ``` ve.dm.Node.js:708 Uncaught TypeError: Cannot read properties of null (reading 'isReadOnly') at ve.dm.Node.getOffset (ve.dm.Node.js:708:16) at ve.Node.getRange (ve.Node.js:275:20) at ve.dm.Surface.setSelection (ve.dm.Surface.js:820:63) at ve.ui.DebugBar.js:330:16 ```
    • Task
As an engineer I would like to trace calls in the realtime service. See the 'otel-tracing-library' draft MR in structured-data for more details. Todo: [] Commit submodules with OTEL integrated [] Create tracing library constructor in dependency injection libraries [] Modify unit tests Note: * Make sure the Cognito, Enforcer and Redis methods are traced in the OTEL library. See the S3API tracing code as a guide.
    • Task
Make a Google Drive spreadsheet based on [[ https://docs.google.com/spreadsheets/d/1q71Swzxpf2U4shhSJl8fry-CHg1RauJXBlXN_PHSlk4/edit#gid=611791940 | last year's ]], which makes our scraper results more human-readable. Populate it with the Feb-April dewiki results. * [ ] Don't reorder columns if at all possible. The all-wikis output should have a stable column order. * [ ] Copy the new [[ https://gitlab.com/wmde/technical-wishes/scrape-wiki-html-dump/-/blob/main/metrics.md | metrics.md ]] documentation strings into a header row. * [ ] Color-code column groups to split by plugin. * [ ] Format numbers appropriately, e.g. as integers or percentages, rounding to reasonable significant digits. * [ ] Decide what to do about the sheet with dynamics over time. == Outcome == New spreadsheet: https://docs.google.com/spreadsheets/d/1w1WE8sGfZfIt6gJEY_9wAoxJoYl7-NnCWrSion_CMSs/edit
    • Task
    As an engineer I would like to trace calls in snapshots. See 'otel-tracing-library' draft MR in structured-data for more details. Todo: [] Commit submodules with OTEL integrated [] Create tracing library constructor in dependency injection libraries [] Modify unit tests
    • Task
    As an engineer I would like to trace calls in on-demand. See 'otel-tracing-library' draft MR in structured-data for more details. Todo: [] Commit submodules with OTEL integrated [] Create tracing library constructor in dependency injection libraries [] Inject Tracer into Subscriber [] Modify unit tests
    • Task
As an engineer I would like to trace calls in structured-data. See the 'otel-tracing-library' draft MR in structured-data for more details. Todo: [] Commit submodules with OTEL integrated [] Create tracing library constructor in dependency injection libraries [] Inject Tracer into Subscriber [] Modify unit tests [] Propagate Event ID in Event-bridge handlers (see the propagation sketch below) Notes: Make sure the S3API methods are traced in the OTEL library
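For the event-bridge propagation item, here is a sketch of how correlation data can ride along with events, assuming W3C trace-context propagation via the OpenTelemetry Go SDK. The task mentions an Event ID specifically, so treat this as the analogous mechanism, not the team's agreed design; the metadata map stands in for event-bridge message attributes:

```
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// The producer injects the current trace context into the event's metadata,
// and the handler extracts it so downstream spans join the same trace.
func injectTraceContext(ctx context.Context, meta map[string]string) {
	otel.GetTextMapPropagator().Inject(ctx, propagation.MapCarrier(meta))
}

func extractTraceContext(ctx context.Context, meta map[string]string) context.Context {
	return otel.GetTextMapPropagator().Extract(ctx, propagation.MapCarrier(meta))
}

func main() {
	// Register the W3C trace-context propagator; without this the global
	// propagator is a no-op and nothing is injected.
	otel.SetTextMapPropagator(propagation.TraceContext{})

	meta := map[string]string{}
	injectTraceContext(context.Background(), meta)
	_ = extractTraceContext(context.Background(), meta)
}
```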
    • Task
As an engineer I would like to trace calls in structured-data. See the 'otel-tracing-library' draft MR in structured-data for more details. Todo: [] Commit submodules with OTEL integrated [] Create tracing library constructor in dependency injection libraries [] Inject Tracer into Subscriber [] Modify unit tests
    • Task
    As an engineer I would like to trace calls in the ksqldb submodule. See 'otel-tracing-library' draft MR for more details. Todo: [] Inject Tracer into submodule [] Modify Unit Tests
    • Task
    **Steps to replicate the issue** (include links if applicable): (Not sure if this will work for everyone, but this is how I discovered this.) * On enwiki with an account that doesn't have the homepage feature enabled, visit Special:Preferences and enable the homepage * Visit Special:Homepage **What happens?**: * A popup appears informing me about the "updated design" (see screenshot). * However, I never used Special:Homepage with the previous design, so why would I care about the design being updated? {F48443748} **What should have happened instead?**: * When a user visits Special:Homepage for the very first time, no notification about "new features" should be visible, because //everything// is new to that user. **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia): **Other information** (browser name/version, screenshots, etc.):
    • Task
    **Steps to replicate the issue** (include links if applicable): * Go to https://toolsadmin.wikimedia.org/tools/ * Search for tools **What happens?**: The search shows all the tools, presumably because of the `tools.` prefix in the names **What should have happened instead?**: The search shows a list of tools with "tools" in the main name part **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia): **Other information** (browser name/version, screenshots, etc.):
    • Task
    As an engineer I would like to trace calls in the subscriber submodule. See 'otel-tracing-library' draft MR for more details. Todo: [] Inject Tracer into submodule [] Modify Unit Tests
    • Task
    As an engineer I would like to trace calls in parser library. See 'otel-tracing-library' draft MR for more details. Todo: [] Inject Tracer into parser submodule [] Modify Unit Tests **To consider** Will park for now until we know what to trace
    • Task
As an engineer I would like to trace calls in wmf. **To consider** We have a draft MR; modify the unit tests in that draft MR. See the 'otel-tracing-library' draft MR for more details. Todo: [x] Inject Tracer into library [] Modify Unit Tests Luvo will be a code reviewer and the rest of the team will do the code work