Search Open Tasks
    • Task
    Due to safety concerns, [[ https://w.wiki/9wG6 | the community has reached a consensus ]] to make a configuration change that would disable the ability for IP users to create pages in the "Module" and "Template" namespaces on the Bosnian Wikipedia.
    • Task
    db1208 has been alerting on the following for a few days:
    ```
    [10:00:22] <jinxer-wm> FIRING: PrometheusMysqldExporterFailed: Prometheus-mysqld-exporter failed (db1208:13351) - TODO - https://grafana.wikimedia.org/d/000000278/mysql-aggregated - https://alerts.wikimedia.org/?q=alertname%3DPrometheusMysqldExporterFailed
    ```
    • Task
    This task will explore adding a section for Ikusgela on the Main Page. A good and clean layout is visible here: https://www.britannica.com/ (Featured video)
    • Task
    The alert `BGP status` started firing 1 month ago.
    ===== Labels
    ```lang=ini
    alertname=BGP status
    instance=cr2-eqord
    severity=warning
    source=icinga
    team=netops
    ```
    ===== Annotations
    | Name | Content |
    | --- | --- |
    | runbook | https://wikitech.wikimedia.org/wiki/Network_monitoring#BGP_status |
    | summary | BGP WARNING - AS15830/IPv4: Active (for 22d22h), AS15830/IPv6: Active (for 22d22h), AS15830/IPv4: Active (for 22d22h), AS15830/IPv6: Active (for 22d22h) |
    ===== Links
    * [[ https://alerts.wikimedia.org?q=alertname%3DBGP+status&q=%40state%3Dactive | Alert dashboard ]]
    * [[ https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=2&host=cr2-eqord&service=BGP+status | Alert source ]]
    ---
    ```lang=ini, name=Triage metadata. Do not delete.
    fingerprint=050964b8d3ecb506
    ```
    • Task
    see also https://github.com/WDscholia/scholia/issues/2412
    **Description:** The Wikimedia Foundation is nearing the 4TB limit on its Blazegraph database, necessitating the exploration of federated queries, multiple SPARQL endpoints, and potentially different query languages due to the imminent graph split planned for Q1/2024. The current 1-minute timeout on the official Wikidata Query Service (WDQS) further compounds the issue, making efficient query management crucial.
    **Problem:** As Blazegraph approaches its storage limit and with a graph split under testing, our ability to handle queries efficiently is becoming strained. This situation may force the adoption of various technical adjustments, such as federated queries and the use of different SPARQL endpoints, potentially complicating the query process.
    **Proposed Solution:**
    - Conversion to named queries: shift all relevant queries to a named-query format with parameters. This change will make queries easier to manage and modify without altering the core application code.
    - Introduction of blackbox-style SPARQL-compatible middleware: implement a SPARQL-compatible middleware layer that handles the execution of these queries. The middleware will be responsible for routing queries to the appropriate data stores and translating them as necessary, thereby abstracting the complexities from the end-users; the actual SPARQL query will be hidden from the user. This middleware will act as a broker between the client requests and the backend data stores, ensuring queries are executed on the correct store and results are returned efficiently. It will support named queries with parameters, enhancing flexibility and scalability. (A sketch of the idea follows below.)
    **Alternatives Considered:** Setting up a private instance of Wikidata as described in the CEUR-WS Vol-3262 paper, though this is resource-intensive and may not be feasible for all users.
    **Additional Context:** Recent discussions in Search Platform Office Hours and proposals for handling named queries suggest a growing need for more sophisticated query management solutions. Examples include short URLs supported by the Wikidata Query Service and internal handling by QLever, which could be extended further by our middleware.
    **References:**
    - [[ https://github.com/w3c/sparql-dev/issues/57 | Query Parameterization and label resolving issues on W3C ]]
    - Previous analysis of Blazegraph alternatives.
    - Potential for testing on platforms like https://scholia.portal.mardi4nfdi.de/.
    - Linked queries and their handling in other projects like Scholia and CEUR-WS.
    - https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_graph_split
    - https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_backend_update/October_2023_scaling_update
    - https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#Federation
    - https://phabricator.wikimedia.org/T356773
    - https://www.wikidata.org/wiki/User:Sannita_(WMF)
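    To make the named-query idea concrete, here is a minimal sketch of such a middleware's core (the registry contents, parameter syntax and endpoint routing are all assumptions for discussion, not a spec):
    ```lang=python
    import requests

    # Hypothetical registry: each named query carries a parameterised SPARQL
    # template and is pinned to the endpoint that should serve it after the
    # graph split (the routing shown here is a placeholder).
    NAMED_QUERIES = {
        "works-by-author": {
            "endpoint": "https://query.wikidata.org/sparql",
            "sparql": """
                SELECT ?work ?workLabel WHERE {
                  ?work wdt:P50 wd:%(author)s .
                  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
                }
            """,
        },
    }

    def run_named_query(name, **params):
        """Resolve a named query, substitute its parameters and route it to the
        configured endpoint; the caller never sees the SPARQL itself."""
        entry = NAMED_QUERIES[name]
        query = entry["sparql"] % params
        response = requests.get(
            entry["endpoint"],
            params={"query": query, "format": "json"},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["results"]["bindings"]

    # Usage: run_named_query("works-by-author", author="Q42")
    ```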
    • Task
    Commons image of the day is automatically curated by the community. It would be great to show it on the Main Page of other projects. Explore the possibilities to do that without a complex bot system.
    • Task
    **When:** During a pre-defined DBA maintenance window
    **Prerequisites:** https://wikitech.wikimedia.org/wiki/MariaDB/Primary_switchover
    [x] Team calendar invite
    **Affected wikis:** https://noc.wikimedia.org/conf/highlight.php?file=dblists/s7.dblist
    **Checklist:**
    NEW primary: db1236
    OLD primary: db1181
    [x] Check configuration differences between new and old primary:
    ```
    sudo pt-config-diff --defaults-file /root/.my.cnf h=db1181.eqiad.wmnet h=db1236.eqiad.wmnet
    ```
    **Failover prep:**
    [] Silence alerts on all hosts:
    ```
    sudo cookbook sre.hosts.downtime --hours 1 -r "Primary switchover s7 T363892" 'A:db-section-s7'
    ```
    [] Set NEW primary with weight 0 (and depool it from API or vslow/dump groups if it is present).
    ```
    sudo dbctl instance db1236 set-weight 0
    sudo dbctl config commit -m "Set db1236 with weight 0 T363892"
    ```
    [] Topology changes, move all replicas under NEW primary:
    ```
    sudo db-switchover --timeout=25 --only-slave-move db1181 db1236
    ```
    [] Disable puppet on both nodes:
    ```
    sudo cumin 'db1181* or db1236*' 'disable-puppet "primary switchover T363892"'
    ```
    [] Merge gerrit puppet change to promote NEW primary: https://gerrit.wikimedia.org/r/1025906
    **Failover:**
    [] Log the failover:
    ```
    !log Starting s7 eqiad failover from db1181 to db1236 - T363892
    ```
    [] Set section read-only:
    ```
    sudo dbctl --scope eqiad section s7 ro "Maintenance until 06:15 UTC - T363892"
    sudo dbctl --scope codfw section s7 ro "Maintenance until 06:15 UTC - T363892"
    sudo dbctl config commit -m "Set s7 eqiad as read-only for maintenance - T363892"
    ```
    [] Check s7 is indeed read-only
    [] Switch primaries:
    ```
    sudo db-switchover --skip-slave-move db1181 db1236
    echo "===== db1181 (OLD)"; sudo db-mysql db1181 -e 'show slave status\G'
    echo "===== db1236 (NEW)"; sudo db-mysql db1236 -e 'show slave status\G'
    ```
    [] Promote NEW primary in dbctl, and remove read-only:
    ```
    sudo dbctl --scope eqiad section s7 set-master db1236
    sudo dbctl --scope eqiad section s7 rw
    sudo dbctl --scope codfw section s7 rw
    sudo dbctl config commit -m "Promote db1236 to s7 primary and set section read-write T363892"
    ```
    [] Restart puppet on both hosts:
    ```
    sudo cumin 'db1181* or db1236*' 'run-puppet-agent -e "primary switchover T363892"'
    ```
    **Clean up tasks:**
    [] Clean up heartbeat table(s):
    ```
    sudo db-mysql db1236 heartbeat -e "delete from heartbeat where file like 'db1181%';"
    ```
    [] Change events for query killer:
    ```
    events_coredb_master.sql on the new primary db1236
    events_coredb_slave.sql on the new slave db1181
    ```
    [] Update DNS: https://gerrit.wikimedia.org/r/1025907
    [] Update candidate primary dbctl and orchestrator notes:
    ```
    sudo dbctl instance db1181 set-candidate-master --section s7 true
    sudo dbctl instance db1236 set-candidate-master --section s7 false
    (dborch1001): sudo orchestrator-client -c untag -i db1236 --tag name=candidate
    (dborch1001): sudo orchestrator-client -c tag -i db1181 --tag name=candidate
    ```
    [] Check zarcillo was updated. db-switchover should do this; if it fails, do it manually: https://phabricator.wikimedia.org/P13956
    ```
    sudo db-mysql db1215 zarcillo -e "select * from masters where section = 's7';"
    ```
    [] (If needed): Depool db1181 for maintenance.
    ```
    sudo dbctl instance db1181 depool
    sudo dbctl config commit -m "Depool db1181 T363892"
    ```
    [] Change db1181 weight to mimic the previous weight of db1236:
    ```
    sudo dbctl instance db1181 edit
    ```
    [] Apply outstanding schema changes to db1181 (if any)
    [] Update/resolve this ticket.
    • Task
    It would be nice to allow users to deeply zoom into large spherical panoramic images, uncovering more detail as they pan and zoom. We have some large images. In [[https://commons.wikimedia.org/wiki/Category:360%C2%B0_panoramas|Category:360° panoramas]], 9256 images have the correct 2:1 aspect ratio, and of those, 815 images exceed 100 Mpx in size. Attempting to load such a large image on the client side would likely fail due to memory or GPU texture size constraints.
    To generate Pannellum's multi-resolution format, we need to reproject the equirectangular source image to a set of 6 cube faces. The cube faces are tiled with a configurable tile size and no overlap. Then the cube faces are reduced in size by a factor of 2 and the tiling process repeats. This continues until the cube faces are a single tile each. There is a [[https://github.com/mpetroff/pannellum/blob/master/utils/multires/generate.py|python script]] to do this using Hugin and PIL.
    We can either queue a job on upload, like video transcode, or generate tiles on the fly in Thumbor as they are requested. Generating on the fly would be convenient, but the feasibility depends on performance. With [[ https://commons.wikimedia.org/wiki/File:Milky_Way_Above_Cerro_Pach%C3%B3n_360_Panorama_(20220408PachonFPano35mmRubinAuxi_MW_EQ-001-CC-002).jpg | a large test image]] on my laptop, generating and tiling a single cube face:
    ```
    $ printf 'o f4 v360 y0 p0 r0\np f0 w10186 h10186 v90 nTIFF_m\n' > ptscript-big
    $ /usr/bin/time PTmender ptscript-big milky.jpg
    ...
    41.74user 0.80system 0:42.57elapsed 99%CPU (0avgtext+0avgdata 2005112maxresident)k
    $ /usr/bin/time vips dzsave pano0000.tif mydz --tile-size=1024 --overlap=0 --depth=onetile --vips-concurrency=1
    ...
    2.00user 0.48system 0:01.69elapsed 146%CPU (0avgtext+0avgdata 149012maxresident)k
    ```
    Maybe it's scraping in under the threshold of feasibility. Investigations will continue.
    • Task
    === Checklist ===
    **Before the meet**
    [] Gather and review agenda items
    [] Recruit facilitators
    [] Finalize a date and time
    [] Pick a communication medium for the meeting
    [] Add details on a wiki page and set up a process for attendees to express interest in joining
    [] Add details about the meeting in the quarterly newsletter
    [] Promote the meeting and share reminders via mailing lists and other relevant venues
    [] Add on Diff calendar https://diff.wikimedia.org/event/first-language-community-meeting/
    [] Telegram channels - Wikimedia Announcements, Wikimedia Diversity Hub, Wikimedia Language engineering, Participants WCI 2023, Wikimedia Hackathon, Wikidata.
    [] Mailing lists - Wikitech-l, Wikimedia-l, Wikidata-l, Cloud-l
    [] Fill out the [[ https://docs.google.com/forms/d/e/1FAIpQLSfRDNvgAklaOqMZsiwaoisSPYpq4CqvbHfUZjOsb18scW1Ucw/viewform | translation & interpretation support request ]] form (staff access only)
    **During the meet**
    [] Begin meeting facilitation with a quick introduction round
    [] Take notes (attendees joined, discussion topics, resources shared)
    [] Record the meeting; make participants aware before starting the recording
    [] Share with all attendees the link to join the Telegram chat for follow-up conversations
    **After the meet**
    [] Add the meeting recording & notes document link on the wiki page
    • Task
    From @Jdlrobson in Slack:
    > In Minerva we were using #D73333 for red links in day theme and #E45D52 for red links in dark theme. The new Codex build uses the same #D73333 for both night and day theme. This causes an accessibility issue: T363778
    === Cause of problem
    This problem is caused by the fact that the component-specific tokens (which is where the red link colors are defined) directly reference option tokens like `{ color.red600 }`. If the tokens in this file pointed to decision tokens instead (like `{ color.destructive }`, etc.) then they would automatically be updated when the dark mode tokens are loaded. No "components-dark" file would be necessary.
    === Proposed solution
    I suggest that the `components.json` file should be updated so that all "color" option tokens (anything with `color` in the name) use appropriate tokens in the `application.json` file. So for red links, the destructive color tokens could be used, etc. (See the sketch below.)
    Alternatively, tokens to represent things like red link colors could be moved into the `application.json` file (they would also need corresponding overrides defined in the dark mode tokens file) – this is a better solution if the red link colors need to be truly independent of the destructive colors.
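    For example, the `components.json` change would be along these lines (a sketch only — `color-link-red` is the illustrative token name, and the exact decision tokens to point at are part of this task):
    ```lang=json
    {
        "color-link-red": {
            "comment": "Was { color.red600 }: an option token, so dark mode could never remap it.",
            "value": "{ color.destructive }"
        }
    }
    ```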
    • Task
    In Jeff's presentation he pointed out an issue with how the Wikibase and/or WDQS keys are configured. Look into this further.
    • Task
    ## User experience
    Suppose you have a large spherical equirectangular image called `Pano.jpg`. You embed it in a page with syntax such as
    ```
    [[File:Pano.jpg|400px|thumb|photosphere]]
    ```
    I am proposing the use of a "photosphere" image parameter which will force the image to be handled as a spherical panorama. Initially, in the page view, we want to show a thumbnail of the image, similar to what it will look like after clicking through to the pan-and-zoom interface. This requires reprojection from equirectangular to rectilinear, and cropping to a reduced field of view.
    | Example source image | Simple central crop, no reprojection | Reprojection with PTmender |
    |--|--|--|
    | {F49538260} | {F49538334} | {F49538429} |
    ## Parameters
    The yaw, pitch, roll, field of view and aspect ratio are all free parameters which need to be chosen somehow. Pannellum extracts yaw, pitch and roll from the [[https://developers.google.com/streetview/spherical-metadata | GPano metadata]] if it is available, and uses the image centre by default otherwise. That seems like a reasonable convention to follow.
    Pannellum uses a default initial horizontal FOV of 100°, which is very wide by photography standards. The [[https://en.wikipedia.org/wiki/Angle_of_view_(photography)|Wikipedia article on angle of view]] puts it into the "ultra-wide" category. I would welcome feedback on the choice of this parameter.
    A default aspect ratio of 4:3 seems reasonable, although I would welcome feedback on that also, and on whether there should be a thumbnail parameter to override it. In the portrait orientation, Pannellum's vertical field of view ends up being so large as to cause distracting distortions. Note that rectilinear projection fails at FOV ≥ 180°. A bounding box of say 100°×100° is probably more reasonable than an unlimited vertical field of view.
    ## Backend
    After reviewing a few alternatives, I am suggesting that we use PTmender from the `libpano13-bin` package for this. It is poorly documented and the C code is scary, but it is a small binary and it does the job. As long as we control the input filenames and script contents, it probably won't overflow its stack buffers. It's not parsing EXIF, it just reads the image data, so the attack surface is not too bad. It only supports TIFF output, so it's necessary to pass the resulting image through ImageMagick to convert it to a JPEG (a rough pipeline sketch follows below).
    There will be an implementation in core and an implementation in Thumbor. MediaWiki will include an `spc` option in the image filename, short for "spherical panorama crop". This will distinguish flat thumbnails from reprojected thumbnails.
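    For illustration, the whole thumbnail pipeline could look roughly like this, reusing the ptscript format from the tiling investigation above, with the proposed 100° FOV and a 4:3 output as example values:
    ```
    $ printf 'o f4 v360 y0 p0 r0\np f0 w400 h300 v100 nTIFF_m\n' > ptscript-thumb
    $ PTmender ptscript-thumb Pano.jpg
    $ convert pano0000.tif Pano-thumb.jpg   # ImageMagick step, since PTmender only emits TIFF
    ```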
    • Task
    **Steps to replicate the issue** (include links if applicable):
    * visit a wiki
    * press `F12` to open browser devtools
    * click console tab
    * type `mw.util.addPortlet( 'p-links', 'More tools', '#p-i-do-not-exist-09740892798236732' );`
    * press enter
    **What happens?**:
    * nothing
    **What should have happened instead?**:
    * warning or error to console with a human-readable error message such as "Error: attempting to addPortlet to a non-existent selector. Bad 'before' parameter."
    **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):
    **Other information** (browser name/version, screenshots, etc.):
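    A minimal sketch of the kind of guard this could be (hypothetical: the variable names and message text are illustrative, not the actual `mediawiki.util` internals):
    ```lang=js
    // Inside mw.util.addPortlet, after resolving the optional "before" selector:
    var beforeNode = before ? document.querySelector( before ) : null;
    if ( before && !beforeNode ) {
        mw.log.warn(
            'mw.util.addPortlet: the "before" selector "' + before +
            '" matched no element; the portlet was not created.'
        );
        return null;
    }
    ```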
    • Task
    **Steps to replicate the issue** (include links if applicable):
    * visit a wiki
    * press `F12` to open browser devtools
    * click console tab
    * type `mw.util.addPortletLink( 'p-i-do-not-exist-09740892798236732', 'https://www.test.com/', 'Test' );`
    * press enter
    **What happens?**:
    * nothing
    **What should have happened instead?**:
    * warning or error to console with a human-readable error message such as "Error: attempting to addPortletLink to a non-existent portlet. Bad portletId."
    **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):
    **Other information** (browser name/version, screenshots, etc.):
    • Task
    Cloud VPS Project Tested: N/A, established tech
    Site/Location: magru
    Number of systems: 2
    Service: ncredir
    Networking Requirements: external IP, for public traffic rerouting
    Processor Requirements: 2
    Memory: 4G
    Disks: 20G
    Other Requirements:
    • Task
    ####User story & summary:
    As Growth Program Manager, I want a design for Growth Team swag, so new team members and volunteers can receive team swag.
    ####Background:
    Team swag is meant for Growth team members and community members who have helped with Growth team work. We will use https://www.printful.com/. Ideally, we would have two or three swag options so we can allow folks to choose what they prefer.
    ####Acceptance Criteria:
    - Create a t-shirt design that incorporates "Wikimedia Foundation" and "Growth Team" in some way.
    - Create a zip hoodie design that incorporates "Wikimedia Foundation" and "Growth Team" in some way.
    - Ensure the design is different from previous designs
    - [Nice to have] Create a long-sleeve t-shirt design, hat, or other accessory design.
    • Task
    Your attention is requested: A new upstream version of OpenRefine is now available: 3.8.0. * https://github.com/OpenRefine/OpenRefine/releases/tag/3.8.0
    • Task
    **Steps to replicate the issue** (include links if applicable):
    * Visit https://en.m.wikipedia.beta.wmflabs.org/w/index.php?title=T352930&oldid=612901&minervanightmode=1
    **What happens?**:
    * →‎This is a section! appears in gray and is unreadable.
    * This is also an issue in the day theme.
    **What should have happened instead?**:
    * →‎This is a section! should be the same color as the rest of the text and meet AA guidelines.
    **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):
    **Other information** (browser name/version, screenshots, etc.):
    The rules in https://gerrit.wikimedia.org/g/mediawiki/core/+/a0fc7538f31e83e5ee9d345f05d7af19f2ec75f0/resources/src/mediawiki.interface.helpers.styles/linker.styles.less#55 are intended for display inside changes lists.
    * Add a rule in https://gerrit.wikimedia.org/g/mediawiki/core/+/a0fc7538f31e83e5ee9d345f05d7af19f2ec75f0/resources/src/mediawiki.action.styles/styles.less#26 that overrides it to `color: inherit;` (see the sketch below).
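    A sketch of the proposed override, assuming the section arrow comes from the `.autocomment` styling in linker.styles.less (the exact selector must be matched against the rule being overridden):
    ```lang=less
    // In mediawiki.action.styles/styles.less: autocomments shown with page
    // content should inherit the surrounding text color.
    .autocomment,
    .autocomment a {
        color: inherit;
    }
    ```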
    • Task
    I am trying to add a server-sent events client to wikibugs to watch for new events from https://gitlab-webhooks.toolforge.org/. When I use `curl -v --no-buffer -H 'Accept: text/event-stream' 'https://gitlab-webhooks.toolforge.org/sse/'` as a client I see events in real time as expected.
    When I try to connect using an asyncio client that is largely copied from the working client that wikibugs uses to connect its web and IRC components, however, things get strange. The asyncio client appears to hang while completing the initial connection. When I have the patience (or wander away to do something else with it running) the hang seems to eventually clear, at least temporarily, as noted in these IRC rants:
    ```lang=irc
    [20:11:24] <bd808> My sse magic for getting the webhook events into wikibugs is not playing nice. :/ The endpoint is working great via curl, but my asyncio python client gets lost somewhere in establishing the connection. Very frustrating at the moment.
    [21:03:33] <bd808> LOL. I left my broken client running in a background window while I worked on other things. 26 minutes after I started it the client finished the initial https handshake and started outputing data. This almost certainly is a problem in my client as the curl test work functionally instantly. Still maddening, but at least showing something more than "it doesn't work".
    [21:14:50] <brennen> 26 minutes is... impressive
    [21:16:25] <bd808> once it started actually working it seemed be perfectly happy. A second test in the same fashion started working after "only" 9 minutes.
    [21:17:26] <bd808> It must be something I did wrong in the asyncio loop setup. I'm not sure what else could pause that long before doing the needful
    ```
    The exact same client code works completely as expected when connected to a local gitlab-webhooks instance running on the same laptop. This makes the problem seem more likely to be somehow related to the two nginx reverse proxies that sit between the client and the https://gitlab-webhooks.toolforge.org/sse/ service. The fact that `curl` works unimpeded in both localhost and Toolforge endpoint tests, however, makes me cautious about directly implicating nginx.
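    For reference, a minimal standalone asyncio client along the lines of what is being attempted (using aiohttp; this is a reduced sketch, not the actual wikibugs code):
    ```lang=python
    import asyncio

    import aiohttp

    async def main():
        # No total timeout: an SSE stream is expected to stay open indefinitely.
        timeout = aiohttp.ClientTimeout(total=None, sock_connect=30)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.get(
                "https://gitlab-webhooks.toolforge.org/sse/",
                headers={"Accept": "text/event-stream"},
            ) as resp:
                print("connected:", resp.status)
                async for raw in resp.content:
                    line = raw.decode("utf-8").rstrip("\n")
                    if line.startswith("data:"):
                        print(line[len("data:"):].strip())

    asyncio.run(main())
    ```
    Timing how long the `print("connected", ...)` line takes to appear should show whether the stall is in connection setup or in the first body read.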
    • Task
    I plan to talk to folks about various aspects of [[ https://meta.wikimedia.org/wiki/Tech/News | Tech News ]] (primarily content-wise). Including what is currently valued or challenging, and what aspects could potentially be improved. Or re-phrased (draft) questions: * What info/details do people really want (or already appreciate), in Tech News? * What is the related technical information you wish you could find the fastest? In other words: There's over a decade of history (cf. [[ https://meta.wikimedia.org/wiki/Tech/News/Archives | archives ]]), and not much has changed in that time. Stability is good, but incremental evolution can also be good! If you see this task and have thoughts, please find me ([[ https://meta.wikimedia.org/wiki/User:Quiddity_(WMF) | photo reference ]]) at the event to discuss in detail (and with the shoulder-surfing insights of fingers for pointing and a screen), or subscribe here for future-details (I will add some specific questions later).
    • Task
    I have been making changes to the citation watchlist gadget. I should document them.
    • Task
    Either https://en.wikipedia.org/w/index.php?title=List_of_Pixar_films&curid=16885554&diff=1221596988&oldid=1221596475 or https://en.wikipedia.org/w/index.php?title=List_of_Pixar_films&curid=16885554&diff=1221596475&oldid=1221261763 causes a match even though no URL was added.
    • Task
    This edit to Rosabel Watson matched, even though no URLs were added. Should check for diffing glitch. https://en.wikipedia.org/w/index.php?title=Rosabel_Watson&curid=76776487&diff=1221596919&oldid=1221596571
    • Task
    sciencedirect.com should not match, BUT sciencedirect.com/topics/ should.
    At the moment, https://en.wikipedia.org/wiki/Wikipedia:Citation_Watchlist/Lists/RSP includes this line:
    * sciencedirect.com/topics/
    However, matching is on whole domains, not specific paths.
    • Task
    //(Please set yourself as task assignee of this session)// * Title of session: [[ https://meta.wikimedia.org/wiki/Wikimedia_Cuteness_Association | Cuteness association ]] meetup * Session description: Rumor[0] has it that several people[1] are travelling with a blåhaj(ar) and/or other plushie(s). It is absolutely mission-critical to hold a meetup to further spread the cuteness. * Username for contact: @taavi's still-unnamed blåhaj * Session duration (25 or 50 min): * Session type (presentation, workshop, discussion, etc.): * Language of session (English, Arabic, etc.): * Prerequisites (some Python, etc.): * Any other details to share?: * Interested? Add your username below: [0]: my Fediverse feed [1]: at least two
    • Task
    https://en.wikipedia.org/w/index.php?title=Walking_on_Broken_Glass&curid=10716195&diff=1221253384&oldid=1221253149 What I am wondering is if swapping out the URL, instead of wholesale addition/removal, is causing issues with the diff engine
    • Task
    How and where can VR and Wikimedia projects meet? This project would explore options for VR integration and browsing. Starting points might include: - The thoughts around Apple Vision at https://diff.wikimedia.org/2024/03/29/the-wikipedia-app-meets-spatial-computing/ (but covering all platforms) - Horizons Workrooms - Spatial - Open source vs. closed source approaches - 3D media on Commons - VR meeting venues (online/hybrid)
    • Task
    We should determine how to display the event timezone in the Special:AllEvents list. See T353382#9743080 for background.
    **Acceptance criteria:**
    Each event in the list should include the event timezone.
    Also see closely related ticket T363866
    • Task
    We should determine how to display the event time in the Special:AllEvents list. See T353382#9743080 for background.
    **Acceptance criteria:**
    Each event in the list should include the event time.
    Also see closely related ticket T363867
    • Task
    If an event starts Jan 1 2024 and goes through Dec 31 2024, but we set our filter to start at today's date (April 30, 2024) that event will currently not show up even though it is an active event. Instead of displaying events that start on the date of the filter, should the filter instead show events that are active on the date of the "From" filter? Or have a default view that is all open and active events? The way it is set up now, it excludes events that are active and ongoing events because they do not start on the Start date selector of the filter. To me, this is a less intuitive user experience. |gif of current behavior| |{F49333406}| **Acceptance criteria:** There is no real AC here, but this is just a placeholder to remind us to discuss this issue and then this ticket can then be edited to add some AC after a decision is made.
    • Task
    On Special:AllEvents if there is a long username it will break the layout on smaller breakpoints |gif of long username breaking layout| |{F49503131}| **Acceptance criteria:** Special:AllEvents long usernames should wrap on smaller breakpoints
    • Task
    Each event on Special:AllEvents displays only one organizer per event, regardless of how many organizers there are on the event |gif of current behavior| |{F49500276}| **Acceptance criteria:** Event should display all organizers (per AC at T353382)
    • Task
    **Steps to replicate the issue** (include links if applicable):
    * Visit https://patchdemo.wmflabs.org/wikis/abfa33da2d/w/index.php?title=Dog&veaction=edit making sure VE loads.
    * Click insert and then template
    **What happens?**: Color contrast issues {F49498667}
    **What should have happened instead?**: No color contrast issues
    **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):
    **Other information** (browser name/version, screenshots, etc.): Adding `notheme` to `oo-ui-windowManager` should fix this.
    • Task
    This task is for testing of Google Pay integration with our donation workflow, and resolving any remaining issues or inconsistencies that may be found. The following APK can be used, which has Google Pay enabled (and live), and unrestricted to any locale or language: https://drive.google.com/file/d/1WaTqTUu3exSEbeFPoO0SLZgj3PgPXDDH/view?usp=sharing This installs as the Wikipedia Beta app, so it can exist alongside any Production version of the app that might already be on your device. --- The app uses the current GeoIP country code for determining all of the following: * Which currency to use for the amount selection interface. * Whether to show the email opt-in checkbox. * Whether the Google Pay method is accepted in this region. To override your GeoIP code for testing purposes, go to Developer settings in the app and select "GeoIP Country Override". (must be uppercase country codes, e.g. "JP", "DE"; then change this field back to blank to remove the override.)
    • Task
    In [[ https://en.wikipedia.beta.wmflabs.org/w/index.php?title=Special:AllEvents&wpEndDate=&wpMeetingType=&wpSearch=&wpStartDate=2023-04-30&offset=&limit=500 | Special:AllEvents ]] the `r` cuts off in September.
    **AC:** The entire word "September" should display in the list header {F49493447}
    • Task
    ## Background
    Currently the menu component can only be triggered by a Select or Lookup component. We want to expand this to include the ability to trigger a menu with a button. {F49489018}
    ### Known use cases
    - The Catalog Table in the metrics platform will need this capability
    - Multi-blocks project
    ### Existing implementations
    Wikifunctions currently triggers a menu through a button; see under "Try this function": https://www.wikifunctions.org/view/en/Z10012 {F49489447}
    ---
    ## Codex implementation
    TBD
    ## Acceptance criteria
    TBD
    ### Minimum viable product
    TBD
    **MVP scope**
    - [] //List all parts of the MVP scope for this component//
    **Design**
    - [] Design the Figma spec sheet and add a link to it in this task
    - [] Update the component in the [Figma library](https://www.figma.com/file/KoDuJMadWBXtsOtzGS4134/%E2%9D%96-Codex-components?node-id=1891%3A4420&viewport=287%2C338%2C0.28). //This step will be done by a DST member.//
    **Code**
    - [] Implement the component in Codex
    • Task
    == Background
    As of v1.3.6 of Codex, the legacy build of Codex design tokens uses rems for size values, i.e.:
    ```lang=less
    @font-size-base: 1rem;
    ```
    In order for OOUI to be compatible with this update, it has to be modified to support `rem` values. Currently, if v1.3.6 of Codex is used (and the CSS variable issues outlined in T363849 have been resolved) the following error prevents the build from generating:
    ```
    Running "less:wikimediaui" (less) task
    >> src/styles/elements/ButtonElement.less: [L40:C1] Incompatible units. Change the units or use the unit function. Bad units: 'em' and 'rem'.
    Warning: Error compiling src/styles/elements/ButtonElement.less
    Use --force to continue.
    Aborted due to warnings.
    ```
    This error stems from somewhere in the `.theme-oo-ui-buttonElement()` mixin. It was generated by using [[ https://gerrit.wikimedia.org/r/c/oojs/ui/+/1024424 | this POC patch ]] and upgrading the version of Codex to 1.3.6. A contrived illustration of the failure appears at the end of this description.
    == User story
    - As a Wikimedia developer/maintainer, I strive to maintain consistency across interfaces by ensuring the same underlying design token values are used across interfaces.
    == Requirements
    NOTE: This task is about enabling OOUI to use a newer version of Codex design tokens, but that version should not be changed yet.
    The OOUI build should pass without error when using Codex v1.3.6. //Whether that means changing one instance of an em+rem unit addition or switching to rems in OOUI more broadly remains to be seen.//
    == Design
    - No visual changes are expected from this change; however, switching to rems might have unintended consequences with regard to accessibility and font sizes.
    == Acceptance criteria
    - The OOUI WikimediaUI theme is capable of successfully compiling using Codex v1.3.6
    == Communication criteria - does this need an announcement or discussion?
    - N/A
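    For illustration, a contrived reduction of the failure (not the actual ButtonElement code); the compiler message itself names the two ways out, changing the units or using `unit()`:
    ```lang=less
    @font-size-base: 1rem;

    .broken {
        // Fails with "Incompatible units ... 'em' and 'rem'":
        // padding-top: 0.5em + @font-size-base;

        // Option 1: make both operands use the same unit.
        padding-top: 0.5rem + @font-size-base;

        // Option 2: strip and reassign units explicitly with unit()
        // (the numeric intent must then be kept correct by hand).
        padding-bottom: unit( 0.5 + unit( @font-size-base ), rem );
    }
    ```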
    • Task
    datahub-mae-consumer appears to [[ https://grafana-rw.wikimedia.org/d/VCK8-FpZz/cwhite-logstash?orgId=1&from=1714479379716&to=1714488869552 | produce logs at ~15k events/sec when experiencing kafka issues ]]. The excessive rate appears to be due to the project not using structured logs and instead [[ https://logstash.wikimedia.org/goto/9f693bc18d3a158b959ac1f938c1d20b | logging each line of a java stack trace as a separate event ]]. We should enable datahub-mae-consumer to produce structured logs so that all lines of a stack trace show up on one event and reduce the rate of log production.
    • Task
    Currently the validation error message content is: `"communityconfiguration-schema-validation-error": "DRAFT: $2. Key: $1"`. A made-up example of this is: `DRAFT: String value found, but a number is required. Key: section_image_recommendation.maxTasksPerDay`.
    We cannot do much about the English string returned by the validation library `justinrainbow` (argument `$2`) until we tackle T351879. But we can improve the `Key: <JsonPointer>` part (argument `$1`).
    **Proposed design**
    In T359928#9751547, there's an informal design proposal: "highlighting + scroll".
    **Acceptance criteria**
    TBD
    • Task
    Could the toolforge build service upgrade the buildpack being used for golang? Currently it is using golang 1.21.5, which was released in December 2023. Meanwhile, various security-related bugs have been fixed in the standard libraries. As of filing this bug, the most current version would be 1.22.2 but see https://go.dev/doc/devel/release for up-to-date information.
    • Task
    Before we do T338598, we should reconsider the Typography scale entirely based on the needs of T363845.
    • Task
    == Background
    As of [[ https://gerrit.wikimedia.org/r/plugins/gitiles/design/codex/+/c33fcb8bd9787ddf811bae6a6c52df09358a5073 | v1.4.0 of Codex ]], the legacy build of Codex design tokens is being discontinued and will eventually be removed. The newest version of the Codex design tokens uses CSS custom properties in place of color values, i.e.:
    ```lang=less
    @color-progressive: "var( --color-progressive, #36c )";
    ```
    This change enables color theming and support for dark mode in various skins (Vector 2022 and Minerva). In order for OOUI to be made compatible with this update, Less color functions such as `lighten()`, `darken()` and `mix()` have to be removed from the WikimediaUI theme and replaced with standard design tokens or custom color values. //These color functions expect the token value to be of type color, but it is now of type string.//
    == User story
    - As a general Wikimedia user and/or editor, I expect features that use OOUI to support dark mode.
    - As a Wikimedia developer/maintainer, I strive to maintain consistency across interfaces by ensuring the same underlying design token values are used across interfaces.
    == Requirements
    NOTE: This task is about //enabling// OOUI to use a newer version of Codex design tokens, but that version should not be changed yet.
    The functions `lighten()`, `darken()` and `mix()` are removed from the WikimediaUI OOUI theme and replaced with the closest approximate Codex color token (see the sketch at the end of this description). These functions are used in the following places:
    - In `.mw-framed-button-colored()` mixin: `lighten( @active, 60% );`
    - In `.mw-tool-colored()` mixin: `lighten( @active, 60% );`
    - In `.theme-oo-ui-buttonElement()` mixin: `mix( @color-progressive--active, @background-color-disabled, 35% );`
    - In `.theme-oo-ui-popupWidget()` mixin: `darken( @border-color-base, 14% );`
    POC patch: https://gerrit.wikimedia.org/r/c/oojs/ui/+/1024424
    == Design
    //Slight// color changes are expected from this change, as colors produced by functions such as `lighten(@color-progressive, 10%)` are replaced with different color tokens. These changes should be verified by Design. The following codepen outlines the proposed color changes: https://codepen.io/j4n/pen/poBYJvq {F49496983, width=700}
    == Acceptance criteria
    - Add acceptance criteria
    == Communication criteria - does this need an announcement or discussion?
    - N/A
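    As a sketch of the shape of the change (the replacement token here is a placeholder; the real one must be chosen per call site and verified by Design):
    ```lang=less
    // Before: breaks once the token is a CSS custom-property string,
    // because darken() needs a color operand.
    // border-color: darken( @border-color-base, 14% );

    // After: point at the closest approximate Codex token instead.
    border-color: @border-color-interactive;
    ```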
    • Task
    - Do we want the “chrome” of the skin to respond to font size options, or just certain things like the article and potentially talk page text? -- Depending on this answer, we will need to support not only 14px and 16px body font sizes, but also more sizes (like 18px or 20px). - Should we have all size modes on all skins? - What factors affect the "default" behavior of the system? What if different wikis configure different defaults for each skin? -- Should the “chrome” be sized to the default option? - What about other use cases which don’t have font size options, like Wikifunctions?
    • Task
    TASK AUTO-GENERATED by Nagios/Icinga RAID event handler
    A degraded RAID (md) [[ https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=2&host=mw2382&service=MD RAID | was detected ]] on host `mw2382`. An automatic snapshot of the current RAID status is attached below.
    Please **sync with the service owner** to find the appropriate time window before actually replacing any failed hardware.
    ```
    CRITICAL: State: degraded, Active: 1, Working: 1, Failed: 0, Spare: 0
    $ sudo /usr/local/lib/nagios/plugins/get-raid-status-md
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda2[0]
          937267200 blocks super 1.2 [2/1] [U_]
          bitmap: 6/7 pages [24KB], 65536KB chunk

    unused devices: <none>
    ```
    • Task
    ##Context Based on our latest proposal to create a new type scale with specifically defined font-size and line-height combinations, and to then adapt components to adjust padding and therefore sizing based on different body font size options, we need to explore how each component changes based on these varying font scales. [[ https://www.figma.com/file/3JrVdPjEQYjWaenlVsEmww/Codex-2?type=design&node-id=1%3A119&mode=design&t=tR0pccS3ukXFArit-1 | Figma file ]]
    • Task
    This is a placeholder for the work being considered around the typography scale and font sizing.
    ### Problem
    There are inconsistencies in how elements within Codex behave in different skins and between Figma and code. This has led to confusion between design and engineering and raised questions about what the expected behavior should be.
    ### Goal
    Codex system elements should behave predictably when used within different contexts (e.g. skins) and between different representations (e.g. Figma vs code) using a standard type scale.
    ### Related / Past Work
    - {T358038}
    - {T338598}
    - {T333890}
    - {T358485}
    - {T360092}
    - {T344515}
    • Task
    User Story: “As a Signals PM, I want to experiment with article PageRank scores, CheiRank scores, and PageView scores so that we can eventually add them to the Integrity Signals product roadmap.”
    User Story: “As an SC PM, I want to cluster Wikipedia articles into sub-domains, and I want to label clusters by Wikipedia Categories or other classification labels (ArticleTopic, Infobox Name, WD InstanceOf... etc). It's hoped this experiment will lead to features for the Machine Readability product roadmap.”
    **Acceptance criteria**
    # Experiment with EN Wikipedia to generate PageRank, CheiRank and PageView scores for each article
    # Experiment with ML clustering of articles by using the internal Wikilinks to build a Graph Database of connected articles
    # Save the article Graph hierarchy to a Postgres database
    # Save the [[ https://en.wikipedia.org/wiki/PageRank#:~:text=PageRank%20(PR)%20is%20an%20algorithm,illustration%20of%20the%20Pagerank%20algorithm. | PageRank ]], [[ https://en.wikipedia.org/wiki/CheiRank | CheiRank ]] and [[ https://gitlab.enterprise.wikimedia.com/wikimedia-enterprise/experiments/store-page-views | PageView ]] scores to the same Postgres database
    Any code examples here are just for guidance on sub-task complexity. The code is just a starting point for discussion and can completely change as the work is started.
    **ToDo**
    - [ ] 0. Database Schema Setup
    # Articles: Contains all articles with columns pageURL and title.
    # PageLinks: Contains all links between pages with columns pageURL (source page) and outboundLinks (target page).
    # Categories: Mapping of articles to categories with columns pageURL, CategoryName.
    # PageViews: Tracks views per page with columns pageURL, ViewCount.
    - [ ] 1. Calculating CheiRank and PageRank (CheiRank is PageRank computed on the inverted link graph)
    - [ ] 2. Create a Link Matrix Table. Something like below, but look for ways to optimise CPU and RAM for large projects
    ```
    -- NOTE: this self-join duplicates each link once per outbound link of the
    -- target; a DISTINCT (or a direct copy of PageLinks) may be what's wanted.
    CREATE TABLE LinkMatrix AS
    SELECT pl1.pageURL AS source, pl2.pageURL AS target
    FROM PageLinks pl1
    JOIN PageLinks pl2 ON pl1.outboundLinks = pl2.pageURL;
    ```
    - [] 3. Calculate CheiRank. This script initializes CheiRank scores to 1 for each article and iteratively updates them based on the link matrix. The process continues until changes between iterations are minimal.
    ```
    CREATE TABLE CheiRankScores (
        pageURL VARCHAR PRIMARY KEY,
        score NUMERIC
    );

    -- Initialize CheiRank scores
    INSERT INTO CheiRankScores (pageURL, score)
    SELECT pageURL, 1.0 FROM Articles;

    -- Iteratively update CheiRank scores.
    -- NOTE: ROW_COUNT is a stand-in convergence test; a real implementation
    -- should compare successive scores and stop when the largest change is small.
    DO $$
    DECLARE
        v_diff NUMERIC;
    BEGIN
        v_diff := 1;
        WHILE v_diff > 0.0001 LOOP
            UPDATE CheiRankScores cs
            SET score = 0.85 * (SELECT SUM(c.score)
                                FROM CheiRankScores c
                                JOIN LinkMatrix l ON c.pageURL = l.target
                                WHERE l.source = cs.pageURL) + 0.15;
            GET DIAGNOSTICS v_diff = ROW_COUNT;
        END LOOP;
    END $$;
    ```
    - [] 4. Integrate PageView Data. Modify the CheiRank update to incorporate PageView data, adjusting influence by page popularity:
    ```
    -- Update CheiRank score calculation to factor in PageViews
    DO $$
    DECLARE
        v_diff NUMERIC;
    BEGIN
        v_diff := 1;
        WHILE v_diff > 0.0001 LOOP
            UPDATE CheiRankScores cs
            SET score = 0.85 * (SELECT SUM(c.score * pv.ViewCount)
                                FROM CheiRankScores c
                                JOIN LinkMatrix l ON c.pageURL = l.target
                                JOIN PageViews pv ON c.pageURL = pv.pageURL
                                WHERE l.source = cs.pageURL) + 0.15;
            GET DIAGNOSTICS v_diff = ROW_COUNT;
        END LOOP;
    END $$;
    ```
    - [] 5. Dimensionality Reduction. To visualize the data in 3D, use dimensionality reduction techniques such as LDA, PCA or t-SNE.
This process is typically done outside of SQL, in Python:
    5.1) PCA Exp 1
    ```
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # Load data from PostgreSQL, joining in ViewCount so there is more than one
    # numeric feature (PCA cannot extract more components than input columns)
    connection = "your_connection_string"
    query = """
        SELECT c.pageURL, c.score, pv.ViewCount
        FROM CheiRankScores c
        JOIN PageViews pv ON pv.pageURL = c.pageURL
    """
    data = pd.read_sql(query, connection)
    features = data[['score', 'ViewCount']]

    # Use PCA to reduce dimensions
    pca = PCA(n_components=2)
    reduced_data = pca.fit_transform(features)

    # Optionally use t-SNE for better clustering visualization
    tsne = TSNE(n_components=3, verbose=1, perplexity=40, n_iter=300)
    tsne_results = tsne.fit_transform(features)

    # Plot
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(tsne_results[:, 0], tsne_results[:, 1], tsne_results[:, 2])
    plt.show()
    ```
    Different methods will show us different patterns and connections when mapping from higher dimensions to lower dimensions. Given enough time we should experiment with a few of the following techniques and see if there are useful semantics to be drawn from the dimension simplification.
    5.2) PCA Exp 2
    ```
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    # Assuming 'features' is the numeric feature matrix loaded above
    pca = PCA(n_components=2)
    pca_results = pca.fit_transform(features)

    # Plotting
    plt.figure(figsize=(8, 6))
    plt.scatter(pca_results[:, 0], pca_results[:, 1])
    plt.title('PCA Results')
    plt.xlabel('PC1')
    plt.ylabel('PC2')
    plt.show()
    ```
    5.3) UMAP
    ```
    import umap

    umap_results = umap.UMAP(n_neighbors=15, n_components=3, min_dist=0.1, metric='euclidean').fit_transform(features)

    plt.figure(figsize=(8, 6))
    plt.scatter(umap_results[:, 0], umap_results[:, 1])
    plt.title('UMAP Results')
    plt.xlabel('UMAP1')
    plt.ylabel('UMAP2')
    plt.show()
    ```
    5.4) MDS
    ```
    from sklearn.manifold import MDS

    mds = MDS(n_components=3, metric=True)
    mds_results = mds.fit_transform(features)

    plt.figure(figsize=(8, 6))
    plt.scatter(mds_results[:, 0], mds_results[:, 1])
    plt.title('MDS Results')
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.show()
    ```
    5.5) ISOMAP
    ```
    from sklearn.manifold import Isomap

    isomap = Isomap(n_components=3, n_neighbors=5)
    isomap_results = isomap.fit_transform(features)

    plt.figure(figsize=(8, 6))
    plt.scatter(isomap_results[:, 0], isomap_results[:, 1])
    plt.title('Isomap Results')
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.show()
    ```
    - [] 6. Cluster Labeling. Cluster the articles based on their t-SNE coordinates. You can use a clustering algorithm like K-means or DBSCAN.
Here's an example using K-means:
    ```
    from sklearn.cluster import KMeans

    # Perform clustering on the t-SNE embedding computed in step 5
    kmeans = KMeans(n_clusters=10, random_state=42)
    data['cluster'] = kmeans.fit_predict(tsne_results)

    # Identify the top 5 categories for each cluster using the PageView data as
    # weights (assumes CategoryName and ViewCount have been merged into 'data'
    # from the Categories and PageViews tables):
    top_categories = data.groupby(['cluster', 'CategoryName'])['ViewCount'].sum().reset_index()
    top_categories = top_categories.sort_values(['cluster', 'ViewCount'], ascending=[True, False])
    top_categories = top_categories.groupby('cluster').head(5)
    ```
    Another approach:
    ```
    # Calculate top categories for each cluster
    def get_top_categories(cluster_id):
        cluster_pages = data[data['cluster'] == cluster_id]['pageURL']
        categories = pd.read_sql(
            "SELECT CategoryName FROM Categories WHERE pageURL IN {}".format(tuple(cluster_pages)),
            connection)
        weighted_counts = categories['CategoryName'].value_counts().head(5)
        return weighted_counts.index.tolist()

    # Map top categories to each cluster
    cluster_labels = {cluster: get_top_categories(cluster) for cluster in data['cluster'].unique()}

    # Now, you can add labels to your plot or use them for analysis
    print(cluster_labels)
    ```
    - [] 7. Can we compress the Link Matrix with a Neural Network encoder/decoder, without losing too much information? Maybe this could be our API payload format for clients. Here we're using a Variational Autoencoder (VAE):
    ```
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.layers import Input, Dense, Lambda
    from tensorflow.keras.models import Model
    from tensorflow.keras.losses import binary_crossentropy
    from tensorflow.keras import backend as K

    def sampling(args):
        z_mean, z_log_var = args
        batch = K.shape(z_mean)[0]
        dim = K.int_shape(z_mean)[1]
        epsilon = K.random_normal(shape=(batch, dim))
        return z_mean + K.exp(0.5 * z_log_var) * epsilon

    input_dim = data.shape[1]  # number of features
    encoding_dim = 3  # number of dimensions in the bottleneck

    # Encoder
    inputs = Input(shape=(input_dim,))
    encoded = Dense(128, activation='relu')(inputs)
    z_mean = Dense(encoding_dim)(encoded)
    z_log_var = Dense(encoding_dim)(encoded)
    z = Lambda(sampling, output_shape=(encoding_dim,))([z_mean, z_log_var])
    encoder = Model(inputs, [z_mean, z_log_var, z])

    # Decoder, built from its own layers so it can be reused standalone
    decoder_hidden = Dense(128, activation='relu')
    decoder_output = Dense(input_dim, activation='sigmoid')
    decoder_input = Input(shape=(encoding_dim,))
    decoder = Model(decoder_input, decoder_output(decoder_hidden(decoder_input)))

    # VAE model
    outputs = decoder(encoder(inputs)[2])
    vae = Model(inputs, outputs)

    # Loss function
    reconstruction_loss = binary_crossentropy(inputs, outputs)
    reconstruction_loss *= input_dim
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5
    vae_loss = K.mean(reconstruction_loss + kl_loss)
    vae.add_loss(vae_loss)
    vae.compile(optimizer='adam')

    # Train VAE (expects a purely numeric matrix scaled to [0, 1])
    vae.fit(data, epochs=50, batch_size=32)
    ```
    - [] 8. Visualize the article clusters in 3D using a plotting library like Matplotlib or Plotly, and label each cluster with its top 5 categories.
    ===== Test Strategy =====
    This is a POC experiment, no testing at this stage.
    ===== Things to consider: =====
    * As a POC this will run locally on a desktop. When deciding on a tech solution, consider scaling this for all WME projects and running daily. We'll need to keep costs down.
    ===== Description (optional) =====
    * Build a Wikilinks database using the internal project links in HTML pages.
Then build a weighting system that mimics Google's PageRank. Since CheiRank complements PageRank (it is PageRank computed on the inverted link graph), we'll build the CheiRank scores alongside it. Combine the weights with PageView scores to improve the graph network representation of how users read Wikipedia.
    * Since we have the Link matrix, we can investigate mapping this high-dimensional data into something that is easier to visualise. In doing so we can create a classification system by using our WMF categories (and/or Infobox Names or WD instanceOf labels)
    • Task
    The last leg of the WE3.2.1 hypothesis work. Depending on the case, these changes mean we should soft-deprecate the method, hard-deprecate it, remove it completely, remove it from the interface while keeping it in the class (to hide it from the outside of the rdbms lib), or only mark it `@internal` in the documentation (see the sketch after the list). Candidates for such changes are:
    [] ISQLPlatform::limitResult()
    [] ISQLPlatform::buildLike()
    [] ISQLPlatform::unionQueries()
    [] IReadableDatabase::lastErrno()
    [] IReadableDatabase::selectField()
    [] IReadableDatabase::selectFieldValues()
    [] IReadableDatabase::selectRow()
    [] IReadableDatabase::estimateRowCount()
    [] IReadableDatabase::selectRowCount()
    [] IReadableDatabase::databasesAreIndependent()
    [] IReadableDatabase::selectDomain()
    [] IReadableDatabase::wasDeadlock()
    [] IReadableDatabase::wasReadOnlyError()
    [] IReadableDatabase::primaryPosWait()
    [] IReadableDatabase::getReplicaPos()
    [] IReadableDatabase::getSessionLagStatus()
    [] IDatabase::getTopologyBasedServerId()
    [] IDatabase::getTopologyRole()
    [] IDatabase::lastDoneWrites()
    [] IDatabase::writesPending()
    [] IDatabase::writesOrCallbacksPending()
    [] IDatabase::pendingWriteQueryDuration()
    [] IDatabase::pendingWriteCallers()
    [] IDatabase::lockForUpdate()
    [] IDatabase::nextSequenceValue()
    [] IDatabase::getPrimaryPos()
    [] IDatabase::serverIsReadOnly()
    [] IDatabase::setTransactionListener()
    [] IDatabase::flushSession()
    [] IDatabase::lockIsFree()
    [] IDatabase::namedLocksEnqueue()
    [] ILoadBalancer::reuseConnection()
    [] ILoadBalancer::getConnectionRef()
    [] ILoadBalancer::getServerConnection()
    [] ILoadBalancer::getConnectionInternal()
    [] ILoadBalancer::getWriterIndex()
    [] ILoadBalancer::laggedReplicaUsed()
    [] ILoadBalancer::getReadOnlyReason()
    [] ILBFactory::newMainLB()
    [] ILBFactory::newExternalLB()
    [] ILBFactory::getAllMainLBs()
    [] ILBFactory::getAllExternalLBs()
    [] ILBFactory::hasPrimaryChanges()
    [] ILBFactory::laggedReplicaUsed()
    [] ILBFactory::disableChronologyProtection()
    [] ILBFactory::setAgentName()
    [] ILBFactory::setIndexAliases()
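    For reference, a sketch of what the different levels typically look like in MediaWiki core (the version number and suggested replacement are placeholders):
    ```lang=php
    /**
     * @deprecated since 1.43, use SelectQueryBuilder::fetchRow() instead.
     *   Leaving it at this docblock is the "soft" deprecation.
     */
    public function selectRow( $table, $vars, $conds, $fname = __METHOD__ ) {
        // "Hard" deprecation additionally emits a runtime notice to callers:
        wfDeprecated( __METHOD__, '1.43' );
        return $this->newSelectQueryBuilder()
            ->select( $vars )
            ->from( $table )
            ->where( $conds )
            ->caller( $fname )
            ->fetchRow();
    }
    ```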
    • Task
    ==Background/Goal
    In order to transfer knowledge about Webrequests and Pageviews, we need diagrams that document the data lifecycle and infrastructure.
    ==KR/Hypothesis (Initiative)
    Not yet prioritized within OKR-level work. This falls into Essential Work.
    ==Success metrics - How we will measure success
    Example areas:
    - Timeboxed to 4 hours
    - Presentation given and recorded for onboarding purposes
    - DPE Team is satisfied and demonstrates understanding of the lifecycle
    - Diagrams have a maintenance plan
    ==In scope
    - Lifecycle for all 21 technologies and 36 datasets that are part of webrequests is included
    ==Out of Scope
    - Anything outside webrequests
    ==Artifacts & Resources
    See examples: https://wikitech.wikimedia.org/wiki/Data_Engineering/Systems/Hadoop_Event_Ingestion_Lifecycle
    Pipeline tree: https://miro.com/app/board/uXjVNG_RlDk=/
    Webrequest intro slides: https://docs.google.com/presentation/d/1UKISVD5FEhBNDrxX-_sBYY8Xno0AR-J7fU32k_c-ung/edit#slide=id.g2d09cd1c67a_0_71
    • Task
    == Background
    Parent: T360092
    To enhance the flexibility and customisation capabilities of developers working on the `Vector` and `Minerva` skins, this task is to expand the configuration options. The goal is to allow developers to define multiple default values for various features such as page width, font size, and themes depending on specific conditions like main page status, page title, namespace, or query string actions. This approach builds upon the previous exclusion logic and introduces the ability to specify exceptions within certain namespaces.
    This task is designed to refine the developer experience so it would be easier to tailor the user experience more accurately based on content type or user interactions, thereby enhancing accessibility and usability across Wikimedia sites.
    == User story
    As a developer of the `Vector` and `Minerva` skins, I want the ability to configure multiple default feature values conditionally based on page characteristics such as title, namespace, or specific user actions, so that I can deliver a more customised and effective user experience.
    == Requirements
    - Propose and discuss the suggested configuration scheme, and get feedback and agreement from other team members (a purely illustrative sketch appears at the end of this description).
    - Enhance the configuration system to allow developers to dynamically set multiple and different default values for a feature in the `skin.json` file.
    - Configuration options should support:
    -- **Conditional application:** capability to apply settings based on the main page status, specific page titles, namespaces, and query string parameters.
    -- **Flexibility and control:** provide developers with the ability to specify exceptions in settings within defined namespaces.
    - Utilise, refactor and enhance the `ConfigHelper` class in the PHP code base to develop logic that reads and enforces these settings from the configuration file during runtime.
    - Ensure the configuration changes are parsed efficiently and do not impact the performance of the skins.
    - Ensure compatibility with both Vector and Minerva skins, with configurations being easy to manage and update.
    == Acceptance criteria
    [] Developers can specify and save conditional configurations for feature defaults in the `skin.json` file.
    [] The system must accurately apply these settings based on the conditions outlined in the configuration.
    [] Documentation must be clear, providing sufficient examples to assist developers in setting up and customising configurations.
    [] The following pages should get Standard as the default font size: Namespaces=0,4,12,118.
    [] All other pages would get Small as the default font size.
    == Open questions
    [] Is this for mobile and desktop or just desktop?
    [] When I visit a page where small is the default font size, can I change font size in the appearance menu? Is it disabled?
    [] When I visit a page where small is the default font size, would small be selected?
    [] When I visit a page where small is the default, should there be a banner saying "This page always appears as small"?
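    A purely hypothetical `skin.json` shape, only to make the font-size requirement above concrete — the actual scheme is exactly what this task is meant to decide:
    ```lang=json
    {
        "VectorDefaultFontSize": {
            "value": "small",
            "overrides": [
                {
                    "conditions": { "namespaces": [ 0, 4, 12, 118 ] },
                    "value": "standard"
                }
            ]
        }
    }
    ```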
    • Task
    To make things a little easier and reduce the chances of forgetting steps, let's modify the switchover template to split the following step into two.
    The current template says (this is an example from https://phabricator.wikimedia.org/T363672):
    [] Set NEW primary with weight 0 (and depool it from API or vslow/dump groups if it is present).
    ```
    sudo dbctl instance db1223 set-weight 0
    sudo dbctl config commit -m "Set db1223 with weight 0 T363672"
    ```
    Let's split that into two steps:
    [] Set NEW primary with weight 0.
    ```
    sudo dbctl instance db1223 set-weight 0
    sudo dbctl config commit -m "Set db1223 with weight 0 T363672"
    ```
    [] Depool NEW from any specific group (API, vslow, dump) if present.
    ```
    sudo dbctl instance db1223 edit
    # If some changes were made:
    sudo dbctl config commit -m "Remove db1223 from API T363672"
    ```
    • Task
    Setting up the kubeconfig files for datahub-next as per: https://wikitech.wikimedia.org/wiki/Kubernetes/Add_a_new_service#Tell_the_deployment_server_how_to_set_up_the_kubeconfig_files.
    • Task
    One of the items of data we can collect from users is their [[ https://github.com/WikipediaLibrary/TWLight/blob/master/TWLight/users/models.py#L299 | occupation ]]. This was requested by one or two partners many years ago but hasn't been needed for some time, and we don't anticipate needing it in the future. We can safely remove `occupation` as an Editor model field, and the associated features in the application/partner flow which incorporate this field.
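    For scoping, the model-side removal is a one-line Django migration (a sketch; in practice `manage.py makemigrations users` generates it, and the dependency below is a placeholder for the latest users migration):
    ```lang=python
    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [
            ("users", "0001_initial"),  # placeholder
        ]

        operations = [
            # Drops the column backing TWLight.users.models.Editor.occupation
            migrations.RemoveField(model_name="editor", name="occupation"),
        ]
    ```
    The remaining work is removing the form fields, views and partner-flow logic that reference `occupation`.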
    • Task
    I've marked the PKI hosts in https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge as "In Use" since deployment-prep and other projects rely on them to issue TLS certs. The nodes should be upgraded to Bullseye or Bookworm, but I am not 100% sure what would be the best procedure to follow.
    • Task
    See the "Changes since" hearings on https://m.mediawiki.org/wiki/Specs/HTML/2.8.0: {F49458286} The proper space is present on the non-mobile site: https://mediawiki.org/wiki/Specs/HTML/2.8.0
    • Task
    ### Scenario
    The current candidacy election process requires:
    - having reached 600 non-automated/automatic edits on Italian-language Wikipedia at least three months ago;
    - having made at least 50 non-automated/automatic edits on Italian-language Wikipedia in the last six months.
    It would be useful to create a tool that automatically excludes edits identified as automatic.
    ### Steps
    [ ] Identify the list of tools to exclude
    [ ] Optimise the database query
    [ ] Choose the language in which to write the tool
    [ ] Create the user interface
    • Task
Hi, as described in more detail in T363805, itwiki has decided to adopt an ArbCom and a new private wiki needs to be created. **Request page:** N/A **Language code:** it **Site URL:** wikipedia-it-arbcom.wikimedia.org **Project name:** Itwiki arbcom wiki **Project namespace:** Project **Project talk namespace:** Project_talk **Project logo:** https://commons.wikimedia.org/wiki/File:Wikimedia_logo_blue.svg **Timezone:** Europe/Rome **Visibility:** Private. Thanks!
    • Task
* align the validate method signature of `PropertyLabelValidator` and `PropertyDescriptionValidator` with those of the corresponding item label/description validators * after that, the `(Item|Property)(Labels|Descriptions)ContentsValidator` implementations should look more or less identical and the same test can be reused: `extensions/Wikibase/repo/rest-api/tests/phpunit/Application/Validation/LabelsAndDescriptionsContentsValidatorTest.php` * `extensions/Wikibase/repo/rest-api/tests/mocha/api-testing/PatchPropertyTest.js` already has a skipped test for this case
    • Task
    Letting users pick what kind of lexemes they want or don't want to see. For instance, I could say, "Show me only Lexemes that have P1343, with values like Q115665905."
    • Task
    The full stop/period which is automatically added when missing is on the wrong line. We should probably not add one at all and let documentation writers decide if they have written a sentence. {F49447094} https://doc.wikimedia.org/VisualEditor/master/js/ve.ce.MWBlockImageNode.html#getCssClass
    • Task
This ticket covers form design, the voting process, viewing and editing a wish, and the dashboard.
    • Task
In T346327 (and T300273), @Cyndymediawiksim wrote code for instrumenting the temporary accounts flow. This instrumentation aims to record events when: * a temporary account logs in to an already existing registered account, or * a temporary account creates a new registered account. In both cases, we record impressions, actual creations, and errors, which will allow us to determine how often those events are attempted and what the success rate is. For both events, we record the temporary account name, the registered account name, and the event type (plus the standard data provided by the eventlogging infrastructure). We do not record the IP address for either account (this info can be derived, if needed, by correlating with the `cu_changes` table and/or other sources, but it is not a part of the instrumentation we're adding). The related schema is `analytics/mediawiki/accountcreation/account_conversion` (see [source](https://schema.wikimedia.org/#!//secondary/jsonschema/analytics/mediawiki/accountcreation/account_conversion)). Within this task, we should deploy the instrumentation to production. This requires [L3SC](https://office.wikimedia.org/wiki/Legal,_Safety_%26_Security_Service_Center) approval, which is why it is factored into a separate task. ==== Acceptance Criteria [ ] Request and receive approval from L3SC for the new instrumentation [ ] Enable the instrumentation by merging https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/989216 [ ] Verify the instrumentation produces events into the data lake
    • Task
== Background This ticket will track the release of dark mode as a beta feature on desktop across wikis. == User story - As a logged-in user, I want the ability to opt into dark mode as a beta feature, so that I can benefit from it and test it before wider release == Requirements [] All subtasks are resolved and their code is in production [] Dark mode must be packaged with the current "Accessibility for reading" beta feature, which enables the appearance menu and its functionality [] Upon deployment, the dark mode option will be made available in the appearance menu [] Selecting dark mode will switch the interface to dark mode [] The default setting for the dark mode option will be "light" == Design - see subtasks == Acceptance criteria [] Ensure all requirements are complete [] Check with @ovasileva and @sgrabarczuk before deployment == Communication criteria - does this need an announcement or discussion? [] A cross-wiki announcement of the deployment must be prepared ahead of deployment and be ready to post
    • Task
Set up regular backups of the new active read-write hosts, and once the old ones are in read-only mode, send them to the backup archive, where they are not refreshed regularly. Steps: [] Set up new config (including grants) [] Remove old one [] Archive old section data
    • Task
Otherwise, once we have per-user authentication, we would not be able to know which tool the user is acting on (today we authenticate with the tool certificate, so the authenticated user is the tool itself). Note that the procedure to avoid downtime can be: * Change the API to support both endpoints (for now, add a check in the API that the path and the auth user are consistent) * Change the client to use the new URL * Remove support for the non-prefixed URL on the API side
    • Task
The UAT environment is no longer used, but it still adds cost and maintenance overhead. This environment has to be purged.
    • Task
The AWS Ohio region was primarily used when the WME AWS account was set up. Later, the services were provisioned in the us-east-1 region. Hence, there are stale resources in the Ohio region that have to be cleaned up.
    • Task
    E.g. increase from 0.1 to 0.2, 0.3, etc. [] Everyone think about what is required [] Agree on what we want to do * do we want to start doing release notes as well? [] Test it [] Document agreement (ADR?) [] Create CI job to ensure all changes increase the version number?
    • Task
== Problem Categories are outdated. I think no one will argue that the current implementation of categories is inconvenient for the user: they are difficult to remove from all pages, difficult to rename on all pages, and difficult to add. The first two difficulties are solved with the help of bots, the last with the help of the HotCat gadget. |{F49427028}|{F49427097}| Currently, categories are not intuitive for new users, and there are no tips on how to add them. This is especially true if the user uses the wikitext editor rather than the visual one. {F49428445} It seems to me that categories should be treated like interwikis and moved into a separate work area that is not part of the content. == What do I want? * As a user, I want to be able to easily remove and add a category to a page without editing it. * As a user, I want to be able to intuitively add a sort key. * As a user, I want to create wikilinks to a category without specifying a : in front of it, since I often forget this. * As an administrator, I want renaming a category to rename it on all pages. * As an administrator, I want to be able to delete a category from all pages at once. == Questions * Many templates add categories using wikitext; perhaps there is another solution. * Categories are often enclosed within `<includeonly>` tags, etc.
    • Task
====Problem |{F49423263 width=1080}| The close icon on Wikipedia's desktop preview popups for readers may be redundant, since the previews can be dismissed by moving the cursor away. ====Solution - Remove the close icon for desktop readers. - Don't remove it for mobile readers. Note: This change also removes the close icon for editors, which should be fine.
    • Task
**Description**: When navigating to the Special:CommunityConfiguration/Mentorship page and attempting to save the configuration form, an error message is displayed with an unparsed link. The expected behavior is that, if the form has errors, any links in error messages are correctly parsed. **Steps to Reproduce**: 1. Ensure your language is set to `es` 2. Go to Special:CommunityConfiguration/Mentorship (you must be logged out or not have permissions to save a configuration form for the error to show) 3. Make changes to the configuration form. 4. Click the 'Save' button. **Actual Results**: The following error message is displayed with an unparsed link: ``` Something went wrong when you saved your changes. Please try again later. This page provides interface text for the software on this wiki, and is protected to prevent abuse. To add or change translations for all wikis, please use [https://translatewiki.net/ translatewiki.net], the MediaWiki localisation project. If the problem persists, consider [$1 fill in a subject] in the bug reporting tool. ``` **Expected Results**: If an error occurs, the error message should display a correctly parsed link. **Attachments**: {F49417249}
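The symptom suggests the message text is being emitted as raw wikitext instead of being run through the message parser. As a rough illustration of the distinction in MediaWiki PHP (the message key below is hypothetical, and the actual fix may live in the frontend rendering code instead):
```lang=php
// ->text() returns the message with raw wikitext link syntax intact,
// which is what the screenshot shows; ->parse() returns HTML in which
// [https://translatewiki.net/ translatewiki.net] becomes a real link.
$msg = wfMessage( 'communityconfiguration-save-error' ); // hypothetical key
$raw = $msg->text();   // link syntax left unparsed
$html = $msg->parse(); // link rendered as <a href="https://translatewiki.net/">…</a>
```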
    • Task
**Request history:** Raafi from the Bangladesh Wiki reached out to us with a request for some specific data related to mobile phone users and editors in South Asia. Raafi is the founder of the NDEC Wikipedia Editorial and Research Team, an extracurricular Wikimedia organization primarily comprising high school students. Their team faces a challenge with editor retention, particularly among mobile users. In their most recent bootcamp, in 2022, they had over 220 registrations, a significant portion being mobile phone users. However, they observed a high dropout rate among mobile users, with only around 20 students remaining active after 5-6 months. Raafi believes that intervening to improve the mobile editing experience could help reduce the dropout rate and increase editor retention, and is therefore seeking data to support those efforts. **What's requested:** Specifically, Raafi is requesting data on: Number of mobile phone users who contribute to the movement in South Asia. Number of editors in South Asia who use mobile phones to contribute. Any studies on the difference between the editing experience on PCs and mobile phones.
    • Task
The [[ https://artportal.hu/ | Hungarian Artportal]] is currently down and will probably be closed permanently, according to [[ https://telex.hu/kult/2024/04/24/artportal-megszunes-alapitas-2003-nagy-gergely-foszerkeszto | this recent article ]]. As I have no access to [[ https://iabot.wmcloud.org/index.php?page=manageurldomain | manage an entire domain ]], please change the status of the website so IABot can start correcting the dead links on the Hungarian Wikipedia. Thanks!
    • Task
    **Problem** The `DifferenceEngineTest::testMapDiffPrevNext` test fails if the `DifferenceEngineTest` class is run standalone in a Quibble / CI test context. The test fails with the message: ``` 1) DifferenceEngineTest::testShowDiffPage with data set "missing prev" (array('rev[0]', 'prev'), array('read'), array('\(diff-empty\).*<div class="m...ittens')) OOUI\Exception: OOUI\Theme::singleton was called with no singleton theme set. in /workspace/src/vendor/oojs/oojs-ui/php/Theme.php:31 ``` **Steps to reproduce** 1. Open the Quibble shell by launching quibble and passing `-c bash` as the test command ([[ https://doc.wikimedia.org/quibble/#interacting-with-local-quibble-runs | see docs ]]) 2. Run the `DifferenceEngineTest` class on its own **Observed behaviour** The test run fails: ``` $ composer run --timeout=0 phpunit:entrypoint -- ./tests/phpunit/includes/diff/DifferenceEngineTest.php > phpunit './tests/phpunit/includes/diff/DifferenceEngineTest.php' Using PHP 7.4.33 Running with MediaWiki settings because there might be integration tests PHPUnit 9.6.16 by Sebastian Bergmann and contributors. ......................................E.......... 49 / 49 (100%) Time: 00:01.073, Memory: 111.00 MB There was 1 error: 1) DifferenceEngineTest::testShowDiffPage with data set "missing prev" (array('rev[0]', 'prev'), array('read'), array('\(diff-empty\).*<div class="m...ittens')) OOUI\Exception: OOUI\Theme::singleton was called with no singleton theme set. in /workspace/src/vendor/oojs/oojs-ui/php/Theme.php:31 Stack trace: #0 /workspace/src/vendor/oojs/oojs-ui/php/Element.php(259): OOUI\Theme::singleton() ... ``` **Expected Behaviour** The tests should pass.
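The pass-in-suite/fail-standalone pattern usually means some earlier test initialises the OOUI singleton theme as a side effect. Assuming the fix is to make `DifferenceEngineTest` self-sufficient, a sketch of the likely change in the test class:
```lang=php
protected function setUp(): void {
	parent::setUp();
	// DifferenceEngine renders OOUI widgets; ensure a singleton theme is
	// set even when no earlier test has initialised one.
	OutputPage::setupOOUI();
}
```
`OutputPage::setupOOUI()` is the core helper that installs a theme via `OOUI\Theme::setSingleton()`, which is exactly what the exception says is missing.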
    • Task
    May 7, 2024. Will be done digitally. Focus on solidifying basic skills, practical work on Wikidata, SPARQL, and working with the museum's datasets in OpenRefine for those who feel ready for it.
    • Task
Currently the producer creates an event-time based window operator that waits 5 minutes for potential duplicates. It is currently unclear when, during such a window, deduplications/merges happen. If, for example, most of them happen during the first 2 minutes, we might as well narrow the window. To get that insight, I would like to be able to monitor a histogram of the delay of the latest deduplicated/merged event. **AC:** * A histogram for `deduplicationAndMergeDelay` exists and is updated in `org.wikimedia.discovery.cirrus.updater.producer.graph.DeduplicateAndMerge#process`
    • Task
The Wikibase image takes a `JOBRUNNER_MAX_JOBS` setting for use in running the JobRunner. When no value is provided by the user, it defaults to 2 in the image. Research and/or test to confirm whether this is the best general `JOBRUNNER_MAX_JOBS` setting, and update to a better default if there is one. I suspect that a higher number may be a better default (5 or 10?), but preliminary research hasn't come up with a clear recommendation.
    • Task
Issue: Throughout the project we reference the WDQS, but in the test suite we use the naming QueryService, and sometimes query service. This naming inconsistency is distracting and would be confusing for any new developers or outside contributors to our project. Proposed Resolution: In test code, throughout the Pipeline repo, and in any external docs, standardise on using either WDQS or QueryService, and apply changes where needed to bring things into congruence. For now, use WDQS within Pipeline, as the user-facing code and Docker images use that naming. NOTE: While the simplicity and clarity of the QueryService naming may be appealing as a standard over WDQS, changing the user-facing naming is out of scope for this ticket. Having the files in congruence throughout the repo would make a later rename to QueryService, if that were decided, easy to make.
    • Task
    [x] dbstore1009 [x] db2200 [x] db2198 [x] db2195 [x] db2186 [x] db2181 [x] db2167 [x] db2166 [] db2165 master [] db2164 [] db2163 [] db2162 [] db2161 [] db2154 [] db2152 [] db2098 [x] db1226 [] db1216 [] db1214 [x] db1211 [] db1209 [] db1203 [] db1193 [] db1192 [] db1178 [] db1177 [] db1172 [] db1171 [] db1167 [x] db1154 [x] clouddb1021 [x] clouddb1020 [x] clouddb1016
    • Task
    {F49389171} Markup: ``` ==Definition== {{Image frame|caption=Teenage pregnancy rate in the United States by age group in 2013.<ref>Kost, K., Maddow-Zimet, I., & Arpaia, A. (2017). [https://www.guttmacher.org/sites/default/files/report_pdf/us-adolescent-pregnancy-trends-2013.pdf Pregnancies, births, and abortions among adolescents and young women in the United States, 2013: National and state trends by age, race and ethnicity] {{Webarchive|url=https://web.archive.org/web/20211208194741/https://www.guttmacher.org/sites/default/files/report_pdf/us-adolescent-pregnancy-trends-2013.pdf |date=2021-12-08 }}. Washington, DC: Guttmacher Institute. Lay summary at [https://www.childtrends.org/indicators/teen-pregnancy ChildTrends.org] {{Webarchive|url=https://web.archive.org/web/20211111210541/https://www.childtrends.org/indicators/teen-pregnancy |date=2021-11-11 }}</ref> |content={{Graph:Chart|width=300|height=100|xAxisTitle=Age|yAxisTitle=Pregnancy rate per 10,000|type=rect|x=under 15,15–17,18–19|y=7,208,761}} }} <!--Numbers in original source are measured as rate per 1,000--> ```
    • Task
    - Wikimedia wikis + non-Wikimedia wikis (mapped via Open Miscellaneous Wikibase) - Primary template, derivatives sync with the primary - Diffing? - Dependencies like Lua modules, CSS, gadgets? https://www.mediawiki.org/wiki/Global_templates A use case for the Convergent Template Manager.
    • Task
    ####User story & summary: As a Wikimedian reviewing Community Configuration settings, I want to easily access edit history, so that I can review and audit changes. ####Design: [[ https://www.figma.com/file/bT1O4TChNV5TpwF5JHgKAK/Community-Configuration-2.0?type=design&node-id=1229-33012&mode=design&t=pDkUOCfHYKZrvP0p-0|Figma designs]] {F45437093} ####Acceptance Criteria: Given I'm viewing a Community Configuration form, Then each form has a linked Edit History Given I'm viewing the Edit History of a Community Configuration form, Then I can easily navigate back to the Community Configuration form.
    • Task
For {T348501}, the notifications sent out to translators redirect them to the message group for which the notification was delivered, but as [[ https://translatewiki.net/wiki/Support#c-Stjn-20240429131000-Abijeet_Patro-20240424152800 | mentioned by translators ]], this is not useful for identifying the new or changed messages. Instead, linking to Recent additions/changes might be more useful. The motivation behind linking to the message group, instead of Recent additions/changes, was that the latter may no longer contain messages from the message group that the user is interested in.
    • Task
    This is the frontend component of the work to provide pan and zoom capabilities for panoramic images on Wikimedia websites. The idea is to tightly integrate Pannellum with MMV. I've noticed that the 3D extension adds a special viewer to MMV with loose integration, but this design choice is visible to the user in the form of performance degradation and visible flickering on load. So for this project, I'm arguing for implementation in core and MMV. [[ https://gerrit.wikimedia.org/r/c/mediawiki/extensions/MultimediaViewer/+/115678 | In 2014 ]], Aaron Arcos split out the Canvas class from LightboxImage, writing: ```lang=js /** * UI component that contains the multimedia element to be displayed. * This first version assumes an image but it can be extended to other * media types (video, sound, presentation, etc.). */ class Canvas extends UiElement { ``` That sounds promising, although the public methods of Canvas are still quite specific to image display. The assumption that MMV displays flat images is distributed throughout the codebase. For example in mmv.js class MultimediaViewer we have resize(), for responding to a resize of the viewport, which doesn't delegate the logic, it just requests the new image and notifies the Canvas when it is loaded. There is similarly no delegation to a media type handler in preloadThumbnails(). Canvas is a singleton and so the implementation can't easily be split by media type. I don't know what Aaron was planning when he wrote that comment. The model and provider classes are also quite specific to flat image display. In summary, substantial refactoring will be needed to introduce support for panoramic images to MMV.
    • Task
We use error code strings to tell `ValidationError` objects apart. Some of those codes are identical across validators, which has led to some code working only by accident and may cause bugs later on. We should make sure that those don't overlap. Idea: add some kind of prefix to all error codes (and context keys), e.g. the class/interface name.
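A sketch of what the prefix idea could look like; the class names, constants, and strings below are illustrative, not the actual Wikibase identifiers:
```lang=php
class PropertyLabelValidator {
	// Prefixing each code with a validator-specific namespace guarantees
	// that codes from different validators can never collide.
	public const CODE_INVALID = 'property-label-invalid';
	public const CODE_TOO_LONG = 'property-label-too-long';
}

class ItemLabelValidator {
	public const CODE_INVALID = 'item-label-invalid';
	public const CODE_TOO_LONG = 'item-label-too-long';
}

// Call sites compare against the constant rather than a bare string, so
// the underlying value can change without touching every consumer:
// if ( $error->getCode() === PropertyLabelValidator::CODE_INVALID ) { ... }
```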
    • Task
**Feature summary** When viewing an article in the Wikipedia Android app, tap on the side or bottom of the screen to move to the next page, instead of scrolling the page with a finger. **Use case(s)** Android tablets with e-ink screens are becoming popular among enthusiasts. An Android tablet with an e-ink screen seems like the perfect device for reading Wikipedia articles. Due to how e-ink displays work, animations and transitions should be avoided. The fast-updating nature of scrolling a page up the screen causes flickering, smearing, and blurring on an e-ink screen. The normal mitigation for this is to provide a "pagination" mode, where you can tap on the right side or bottom of the page and it immediately moves a whole page length. **Benefits** Pagination mode provides a faster, clearer, and cleaner reading experience for users of Android tablets with e-ink displays. Avoiding scrolling also reduces the number of screen refreshes, which increases the battery life of Android devices with e-ink displays.
    • Task
####Background We want to expose native GPay to external users in a controlled setting. Because Japan is a relatively low-traffic market for Android, and we have an upcoming campaign in Japan, we want to release GPay to external users in Japan as a first step before releasing everywhere. ####Requirements - GPay flow & donation form is translated into Japanese before deployment - Native GPay is deployed behind a feature flag for users that are not in Japan - Native GPay is only visible to app users located in Japan who are using English Wikipedia or Japanese Wikipedia - Instrumentation for GPay should be included in the same release: T363194 ####References **Full release plan** # End of April/Early May - internal testing via APK, especially for Japanese/Japan and Spanish/Spain # Week of May 13 - Low-traffic external exposure: release on Japanese and English wikis, geolimited to Japan # Week of May 27 - Release to production everywhere # May 28 - June 30 - High-traffic exposure during Q4 app campaigns (Japan will run May 28 - June 25, Spain will run June 3 - June 30)
    • Task
    **Steps to replicate the issue** (include links if applicable): * Navigate to a user's talk page that does not yet exist (User talk: QTE-Test25-WMF) * Write a talk page message * Preview **What happens?**: - See an unexpected screen and no preview: see screenshots {F49352430} {F49352452} **What should have happened instead?**: - See a preview of the talk page message in the same way I can for users that already have a talk page created (QTE-Test21-WMF) **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia): 2.7.50484-r-2024-04-19
    • Task
== Background In T354889 Vector 2022 will add a night theme. We will want to explore every nook and cranny of the interface to identify any issues that were not problems in the mobile version of the site. == User story As a product manager, I want to understand how ready Vector 2022's night theme is before immediate release. == Acceptance criteria [] Document any issues found == QA steps Force the night theme on desktop using the query string parameter. Explore the UI. Flag any color contrast issues, either in content or in the interface. == Sign off steps [] Create tickets for any issues found during QA.