- Create any missing indices
- Re-index all wikis starting from 2024-02-01
Apr 25 2024
Apr 23 2024
Apr 22 2024
Apr 18 2024
Apr 16 2024
Apr 12 2024
Apr 11 2024
Apr 9 2024
Apr 8 2024
Apr 2 2024
Mar 15 2024
Mar 14 2024
Mar 13 2024
Waiting for cluster to stabilize again after expansion
Mar 12 2024
Mar 11 2024
Waiting on T359791
We split Elasticsearch's master and data nodes into their own GKE node pools and configured those pools to use a blue-green upgrade strategy. That way when a GKE node upgrade runs, only one Elasticsearch node will be taken down at a time. Since our Elasticsearch shards have node redundancy, search should continue to operate normally even with a slightly degraded cluster.
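As an illustration of why a single node going down is tolerable, here is a minimal sketch (not part of the original setup) that checks cluster health and replica counts over the standard Elasticsearch REST API; the endpoint is a placeholder:

```python
# Sketch: confirm that losing a single Elasticsearch node is safe by checking
# that every index keeps at least one replica and the cluster reports green.
# The endpoint is a placeholder, not the real cluster address.
import requests

ES = "http://localhost:9200"  # placeholder

def cluster_status() -> str:
    """'green' means every primary and replica shard is assigned."""
    return requests.get(f"{ES}/_cluster/health", timeout=30).json()["status"]

def indices_without_replicas() -> list[str]:
    """Indices that would lose their only shard copy if one node went down."""
    rows = requests.get(
        f"{ES}/_cat/indices",
        params={"format": "json", "h": "index,rep"},
        timeout=30,
    ).json()
    return [row["index"] for row in rows if int(row["rep"]) < 1]

if __name__ == "__main__":
    print("cluster status:", cluster_status())
    print("indices without replicas:", indices_without_replicas() or "none")
```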
Feb 2 2024
Feb 1 2024
Waiting on T350394
Jan 31 2024
Jan 30 2024
Jan 23 2024
Jan 22 2024
Jan 19 2024
In T352426#9472025, @Deniz_WMDE wrote: enabling/disabling the feature via the checkbox currently requires pressing the button to save the questions (not sure if this is intentional)
As discussed in the daily today, the desired behavior for the checkbox/toggle that enables/disables the feature is:
- toggling it should save the change (enable/disable, but not the questions) immediately
- if saving fails, an error snackbar should appear, as it already does for saving the questions
- when a user enables the feature for the first time, the default questions should be saved as well (even if the question form wasn't used)
- the checkbox should be turned into a toggle switch
In T354744#9472027, @Deniz_WMDE wrote: We discussed in the daily that it would be acceptable to break the logo upload feature for the time being, but this should also be reflected in the UI (by disabling the option), so users don't run into a mysterious error when we already know it is "intentionally" broken.
Jan 16 2024
Jan 12 2024
Elasticsearch shards can be rebalanced using:
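A hedged sketch of one generic way to trigger a rebalance through the standard cluster APIs (the endpoint is a placeholder, and this may not be the exact command that was used here):

```python
# Sketch: make sure automatic rebalancing is allowed, then ask the master
# to run an allocation/rebalancing pass. Endpoint is a placeholder.
import requests

ES = "http://localhost:9200"  # placeholder

# Allow rebalancing for both primary and replica shards.
requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {"cluster.routing.rebalance.enable": "all"}},
    timeout=30,
).raise_for_status()

# An empty reroute request triggers an immediate allocation/rebalance run.
requests.post(f"{ES}/_cluster/reroute", timeout=300).raise_for_status()
```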
Steps taken to stabilize the cluster:
- Added an additional Elasticsearch data node on production
- Reduced max shards per node back down to 800 (see the sketch after this list)
- Increased the startup probe timeout to 6 hours (it now takes slightly more than 4 hours for a data node to fully rejoin)
- Rebalanced all shards
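For the shard-limit step above, a minimal sketch of lowering cluster.max_shards_per_node back to 800 through the cluster settings API (endpoint is a placeholder; the actual change may have been made through configuration management instead):

```python
# Sketch: lower the cluster-wide shard limit back to 800 per data node.
import requests

ES = "http://localhost:9200"  # placeholder

resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {"cluster.max_shards_per_node": 800}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```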
Jan 9 2024
Recap: We hit the shard limit set in T350404#9340256 on the 25th of December. The limit was then bumped to 850, as we still had enough heap left on our data nodes to accommodate the extra shards. Then, on the 28th, the master nodes started to OOM.
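For context on how those two signals can be watched, a hedged sketch that reads the active shard count and per-node heap usage from the standard _cluster/health and _cat/nodes APIs (endpoint is a placeholder, not the monitoring actually in place):

```python
# Sketch: report the two signals from the recap above - total active shards
# and per-node heap usage - using the standard cluster APIs.
import requests

ES = "http://localhost:9200"  # placeholder

health = requests.get(f"{ES}/_cluster/health", timeout=30).json()
print("active shards:", health["active_shards"],
      "across", health["number_of_data_nodes"], "data nodes")

nodes = requests.get(
    f"{ES}/_cat/nodes",
    params={"format": "json", "h": "name,node.role,heap.percent"},
    timeout=30,
).json()
for node in nodes:
    print(f'{node["name"]} ({node["node.role"]}): {node["heap.percent"]}% heap')
```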