- dbstore1005.eqiad.wmnet T334652
- db2187.codfw.wmnet
- db2180.codfw.wmnet
- db2171.codfw.wmnet
- db2169.codfw.wmnet
- db2158.codfw.wmnet
- db2151.codfw.wmnet
- db2141.codfw.wmnet
- db2129.codfw.wmnet
- db2124.codfw.wmnet
- db2117.codfw.wmnet
- db2114.codfw.wmnet
- db1224.eqiad.wmnet
- db1201.eqiad.wmnet
- db1187.eqiad.wmnet
- db1180.eqiad.wmnet
- db1173.eqiad.wmnet
- db1168.eqiad.wmnet
- db1165.eqiad.wmnet
- db1155.eqiad.wmnet
- db1140.eqiad.wmnet
- db1131.eqiad.wmnet
- db1213.eqiad.wmnet
- clouddb1021.eqiad.wmnet T334651
- clouddb1019.eqiad.wmnet T334651
- clouddb1015.eqiad.wmnet T334651
Description
Details
| Status | Assigned | Task |
|---|---|---|
| Stalled | Marostegui | T334650 Migrate s6 to MariaDB 10.6 |
| Open | Marostegui | T334651 Migrate wiki replicas (clouddb*) hosts to MariaDB 10.6 |
| Open | Marostegui | T334652 Migrate dbstore1005 to MariaDB 10.6 |
Event Timeline
@jcrespo how do you want to approach the backup sources migration?
My plan is, once codfw is back as secondary, to start the migration there.
We should probably get all the replicas migrated, and right before switching the master, migrate the backup sources.
Would you feel comfortable with that?
Obviously the current backup source for s6 is db2141, which also holds s1, so... what would you like to do there?
Your general plan seems good to me.
Regarding the shared instances we can do 2 things:
1. Ignore the backup sources, because backups from 10.4 should be very quickly usable for 10.6
2. Start moving instances around and upgrade the right ones (assuming there is an order of what will be upgraded next)
#2 is more time consuming, and may block you more, but #1 may be less ideal (I accept suggestions)
Mostly it will affect the dbprov hosts, as they can currently only prepare snapshots with a single MariaDB version.
In an ideal world, we could install 10.6 and 10.4 packages on the same system, side by side.
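If both package versions were installed side by side, the prepare step could pick the binary matching the version that took the snapshot. A minimal sketch of that idea (the `/opt` install paths and snapshot directory are assumptions for illustration, not the real dbprov layout; mariabackup does record `server_version` in `xtrabackup_info`):

```shell
# Hypothetical: select the mariabackup binary matching the version
# recorded in the snapshot's xtrabackup_info file, then prepare it.
SNAPSHOT_DIR=/srv/backups/snapshots/latest
SERVER_VERSION=$(grep -oP 'server_version = \K[0-9]+\.[0-9]+' \
    "${SNAPSHOT_DIR}/xtrabackup_info")

case "${SERVER_VERSION}" in
    10.4) MARIABACKUP=/opt/mariadb-10.4/bin/mariabackup ;;  # assumed path
    10.6) MARIABACKUP=/opt/mariadb-10.6/bin/mariabackup ;;  # assumed path
    *)    echo "Unknown server version: ${SERVER_VERSION}" >&2; exit 1 ;;
esac

"${MARIABACKUP}" --prepare --target-dir="${SNAPSHOT_DIR}"
```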
For more context, there are 2 backup sources for enwiki in codfw:
- db2141:3311
- db2097:3311
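As a quick illustration of checking both sources before the migration (assuming the port-per-section convention where s1 listens on 3311, as in the instance names above):

```shell
# Report hostname and server version for each enwiki backup source
# in codfw (illustrative; requires client access to the hosts).
for host in db2141.codfw.wmnet db2097.codfw.wmnet; do
    mysql -h "$host" -P 3311 -e "SELECT @@hostname, @@version;"
done
```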
Both plans are OK with me. Whatever you feel more comfortable with, I am happy to adapt to.
My suggestion is for you to do what you did here: announce your plans (and if possible, what the next section will be, in advance), and do things at your own pace; if I see something that will cause me issues I will shout as early as possible 0:-)
Sounds good. So for now I am going to go only for codfw normal replicas (once codfw is back as secondary, so after the 26th April). I will ping you before doing the candidate master and of course the master switch.
Yes, but what is going to be the next section (even if it will take a lot of time)? This way, I can put it there together with s6 in the backup source.
Change 914717 had a related patch set uploaded (by Marostegui; author: Marostegui):
[operations/puppet@production] db2124: Migrate to 10.6
Change 914717 merged by Marostegui:
[operations/puppet@production] db2124: Migrate to 10.6
Change 918240 had a related patch set uploaded (by Marostegui; author: Marostegui):
[operations/puppet@production] db2151: Migrat to 10.6
Change 918240 merged by Marostegui:
[operations/puppet@production] db2151: Migrat to 10.6
Change 918355 had a related patch set uploaded (by Marostegui; author: Marostegui):
[operations/puppet@production] db2117: Migrate to 10.6
Mentioned in SAL (#wikimedia-operations) [2023-05-10T07:42:37Z] <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2117 T334650', diff saved to https://phabricator.wikimedia.org/P48080 and previous config saved to /var/cache/conftool/dbconfig/20230510-074237-root.json
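The SAL entry above corresponds to the usual depool/upgrade/repool cycle. A rough sketch of that flow with the dbctl tooling (command shapes per dbctl's instance/config model; exact flags and staged-repool percentages may differ from what was actually run):

```shell
# Take the replica out of rotation and commit the config change.
dbctl instance db2117 depool
dbctl config commit -m 'Depool db2117 T334650'

# ...upgrade the host to MariaDB 10.6, let replication catch up...

# Put it back in rotation (often done in stages rather than at once).
dbctl instance db2117 pool
dbctl config commit -m 'Repool db2117 T334650'
```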
Change 918355 merged by Marostegui:
[operations/puppet@production] db2117: Migrate to 10.6
Change 918363 had a related patch set uploaded (by Marostegui; author: Marostegui):
[operations/puppet@production] db2187: Migrate to 10.6
Change 918363 merged by Marostegui:
[operations/puppet@production] db2187: Migrate to 10.6
I have been talking to Marko from the MariaDB InnoDB team, and there might be a regression in 10.6.12 for compressed tables due to: https://github.com/MariaDB/server/commit/8442bc6e13cea49a51bc12fc0100a0f5e9de37e4
They are still investigating, but for now I am going to stall this migration. They will keep me in the loop.
For now this looks like the tracking issue: https://jira.mariadb.org/browse/MDEV-30531
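Since the suspected regression affects compressed tables, one way to gauge exposure on a given instance is to list InnoDB tables using `ROW_FORMAT=COMPRESSED` via `information_schema` (illustrative; run against each candidate host before upgrading):

```shell
# List InnoDB tables in the compressed row format on this instance.
mysql -e "
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE engine = 'InnoDB' AND row_format = 'Compressed';"
```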