db1119 needs to be reimaged
When: Tuesday 14th at 08:00 UTC
Impact: the following services running on m1 will be read-only for a few seconds:
- bacula
- cas (and cas staging)
- backups
- etherpad
- librenms
- pki
- rt
Switchover steps:
OLD MASTER: db1164
NEW MASTER: db1119
- Check configuration differences between the new and old master:
pt-config-diff h=db1164.eqiad.wmnet,F=/root/.my.cnf h=db1119.eqiad.wmnet,F=/root/.my.cnf
- Enable notifications on db1119
- Silence alerts on all hosts
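Silencing can be done host-by-host in Icinga/Alertmanager or from the cumin host. A hedged sketch, assuming the standard `sre.hosts.downtime` cookbook is available; the duration and host query here are illustrative, not prescriptive:

```shell
# Illustrative only: downtime the hosts involved for the maintenance window.
# Adjust the duration and the host query to match the actual deployment.
sudo cookbook sre.hosts.downtime --hours 2 \
  -r "m1 primary switchover T350022" \
  'db1119* or db1164* or dbproxy102[24]*'
```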
- Topology changes: move everything under db1119
db-switchover --timeout=1 --only-slave-move db1164.eqiad.wmnet db1119.eqiad.wmnet
- Disable puppet on db1119 and db1164:
sudo cumin 'db1164* or db1119*' 'disable-puppet "primary switchover T350022"'
- Merge the gerrit change: https://gerrit.wikimedia.org/r/c/operations/puppet/+/973351
- Run puppet on dbproxy1022 and dbproxy1024 and check the config
run-puppet-agent && cat /etc/haproxy/conf.d/db-master.cfg
- Start the failover
!log Failover m1 from db1164 to db1119 - T350022
root@cumin1001:~/wmfmariadbpy/wmfmariadbpy# db-switchover --skip-slave-move db1164 db1119
- Reload haproxies
dbproxy1022: systemctl reload haproxy && echo "show stat" | socat /run/haproxy/haproxy.sock stdio
dbproxy1024: systemctl reload haproxy && echo "show stat" | socat /run/haproxy/haproxy.sock stdio
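`show stat` returns CSV, with the backend status in the 18th field, so a quick way to confirm the proxies now point at a healthy db1119 is to project just the proxy, server, and status columns. A minimal local sketch, with a sample CSV line standing in for the live socket output:

```shell
# Parse haproxy "show stat" CSV: field 1 = proxy name, 2 = server name,
# 18 = status. The printf line is sample data, not real socket output.
printf 'mariadb,db1119,0,0,1,5,,100,0,0,,0,,0,0,0,0,UP\n' |
  awk -F, '{print $1, $2, $18}'
# → mariadb db1119 UP
```

On the proxies themselves, pipe the real `echo "show stat" | socat ...` output into the same `awk` filter.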
- Kill connections on the old master (db1164):
pt-kill --print --kill --victims all --match-all F=/dev/null,S=/run/mysqld/mysqld.sock
- Restart puppet on the old and new masters (db1119 and db1164) for heartbeat:
sudo cumin 'db1164* or db1119*' 'run-puppet-agent -e "primary switchover T350022"'
- Check services affected (librenms, racktables, etherpad...)
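Beyond poking at each service UI, a quick sanity check is that the new master accepts connections and that heartbeat rows are advancing. A hedged sketch using the same `db-mysql` wrapper that appears elsewhere in this checklist (exact `read_only` semantics depend on the switchover tooling, so verify rather than assume):

```shell
# Confirm which host answers and whether it is writable
# (read_only = 0 is expected on the new master).
sudo db-mysql db1119 -e "SELECT @@hostname, @@read_only"

# Heartbeat timestamps should be advancing for the new master.
sudo db-mysql db1119 heartbeat -e "SELECT server_id, file, ts FROM heartbeat ORDER BY ts DESC LIMIT 5"
```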
- Clean up orchestrator heartbeat to remove the old master's entry:
sudo db-mysql db1119 heartbeat -e "delete from heartbeat where file like 'db1164%';"
- Merge the backups gerrit change: https://gerrit.wikimedia.org/r/c/operations/puppet/+/969753
- Update/resolve the Phabricator task for the failover