db1054 was the s2 primary master and has been failed over to db1066 (T194870)
Let's wait a couple of days before decommissioning it
- Set up a new candidate master for s2 - db1076
- Compare data between db1054 and db1076
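Before decommissioning, the data on the old and new hosts can be compared table by table. A minimal sketch of such a comparison, using plain `CHECKSUM TABLE` and `diff`; the hostnames, database name, and credentials handling are assumptions, and a production comparison would more likely use a dedicated tool such as `pt-table-checksum`:

```shell
#!/bin/bash
# Hedged sketch: compare per-table checksums between db1054 (old master)
# and db1076 (new candidate master). DB=enwiki is an example; s2 hosts
# several wikis, so this would be run per database.
OLD=db1054.eqiad.wmnet
NEW=db1076.eqiad.wmnet
DB=enwiki

for host in "$OLD" "$NEW"; do
  mysql -h "$host" -BN -e "SHOW TABLES FROM $DB" \
    | while read -r t; do
        mysql -h "$host" -BN -e "CHECKSUM TABLE $DB.$t"
      done > "/tmp/checksums.$host"
done

diff "/tmp/checksums.$OLD" "/tmp/checksums.$NEW" && echo "tables match"
```

Note that `CHECKSUM TABLE` locks each table while it runs, so this should only be done on hosts that are depooled from production traffic.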
Decommission Checklist
- - all system services confirmed offline from production use - should be done by DBA team
- - set all icinga checks to maint mode/disabled while reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration - should be done by DBA team
- - any service group puppet/hiera/dsh config removed - should be done by DBA team
- - remove site.pp (replace with role(spare::system) if system isn't shut down immediately during this process.) - should be done by DBA team: https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/442014/
START NON-INTERRUPTIBLE STEPS
- - disable puppet on host
- - power down host
- - disable switch port
- - switch port assignment noted on this task (for later removal) asw-a-eqiad:ge-3/0/32
- - remove all remaining puppet references (include role::spare)
- - remove production dns entries
- - puppet node clean, puppet node deactivate
END NON-INTERRUPTIBLE STEPS
- - system disks wiped (by onsite)
- - IF DECOM: system unracked and decommissioned (by onsite), update racktables with result
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: system added back to spares tracking (by onsite)
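The non-interruptible steps above can be sketched as a command sequence. This is a hedged outline, not the exact decommission tooling: the FQDN, sudo usage, and the puppetmaster on which `puppet node` is run are assumptions.

```shell
# Hedged sketch of the non-interruptible steps for db1054.
HOST=db1054.eqiad.wmnet

# 1. Disable puppet on the host so nothing re-enables services mid-decommission
ssh "$HOST" 'sudo puppet agent --disable "decommission T194870"'

# 2. Power down the host
ssh "$HOST" 'sudo poweroff'

# 3. Disable the switch port (Junos, on asw-a-eqiad):
#      set interfaces ge-3/0/32 disable
#      commit

# 4. On the puppetmaster: revoke the node's certificate and
#    deactivate it so its exported resources are purged
sudo puppet node clean "$HOST"
sudo puppet node deactivate "$HOST"
```

Removing the remaining puppet references and the production DNS entries happens in the repos (operations/puppet and the DNS repo) rather than on the host itself.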