db1054 was the s2 primary master and was failed over to db1066 (T194870).
Let's wait a couple of days before decommissioning.
[x] Set up a new candidate master for s2 - db1076
[x] Compare data between db1054 and db1076
== Decommission Checklist ==
[x] - all system services confirmed offline from production use - should be done by #DBA team
[x] - set all Icinga checks to maint mode/disabled while reclaim/decommission takes place.
[x] - remove system from all lvs/pybal active configuration - should be done by #DBA team
[x] - any service group puppet/hiera/dsh config removed - should be done by #DBA team
[x] - remove site.pp (replace with role(spare::system) if system isn't shut down immediately during this process.) - should be done by #DBA team: https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/442014/
START NON-INTERRUPTIBLE STEPS
[x] - disable puppet on host
[x] - power down host
[x] - disable switch port
[x] - switch port assignment noted on this task (for later removal) asw-a-eqiad:ge-3/0/32
[ ] - remove all remaining puppet references (include role::spare)
[ ] - remove production dns entries
[ ] - puppet node clean, puppet node deactivate
END NON-INTERRUPTIBLE STEPS
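The puppet cleanup step above ("puppet node clean, puppet node deactivate") would look roughly like this when run on the puppetmaster. This is a sketch, not the exact runbook; the FQDN db1054.eqiad.wmnet is assumed from this task's context:

```shell
# Sketch: run on the puppetmaster (FQDN assumed from this task).
# Revoke and remove the host's signed certificate and cached facts/reports:
sudo puppet node clean db1054.eqiad.wmnet
# Deactivate the node in PuppetDB so its exported resources stop being collected:
sudo puppet node deactivate db1054.eqiad.wmnet
```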
[ ] - system disks wiped (by onsite)
[ ] - IF DECOM: system unracked and decommissioned (by onsite), update racktables with result
[ ] - IF DECOM: switch port configuration removed from switch once system is unracked.
[ ] - IF DECOM: add system to decommission tracking google sheet
[ ] - IF DECOM: mgmt dns entries removed.
[ ] - IF RECLAIM: system added back to spares tracking (by onsite)
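The two switch-port steps in the checklist (disable the port during decommission, then remove its configuration once the system is unracked) could be done roughly as follows on the Juniper access switch noted above. The port name asw-a-eqiad:ge-3/0/32 comes from this task; treat the exact statements as an assumed sketch of the Junos CLI workflow:

```
# On asw-a-eqiad (Junos CLI sketch; port recorded earlier in this task)
configure
set interfaces ge-3/0/32 disable    # administratively disable during decom
commit
# ... later, once the system is unracked:
delete interfaces ge-3/0/32         # remove the port configuration entirely
commit
```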