We're down to 659 hosts still using Puppet 5, so we can start reducing the number of Puppet 5 servers. One of the codfw nodes is being repurposed as a Puppet 7 server (where we previously had only two), but we already have three Puppet 7 servers in eqiad, so the old Puppet 5 servers there can simply be decommissioned over time.
puppetmaster1002 is the first that can be decommissioned in eqiad; it is long out of warranty (bought in 2016).
Steps for service owner:
[x] - all system services confirmed offline from production use
[x] - set all Icinga checks to maint mode/disabled while reclaim/decommission takes place (likely done by script; see the downtime sketch after this list)
[x] - remove system from all LVS/PyBal active configuration (see the confctl sketch after this list)
[x] - any service group puppet/hiera/dsh config removed
[x] - remove the site.pp entry and replace it with role(spare::system); recommended to ensure services are offline, but not 100% required as long as the decom script below is IMMEDIATELY run
[x] - login to a cumin host and run the decom cookbook: `cookbook sre.hosts.decommission <host fqdn> -t <phab task>` (a filled-in example follows this list). This does: bootloader wipe, host power down, Netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
[x] - remove all remaining puppet references and all host entries in the puppet repo (see the grep sketch after this list)
[x] - reassign task from service owner to a DC ops team member and add the site project (ops-sitename) matching the server's site
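For the Icinga downtime step, a minimal sketch using the sre.hosts.downtime cookbook from a cumin host; the exact flags and duration here are assumptions, not taken from this task:

```
# Downtime all Icinga checks for the host while the decommission takes place.
# Flags and duration are assumptions; check the cookbook's --help on the cumin host.
cookbook sre.hosts.downtime --hours 4 -r "Decommission" 'puppetmaster1002.eqiad.wmnet'
```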
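For the LVS/PyBal step, a sketch of checking and clearing pooled state with confctl; whether this host has any conftool objects at all is an assumption (a host not behind LVS simply returns nothing):

```
# List any conftool objects for the host, then mark them inactive.
# Selector and attribute names follow common confctl usage, shown here as an assumption.
confctl select 'name=puppetmaster1002.eqiad.wmnet' get
confctl select 'name=puppetmaster1002.eqiad.wmnet' set/pooled=inactive
```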
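The decom cookbook invocation from the step above, filled in for this host (the <phab task> placeholder is kept as-is):

```
# Run from a cumin host: wipes the bootloader, powers the host down, sets the Netbox
# status to decommissioning, cleans and deactivates the puppet node, removes the host
# from debmonitor, and runs homer.
cookbook sre.hosts.decommission puppetmaster1002.eqiad.wmnet -t <phab task>
```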
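For the site.pp and remaining-references cleanup, a sketch of hunting down leftovers in a checkout of the puppet repo; the paths searched are an assumption:

```
# From a checkout of the puppet repo: find every remaining reference to the host.
# The interim spare stanza in site.pp usually looks like (Puppet DSL, for reference):
#   node 'puppetmaster1002.eqiad.wmnet' {
#       role(spare::system)
#   }
git grep -in 'puppetmaster1002' -- manifests/ hieradata/ modules/
```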
End service owner steps / Begin DC-Ops team steps:
[] - system disks removed (by onsite)
[] - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned
[] - IF DECOM: system unracked and decommissioned (by onsite); update Netbox with the result and set state to offline (see the Netbox sketch after this list)
[] - IF DECOM: mgmt DNS entries removed
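For the Netbox update in the IF DECOM steps, a sketch using the Netbox REST API to set the device status; in practice this may be done through the web UI or existing tooling, and the URL, token, and device ID below are hypothetical:

```
# PATCH the device status to "offline" in Netbox.
# URL, token, and device ID are placeholders, not real values.
curl -s -X PATCH \
  -H "Authorization: Token ${NETBOX_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"status": "offline"}' \
  'https://netbox.example.org/api/dcim/devices/1234/'
```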