Split off from T245161 and T254226. Once T254226 is closed (a couple of days) we should be able to decommission the old oresrdb host in codfw. I'll create a separate task and personally handle the VM oresrdb2001.
oresrdb2002:
**Steps for service owner:**
[x] - all system services confirmed offline from production use
[x] - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
[x] - remove system from all lvs/pybal active configuration
[x] - any service group puppet/hiera/dsh config removed
[x] - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not strictly required as long as the decom cookbook below is run IMMEDIATELY afterwards.
[] - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This performs: bootloader wipe, host power-down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, and debmonitor removal.
[] - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
[] - remove ALL dns entries except the asset tag mgmt entries.
[] - reassign task from the service owner to the DC-Ops team member for the server's site
**End service owner steps / Begin DC-Ops team steps:**
[] - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
[] - system disks wiped (by onsite)
[] - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned.
[] - IF DECOM: system unracked and decommissioned (by onsite); update netbox with the result and set state to offline
[] - IF DECOM: switch port configuration removed from switch once system is unracked.
[] - IF DECOM: add system to decommission tracking google sheet
[] - IF DECOM: mgmt dns entries removed.
[] - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
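The reclaim-vs-decom branch above turns on a single rule: under 5 years of age the host is reclaimed to spares, otherwise it is decommissioned. A minimal sketch of that decision (function name and date handling are illustrative, not part of any tooling):

```python
from datetime import date

def reclaim_or_decom(purchase_date: date, today: date) -> str:
    """Illustrative sketch of the age rule: hosts under 5 years old
    are reclaimed to spares, older hosts are decommissioned."""
    age_years = (today - purchase_date).days / 365.25
    return "reclaim" if age_years < 5 else "decommission"
```

In netbox terms, "reclaim" corresponds to setting the state to 'inventory' and renaming the host to its asset tag, while "decommission" ends with the state set to offline.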