This task tracks the hardware decommission of server cloudvirt1018.eqiad.wmnet
This host has a failed drive and is due to be refreshed in a few months anyway.
With the updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while reclaim/decommission takes place (likely done by script)
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the host's entry from site.pp and replace it with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom cookbook below is run IMMEDIATELY afterward.
- - log in to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
- - remove all remaining puppet references and all host entries in the puppet repo
- - reassign the task from the service owner to the DC Ops team member responsible for the server's site.
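As a minimal sketch, the decom cookbook invocation above can be wrapped in a small helper that builds the command line. Only the cookbook name and the -t flag come from this checklist; the helper name, the example host, and the task ID T12345 are hypothetical placeholders.

```shell
# Sketch only: assembles the decom cookbook command from the step above.
# build_decom_cmd, the example host, and T12345 are illustrative; only
# the cookbook name and -t flag are taken from this checklist.
build_decom_cmd() {
    fqdn="$1"      # host FQDN, e.g. cloudvirt1018.eqiad.wmnet
    task="$2"      # Phabricator task ID, e.g. T12345 (hypothetical)
    echo "cookbook sre.hosts.decommission $fqdn -t $task"
}

build_decom_cmd cloudvirt1018.eqiad.wmnet T12345
# prints: cookbook sre.hosts.decommission cloudvirt1018.eqiad.wmnet -t T12345
```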
End service owner steps / Begin DC-Ops team steps:
- - system disks removed (by onsite)
- - determine system age: systems under 5 years old are reclaimed as spares, systems over 5 years old are decommissioned.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
x - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
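The reclaim-vs-decom age rule above can be sketched as a tiny shell function. The 5-year threshold is from this checklist; the function name and the epoch-timestamp inputs are illustrative assumptions, not part of any official tooling.

```shell
# Sketch of the 5-year rule above: prints "decommission" for systems
# older than 5 years at the reference time, "reclaim" otherwise.
# Unix epoch timestamps keep the arithmetic portable across shells.
reclaim_or_decom() {
    purchase_epoch="$1"   # when the system was purchased (Unix epoch)
    now_epoch="$2"        # reference time (Unix epoch)
    five_years=$((5 * 365 * 86400))
    if [ $((now_epoch - purchase_epoch)) -gt "$five_years" ]; then
        echo "decommission"
    else
        echo "reclaim"
    fi
}

reclaim_or_decom 1451606400 1640995200   # 2016-01-01 vs 2022-01-01: decommission
reclaim_or_decom 1577836800 1640995200   # 2020-01-01 vs 2022-01-01: reclaim
```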