This task will track the hardware decommission of servers elastic10[18-31].eqiad.wmnet (note that elastic1017 and elastic1021 are already tracked for decommission in T234045 / T189727).
With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.
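As a non-authoritative sketch, the cookbook step from the checklists below can be expressed as a loop over the hosts in scope. The cookbook name and flags are taken from this task; the `<phab task>` placeholder is deliberately left unfilled, and elastic1021 is skipped since it was already decommissioned in T189727:

```shell
# Sketch only: print the decom cookbook command for each host in this task.
# The printed commands would be run from a cumin host, with <phab task>
# replaced by the real task ID. elastic1021 is skipped (see T189727).
for n in $(seq 1018 1031); do
  [ "$n" -eq 1021 ] && continue
  echo "cookbook sre.hosts.decommission elastic${n}.eqiad.wmnet -t <phab task>"
done
```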
elastic1018
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1019
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1020
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1021
This server was already decommissioned in T189727 after a hardware failure.
elastic1022
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1023
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1024
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1025
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1026
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1027
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1028
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1029
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1030
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
elastic1031
Steps for service owner:
- - all system services confirmed offline from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place.
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the production role from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
- - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal.
- - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
- - remove ALL dns entries except the asset tag mgmt entries.
- - reassign task from service owner to DC ops team member depending on site of server: codfw = @Papaul, eqiad = @Jclark-ctr, all other sites = @RobH.
End service owner steps / Begin DC-Ops team steps:
- - disable switch port / set to asset tag if host isn't being unracked / remove from switch if being unracked.
- - system disks wiped (by onsite)
- - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. If uncertain, ask @wiki_willy.
- - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
- - IF DECOM: switch port configuration removed from switch once system is unracked.
- - IF DECOM: add system to decommission tracking google sheet
- - IF DECOM: mgmt dns entries removed.
- - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag