
decommission cloudvirt101[2,3,4].eqiad.wmnet
Closed, Resolved · Public · Request

Description

This task will track the hardware decommissioning of servers cloudvirt101[2,3,4].eqiad.wmnet.

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

cloudvirt1012.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - [x] set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom cookbook below is run immediately.
  • - login to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task> (a concrete invocation sketch follows this list). This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign the task from the service owner to the DC Ops team member for the server's site.
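
For reference, a minimal sketch of that invocation for this host, run from a cumin host; <phab task> stays a placeholder for this task's ID:

    # Minimal sketch, following the syntax quoted above; replace <phab task>
    # with this task's ID before running (cookbooks are typically run with sudo).
    sudo cookbook sre.hosts.decommission cloudvirt1012.eqiad.wmnet -t <phab task>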

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite); update netbox with the result and set state to offline
  • - IF DECOM: mgmt dns entries removed (a verification sketch follows this list).
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
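
A quick verification sketch for the mgmt DNS step, assuming the usual <host>.mgmt.<site>.wmnet naming convention:

    # Once the mgmt DNS entries are removed, this lookup should return nothing.
    dig +short cloudvirt1012.mgmt.eqiad.wmnet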

cloudvirt1013.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script; a downtime sketch follows this list)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom cookbook below is run immediately.
  • - login to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign the task from the service owner to the DC Ops team member for the server's site.
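
If the icinga downtime is not already handled by the decom cookbook, a minimal sketch using the downtime cookbook; the flags shown are an assumption, so confirm with cookbook sre.hosts.downtime --help first:

    # Assumed flags (--hours, -r); verify with --help before running.
    sudo cookbook sre.hosts.downtime --hours 4 -r "decommission" cloudvirt1013.eqiad.wmnet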

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite); update netbox with the result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag

cloudvirt1014.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom cookbook below is run immediately.
  • - login to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign the task from the service owner to the DC Ops team member for the server's site.

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite); update netbox with the result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag

Event Timeline

Mentioned in SAL (#wikimedia-cloud) [2021-12-01T23:53:41Z] <andrewbogott> adding spare cloudvirts 1044 and 1055 to the 'ceph' pool in order to make space for future juggling around T296790 and T296792

Mentioned in SAL (#wikimedia-cloud) [2021-12-01T23:54:43Z] <andrewbogott> *correction* adding spare cloudvirts 1044 and 1045 to the 'ceph' pool in order to make space for future juggling around T296790 and T296792

Andrew added a subscriber: rook.

@mdipietro, I suggest that you get these hosts ready for decom, as practice. The steps in this task are pretty clear. To remove the hosts from service you'll want to run the cloudvirt/drain cookbook, which should take the hosts out of service and put them in the 'maintenance' aggregate where they won't get new VMs scheduled.
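
Assuming the drain worked as described, two standard OpenStack admin CLI checks can confirm it (run wherever the admin credentials are configured):

    # The drained host should now appear in the 'maintenance' aggregate...
    openstack aggregate show maintenance
    # ...and should have no VMs left scheduled on it (empty output expected).
    openstack server list --all-projects --host cloudvirt1012.eqiad.wmnet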

cloudvirt101[2,3,4].eqiad.wmnet drained

cookbooks.sre.hosts.decommission executed by mdipietro@cumin1001 for hosts: cloudvirt1012.eqiad.wmnet

  • cloudvirt1012.eqiad.wmnet (PASS)
    • Downtimed host on Icinga
    • Found physical host
    • Downtimed management interface on Icinga
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • Set Netbox status to Decommissioning and deleted all non-mgmt interfaces and related IPs
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

cookbooks.sre.hosts.decommission executed by mdipietro@cumin1001 for hosts: cloudvirt1013.eqiad.wmnet

  • cloudvirt1013.eqiad.wmnet (PASS)
    • Downtimed host on Icinga
    • Found physical host
    • Downtimed management interface on Icinga
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • Set Netbox status to Decommissioning and deleted all non-mgmt interfaces and related IPs
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

cookbooks.sre.hosts.decommission executed by mdipietro@cumin1001 for hosts: cloudvirt1014.eqiad.wmnet

  • cloudvirt1014.eqiad.wmnet (PASS)
    • Downtimed host on Icinga
    • Found physical host
    • Downtimed management interface on Icinga
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • Set Netbox status to Decommissioning and deleted all non-mgmt interfaces and related IPs
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

cloudvirt101[2,3,4].eqiad.wmnet decommissioned. Not seeing any remaining references in the puppet repo.
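
For anyone repeating that check, a simple search of a puppet repo checkout is enough; the pattern below is just a sketch covering the three hostnames:

    # No output means no leftover references to the decommissioned hosts.
    git grep -nE 'cloudvirt101[234]' -- .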

Cmjohnson updated the task description.

Change 788375 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] site.pp: remove cloudvirt101[2,3,4,5].eqiad.wmnet

https://gerrit.wikimedia.org/r/788375

Change 788375 merged by Andrew Bogott:

[operations/puppet@production] site.pp: remove cloudvirt101[2,3,4,5].eqiad.wmnet

https://gerrit.wikimedia.org/r/788375