
decommission xhgui1002
Closed, Resolved · Public · Request

Description

This task tracks the decommission-hardware process for server xhgui1002.

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove the host from site.pp and replace its role with role(spare::system); recommended to ensure services stay offline, but not strictly required as long as the decom cookbook below is run IMMEDIATELY afterwards.
  • - log in to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power-down, Netbox update to decommissioning status, puppet node clean, puppet node deactivate, Debmonitor removal, and a Homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign the task from the service owner to a DC Ops team member and add the site project (ops-sitename) matching the server's site
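The cookbook step above can be sketched as a shell session. The task ID below is a hypothetical placeholder (substitute the real Phabricator task), and the cookbook tooling is only present on the cumin hosts:

```shell
# Hedged sketch of the service-owner decom step.
# T000000 is a hypothetical placeholder for the real Phabricator task ID.
HOST_FQDN="xhgui1002.eqiad.wmnet"
TASK="T000000"

if command -v cookbook >/dev/null 2>&1; then
  # Per the checklist above this wipes the bootloader, powers the host
  # down, updates Netbox, cleans/deactivates the Puppet node, removes
  # the host from Debmonitor, and runs Homer.
  sudo cookbook sre.hosts.decommission "$HOST_FQDN" -t "$TASK"
else
  echo "cookbook not installed; run this from a cumin host"
fi
```

Afterwards, a `git grep xhgui1002` in a checkout of the operations/puppet repo should come back empty before the task is reassigned to DC Ops.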

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares; systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
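The 5-year reclaim/decommission cutoff can be worked through in shell. The purchase date here is an assumed example, not this server's real procurement date (that would come from Netbox):

```shell
# Assumed example purchase date -- substitute the real date from Netbox.
PURCHASE_DATE="2018-06-01"

# Age in whole days (GNU date).
AGE_DAYS=$(( ( $(date +%s) - $(date -d "$PURCHASE_DATE" +%s) ) / 86400 ))
FIVE_YEARS=$(( 5 * 365 ))

if [ "$AGE_DAYS" -lt "$FIVE_YEARS" ]; then
  DECISION="reclaim to spares"
else
  DECISION="decommission"
fi
echo "$DECISION"
```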

Event Timeline

cookbooks.sre.hosts.decommission executed by denisse@cumin1001 for hosts: xhgui1002

  • xhgui1002 (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found Ganeti VM
    • VM shutdown
    • Started forced sync of VMs in Ganeti cluster eqiad to Netbox
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB
    • VM removed
    • Started forced sync of VMs in Ganeti cluster eqiad to Netbox

Change 935816 had a related patch set uploaded (by Andrea Denisse; author: Andrea Denisse):

[operations/puppet@production] xhgui: Decommission xhgui1002 and xhgui2002 hosts to deploy xhgui in webperf1003

https://gerrit.wikimedia.org/r/935816

Change 935816 merged by Andrea Denisse:

[operations/puppet@production] xhgui: Decommission xhgui1002 and xhgui2002 hosts to deploy xhgui in webperf1003

https://gerrit.wikimedia.org/r/935816

andrea.denisse updated the task description.
andrea.denisse edited projects, added ops-eqiad; removed Patch-For-Review.
andrea.denisse claimed this task.
andrea.denisse removed a project: ops-eqiad.

Something went wrong there; these are still in PuppetDB:

jmm@cumin2002:~$ sudo cumin xhgui*
4 hosts will be targeted:
xhgui[2001-2002].codfw.wmnet,xhgui[1001-1002].eqiad.wmnet
DRY-RUN mode enabled, aborting

There were stale entries left in PuppetDB after running the decommission cookbook.

Since the VM was decommissioned and the other steps finished successfully, I manually removed the stale XHGui entries from PuppetDB while I investigate why the cookbook did not remove them.
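The manual cleanup can be sketched as follows, assuming access to the Puppet master; these are the same puppet node clean / puppet node deactivate steps the cookbook normally performs (see the checklist above). The host list follows from the cumin output: xhgui1002 and xhgui2002 were the entries left behind.

```shell
# Stale hosts left in PuppetDB after the cookbook run.
STALE_HOSTS="xhgui1002.eqiad.wmnet xhgui2002.codfw.wmnet"

for fqdn in $STALE_HOSTS; do
  if command -v puppet >/dev/null 2>&1; then
    sudo puppet node clean "$fqdn"       # remove cert and cached node data
    sudo puppet node deactivate "$fqdn"  # mark the node deactivated in PuppetDB
  else
    echo "puppet CLI not present; run on the Puppet master for $fqdn"
  fi
done
```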

denisse@cumin1001:~$ sudo cumin xhgui*
2 hosts will be targeted:
xhgui2001.codfw.wmnet,xhgui1001.eqiad.wmnet
DRY-RUN mode enabled, aborting