
decommission db1120.eqiad.wmnet
Closed, Resolved · Public · Request

Description

This task tracks the hardware decommission of server db1120.eqiad.wmnet.

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

db1120.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all Icinga checks to maintenance mode/disabled while the reclaim/decommission takes place (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services are offline, but not 100% required as long as the decom cookbook below is run IMMEDIATELY afterwards
  • - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This wipes the bootloader, powers down the host, sets the Netbox status to decommissioning, cleans and deactivates the Puppet node, removes the host from Debmonitor, and runs homer.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign this task from the service owner to a DC Ops team member and add the site project (ops-sitename) for the server's site
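Under the assumptions above (a cumin host and the standard cookbook entry point), the final service-owner steps boil down to a single cookbook invocation followed by a sweep of the puppet repo. The sketch below only composes the command as a string to show its shape for this host and task; on a real cumin host you would run it directly:

```shell
# Hypothetical sketch: build the decom cookbook invocation for this
# task's host (db1120.eqiad.wmnet) and Phabricator task (T334580).
host_fqdn="db1120.eqiad.wmnet"
task="T334580"

# The cookbook wipes the bootloader, powers the host down, updates
# Netbox, cleans/deactivates the Puppet node, removes the host from
# Debmonitor, and runs homer.
decom_cmd="cookbook sre.hosts.decommission ${host_fqdn} -t ${task}"
echo "${decom_cmd}"
```

After the cookbook completes, any remaining references to the host in the puppet repo still need to be removed by hand, e.g. by searching the checkout with `git grep db1120` before uploading the removal patch.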

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine the system's age: systems under 5 years old are reclaimed as spares; systems over 5 years old are decommissioned
  • - IF DECOM: system unracked and decommissioned (by onsite); update Netbox with the result and set the state to offline
  • - IF DECOM: mgmt dns entries removed.

Event Timeline

Mentioned in SAL (#wikimedia-operations) [2023-04-12T12:14:20Z] <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1120 T334580', diff saved to https://phabricator.wikimedia.org/P46518 and previous config saved to /var/cache/conftool/dbconfig/20230412-121420-marostegui.json
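The SAL entry above records the commit half of the depool. A minimal sketch of the full two-step workflow, assuming the standard dbctl subcommands (the subcommand names are an assumption here, not quoted from this task), again only composes the command strings to show their shape:

```shell
# Hypothetical sketch of the depool that produced the SAL entry above.
instance="db1120"
task="T334580"

# Step 1: mark the instance as depooled in dbctl's desired state.
depool_cmd="dbctl instance ${instance} depool"

# Step 2: commit the new configuration so it is deployed; this is what
# produces the "dbctl commit (dc=all)" line logged to SAL.
commit_cmd="dbctl config commit -m 'Depool ${instance} ${task}'"

printf '%s\n%s\n' "${depool_cmd}" "${commit_cmd}"
```

The commit is what saves the diff and the previous config snapshot referenced in the SAL message, so the change can be inspected or rolled back.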

Change 908522 had a related patch set uploaded (by Marostegui; author: Marostegui):

[operations/puppet@production] instances.yaml: Remove db1120 from dbctl

https://gerrit.wikimedia.org/r/908522

Change 908522 merged by Marostegui:

[operations/puppet@production] instances.yaml: Remove db1120 from dbctl

https://gerrit.wikimedia.org/r/908522

Mentioned in SAL (#wikimedia-operations) [2023-04-13T11:34:35Z] <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1120 from dbctl T334580', diff saved to https://phabricator.wikimedia.org/P46665 and previous config saved to /var/cache/conftool/dbconfig/20230413-113435-marostegui.json

Change 908526 had a related patch set uploaded (by Marostegui; author: Marostegui):

[operations/puppet@production] db1120: Disable notifications

https://gerrit.wikimedia.org/r/908526

Change 908526 merged by Marostegui:

[operations/puppet@production] db1120: Disable notifications

https://gerrit.wikimedia.org/r/908526

Change 908793 had a related patch set uploaded (by Marostegui; author: Marostegui):

[operations/puppet@production] mariadb: Remove db1120 from puppet

https://gerrit.wikimedia.org/r/908793

Change 908793 merged by Marostegui:

[operations/puppet@production] mariadb: Remove db1120 from puppet

https://gerrit.wikimedia.org/r/908793

cookbooks.sre.hosts.decommission executed by marostegui@cumin1001 for hosts: db1120.eqiad.wmnet

  • db1120.eqiad.wmnet (WARN)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Management interface not found on Icinga, unable to downtime it
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB
Jclark-ctr updated the task description.