
decommission cloudcephmon100[1-3].eqiad.wmnet
Closed, Resolved · Public · Request

Description

This task tracks the hardware decommission of cloudcephmon1001, cloudcephmon1002, and cloudcephmon1003 (cloudcephmon100[1-3].eqiad.wmnet).

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

cloudcephmon1001.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp; replacing with role(spare::system) is recommended to ensure services stay offline, but not strictly required as long as the decom cookbook below is run IMMEDIATELY.
  • - log in to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task> (see the example invocation after this list). This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and run homer.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign task from service owner to no owner and ensure the site project (ops-sitename depending on site of server) is assigned.
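
For reference, the filled-in invocation for this host would look something like the line below, run from a cumin host. The task ID is assumed to be this ticket (T380893, per the SAL entries further down):

  cookbook sre.hosts.decommission cloudcephmon1001.eqiad.wmnet -t T380893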

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: under 5 years is reclaimed to spare, over 5 years is decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag

cloudcephmon1002.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script; see the downtime sketch after this list)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp; replacing with role(spare::system) is recommended to ensure services stay offline, but not strictly required as long as the decom cookbook below is run IMMEDIATELY.
  • - log in to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and run homer.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign task from service owner to no owner and ensure the site project (ops-sitename depending on site of server) is assigned.
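
One way to set that Icinga downtime by hand is the sre.hosts.downtime cookbook; the decom cookbook also downtimes the host on its own, so this is only needed if work starts before the decom run. A hedged sketch (the flag names and the host-query form are assumptions, not taken from this task):

  cookbook sre.hosts.downtime --hours 4 -r "decommission" 'cloudcephmon1002.eqiad.wmnet'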

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: under 5 years is reclaimed to spare, over 5 years is decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag

cloudcephmon1003.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp; replacing with role(spare::system) is recommended to ensure services stay offline, but not strictly required as long as the decom cookbook below is run IMMEDIATELY. (see the site.pp sketch after this list)
  • - log in to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and run homer.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign task from service owner to no owner and ensure the site project (ops-sitename depending on site of server) is assigned.
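
If the host is left in site.pp temporarily rather than removed outright, the spare stanza mentioned above would look roughly like this (a sketch following the usual operations/puppet site.pp convention; the exact node regex is illustrative):

  node /^cloudcephmon1003\.eqiad\.wmnet$/ {
      role(spare::system)
  }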

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: under 5 years is reclaimed to spare, over 5 years is decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
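
The RECLAIM change is normally made in the Netbox UI, but a rough pynetbox sketch of the same edit might look like the following; the URL, token, and device lookup are placeholders, not values from this task:

  import pynetbox

  # Hypothetical values; not taken from this task.
  nb = pynetbox.api("https://netbox.example.org", token="REDACTED")
  device = nb.dcim.devices.get(name="cloudcephmon1003")
  if device is not None:
      device.status = "inventory"          # reclaimed-to-spare state
      if device.asset_tag:
          device.name = device.asset_tag   # rename host to its asset tag
      device.save()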

Event Timeline

Change #1098095 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] Remove ceph references to cloudcephosd100[1-3]

https://gerrit.wikimedia.org/r/1098095

Change #1098096 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] Remove refs to cloudcephmon100[1-3]

https://gerrit.wikimedia.org/r/1098096

Change #1098095 merged by Andrew Bogott:

[operations/puppet@production] Remove ceph references to cloudcephosd100[1-3]

https://gerrit.wikimedia.org/r/1098095

Mentioned in SAL (#wikimedia-cloud-feed) [2024-12-04T14:45:50Z] <andrew@cloudcumin1001> START - Cookbook wmcs.openstack.cloudvirt.drain on host 'cloudvirt1035.eqiad.wmnet' (T380893)

Mentioned in SAL (#wikimedia-cloud-feed) [2024-12-04T14:48:40Z] <andrew@cloudcumin1001> END (PASS) - Cookbook wmcs.openstack.cloudvirt.drain (exit_code=0) on host 'cloudvirt1035.eqiad.wmnet' (T380893)

cookbooks.sre.hosts.decommission executed by andrew@cumin1002 for hosts: cloudcephmon1001.eqiad.wmnet

  • cloudcephmon1001.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

cookbooks.sre.hosts.decommission executed by andrew@cumin1002 for hosts: cloudcephmon1002.eqiad.wmnet

  • cloudcephmon1002.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

cookbooks.sre.hosts.decommission executed by andrew@cumin1002 for hosts: cloudcephmon1003.eqiad.wmnet

  • cloudcephmon1003.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

Change #1098096 merged by Andrew Bogott:

[operations/puppet@production] Remove refs to cloudcephmon100[1-3]

https://gerrit.wikimedia.org/r/1098096

Andrew added a project: ops-eqiad.
Andrew unsubscribed.

Mentioned in SAL (#wikimedia-cloud-feed) [2024-12-04T17:16:39Z] <andrew@cloudcumin1001> START - Cookbook wmcs.openstack.cloudvirt.drain on host 'cloudvirt1035.eqiad.wmnet' (T380893)

Mentioned in SAL (#wikimedia-cloud-feed) [2024-12-04T17:16:44Z] <andrew@cloudcumin1001> END (ERROR) - Cookbook wmcs.openstack.cloudvirt.drain (exit_code=97) on host 'cloudvirt1035.eqiad.wmnet' (T380893)

Hey @Andrew, I was able to do the first unit; however, when I ran the script on the other 2 devices, it errored out in Netbox. Is there something I may be missing that is making this happen? Let me know, thanks!

An exception occurred: KeyError: 'device_name'

Traceback (most recent call last):
  File "/srv/netbox/customscripts/offline_device.py", line 24, in run
    self._run(data)
  File "/srv/netbox/customscripts/offline_device.py", line 36, in _run
    self._run_device(device)
  File "/srv/netbox/customscripts/offline_device.py", line 69, in _run_device
    interface.delete()
  File "/srv/deployment/netbox/venv/lib/python3.11/site-packages/django/db/models/base.py", line 1182, in delete
    collector.collect([self], keep_parents=keep_parents)
  File "/srv/deployment/netbox/venv/lib/python3.11/site-packages/django/db/models/deletion.py", line 392, in collect
    raise RestrictedError(
django.db.models.deletion.RestrictedError: ("Cannot delete some instances of model 'Interface' because they are referenced through restricted foreign keys: 'Interface.parent'.", {<Interface: vlan1152>})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/srv/deployment/netbox/current/src/netbox/extras/scripts.py", line 662, in _run_script
    script.output = script.run(data, commit)
  File "/srv/netbox/customscripts/offline_device.py", line 26, in run
    self.log_failure(f"Failed to offline device(s) {data['device_name']}: {e}")
KeyError: 'device_name'
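
The KeyError at the end also masks the real failure: the except block in offline_device.py formats its message with data['device_name'], that key evidently isn't in the submitted form data, and so the RestrictedError above never gets logged cleanly. A hedged sketch of a more defensive handler; the surrounding Script method is reconstructed from the traceback, not from the actual source:

  # Sketch only: reconstructed from the traceback, not the real offline_device.py.
  def run(self, data, commit):
      try:
          self._run(data)
      except Exception as e:
          # Use .get() so a missing form key cannot raise a second exception
          # and hide the original error (here, the RestrictedError).
          name = data.get("device_name", "<unknown>")
          self.log_failure(f"Failed to offline device(s) {name}: {e}")
          raise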

These hosts have a somewhat unusual vlan setup, so my guess is something is tripping on that -- paging @cmooney for manual cleanup.


Apologies for the delay picking this one up; I'd somehow missed it.

Yeah, the 'offline' script isn't handling the relations between interfaces introduced under T296832. I will need to work on an update to prevent this from happening again.

For now I've manually deleted the interfaces on cloudcephmon1002 and cloudcephmon1003 in Netbox, and disabled the associated switch ports. So I think we should be ok to proceed with running the script again or whatever the normal process is for those.
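
For reference, the kind of ordering fix implied here is deleting the virtual child interfaces (like vlan1152) before their parents, so the restricted foreign key Interface.parent never blocks the delete. A rough Django-ORM sketch, assuming the script already holds the NetBox device object; this is illustrative, not the actual fix that later landed:

  # Illustrative only; assumes `device` is a NetBox dcim.Device instance.
  # Delete child interfaces (e.g. vlan sub-interfaces) first, then the rest,
  # so Interface.parent references are gone before the parents are removed.
  for child in device.interfaces.filter(parent__isnull=False):
      child.delete()
  for interface in device.interfaces.all():
      interface.delete()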

Thanks @cmooney! @VRiley-WMF, you can give this another try at your convenience.

This comment was removed by Papaul.
Papaul claimed this task.

This is complete.