
decommission helium.eqiad.wmnet and helium-array
Closed, Resolved · Public · Request

Description

This task will track the decommissioning of server helium.eqiad.wmnet and its attached disk array (helium-array).

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY.
  • - login to the cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal. (See the example commands after this list.)
  • - run homer on the cumin host to update the switch stack
  • - remove all remaining puppet references (including role::spare) and all host entries in the puppet repo
  • - remove ALL DNS entries except the asset tag mgmt entries via Netbox, then run the SRE DNS cookbook.
  • - reassign task from service owner to a DC ops team member depending on the site of the server.
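For reference, a minimal sketch of the three command-line steps above as they might be run on the cumin host for this decommission; the task ID, homer device query, and commit messages are placeholders, and the sre.dns.netbox cookbook name is an assumption for the "SRE DNS cookbook" step:

  # Run the decommission cookbook against the host, linked to the Phabricator
  # task (T000000 is a placeholder for the real task ID).
  cookbook sre.hosts.decommission helium.eqiad.wmnet -t T000000

  # Push the resulting switch configuration change (device query and commit
  # message are assumed for illustration).
  homer 'asw2*eqiad*' commit 'Decommission helium.eqiad.wmnet'

  # After the DNS records have been removed via Netbox, regenerate and deploy
  # the DNS zones (assumes the sre.dns.netbox cookbook is what is meant above).
  cookbook sre.dns.netbox 'Remove helium.eqiad.wmnet records'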

End service owner steps / Begin DC-Ops team steps:

DC OPS: This host has a disk array attached which needs to be decommissioned as well.

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares, over 5 years are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update Netbox with the result and set state to offline (see the API sketch after this list)
  • - IF DECOM: mgmt DNS entries removed.
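As an illustration of the Netbox update in the step above, a hedged sketch using the Netbox REST API rather than the web UI; the URL, token variable, and device ID are placeholders, and in practice the status change may be handled by existing DC-Ops tooling:

  # Look up the device ID by name (URL and token are placeholders).
  curl -s -H "Authorization: Token $NETBOX_TOKEN" \
    "https://netbox.example.org/api/dcim/devices/?name=helium"

  # Once the host is unracked, set its status to "offline"
  # (1234 stands in for the device ID returned by the lookup above).
  curl -s -X PATCH \
    -H "Authorization: Token $NETBOX_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"status": "offline"}' \
    "https://netbox.example.org/api/dcim/devices/1234/"

The same update would apply to the helium-array device once it is unracked.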

Event Timeline

jcrespo renamed this task from decommission helium.eqiad.wmnet to decommission helium.eqiad.wmnet and helium-array. Jan 27 2021, 11:58 AM
jcrespo created this task.

@RobH This is not yet ready for dc-ops processing, but do we need a separate checklist for the system and the attached array, or is one enough?

Change 658969 had a related patch set uploaded (by Jcrespo; owner: Jcrespo):
[operations/puppet@production] Remove helium and heze references from puppet

https://gerrit.wikimedia.org/r/658969

> @RobH This is not yet ready for dc-ops processing, but do we need a separate checklist for the system and the attached array, or is one enough?

One task, just ensure the disk array is clearly listed in the task description. I'm updating now.

RobH added a project: ops-eqiad.
RobH moved this task from Backlog to Decommission on the ops-eqiad board.
RobH unsubscribed.

Change 658969 merged by Jcrespo:
[operations/puppet@production] bacula: Remove helium and heze references from puppet

https://gerrit.wikimedia.org/r/658969

cookbooks.sre.hosts.decommission executed by jynus@cumin1001 for hosts: helium.eqiad.wmnet

  • helium.eqiad.wmnet (PASS)
    • Downtimed host on Icinga
    • Found physical host
    • Downtimed management interface on Icinga
    • Wiped bootloaders
    • Powered off
    • Set Netbox status to Decommissioning and deleted all non-mgmt interfaces and related IPs
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB
jcrespo added subscribers: Jclark-ctr, Cmjohnson.

This is ready for full decommission; many people will be happy to get rid of these two boxes.

> reassign task from service owner to a DC ops team member depending on the site of the server.

That can be @Cmjohnson or @Jclark-ctr here.

@wiki_willy As promised, we sped up the decommissioning of eqiad hw; this should free up 3U of space. No blocker on us, but I thought you would appreciate the ping given your comments at a past SRE meeting.

Thanks a lot @jcrespo, it's much appreciated!


Both have been removed from the rack and Netbox has been updated.