
Closed, Resolved · Public · Request


This task will track the decommission-hardware of server bast4002. The server was disabled in T257324 and we are going to reuse its IP for a Wikidough Ganeti VM.

With the updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.


Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all Icinga checks to maintenance mode/disabled while the reclaim/decommission takes place (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services are offline, but not strictly required as long as the decom script below is run IMMEDIATELY.
  • - login to cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and run homer.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign this task from the service owner to a DC Ops team member, depending on the server's site.
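A minimal sketch of the cookbook invocation from the steps above, as run from a cumin host. The FQDN and task ID are taken from this task (assuming bast4002's public name is bast4002.wikimedia.org); exact flags may vary across cookbook versions.

```shell
# Run from a cumin host (e.g. cumin1001); requires sudo.
# -t links the run to the Phabricator task for logging.
sudo cookbook sre.hosts.decommission bast4002.wikimedia.org -t T288579
```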

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite) - nope, reusing the host
  • - determine system age, under 5 years are reclaimed to spare, over 5 years are decommissioned. - subtask T289715 created for reallocation
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag - renamed to ganeti4004 and re-ran dns cookbook to update
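A sketch of the reclaim path taken here. The rename to ganeti4004 was done in Netbox; the "dns cookbook" mentioned above then regenerates DNS records from Netbox data. The cookbook name and argument shape below are assumptions based on the tooling of the time, not confirmed by this task.

```shell
# After renaming bast4002 -> ganeti4004 in Netbox, regenerate the
# Netbox-managed DNS records (cookbook name/arguments are assumptions).
sudo cookbook sre.dns.netbox -t T288579 "Rename bast4002 to ganeti4004"
```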

Event Timeline

Mentioned in SAL (#wikimedia-operations) [2021-08-11T14:43:57Z] <sukhe> depool - T288579

Mentioned in SAL (#wikimedia-operations) [2021-08-11T14:44:24Z] <sukhe> s/depool/decommission - T288579

cookbooks.sre.hosts.decommission executed by sukhe@cumin1001 for hosts:

  • (FAIL)
    • Host steps raised exception: Host bast4002 was not found in Icinga status - no hosts have been downtimed.

ERROR: some step on some host failed, check the bolded items above

wiki_willy added projects: ops-ulsfo, DC-Ops.
wiki_willy added subscribers: Jclark-ctr, wiki_willy.

Hi @ssingh - just a heads up to add "ops-ulsfo" as a project tag, when this is ready for dc-ops to unrack. Thanks, Willy

So I'm not sure if we want to power this off and not use it at all, or re-allocate it as another service/host in ulsfo. My first thought was potential ganeti use, but ulsfo's bastion order predates our unification of the caching-site misc and caching-site CP hardware specs.

specs of bast4002:

  • R430
  • (2) Intel Xeon E5-2620 v4 2.1GHz, 20M Cache, 8.0GT/s QPI, Turbo, HT, 8C/16T (85W), Max Mem 2133MHz
  • (2) 32GB RDIMM, 2400MT/s, Dual Rank, x4 Data Width
  • 10Gb NIC
  • (2) 480GB Solid State Drive SATA Read Intensive MLC 6Gbps 2.5in Hot-plug Drive, S3520

Compared to an example ganeti host in ulsfo, this older bastion host has half the RAM.

IRC Update from my chat with @BBlack

This old host is non-ideal but would work as a fallback ganeti host. I'll create a setup task to rename and relabel the host to ganeti4XXX, then deploy it with role(insetup). If Traffic ever needs it, one of their team can apply the correct role and push it into full service.

Overall, the caching sites are rolling services into the ganeti clusters at each site. As this happens, the bast/dns hosts at caching sites will likely be reallocated as ganeti hosts.

RobH updated the task description. (Show Details)