
decommission cloudelastic100[5-6]
Closed, Resolved · Public · Request

Description

This task will track the decommission-hardware of servers cloudelastic100[5-6].

Refresh ticket: T376166

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

cloudelastic1005

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services are offline, but not 100% required as long as the decom cookbook below is run IMMEDIATELY.
  • - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign task from service owner to no owner and ensure the site project (ops-sitename depending on site of server) is assigned.
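
The decom-cookbook step above can be sketched as a concrete invocation. This is a hypothetical session: the FQDN and task ID are taken from this ticket, and the command would be run from a cumin host (e.g. cumin2002, per the SAL entries below).

```shell
# Sketch of the decom-cookbook step. HOST_FQDN and TASK come from this ticket;
# the assembled command is what you would run on the cumin host.
HOST_FQDN="cloudelastic1005.eqiad.wmnet"
TASK="T380937"
CMD="cookbook sre.hosts.decommission ${HOST_FQDN} -t ${TASK}"
echo "$CMD"   # run this on the cumin host, not locally
```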

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spare; systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag
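
The age-based reclaim/decom decision above can be sketched roughly as follows. The purchase date here is a made-up placeholder; the authoritative value is the procurement date recorded in Netbox.

```shell
# Rough sketch of the 5-year rule. PURCHASE_DATE is a hypothetical placeholder;
# the real value lives in Netbox for the asset in question.
PURCHASE_DATE="2019-05-01"
AGE_DAYS=$(( ( $(date +%s) - $(date -d "$PURCHASE_DATE" +%s) ) / 86400 ))
if [ "$AGE_DAYS" -gt $(( 5 * 365 )) ]; then
  DECISION="decommission"   # over 5 years: unrack and decommission
else
  DECISION="reclaim"        # under 5 years: reclaim to spares
fi
echo "$DECISION"
```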

cloudelastic1006

Steps for service owner:

  • - all system services confirmed offline from production use
  • - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place. (likely done by script)
  • - remove system from all lvs/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services are offline, but not 100% required as long as the decom cookbook below is run IMMEDIATELY.
  • - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This does: bootloader wipe, host power down, netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign task from service owner to no owner and ensure the site project (ops-sitename depending on site of server) is assigned.

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spare; systems over 5 years old are decommissioned.
  • - IF DECOM: system unracked and decommissioned (by onsite), update netbox with result and set state to offline
  • - IF DECOM: mgmt dns entries removed.
  • - IF RECLAIM: set netbox state to 'inventory' and hostname to asset tag

Event Timeline

bking mentioned this in Unknown Object (Task).Nov 26 2024, 10:39 PM
bking renamed this task from decommission cloudelastic100[5-6] : Don't decommission until we have cloudelastic101[12]! to decommission cloudelastic100[5-6].Jan 10 2025, 10:39 PM

Mentioned in SAL (#wikimedia-operations) [2025-01-10T22:45:42Z] <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: cloudelastic1005,cloudelastic1006 for ban hosts prior to decom - bking@cumin2002 - T380937

Mentioned in SAL (#wikimedia-operations) [2025-01-10T22:45:45Z] <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: cloudelastic1005,cloudelastic1006 for ban hosts prior to decom - bking@cumin2002 - T380937

Mentioned in SAL (#wikimedia-operations) [2025-01-10T22:45:52Z] <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: cloudelastic1005*,cloudelastic1006* for ban hosts prior to decom - bking@cumin2002 - T380937

Mentioned in SAL (#wikimedia-operations) [2025-01-10T22:45:56Z] <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: cloudelastic1005*,cloudelastic1006* for ban hosts prior to decom - bking@cumin2002 - T380937

Change #1110862 had a related patch set uploaded (by Bking; author: Bking):

[operations/puppet@production] cloudelastic: remove cloudelastic100[56] from conftool, add 101[12]

https://gerrit.wikimedia.org/r/1110862

Change #1110862 merged by Bking:

[operations/puppet@production] cloudelastic: remove cloudelastic100[56] from conftool, add 101[12]

https://gerrit.wikimedia.org/r/1110862

Change #1111326 had a related patch set uploaded (by Bking; author: Bking):

[operations/puppet@production] cloudelastic: remove references to cloudelastic hosts before 1007

https://gerrit.wikimedia.org/r/1111326

Change #1111326 merged by Bking:

[operations/puppet@production] cloudelastic: decom cloudelastic100[5,6]

https://gerrit.wikimedia.org/r/1111326

cookbooks.sre.hosts.decommission executed by bking@cumin2002 for hosts: cloudelastic[1005-1006].eqiad.wmnet

  • cloudelastic1005.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB
  • cloudelastic1006.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

Hello DC Ops,

I think we have finished our service owner steps, so assigning over to y'all. Hit me up here or in IRC (inflatador) if we need to do anything else.

For some reason this breaks PCC on deploy servers:

Error: Evaluation Error: Error while evaluating a Function Call, DNS lookup failed for cloudelastic1005.eqiad.wmnet Resolv::DNS::Resource::IN::A (file: /srv/jenkins/puppet-compiler/4806/change/src/modules/profile/functions/kubernetes/deployment_server/elasticsearch_external_services_config.pp, line: 22, column: 12) on node deploy2002.codfw.wmnet
Error: Evaluation Error: Error while evaluating a Function Call, DNS lookup failed for cloudelastic1005.eqiad.wmnet Resolv::DNS::Resource::IN::A (file: /srv/jenkins/puppet-compiler/4806/change/src/modules/profile/functions/kubernetes/deployment_server/elasticsearch_external_services_config.pp, line: 22, column: 12) on node deploy2002.codfw.wmnet
Error: Could not call 'find' on 'catalog': Evaluation Error: Error while evaluating a Function Call, DNS lookup failed for cloudelastic1005.eqiad.wmnet Resolv::DNS::Resource::IN::A (file: /srv/jenkins/puppet-compiler/4806/change/src/modules/profile/functions/kubernetes/deployment_server/elasticsearch_external_services_config.pp, line: 22, column: 12) on node deploy2002.codfw.wmnet
Error: Could not call 'find' on 'catalog': Evaluation Error: Error while evaluating a Function Call, DNS lookup failed for cloudelastic1005.eqiad.wmnet Resolv::DNS::Resource::IN::A (file: /srv/jenkins/puppet-compiler/4806/change/src/modules/profile/functions/kubernetes/deployment_server/elasticsearch_external_services_config.pp, line: 22, column: 12) on node deploy2002.codfw.wmnet
Error: Try 'puppet help catalog compile' for usage

The PQL query in question no longer returns cloudelastic1005.eqiad.wmnet when the production PuppetDB is queried, but it still does when querying the PCC DB. I ran https://wikitech.wikimedia.org/wiki/Help:Puppet-compiler#Manually_update_production, but the issue persists. (cc @fgiunchedi)
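
One way to see which PuppetDB instance still knows the host is to query it directly. This is a sketch: the endpoint URL is a placeholder (point it at the production or PCC PuppetDB as appropriate), and the PQL simply asks whether the decommissioned host is still a known node.

```shell
# Hypothetical check for the stale-PuppetDB issue above. PDB_URL is a
# placeholder endpoint; QUERY is a PQL node lookup for the decommissioned host.
PDB_URL="https://localhost:8080/pdb/query/v4"   # placeholder; substitute the real endpoint
QUERY='nodes[certname] { certname = "cloudelastic1005.eqiad.wmnet" }'
echo "$QUERY"
# On a host with PuppetDB access, run:
#   curl -sG "$PDB_URL" --data-urlencode "query=$QUERY"
# An empty JSON array ([]) means that PuppetDB instance has forgotten the host.
```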

Papaul claimed this task.
Papaul updated the task description. (Show Details)
Papaul subscribed.

complete

This task was marked as complete, but the servers still have a status of decommissioning instead of offline.
@Jclark-ctr or @VRiley-WMF no rush at all on this, but when one of you has time, can you confirm these have been removed from the racks?
1005 was in A4, U34
1006 was in B4, U23
thanks!