
decommission dbproxy1017.eqiad.wmnet
Closed, Resolved · Public · Request

Description

This task will track the decommission-hardware of server dbproxy1017.eqiad.wmnet

With the launch of updates to the decom cookbook, the majority of these steps can be handled by the service owners directly. The DC Ops team only gets involved once the system has been fully removed from service and powered down by the decommission cookbook.

dbproxy1017.eqiad.wmnet

Steps for service owner:

  • - all system services confirmed offline from production use (HAProxy has been stopped)
  • - set all icinga checks to maint mode/disabled while reclaim/decommission takes place (likely done by script)
  • - remove system from all LVS/pybal active configuration
  • - any service group puppet/hiera/dsh config removed
  • - remove from site.pp and replace with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom cookbook below is run immediately
  • - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task> (see the sketch after this list). This does: bootloader wipe, host power down, Netbox update to decommissioning status, puppet node clean, puppet node deactivate, debmonitor removal, and a homer run.
  • - remove all remaining puppet references and all host entries in the puppet repo
  • - reassign the task from the service owner to a DC Ops team member and add the site project (ops-sitename) for the server's site
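
For reference, a minimal sketch of what the cookbook runs look like from a cumin host for this particular server and task. The decommission invocation follows the template above; the downtime invocation and its flag names are an assumption based on the SAL entries further down, so treat this as illustrative rather than a canonical runbook:

$ # On a cumin host (e.g. cumin1001); flag names for the downtime cookbook are assumed
$ sudo cookbook sre.hosts.downtime --days 14 --reason "decommissioning via T348956" 'dbproxy1017.eqiad.wmnet'
$ # Decommission cookbook, per the template in the checklist above
$ sudo cookbook sre.hosts.decommission dbproxy1017.eqiad.wmnet -t T348956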

End service owner steps / Begin DC-Ops team steps:

  • - system disks removed (by onsite)
  • - determine system age: systems under 5 years old are reclaimed to spares, over 5 years are decommissioned
  • - IF DECOM: system unracked and decommissioned (by onsite); update Netbox with the result and set state to offline
  • - IF DECOM: mgmt DNS entries removed

Event Timeline

This should be fine to go, see T341121#9156136. However, I'd suggest stopping haproxy for a few days first (and/or disabling the service, as otherwise puppet will bring it back up), and if nothing breaks, proceeding with the full decommission.

This is also worth checking: as a reminder, the user 'haproxy'@'$HOSTIP' needs to be removed from the databases on the misc cluster the proxy used to serve; in this case, that is m5.

$ drop user haproxy@'10.64.48.43';
Query OK, 0 rows affected (0.001 sec)
[...]
$ systemctl disable --now haproxy
Synchronizing state of haproxy.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable haproxy
Removed /etc/systemd/system/multi-user.target.wants/haproxy.service.

Mentioned in SAL (#wikimedia-operations) [2023-11-02T08:57:11Z] <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 14 days, 0:00:00 on dbproxy1017.eqiad.wmnet with reason: decomissionning via T348956

Icinga downtime and Alertmanager silence (ID=b8b358a3-33f5-4408-bcf5-1522e6cb989a) set by arnaudb@cumin1001 for 14 days, 0:00:00 on 1 host(s) and their services with reason: decomissionning via T348956

dbproxy1017.eqiad.wmnet

Mentioned in SAL (#wikimedia-operations) [2023-11-02T08:57:25Z] <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 14 days, 0:00:00 on dbproxy1017.eqiad.wmnet with reason: decomissionning via T348956

Change 970833 had a related patch set uploaded (by Arnaudb; author: Arnaudb):

[operations/puppet@production] haproxy: disabling notifications on dbproxy1017

https://gerrit.wikimedia.org/r/970833

Change 970833 merged by Arnaudb:

[operations/puppet@production] haproxy: disabling notifications on dbproxy1017

https://gerrit.wikimedia.org/r/970833

HAProxy was restarted (with no config available via SQL since the user has been dropped), so: sudo systemctl disable --now haproxy.service && sudo disable-puppet "will remove this host T348956"

Change 972509 had a related patch set uploaded (by Arnaudb; author: Arnaudb):

[operations/puppet@production] haproxy: remove dbproxy1017 from production

https://gerrit.wikimedia.org/r/972509

patch ready and waiting for review. Will run the cookbook before merging.

@ABran-WMF please remember to remove 'haproxy'@'10.64.48.43' from the databases as well as double check if it is exists somewhere in puppet.

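A quick way to double-check the puppet side is to grep for both the hostname and its IP in a checkout of the puppet repo (a minimal sketch; run it from wherever your local checkout lives):

$ # from a local checkout of operations/puppet
$ git grep -n 'dbproxy1017'
$ git grep -nF '10.64.48.43'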

looks OK to me:

root@db1217:m5[(none)]> select user,host from mysql.user where user like '%haproxy%';
+---------+--------------+
| User    | Host         |
+---------+--------------+
| haproxy | 10.64.134.16 |
| haproxy | 10.64.32.180 |
+---------+--------------+
2 rows in set (0.002 sec)
arnaudb@db1176:~ $ sudo mysql -e "select user,host from mysql.user where user like '%haproxy%';"
+---------+--------------+
| User    | Host         |
+---------+--------------+
| haproxy | 10.64.134.16 |
| haproxy | 10.64.32.180 |
+---------+--------------+

on the hosts mentioned here

Is there something else I should check for this, @Marostegui?

cookbooks.sre.hosts.decommission executed by arnaudb@cumin1001 for hosts: dbproxy1017.eqiad.wmnet

  • dbproxy1017.eqiad.wmnet (PASS)
    • Downtimed host on Icinga/Alertmanager
    • Found physical host
    • Downtimed management interface on Alertmanager
    • Wiped all swraid, partition-table and filesystem signatures
    • Powered off
    • [Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
    • Configured the linked switch interface(s)
    • Removed from DebMonitor
    • Removed from Puppet master and PuppetDB

Change 972509 merged by Arnaudb:

[operations/puppet@production] haproxy: remove dbproxy1017 from production

https://gerrit.wikimedia.org/r/972509

ABran-WMF changed the task status from Open to In Progress.Nov 9 2023, 3:25 PM
ABran-WMF removed ABran-WMF as the assignee of this task.
ABran-WMF updated the task description. (Show Details)
ABran-WMF added a project: DC-Ops.
ABran-WMF subscribed.
Jclark-ctr claimed this task.
Jclark-ctr updated the task description. (Show Details)