We ordered a single set of PDUs to test before ordering the full batch needed to update rows A and B.
This single set will need installation into b5-eqiad. For this, each host in b5 will need to be prepared for a maintenance window in which power may be lost.
Proposed Window: Thursday, May 16th @ 09:00 Eastern / 13:00 GMT. The estimated window is 3 hours, but nothing is certain: this is the first PDU swap, and it will be used to judge the timeline for the full swap of rows A and B.
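As part of that prep, each affected host should have monitoring downtime scheduled over the window. A minimal sketch, not our actual tooling, assuming a classic Icinga install and its documented SCHEDULE_HOST_DOWNTIME external command; the command-file path, the year, and the extra hour of slack are assumptions:

```
#!/usr/bin/env python3
"""Schedule Icinga host downtime for the B5 hosts ahead of the power window,
via Icinga's documented SCHEDULE_HOST_DOWNTIME external command."""
from datetime import datetime, timezone

CMD_FILE = "/var/lib/icinga/rw/icinga.cmd"  # assumed command-file path

# Hosts listed later in this task.
HOSTS = [
    "cloudvirt1014", "cloudvirt1028", "db1098", "db1131", "db1139",
    "dbproxy1004", "dbproxy1005", "dbproxy1006", "labweb1001",
    "ms-be1016", "ms-be1017", "ms-be1018", "ms-be1032", "ms-be1033",
    "restbase1023",
]

# Window start: May 16th 13:00 GMT (year assumed); 3h estimate + 1h slack.
start = int(datetime(2019, 5, 16, 13, 0, tzinfo=timezone.utc).timestamp())
end = start + 4 * 3600

now = int(datetime.now(timezone.utc).timestamp())
with open(CMD_FILE, "w") as cmd:
    for host in HOSTS:
        # Format: [time] SCHEDULE_HOST_DOWNTIME;<host>;<start>;<end>;
        #         <fixed>;<trigger_id>;<duration>;<author>;<comment>
        cmd.write(
            f"[{now}] SCHEDULE_HOST_DOWNTIME;{host};{start};{end};1;0;0;"
            "dcops;B5-eqiad PDU swap\n"
        )
```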
Hostname Proposal: On the day of the swap, change the existing PDU tower names in Netbox to their asset tags, and reuse the old hostnames for the new PDUs (since the old and new units won't be on the network in the rack at the same time, there is no reason not to keep things simple this way).
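A minimal sketch of that rename, assuming the pynetbox client; the Netbox URL, token, and PDU tower names are placeholders:

```
#!/usr/bin/env python3
"""Rename each outgoing PDU tower in Netbox to its asset tag, freeing the
hostname for the replacement unit."""
import pynetbox

nb = pynetbox.api("https://netbox.example.org", token="REDACTED")

# Placeholder names for the B5 PDU towers; substitute the real ones.
for old_name in ("ps1-b5-eqiad", "ps2-b5-eqiad"):
    pdu = nb.dcim.devices.get(name=old_name)
    if pdu is None:
        continue
    pdu.name = pdu.asset_tag  # rename the old tower to its asset tag
    pdu.save()                # the new PDU can now take old_name
```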
Hosts to plan for downtime during the window:
Active hosts:
cloudvirt1014 - good to go per T223148: Cloud Services: reallocate workload from rack B5-eqiad
cloudvirt1028 - good to go per T223148: Cloud Services: reallocate workload from rack B5-eqiad
db1098 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
db1131 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
db1139 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
dbproxy1004 - not in use at the moment per T223126#5177373
dbproxy1005 - not in use at the moment per T223126#5177373
dbproxy1006 - active m1 proxy; can be failed over a day in advance per T223126#5177373
labweb1001 - good to go per T223148: Cloud Services: reallocate workload from rack B5-eqiad
ms-be1016 - will need to have swift + rsync stopped for good measure
ms-be1017 - will need to have swift + rsync stopped for good measure
ms-be1018 - will need to have swift + rsync stopped for good measure
ms-be1032 - will need to have swift + rsync stopped for good measure
ms-be1033 - will need to have swift + rsync stopped for good measure (a shutdown sketch covering all five ms-be hosts follows below)
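A minimal shutdown sketch for the five ms-be hosts above, assuming the stock swift-init helper shipped with OpenStack Swift and a systemd 'rsync' unit; run on each host before power is dropped:

```
#!/usr/bin/env python3
"""Stop all Swift daemons plus rsync on one ms-be host ahead of the window."""
import subprocess

def run(cmd):
    """Echo a command, then run it, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Stop every Swift daemon (account/container/object servers, replicators,
# auditors, updaters) in one shot.
run(["swift-init", "all", "stop"])

# Stop rsync so replication peers stop pushing partitions mid-shutdown.
run(["systemctl", "stop", "rsync"])
```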
Staged Host:
restbase1023 - staged per task T219404 but not yet in service (no data to lose; can just power it off at the start and power it back on afterwards to make life easier)
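A minimal power off/on sketch for restbase1023 via ipmitool; the .mgmt management hostname and credential handling are assumptions about how the host is reached (ipmitool's -E flag reads the password from the IPMI_PASSWORD environment variable):

```
#!/usr/bin/env python3
"""Power restbase1023 off before the swap and back on afterwards over IPMI."""
import subprocess
import sys

MGMT = "restbase1023.mgmt.eqiad.wmnet"  # assumed management address

def chassis_power(action: str) -> None:
    """Run 'ipmitool chassis power <action>' against the mgmt interface."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", MGMT, "-U", "root", "-E",
         "chassis", "power", action],
        check=True,
    )

if __name__ == "__main__":
    chassis_power(sys.argv[1])  # "off" at the window start, "on" afterwards
```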