We ordered a single set of PDUs to test before ordering the rest of the batch needed to update rows A and B.
This set will be installed in [[ https://netbox.wikimedia.org/dcim/racks/13/ | b5-eqiad ]]. For this, each host in b5 will need to be prepared for a maintenance window during which power may be lost.
Proposed window: Thursday, May 16th @ 09:00 Eastern / 13:00 GMT. The estimated window is 3 hours, but nothing is certain: this is the first PDU swap, and it will be used to gauge the timeline for the full row A and B swap.
[[ https://netbox.wikimedia.org/dcim/racks/13/ | Netbox listing for b5-eqiad ]]
Hosts to plan downtime for during the window:
**Active hosts:**
cloudvirt1014
cloudvirt1028
db1098 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
db1131 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
db1139 - non-master, can be depooled with a few hours' heads-up per T223126#5177373
dbproxy1004 - not in use at the moment per T223126#5177373
dbproxy1005 - not in use at the moment per T223126#5177373
dbproxy1006 - active m1 proxy; can be failed over a day in advance per T223126#5177373
labweb1001
ms-be1016 - will need swift + rsync stopped for good measure (see the sketch after this list)
ms-be1017 - will need swift + rsync stopped for good measure
ms-be1018 - will need swift + rsync stopped for good measure
ms-be1032 - will need swift + rsync stopped for good measure
ms-be1033 - will need swift + rsync stopped for good measure
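For reference, a minimal sketch of what "stop swift + rsync" could look like, run over SSH against the five ms-be hosts above. The `.eqiad.wmnet` internal names, passwordless root SSH, and the systemd unit naming (a `swift*` glob covering the object/container/account daemons, plus `rsync`) are all assumptions, not confirmed by this task; treat this as an illustration rather than the exact procedure.

```
#!/usr/bin/env python3
"""Sketch: stop swift + rsync on the b5 ms-be hosts before the window.

Assumptions (not confirmed by this task): passwordless SSH as root,
systemd-managed services, and that 'swift*' matches all swift daemon units.
"""
import subprocess

MS_BE_HOSTS = [
    "ms-be1016.eqiad.wmnet",
    "ms-be1017.eqiad.wmnet",
    "ms-be1018.eqiad.wmnet",
    "ms-be1032.eqiad.wmnet",
    "ms-be1033.eqiad.wmnet",
]


def stop_services(host: str) -> None:
    # systemctl accepts glob patterns for loaded units, so 'swift*' stops
    # the swift daemons in one call (assumed unit naming). The quotes keep
    # the glob from being expanded by the remote shell.
    for cmd in ("systemctl stop 'swift*'", "systemctl stop rsync"):
        subprocess.run(["ssh", host, cmd], check=True)


if __name__ == "__main__":
    for host in MS_BE_HOSTS:
        print(f"stopping swift + rsync on {host}")
        stop_services(host)
```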
**Staged host:**
restbase1023 - staged per task T219404 but not yet in service (no data to lose; it can simply be powered off at the start of the window and powered back on afterwards to make life easier). See the power-cycle sketch below.
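A minimal sketch of that power-off/power-on step via ipmitool over the management interface. The mgmt hostname (`restbase1023.mgmt.eqiad.wmnet`) and the use of the root IPMI user are assumptions based on common naming conventions; `chassis power on|off|soft|status` and `-E` (read the password from the `IPMI_PASSWORD` environment variable) are standard ipmitool usage.

```
#!/usr/bin/env python3
"""Sketch: power restbase1023 off/on around the maintenance window.

Assumptions (not confirmed by this task): the mgmt interface answers at
restbase1023.mgmt.eqiad.wmnet, IPMI-over-LAN is enabled, and the IPMI
password is exported in the IPMI_PASSWORD environment variable.
"""
import subprocess
import sys

MGMT_HOST = "restbase1023.mgmt.eqiad.wmnet"  # assumed mgmt naming convention


def power(action: str) -> None:
    # 'chassis power off' is a hard cut; 'soft' requests an ACPI shutdown.
    # Either is fine here since the host holds no data yet.
    subprocess.run(
        [
            "ipmitool", "-I", "lanplus",
            "-H", MGMT_HOST,
            "-U", "root",
            "-E",  # read password from the IPMI_PASSWORD env var
            "chassis", "power", action,
        ],
        check=True,
    )


if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "status"
    if action not in ("on", "off", "soft", "status"):
        sys.exit("usage: power_cycle.py [on|off|soft|status]")
    power(action)
```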