The hosts will require new IPs:
- cloudcephosd1005.eqiad.wmnet:
    public: addr: "10.64.149.15", iface: "ens3f0np0"
    cluster: addr: "192.168.6.7", prefix: "24", iface: "ens3f1np1"
- cloudcephosd1010.eqiad.wmnet:
    public: addr: "10.64.149.16", iface: "ens3f0np0"
    cluster: addr: "192.168.6.8", prefix: "24", iface: "ens3f1np1"
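In hieradata/eqiad/profile/cloudceph/osd.yaml these would become nested entries; a rough sketch of the shape, assuming the key layout follows the fields above (check the existing entries in the file for the exact schema):

```yaml
# Hypothetical hieradata entries for the two hosts; the key nesting is an
# assumption based on the addr/iface/prefix fields listed above.
cloudcephosd1005.eqiad.wmnet:
  public:
    addr: "10.64.149.15"
    iface: "ens3f0np0"
  cluster:
    addr: "192.168.6.7"
    prefix: "24"
    iface: "ens3f1np1"
cloudcephosd1010.eqiad.wmnet:
  public:
    addr: "10.64.149.16"
    iface: "ens3f0np0"
  cluster:
    addr: "192.168.6.8"
    prefix: "24"
    iface: "ens3f1np1"
```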
- wmcs.ceph.osd.depool_and_destroy cookbook (removes all the OSDs from the host and removes their CRUSH entries)
- sre.hosts.decommission
- Move the hosts to the new racks
- In Puppet, edit hieradata/eqiad/profile/cloudceph/osd.yaml with the new IPs on the new ranges (public and cluster networks) if needed; search the range in Netbox to find the next free IP in each range
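Netbox is the source of truth for which IPs are free, but a quick local sanity check that a proposed address actually falls inside the intended range can help catch typos before merging. A minimal sketch, assuming the cluster range 192.168.6.0/24 from the entries above:

```python
# Cross-check that a proposed IP is a usable host address inside a range.
# Netbox remains the authority on which addresses are actually unallocated.
import ipaddress

cluster_net = ipaddress.ip_network("192.168.6.0/24")

def is_usable(addr: str, net: ipaddress.IPv4Network = cluster_net) -> bool:
    """True if addr lies inside net and is not the network/broadcast address."""
    ip = ipaddress.ip_address(addr)
    return ip in net and ip not in (net.network_address, net.broadcast_address)

print(is_usable("192.168.6.7"))  # True: valid host address in the range
print(is_usable("192.168.6.8"))  # True
print(is_usable("192.168.7.1"))  # False: outside 192.168.6.0/24
```

The same check works for the public range by swapping in its prefix.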
- Follow https://wikitech.wikimedia.org/wiki/Server_Lifecycle#Move_existing_server_between_rows/racks,_changing_IPs
- Note that for the new interfaces to come up, Puppet has to run once, which happens after the reimage; afterwards, ensure the new interface is set up in the right VLAN (the cloud-storage one)
- BEFORE REIMAGE Upgrade the idrac firmware (cookbook sre.hardware.upgrade-firmware -n -c idrac cloudcephosd1004)
- BEFORE REIMAGE Upgrade the nic firmware (cookbook sre.hardware.upgrade-firmware -n -c nic cloudcephosd1004)
- IF REIMAGE FAILS Repeat the reimage until it works (Puppet might time out, etc.; you can check the console by sshing to root@<hostname>.mgmt.eqiad.wmnet using the mgmt password)
- Merge the patch with the new IPs
- Put the host back in Ceph (wmcs.ceph.osd.bootstrap_and_add cookbook); it might take a while for the rebalancing to finish
- Profit!