The hosts will require new IPs:
* cloudcephosd1003.eqiad.wmnet:
```
public:
addr: "10.64.149.15"
iface: "ens3f0np0"
cluster:
addr: "192.168.6.7"
prefix: "24"
iface: "ens3f1np1"
```
* cloudcephosd1004.eqiad.wmnet:
```
public:
addr: "10.64.149.16"
iface: "ens3f0np0"
cluster:
addr: "192.168.6.8"
prefix: "24"
iface: "ens3f1np1"
```
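Before merging anything, the addresses above can be sanity-checked against the expected subnets. A minimal sketch using Python's `ipaddress` module; the public /24 is an assumption (only the cluster prefix is stated above), so verify both ranges in Netbox:

```python
import ipaddress

# Expected ranges. The public /24 is an assumption -- confirm in Netbox.
PUBLIC_NET = ipaddress.ip_network("10.64.149.0/24")
CLUSTER_NET = ipaddress.ip_network("192.168.6.0/24")

# (public addr, cluster addr) per host, copied from the entries above.
hosts = {
    "cloudcephosd1003.eqiad.wmnet": ("10.64.149.15", "192.168.6.7"),
    "cloudcephosd1004.eqiad.wmnet": ("10.64.149.16", "192.168.6.8"),
}

for fqdn, (public, cluster) in hosts.items():
    assert ipaddress.ip_address(public) in PUBLIC_NET, f"{fqdn}: public IP out of range"
    assert ipaddress.ip_address(cluster) in CLUSTER_NET, f"{fqdn}: cluster IP out of range"
    print(f"{fqdn}: OK")
```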
* []Run the wmcs.ceph.osd.depool_and_destroy cookbook (removes all the OSDs from the host and removes their CRUSH entries)
* []Run the sre.hosts.decommission cookbook
* []Move the hosts to the new racks
* [x]In Puppet, edit hieradata/eqiad/profile/cloudceph/osd.yaml with the new IPs on the new ranges (public and cluster networks) if needed (search Netbox for the next free IP in each range)
* []Partially follow https://wikitech.wikimedia.org/wiki/Server_Lifecycle#Rename_while_reimaging (only the steps below):
** []Move from DECOMMISSIONING to PLANNED
** []Add only the public IP to the main interface
** []**Flag that interface as primary**
** []Also add an FQDN for that new IP
** []Also add an FQDN for the mgmt IP (if not already there)
** []Run the sre.dns.netbox cookbook
* []Merge the patch with the new IPs
* []Upgrade the idrac firmware (cookbook sre.hardware.upgrade-firmware -n -c idrac cloudcephosd1004)
* []Upgrade the nic firmware (cookbook sre.hardware.upgrade-firmware -n -c nic cloudcephosd1004)
* []Reimage the host (cookbook sre.hosts.reimage --os bullseye --new -t T329502 cloudcephosd1004)
** []Repeat the reimage until it works (Puppet might time out, etc.; you can check the console by sshing to root@<hostname>.mgmt.eqiad.wmnet, using the mgmt password)
* []Put the host back into Ceph (wmcs.ceph.osd.bootstrap_and_add cookbook); the rebalancing might take a while to finish
* []Profit!
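While waiting for the rebalance after bootstrap_and_add, progress can be watched via `ceph status --format json`. A small helper to pull the ratios out of that output might look like the following; the sample JSON is illustrative, and the exact `pgmap` field names vary by Ceph release, so treat them as assumptions:

```python
import json

def rebalance_ratios(status_json: str) -> dict:
    """Extract degraded/misplaced ratios from `ceph status --format json` output.

    The field names are assumptions based on common Ceph releases; a
    missing field defaults to 0.0 (nothing degraded/misplaced).
    """
    pgmap = json.loads(status_json).get("pgmap", {})
    return {
        "degraded": pgmap.get("degraded_ratio", 0.0),
        "misplaced": pgmap.get("misplaced_ratio", 0.0),
    }

# Illustrative sample, not real cluster output.
sample = '{"pgmap": {"num_pgs": 2048, "misplaced_ratio": 0.12}}'
print(rebalance_ratios(sample))  # {'degraded': 0.0, 'misplaced': 0.12}
```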