= Deployment
The service should stay up throughout; expect some performance impact while the data is being shifted around.
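If the performance impact needs to be kept small, recovery traffic can be throttled before loading the new map. A hedged sketch using standard OSD options (defaults and sensible values vary by Ceph release; run on a mon host):

```
# Limit concurrent backfills and recovery ops per OSD (assumed values, tune to taste)
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
```

Remember to restore the previous values once the rebalance has finished.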
== Setting it manually, all at once
Reference: https://docs.ceph.com/en/latest/rados/operations/crush-map-edits/
* [] Dump the current crushmap:
```
ceph osd getcrushmap -o crushmap.bin
```
* [] Decompile it into text:
```
crushtool -d crushmap.bin -o crushmap.txt
```
* [] Make a backup copy:
```
cp crushmap.txt crushmap.$(date +%Y%m%d%H%M%S).before_rack_ha.txt
```
* [] Fetch the prepared new crushmap onto a mon host from:
{P44926}
Test results for the above crushmap:
{P44927}
** [] Compile it:
```
crushtool -c new_crushmap.txt -o crushmap.bin
```
** [] Test that the rules still work well: check that there are no misplaced PGs (the mappings should show 1024/1024) and that placements are roughly even across devices (see P44923 for an example of the current output):
```
crushtool --test -i crushmap.bin --show-utilization --num-rep=3
```
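The misplaced-PG check above can also be scripted instead of eyeballed. A minimal sketch, assuming the `--show-utilization` output contains `result size` lines ending in `mapped/total` counts (the sample below is hypothetical; in practice pipe the real `crushtool --test` output into the `awk` check):

```shell
# Hypothetical sample of crushtool --test --show-utilization output;
# replace with: crushtool --test -i crushmap.bin --show-utilization --num-rep=3
sample='rule 0 (replicated_rule), x = 0..1023, numrep = 3..3
rule 0 (replicated_rule) num_rep 3 result size == 3: 1024/1024'

# Fail if any "result size" line maps fewer inputs than the total (e.g. 1000/1024)
echo "$sample" |
  awk '/result size/ {
         split($NF, parts, "/")          # $NF is e.g. "1024/1024"
         if (parts[1] != parts[2]) bad = 1
       }
       END { exit bad }' &&
  echo "OK: all inputs mapped" ||
  echo "WARNING: misplaced mappings, do not load this crushmap"
```

This only automates the 1024/1024 check; the per-device balance still needs a look at the utilization numbers themselves.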
** [] Load the new crush map and wait for the cluster to shift data around (this will take a long time):
```
ceph osd setcrushmap -i crushmap.bin
```
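While the cluster rebalances, progress can be followed from any mon host with the standard status commands (a sketch; the misplaced/degraded object counts should trend to zero):

```
# Overall health plus recovery/misplaced object counts
ceph -s

# PG-level summary of the rebalance
ceph pg stat
```

If something goes badly wrong, the old map can be recompiled from the timestamped `before_rack_ha` backup made earlier and loaded again with `ceph osd setcrushmap`.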