We need to be certain that the bootstrapping process works for our Ceph servers (cephosd*) without adversely affecting the cluster's availability or its configuration.
The last time we tried a reimage, the 20 OSDs that were previously associated with the host were not detected, and the server ended up creating 20 new OSDs.
Something went wrong with the unless condition here: https://github.com/wikimedia/operations-puppet/blob/production/modules/ceph/manifests/osd.pp#L82-L94, which meant that the host could not associate its local disks with the OSDs already registered in the cluster.
The command ceph-volume lvm list ${device}, run on a newly reimaged Ceph server, doesn't return the expected value, so another OSD is created for the device instead of the existing one being reused.
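For context, here is a minimal sketch of the guarded-exec pattern used in that manifest; the resource title, flags and paths below are illustrative assumptions, not the exact production code:

```
exec { "ceph-volume-create-${device}":
    # Provision a new bluestore OSD on the device.
    command => "/usr/sbin/ceph-volume lvm create --bluestore --data ${device}",
    # Guard: the create command is skipped only if this lookup exits 0,
    # i.e. ceph-volume already reports an OSD on the device. If, on a
    # freshly reimaged host, the lookup comes back empty / non-zero even
    # though the LVM volumes and cluster-side OSD entries still exist,
    # Puppet runs the create command and a duplicate OSD is provisioned.
    unless  => "/usr/sbin/ceph-volume lvm list ${device}",
}
```

With Puppet's unless semantics, any non-zero exit from the guard command triggers the create command, which is consistent with the behaviour observed during the last reimage.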