We need to make several decisions about how to install the new ceph cluster.
These decisions include:
[x] Which version of ceph should we target? - **Quincy** - That's [[https://docs.ceph.com/en/latest/releases/quincy/#v17-2-5-quincy|17.2.5]] at the time of writing.
[x] Should we use packages or containers? **Packages**
[x] Where exactly do we get our Ceph builds? **download.ceph.com**
[x] What installation and bootstrapping method will we use? **Existing puppet manifests** - options considered:
* Existing puppet manifests: [[https://github.com/wikimedia/puppet/tree/production/modules/ceph|modules/ceph]]
* ~~Import/adapt third-party puppet module: https://opendev.org/openstack/puppet-ceph~~
* ~~`cephadm`~~
* ~~`ceph-deploy`~~
* ~~[[https://docs.ceph.com/en/latest/install/index_manual/#install-manual|Manual installation]]~~
[x] What will the pool names be? **4 initial pools configured for RBD**
[x] What will the replication settings and/or erasure coding settings be for the pools? **Currently evaluating erasure coding for RBD, with replicated pools for metadata**
[x] How many placement groups should be configured for each pool? **Initial settings: 1200 PGs for the HDD pools, 800 for the SSD pools. PG autoscaler enabled**
[x] Should we add buckets for row and rack to the CRUSH maps now? **Yes**
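Given the decisions above (Quincy packages from download.ceph.com), the repository setup might look like the following sketch. The Debian codename `bullseye` is an assumption; adjust to match the actual hosts.

```shell
# Add the upstream Ceph Quincy repository from download.ceph.com.
# "bullseye" is an assumed codename, not confirmed for this cluster.
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo "deb https://download.ceph.com/debian-quincy/ bullseye main" \
    | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install -y ceph
```

In production this repository and key would presumably be managed by the existing puppet manifests rather than run by hand.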
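A sketch of the pool layout decision: an erasure-coded data pool for RBD with a replicated pool for metadata. The pool names and the `k=4,m=2` profile are placeholders, not the final configuration; the initial PG counts follow the values decided above.

```shell
# Hypothetical EC profile; k/m values and failure domain are placeholders.
ceph osd erasure-code-profile set rbd-ec-profile k=4 m=2 crush-failure-domain=host

# Erasure-coded data pool for RBD; RBD on EC pools requires overwrites.
ceph osd pool create rbd-data-hdd 1200 1200 erasure rbd-ec-profile
ceph osd pool set rbd-data-hdd allow_ec_overwrites true

# Replicated pool for RBD image metadata/omap, which cannot live on an EC pool.
ceph osd pool create rbd-metadata 800 800 replicated

# Tag both pools for RBD use and let the autoscaler adjust from here.
ceph osd pool application enable rbd-data-hdd rbd
ceph osd pool application enable rbd-metadata rbd
ceph osd pool set rbd-data-hdd pg_autoscale_mode on
ceph osd pool set rbd-metadata pg_autoscale_mode on
```

Images on this layout would then be created with `rbd create --data-pool rbd-data-hdd` so that data lands on the EC pool while metadata stays replicated.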
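As a sanity check on the initial PG numbers, the common rule of thumb is roughly (OSDs × 100) / replication factor, rounded up to a power of two. This is an assumed sizing heuristic for illustration only; the decided values (1200/800) are not powers of two, which recent Ceph releases permit.

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to the next
# power of two. Assumed heuristic, not the formula used for this cluster.
pg_count() {
    local osds=$1 replicas=$2
    local target=$(( osds * 100 / replicas ))
    local pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_count 48 3   # 48 OSDs, 3x replication -> 2048
```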
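Adding row and rack buckets to the CRUSH map could be sketched as below. The row, rack, and host names are placeholders for the real datacenter layout.

```shell
# Create hypothetical row and rack buckets and nest them under the default root.
ceph osd crush add-bucket row-b2 row
ceph osd crush add-bucket rack-b2-1 rack
ceph osd crush move rack-b2-1 row=row-b2
ceph osd crush move row-b2 root=default

# Move a host under its rack so CRUSH rules can use row/rack failure domains.
ceph osd crush move cephosd1001 rack=rack-b2-1
```

Doing this before data is written avoids the large rebalance that moving hosts into new buckets would otherwise trigger on a populated cluster.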