Hi!
As part of the parent task, we are working on moving the maps servers to Bookworm. They run Postgres, which Tegola in turn uses to fetch data and render its tile cache on Thanos Swift (without that cache, we would not be able to sustain the current traffic).
The current Swift user is tegola:prod, and we are currently storing ~350M tile objects, for a total of ~450G of storage, in each of the following buckets: tegola-swift-codfw-v002 and tegola-swift-eqiad-v002 (one per DC).
We'd need to do the following:
- Create two new buckets, one for each DC, called tegola-swift-codfw-v003 and tegola-swift-eqiad-v003 (easy enough with s3cmd).
- Use the new Postgres cluster on maps-test2* to regenerate the tile cache, since we want to make sure that we can re-render everything with the new setup. While both generations of buckets exist, this will effectively double the capacity currently used in each Thanos Swift cluster.
- Eventually, when we feel ready, we'll drop all the data in the old buckets (we do want to keep a fallback in the meantime).
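For the first step, a minimal sketch of the bucket creation with s3cmd (assuming s3cmd is already configured for the Thanos Swift endpoint; the helper name is just for illustration). It prints the commands for review rather than running them directly:

```shell
# Print the s3cmd bucket-creation commands for the two new per-DC buckets.
new_buckets() {
  for dc in codfw eqiad; do
    printf 's3cmd mb s3://tegola-swift-%s-v003\n' "$dc"
  done
}
new_buckets
# Once the output looks right, actually create the buckets with:
#   new_buckets | sh
```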
The old buckets will not receive new data.
Is this something we can do right now? Or are we at capacity on Thanos Swift, in which case it would be preferable to drop the old data first and then warm up the cache to generate the new one?
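To make the capacity question concrete, here is a back-of-the-envelope calculation using only the figures quoted above (~350M objects and ~450G per bucket; treating the 450G as GiB is an assumption):

```python
# Rough per-DC Swift footprint while both bucket generations coexist.
objects_per_bucket = 350_000_000   # ~350M tile objects (from the message)
storage_gib = 450                  # ~450 GiB per bucket (assumed GiB)

# During regeneration each cluster holds the old v002 bucket plus the new
# v003 bucket, so usage roughly doubles until v002 is dropped.
peak_storage_gib = 2 * storage_gib
peak_objects = 2 * objects_per_bucket
avg_object_kib = storage_gib * 1024 * 1024 / objects_per_bucket

print(peak_storage_gib)            # 900
print(peak_objects)                # 700000000
print(round(avg_object_kib, 2))    # ~1.35 KiB per tile on average
```

So the question boils down to whether each Thanos Swift cluster can absorb roughly another 450 GiB and 350M objects temporarily.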