Some of my Toolforge MariaDB databases have very large tables, where much of the content could be "archived" but should still be available.
One solution I have found, for storing "glamtools" view data, is to group the data and then generate an individual sqlite3 database file for each group.
MariaDB is used to keep track of these files, and to store some legacy data.
This makes managing the data a bit tricky, but it works.
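To illustrate the pattern, here is a minimal sketch of the "one sqlite3 file per group" approach, using only the Python standard library. All names here (the `views` table, its columns, the group keys) are hypothetical, not the actual glamtools schema; the real setup would additionally record each generated file in a MariaDB tracking table.

```python
import os
import sqlite3
import tempfile

def archive_group(group_key, rows, out_dir):
    """Write one group's rows into its own sqlite3 file and return its path.

    Schema is illustrative only: (page TEXT, day TEXT, count INTEGER).
    """
    path = os.path.join(out_dir, f"{group_key}.sqlite3")
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS views (page TEXT, day TEXT, count INTEGER)"
    )
    conn.executemany("INSERT INTO views VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
    # In the real tool, the path would now be registered in a MariaDB
    # tracking table so the data stays discoverable after archiving.
    return path

# Example: two made-up groups archived into separate files.
data = {
    "group_a": [("PageA", "2023-01-01", 5)],
    "group_b": [("PageB", "2023-01-01", 7), ("PageC", "2023-01-02", 2)],
}
out_dir = tempfile.mkdtemp()
paths = {key: archive_group(key, rows, out_dir) for key, rows in data.items()}
```

The upside is that cold data can be read with plain sqlite3 on demand; the downside is that file management (and backup) becomes the tool's own problem, which is what the rest of this task is about.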
However, I am storing the sqlite files in the tool path, and currently they take up 47GB ( /data/project/glamtools/viewdata ).
You might not want all that in a single MariaDB table (which would be the alternative).
This works fine for me, but it might be suboptimal from the NFS perspective (or not; I don't know how large or efficient the Toolforge NFS is, though it can drag a bit at times).
I am thinking of doing the same for older QuickStatements batches, where the command table is now at 9GB.
Is there, or could there be, a large, backed-up area on NFS for this purpose?
Or a read/write NFS mount for an object store (T225190)?
I'm happy to do this in the tool path, but maybe there is a better way?