
Requesting /data/project NFS share for Nova_Resource:Twl
Closed, Resolved (Public)

Description

Primarily we need a persistent place to drop db dumps from several instances. I'm cognizant of the pain of setting up NFS, especially with mutual CHAP auth, so I would like to be as low-maintenance a customer on this as possible. I'm happy to configure client-side NFS with autofs or what have you. I'm also happy to rsync to some target, or otherwise go fly a kite if there is a lower-ops datastore suited for this purpose.

Event Timeline

Aklapper removed a subscriber: The-Wikipedia-Library.

I assume that context is T149433. (Removing VPS-project-Phabricator as this has nothing to do with that Labs project)

You are correct, Aklapper. Sorry for completely mistagging this, and thanks for correcting it.

I'm not sure if this is the right solution; it's almost certainly not a good one. How large are the backups expected to be?

@Samwalton9 can you give us some estimates of the space you need for these backups? The related ticket mentions 30 days of daily backups. Are we talking about 10 MB, 10 GB, 1 TB? Are you worried about loss of the database server instance or just inadvertent loss of data in the database itself?

@chasemp @bd808 We're talking the 10 GB range for 30 days of dumps for the foreseeable future. Our goals are to be able to recover from a lost instance, and to be able to revert the state of a live instance if we need to. We're not at all attached to using NFS, and are open to any secure method of getting private state data off-box.

@bd808 @chasemp is there any other information you need? Or do we need to adjust our expectations/plan regarding backup? We landed on requesting this because storing db dumps is one of the example use-cases for /data/project and I didn't find another method of backing up our dbs on offer. Right now we have state that only exists on box. Do we need to be backing up to external infrastructure?

bd808 triaged this task as Medium priority.
bd808 added a subscriber: madhuvishy.

Assigning to @madhuvishy for the backend puppet changes needed.

@jsn.sherman or @Samwalton9, can you update the task with host/list of hosts where you need access to /data/project? The fewer hosts we expose the NFS mount to the better. Fundamentally the changes that need to be made are:

  • Add project to nfs-mounts.yaml
  • Update NFS server config
  • Disable NFS mounts globally for the Twl project in hiera
  • Enable NFS mount of /data/project for specific hosts
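For reference, the first and third steps above might look roughly like the following. This is a hedged sketch only: the actual schema of nfs-mounts.yaml in the puppet tree may differ, and only the mount_nfs key name comes from this thread.

```yaml
# Illustrative fragment for nfs-mounts.yaml -- the real schema may differ.
# Declares that the twl project gets the shared /data/project export.
twl:
  mounts:
    project: true
```

```yaml
# Project-wide hiera default: keep NFS off everywhere, then opt in per
# instance (key name per this thread; scope is an assumption).
mount_nfs: false
```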

It would be good to end up with documentation for this procedure on wikitech as a side effect of the task as well.

Right now we just need:
twlight-test.twl.eqiad.wmflabs

I anticipate having 3 hosts in total in the future, but I will try to batch those changes so it can be one shot.

bd808 moved this task from Tools to Storage on the Cloud-Services board.

Change 344993 had a related patch set uploaded (by Madhuvishy):
[operations/puppet@production] nfs: Enable mounting /data/project from nfs on project twl

https://gerrit.wikimedia.org/r/344993

Mentioned in SAL (#wikimedia-labs) [2017-03-27T20:10:28Z] <madhuvishy> add Madhuvishy as project admin ( for enabling nfs - T159407)

Change 344993 merged by Madhuvishy:
[operations/puppet@production] nfs: Enable mounting /data/project from nfs on project twl

https://gerrit.wikimedia.org/r/344993

This is now done and /data/project is available for project twl. I only switched mount_nfs to true for the twlight-test instance, but you can easily enable it for any other instance in the project through Horizon: open the instance's Puppet Configuration, edit the Hiera Config to add mount_nfs: true, and run puppet on the instance.
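Concretely, the per-instance opt-in described above amounts to one hiera key (the key name mount_nfs and the Horizon path are from this thread; everything else is generic):

```yaml
# Horizon -> instance -> Puppet Configuration -> edit Hiera Config
mount_nfs: true
```

followed by a puppet run on the instance, e.g. sudo puppet agent -t (assuming standard puppet agent tooling), after which /data/project should appear mounted.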