
Investigate and remove NFS mounts in the snuggle project
Closed, Resolved, Public

Description

Currently, /home and /data/project are shared across all instances of the snuggle project, which makes it easy to share files between instances. This convenience, however, comes at the cost of reliability: your instances are unavailable during NFS outages (NFS is the least reliable part of Labs), home directory access is slower, and so on.

Additionally, there's /data/scratch, which is a labs-wide shared space, and /public/mounts, which is a public read-only mount of Wikimedia data dumps (which you may find useful).

Ideally, I'd love to get rid of all of them - your project gets more stable, Yuvi gets happier, win-win!

Event Timeline

yuvipanda claimed this task.
yuvipanda raised the priority of this task from to Needs Triage.
yuvipanda updated the task description.
yuvipanda added a project: Cloud-Services.
yuvipanda added subscribers: Matanya, Ricordisamoa, Andrew and 4 others.

Can all be killed!

/data/scratch and /data/project can be killed

I'll need to do some work to deal with /home

Cool!

Ideally code should run from /srv on instances, and I can help get bigger disks if needed.

Ok, looks like I can just copy halfak/projects onto /srv and modify the symlinks, and everything should work. I'm not doing anything at the moment, however; just assessing how much work this will be.
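For reference, a migration like the one described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual commands run: the `halfak/projects` source path, the `/srv/projects` destination, and the `migrate_tree` helper name are all assumptions, and the real layout may differ.

```shell
#!/bin/bash
# Hypothetical sketch: copy a project tree off an NFS mount onto local
# disk, then repoint any absolute symlinks that still target the old
# NFS prefix. Paths below are illustrative only.
migrate_tree() {
  src=$1 dest=$2 old_prefix=$3 new_prefix=$4
  mkdir -p "$dest"
  # -a preserves permissions, timestamps, and symlinks as-is
  cp -a "$src/." "$dest/"
  # Rewrite symlinks whose targets begin with the old NFS prefix
  find "$dest" -type l | while read -r link; do
    target=$(readlink "$link")
    case "$target" in
      "$old_prefix"*) ln -sfn "$new_prefix${target#"$old_prefix"}" "$link" ;;
    esac
  done
}

# Example invocation (paths assumed, run on the instance itself):
# migrate_tree /data/project/halfak/projects /srv/projects /data/project /srv
```

Symlinks pointing elsewhere (e.g. into /home) would still need to be handled case by case, which is presumably the assessment work mentioned above.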

chasemp added a subscriber: chasemp.

Change 635068 had a related patch set uploaded (by Andrew Bogott; owner: Andrew Bogott):
[operations/puppet@production] cloud-vps snuggle project: remove a couple of NFS mounts

https://gerrit.wikimedia.org/r/635068

Change 635068 merged by Andrew Bogott:
[operations/puppet@production] cloud-vps snuggle project: remove a couple of NFS mounts

https://gerrit.wikimedia.org/r/635068

Andrew claimed this task.

I removed the /data/scratch and /data/project mounts. I'm closing this as is since I doubt @Halfak is still maintaining this.

Still around. Definitely wanting to find a new maintainer though.