We're going to move toolsbeta onto a VM-hosted NFS server and see how it goes.
| Status | Assignee | Task |
| Open | None | T291405 [NFS] Reduce or eliminate bare-metal NFS servers |
| Resolved | Andrew | T291406 POC: puppet-provision a cinder-backed NFS server in eqiad1 |
| Open | None | T291409 [NFS] Update maintain_dbusers so it can run on a VM |
Ok, the general deploy setup I suggested in my puppet patch would get you a server that can't do much by itself. From there, that server would want a floating IP that we can move between hosts easily, plus a cinder volume. The floating IP should have a DNS name attached that we can use in the client manifests, and we need a security group that opens port 2049 only to the cloud-internal range. That should do it for a very basic setup and test, without needing a special hypervisor or anything like that.
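A minimal sketch of those steps with the plain OpenStack CLI, assuming placeholder names like `nfs-poc` for the instance, `nfs-internal` for the security group, and the eqiad1 internal range; the actual network name, volume size, and CIDR would need checking:

```
# Floating IP we can re-point at whichever VM is serving NFS (network name is a placeholder)
openstack floating ip create wan-transport-eqiad
openstack server add floating ip nfs-poc <floating-ip>

# Cinder volume for the data, attached to the instance
openstack volume create --size 300 nfs-poc-data
openstack server add volume nfs-poc nfs-poc-data

# Security group that only lets the cloud-internal range reach NFS (2049/tcp)
openstack security group create nfs-internal
openstack security group rule create nfs-internal \
    --protocol tcp --dst-port 2049 --remote-ip 172.16.0.0/21
openstack server add security group nfs-poc nfs-internal

# Plus an A record for the floating IP in the project's designate zone,
# so the client manifests can mount by name rather than by address.
```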
Right now I'm assuming this will look like the "misc" volume and will only really be something you'd want to mount on toolsbeta once we migrate the data (which would require an NFS outage for toolsbeta... ideally by unexporting its share on labstore1004 with puppet disabled, or something along those lines, while the rsync completes).
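For the eventual data move I'd expect something along these lines; the hostnames and export paths here are purely illustrative, not the real layout on labstore1004:

```
# On labstore1004: stop puppet so it doesn't re-add the export, then unexport
# the toolsbeta share for the duration of the copy
puppet agent --disable "toolsbeta NFS migration"
exportfs -u '*:/srv/misc/shared/toolsbeta'

# Copy the data across to the VM-hosted server, preserving ownership, ACLs and xattrs
rsync -aHAX --numeric-ids --delete --info=progress2 \
    /srv/misc/shared/toolsbeta/ nfs-poc.cloudstore.eqiad1.wikimedia.cloud:/srv/toolsbeta/
```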
Ok, a test server is live now in the cloudstore project. Since I took the lowest-effort route, it is just running the current version of nfs-exportd, which assumes you host all projects. One problem I can see is that it uses public IPs for clients that have them. I'm not sure that will work, but there's one way to find out! Time to tinker with the NFS client mounts for toolsbeta.
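For the tinkering itself, a quick manual check from a toolsbeta instance before touching the puppetized mounts could look like this (hostname, export path, and mount options are placeholders, not the real client config):

```
# Try a throwaway NFSv4 mount with roughly the options we use elsewhere
mkdir -p /mnt/nfs-test
mount -t nfs4 -o vers=4.2,hard,noatime \
    nfs-poc.cloudstore.eqiad1.wikimedia.cloud:/srv/toolsbeta /mnt/nfs-test

# Quick sanity check that reads and writes actually go through
touch /mnt/nfs-test/.mount-test && rm /mnt/nfs-test/.mount-test
```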
One thing that could use doing is teaching nfs-exportd (or a fork of it) to accept a list of projects hosted on the VM and only bother exporting those.
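The interface could be as simple as an extra flag on the forked script. To be clear, nothing like this exists today; the flag below is only a sketch of the proposed behaviour:

```
# Hypothetical interface: only generate exports for the projects this VM hosts,
# instead of iterating over every project in the deployment
nfs-exportd --projects toolsbeta,cloudstore
```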
tc_setup.sh should have a "clean" option that removes all local traffic shaping. If that works correctly, we should be able to run a fair test between our current setup and the PoC VM NFS server.
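Assuming tc_setup.sh installs its shaping as qdiscs on the NFS-facing interface, the clean option probably only needs to do something like the following (interface name is an example):

```
# Drop whatever shaping/ingress policing is attached to the interface,
# returning it to the default qdisc
tc qdisc del dev eth0 root    2>/dev/null || true
tc qdisc del dev eth0 ingress 2>/dev/null || true
```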
Some test results: https://docs.google.com/spreadsheets/d/1rXXxZwwB9yPir3LrgMdwfM0oTl_5Nv0AyrdnC4rpH5E/edit#gid=2062561847 The doc is unfortunately not public, but I also didn't want it owned by me personally in Gdocs.
In general, it's clear that the cinder-backed VM has some real performance advantages over our current NFS server, likely because the disks in the current server are pretty cheap stuff compared to the SSDs backing cinder, even accounting for replication and networking. On the other hand, latency is a bit squirrelly in some cases, and there are a lot of variables in play. The one thing I think it does show for sure is that fast disks still matter even when you access them through a large number of abstractions. Those abstractions are probably why some numbers aren't as good as on the rather old labstore1004: direct NFS is bound to be faster than NFS via ceph->cinder->VM->NFS, but the VM doesn't seem totally incapable, though it struggles more at high queue depths. Sequential writes are just not as good, while random ops are almost always much better on SSDs regardless of how you get to them. I imagine copying large files (dumps) around would be painful.
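For anyone wanting to poke at the sequential-vs-random gap themselves, fio runs along these lines against a mount of each server show the pattern; the block sizes, depths, and paths are illustrative rather than the exact parameters behind the spreadsheet:

```
# Sequential writes, where the old bare-metal server holds up better
fio --name=seqwrite --directory=/mnt/nfs-test --rw=write --bs=1M --size=2G \
    --ioengine=libaio --direct=1 --iodepth=8 --runtime=60 --time_based

# Small random reads/writes, where the SSD-backed cinder volume pulls ahead
fio --name=randrw --directory=/mnt/nfs-test --rw=randrw --bs=4k --size=1G \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based
```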
Interesting results in general!