Currently, that partition is at 60GB and is maxed out. We have a few people in there, juggling multi-gigabyte repos. Doubling this space would be great.
Event Timeline
This is the process for requesting it. We have a few options, but I believe they all involve building a new instance: https://phabricator.wikimedia.org/project/view/2880/
@Halfak @Ladsgroup @Sumit @Catrope
Please clean up your stuff in the backup directory on that server, ores-misc-01:/srv/ores-compute-01-20170711/
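For anyone doing a similar cleanup, a quick sketch of finding what's eating space (standard GNU `du`/`sort` flags; the path in the comment is the backup dir mentioned above, substitute whatever you want to inspect):

```shell
# largest_dirs: list the ten largest top-level directories under a path
largest_dirs() {
  du -h --max-depth=1 "$1" 2>/dev/null | sort -rh | head -n 10
}

# e.g. on ores-misc-01:
# largest_dirs /srv/ores-compute-01-20170711/
```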
@chasemp We're probably fine with a rebuild, but I was hoping that using the /srv mount would have made it easier for us to stretch our elbows out a bit?
@Catrope nvm, your dir is just a few hundred MBs.
Anyone know if user "agx" is in Phabricator?
If you have already mounted the full quota, that's as far as you can grow an existing instance. The quota for a given instance is baked into the image "flavor", which is taken into account when the VM is assigned to a particular OpenStack exec host. Someday™ we hope to have attachable block storage that will let VMs provision and mount additional disk, but that someday is in the distant foggy future.
Thanks for all the help! Our team can drop about 15GB of cruft, which will buy us time.
I deleted @jonas.agx's stuff. He hasn't been working with us for a while so I'm sure it won't be a problem. I've cleaned up a bit of my own stuff too.
OK, we've cleaned up all we can, and we still don't have enough space.
So I think it might be time to build a new image and move things over.
Right now we're using the big-RAM image (32GB), and that's been very useful for us. If we switch to the xlarge, we'll get more disk space but lose half the RAM. Is it possible to image a VM with 32GB of RAM and 160GB of disk without much pain? From what @bd808 says, it sounds like that's not possible right now. If so, another option is to provision an additional VM (ores-misc-02) and split our work between the two. Is that a crazy misuse of Cloud resources?
The instance has been renewed and everything in it has been cleaned up:
```
ladsgroup@ores-misc-01:/srv$ df -h
Filesystem          Size  Used Avail Use% Mounted on
udev                 18G     0   18G   0% /dev
tmpfs               3.6G  172M  3.4G   5% /run
/dev/vda3            19G  6.7G   11G  39% /
tmpfs                18G     0   18G   0% /dev/shm
tmpfs               5.0M     0  5.0M   0% /run/lock
tmpfs                18G     0   18G   0% /sys/fs/cgroup
/dev/mapper/vd-srv   60G   15G   42G  26% /srv
tmpfs               3.6G     0  3.6G   0% /run/user/3182
```
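If we want to keep an eye on this going forward, here's a small sketch for pulling the use% of a mount as a bare number (uses GNU coreutils' `df --output=pcent`; the helper name is just for illustration):

```shell
# srv_usage: print the Use% of the filesystem holding a path, digits only
srv_usage() {
  df --output=pcent "$1" | tail -n 1 | tr -dc '0-9'
}

# e.g. srv_usage /srv
# Per the df listing in this task, /srv on ores-misc-01 would print 26.
```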
I'm closing this.