
Kill NFS in scrumbugz project
Closed, Resolved (Public)

Description

Currently, /home and /data/project are shared across all instances of the scrumbugz project, allowing files to be shared easily across instances. This, however, comes at the cost of reduced reliability: your instances are unavailable during NFS outages (NFS is the most unreliable part of all of labs), home directory access is slower, etc.

Additionally, there's /data/scratch, which is a labs-wide shared space, and /public/mounts, which is a public read-only mount of the Wikimedia data dumps.

Ideally, I'd love to get rid of all of them - your project gets more stable, Yuvi gets happier, win-win!
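For reference, a quick way to check which of these shares a given instance actually mounts (a sketch, assuming the shares are attached as nfs/nfs4 filesystems; exact paths can vary per project):

  # list currently attached NFS mounts on this instance
  mount -t nfs,nfs4
  # or inspect the static mount configuration
  grep nfs /etc/fstab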

Event Timeline

yuvipanda claimed this task.
yuvipanda raised the priority of this task from to Needs Triage.
yuvipanda updated the task description. (Show Details)
yuvipanda added a project: Cloud-Services.
yuvipanda added subscribers: Matanya, Ricordisamoa, Andrew and 3 others.
*has no idea if this is still needed*

If I had to bet on it I would say kill it all!

@aude? :)

Agree with addshore, but I think we should ask Tobi to be sure.

I am not using it anymore, and @Christopher killed the scrumbugz instance some time ago, so I assume we can get rid of the data there.

Can I get rid of the entire project as well? :)

@yuvipanda from the mail thread between @Andrew and @Christopher (May 2015):

"Hi Andrew,

I deleted the problem instance, scrumbugz-mail, along with the scrumbugz, scrum and otrs instances. I am still using the project for Phabricator testing, so I prefer to keep it active for now.

I hope that this helps."

Seems like @Christopher still wants to keep the project. Or at least we should wait for him to answer.

Alright :) I'd prefer phabricator testing happen in the phabricator project, but it's ok if you guys want to keep this :) Would still like to disable NFS though :)

Killing NFS now and rebooting it

This comment was removed by Christopher.

oh, nvm. They are still there (wikitech page cache delusion).

:) I did remove NFS - let me know if you want that recovered.

No thanks, NFS was not needed, it seems. I am, however, having a problem now connecting to phab08 with ssh. I do not know if it is related to this relatively new error, "Warning: the ECDSA host key for 'bastion.wmflabs.org' differs from the key for the IP address '208.80.155.129'", or to an unknown puppet status, or to something else. It is definitely a consequence of the project being rebooted, though.

I can get ssh access to the other instances, but not phab08. Puppet needs to be disabled on this phabricator test instance for several reasons; it should not have the phabricator::labs role applied in any case. I really do not want to have to recreate this instance if possible. For some reason, after puppet runs I normally have to open port 80 with "iptables -A INPUT -p tcp --dport 80 -j ACCEPT" in order for the web proxy to connect to the instance. Right now, it is not available and I get a gateway timeout.

If you could help with this, that would be great. thanks.
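As an aside, the port-80 workaround quoted above can be made idempotent, so re-running it after each puppet run does not stack duplicate rules. This is only a sketch built from the command quoted above; whether puppet actually flushes or rewrites the INPUT chain here is an assumption to verify:

  # append the ACCEPT rule for port 80 only if it is not already present
  iptables -C INPUT -p tcp --dport 80 -j ACCEPT 2>/dev/null || \
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT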

The bastion warning seems to be just an outcome of https://lists.wikimedia.org/pipermail/labs-l/2015-June/003781.html - remove the old keys and you should be able to log back in?
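A minimal sketch of that cleanup, using the hostname and IP from the warning quoted above (assuming the default ~/.ssh/known_hosts location):

  # drop the stale bastion entries from the local known_hosts file
  ssh-keygen -R bastion.wmflabs.org
  ssh-keygen -R 208.80.155.129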

And if puppet has been disabled completely for a long time, that instance might not be recoverable at all. That's not a supported mode of operation in labs, and you might have to recreate the instance. I will attempt to bring it back up, however.

I think that phab08 may not be recoverable because it seems to be on a different network for some reason. It has the IP address 10.68.17.0; the other project instances are on 10.68.16.*. Anyway, no big deal. I have already recreated the new test phabricator instance (phab09).

phab08 isn't recoverable, yes. They are all on the same network though; the IP address doesn't make a difference in this case :(

Can this bug be closed now?