
nova: Provide a simple way to disable all VM creation
Closed, Resolved · Public

Description

This was requested as a possible response to capacity crunches, e.g. last week when two cloudvirts died at once.

Event Timeline

Andrew created this task. · Feb 21 2019, 9:19 PM

Can't we just remove all the hypervisors from the pool or something?

bd808 added a subscriber: bd808. · Feb 21 2019, 11:46 PM

> Can't we just remove all the hypervisors from the pool or something?

Yes, we just want a simple and fast way to do (and undo!) that.

Andrew added a subscriber: Bstorm. · Feb 26 2019, 3:08 PM

I'm looking at this again, and I really think that emptying the scheduler pool is the right way to do this. It's easy and avoids the complexity of adding yet another hiera setting. Here's what it looks like:

diff --git a/hieradata/eqiad/profile/openstack/eqiad1/nova.yaml b/hieradata/eqiad/profile/openstack/eqiad1/nova.yaml
index c68e789..1394efd 100644
--- a/hieradata/eqiad/profile/openstack/eqiad1/nova.yaml
+++ b/hieradata/eqiad/profile/openstack/eqiad1/nova.yaml
@@ -35,10 +35,4 @@ profile::openstack::eqiad1::nova::physical_interface_mappings:
 # cloudvirtanXXXX: reserved for gigantic cloud-analytics worker nodes
 #
 #
-profile::openstack::eqiad1::nova::scheduler_pool:
-  - cloudvirt1013
-  - cloudvirt1025
-  - cloudvirt1026
-  - cloudvirt1027
-  - cloudvirt1028
-  - cloudvirt1029
+profile::openstack::eqiad1::nova::scheduler_pool: []
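
Conceptually, emptying `scheduler_pool` works because the scheduler only places new VMs on hosts that appear in the pool; with an empty list, no host qualifies and creation fails fast while existing VMs keep running. A minimal sketch of that filtering behavior (illustrative only, not the actual nova scheduler code):

```python
# Hypothetical sketch of scheduler-pool filtering: only hypervisors
# listed in the pool are eligible targets for new VM placement.

def eligible_hosts(scheduler_pool, hypervisors):
    """Return the hypervisors that new VMs may be scheduled onto."""
    return [h for h in hypervisors if h in scheduler_pool]

hypervisors = ["cloudvirt1013", "cloudvirt1025", "cloudvirt1026"]

# Normal operation: the pool lists the schedulable cloudvirts.
print(eligible_hosts(["cloudvirt1013", "cloudvirt1025"], hypervisors))

# Emergency: an empty pool leaves no eligible targets, so all new VM
# creation fails without touching the hypervisors or running VMs.
print(eligible_hosts([], hypervisors))
```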

@Bstorm can you live with that? If so, I'll document in the nova troubleshooting runbook and declare this done.

If it's well documented, I think it's fine. It's not something we should do often or lightly, but when we do need it, it will be a high-pressure situation. If it's all but copy-paste, that's great.

Gerrit makes the undo step pretty easy with the revert button.