cloudvirt1060 is only in the maintenance aggregate:
taavi@cloudcontrol1006 ~ $ os hypervisor show cloudvirt1060.eqiad.wmnet
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| aggregates          | ['maintenance']                      |
| cpu_info            | None                                 |
| host_ip             | 10.64.149.12                         |
| host_time           | 14:34:46                             |
| hypervisor_hostname | cloudvirt1060.eqiad.wmnet            |
| hypervisor_type     | QEMU                                 |
| hypervisor_version  | 7002007                              |
| id                  | b5a14b7c-c4a7-4a1c-8c09-7eccdb235b9b |
| load_average        | 10.47, 9.82, 9.50                    |
| service_host        | cloudvirt1060                        |
| service_id          | 8ec33070-080a-467c-9dc5-b75483d18c2f |
| state               | up                                   |
| status              | enabled                              |
| uptime              | 66 days, 18:49                       |
| users               | 1                                    |
+---------------------+--------------------------------------+
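As a minimal sketch, the check above can be automated by parsing the JSON form of the same command (`openstack hypervisor show <host> -f json`). The helper name `in_maintenance_only` is illustrative, not part of any existing tooling:

```python
import json

def in_maintenance_only(hypervisor_json: str) -> bool:
    """Return True if the host sits in the 'maintenance' aggregate and
    nothing else, i.e. the scheduler should not place new VMs on it.

    Assumes the JSON emitted by `openstack hypervisor show <host> -f json`,
    where 'aggregates' is a list of aggregate names.
    """
    data = json.loads(hypervisor_json)
    return data.get("aggregates") == ["maintenance"]
```

This is only a convenience wrapper around the CLI output; the authoritative check remains the `aggregates` row in the table above.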
However, the migration script fails to drain the host. Log lines like the following are printed for each VM, and the retry does not work either:
wmcs-drain-hypervisor: 2024-01-15 14:33:28,669: INFO: Migrating control-plane (9cd703fb-7f53-4458-937a-9e34c16726f8)
wmcs-drain-hypervisor: 2024-01-15 14:33:31,072: INFO: current status is ACTIVE; waiting for it to change to ['MIGRATING']
wmcs-drain-hypervisor: 2024-01-15 14:33:32,371: INFO: current status is MIGRATING; waiting for it to change to ['ACTIVE']
wmcs-drain-hypervisor: 2024-01-15 14:33:34,944: INFO: instance 9cd703fb-7f53-4458-937a-9e34c16726f8 (control-plane) is now on host cloudvirt1060 with status ACTIVE
wmcs-drain-hypervisor: 2024-01-15 14:33:34,944: WARNING: control-plane (9cd703fb-7f53-4458-937a-9e34c16726f8) didn't actually migrate, got scheduled on the same hypervisor. Will try again!
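The failure mode in these logs can be sketched as a simplified migrate-and-verify loop. This is a hypothetical illustration of the pattern, not the actual `wmcs-drain-hypervisor` code; `migrate_fn` and `get_host_fn` are stand-in callables for the Nova live-migration call and the post-migration host lookup:

```python
def drain_vm(vm_id, source_host, migrate_fn, get_host_fn, max_retries=3):
    """Try to move vm_id off source_host, retrying on failure.

    migrate_fn(vm_id) asks the scheduler to live-migrate the VM;
    get_host_fn(vm_id) returns the hypervisor it ended up on.
    Returns the new host, or raises RuntimeError if every attempt
    lands back on source_host -- the behaviour seen in the logs above.
    """
    for attempt in range(1, max_retries + 1):
        migrate_fn(vm_id)
        new_host = get_host_fn(vm_id)
        if new_host != source_host:
            return new_host
        # Mirrors the WARNING in the logs: the scheduler picked the
        # same hypervisor again, so retrying changes nothing unless
        # the scheduler's view of the host (e.g. aggregates) changes.
        print(f"attempt {attempt}: {vm_id} scheduled on {source_host} again")
    raise RuntimeError(f"{vm_id} never left {source_host}")
```

The key point the sketch makes: if the scheduler keeps selecting the source host, the retry loop cannot converge, which suggests the scheduler is not honouring the maintenance aggregate when picking a destination.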