
Cinder volumes getting stuck on 'reserved' after detach
Closed, ResolvedPublicBUG REPORT

Description

While working on T404584: [tools,nfs,infra] Address tools NFS getting stuck with processes in D state we have run into an issue where (some?) cinder NFS volumes get stuck in status 'reserved' after being detached by the wmcs.nfs.migrate_service cookbook (or the equivalent wmcs-openstack server remove volume <server id> <volume id>).

Further testing, debugging, and investigation are needed, first of all to better understand the impact.

We took a closer look yesterday; for example, this is what nova has to say about the operation:

2025-10-07 13:39:58.308 3211222 WARNING nova.virt.libvirt.driver [None req-0e2bde0e-9de3-4e99-a763-06dc81d1b637 novaadmin admin - - default default] Failed to detach device sdb from instance 19c9ecd1-6fb2-4a2d-954a-c1dc6c956034 from the persistent domain config. Libvirt did not report any error but the device is still in the config.
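
One way to confirm what the warning describes (a sketch; run on the cloudvirt hosting the instance, and the domain name below is illustrative -- nova names libvirt domains instance-XXXXXXXX, visible via virsh list):

# Live (running) config: the detach may well have succeeded here...
virsh dumpxml instance-0004f3a2 | grep -A3 "target dev='sdb'"
# ...while the persistent (inactive) config can still carry the disk,
# matching the "still in the config" warning above.
virsh dumpxml --inactive instance-0004f3a2 | grep -A3 "target dev='sdb'"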

Current reproducer in toolsbeta:

Stop puppet and nfs-server on the current NFS server.
NB THIS CAUSES TOOLSBETA NFS OUTAGE.

root@toolsbeta-nfs-4:~# disable-puppet T406688
root@toolsbeta-nfs-4:~# systemctl stop nfs-server
root@toolsbeta-nfs-4:~# umount /srv/toolsbeta

Detach the volume from the host and observe it entering the 'reserved' state:

root@cloudcontrol1006:~# wmcs-openstack server remove volume $(wmcs-server-id toolsbeta-nfs-4.toolsbeta.eqiad1.wikimedia.cloud ) 648504db-18c2-4cee-b731-567dcb4dadf6
root@cloudcontrol1006:~# wmcs-openstack volume show 648504db-18c2-4cee-b731-567dcb4dadf6

To put things back, first set the volume's state back to 'available' and then reattach:

root@cloudcontrol1006:~# wmcs-openstack volume set --state available 648504db-18c2-4cee-b731-567dcb4dadf6
root@cloudcontrol1006:~# wmcs-openstack server add volume $(wmcs-server-id toolsbeta-nfs-4.toolsbeta.eqiad1.wikimedia.cloud ) 648504db-18c2-4cee-b731-567dcb4dadf6

Then get puppet going again:

root@toolsbeta-nfs-4:~# run-puppet-agent --force
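
For convenience, the recovery steps above can be rolled into a small helper (a sketch only; the function name is made up, and it assumes the wmcs-openstack and wmcs-server-id wrappers shown above):

# Hypothetical helper wrapping the manual recovery above.
reattach_stuck_volume() {
  local fqdn=$1 volume_id=$2
  # Force the volume out of 'reserved' so cinder accepts a new attach...
  wmcs-openstack volume set --state available "$volume_id"
  # ...then reattach it to the instance.
  wmcs-openstack server add volume "$(wmcs-server-id "$fqdn")" "$volume_id"
}
reattach_stuck_volume toolsbeta-nfs-4.toolsbeta.eqiad1.wikimedia.cloud 648504db-18c2-4cee-b731-567dcb4dadf6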

Event Timeline

One of the first things I'd like to do is reproduce the problem reliably, ideally in codfw; unfortunately, this currently fails. For example:

root@cloudcontrol2006-dev:~# wmcs-openstack volume list --all-projects
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-2851fb77-bbfc-4497-9127-3585a25432a3)

Though I'm not sure whether I'm holding it wrong or whether the above is simply expected not to work.
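
One way to dig into the 500 (a sketch; --debug prints the full client-side request/response exchange, and the request ID from the error can then be chased server-side -- the cinder-api unit name is an assumption about this deployment):

root@cloudcontrol2006-dev:~# wmcs-openstack --debug volume list --all-projects
root@cloudcontrol2006-dev:~# journalctl -u cinder-api | grep req-2851fb77-bbfc-4497-9127-3585a25432a3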

I say 'reproduce reliably' because, for example, I can't get the problem to trigger on the testlabs-nfs volume:

root@cloudcontrol1006:~# wmcs-openstack server remove volume $(wmcs-server-id testlabs-nfs-2.testlabs.eqiad1.wikimedia.cloud) 0fa71972-9c7c-4657-928b-271df7fea14f
root@cloudcontrol1006:~# wmcs-openstack volume show 0fa71972-9c7c-4657-928b-271df7fea14f
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-08-30T21:34:22.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 0fa71972-9c7c-4657-928b-271df7fea14f |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | testlabs-nfs                         |
| os-vol-host-attr:host          | cloudcontrol1007@rbd#RBD             |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | testlabs                             |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 8                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | standard                             |
| updated_at                     | 2025-10-08T07:40:55.000000           |
| user_id                        | novaadmin                            |
+--------------------------------+--------------------------------------+

Ditto with a newly created volume in testlabs (40060b96-919b-489c-b9dd-c9b5cfa3c789): I am unable to reproduce the problem. I will keep investigating.

Mentioned in SAL (#wikimedia-cloud) [2025-10-08T09:19:35Z] <godog> shut down nfs while investigating T406688

Mentioned in SAL (#wikimedia-cloud) [2025-10-08T11:40:24Z] <godog> nfs back up and has been for some time T406688

Status update: I ran nova-compute with debug logging on cloudvirt1046; the full sanitized paste is at P83751.

This is the flow I could gather from the logs above, all from nova-compute's POV:

PUT https://openstack.eqiad1.wikimediacloud.org:28776/v3/attachments/7557617a-1b44-4c30-b2ea-b3bb40943d54

With this body

{
  "attachment": {
    "connector": {
      "platform": "x86_64",
      "os_type": "linux",
      "ip": "10.64.20.12",
      "host": "cloudvirt1046",
      "multipath": false,
      "enforce_multipath": true,
      "initiator": "iqn.1993-08.org.debian:01:2a1d9e998251",
      "do_local_attach": false,
      "nvme_hostid": "88801c22-69a3-4df9-8582-415a99b53908",
      "uuid": "4272fbf9-5b0b-4659-904d-ba30ebb67414",
      "system uuid": "4c4c4544-0037-5a10-8056-b5c04f384233",
      "nqn": "nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0037-5a10-8056-b5c04f384233",
      "nvme_native_multipath": true,
      "found_dsc": "",
      "host_ips": [
        "10.64.20.12",
        "2620:0:861:118:10:64:20:12",
        "fe80::be97:e1ff:feb9:75ae",
        "172.20.2.29",
        "2a02:ec80:a000:202::29",
        "fe80::be97:e1ff:feb9:75ae",
        "fe80::4c21:31ff:fe61:5f44",
        "fe80::be97:e1ff:feb9:75ae",
        "fe80::fc16:3eff:fe90:2451",
        "fe80::fc16:3eff:fe31:591a",
        "fe80::fc16:3eff:fe44:a1e9"
      ],
      "mountpoint": "/dev/sdb"
    }
  }
}

Cinder replies:

{
  "attachment": {
    "id": "7557617a-1b44-4c30-b2ea-b3bb40943d54",
    "status": "reserved",
    "instance": "19c9ecd1-6fb2-4a2d-954a-c1dc6c956034",
    "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6",
    "attached_at": "",
    "detached_at": "",
    "attach_mode": "null",
    "connection_info": {
      "name": "eqiad1-cinder/volume-648504db-18c2-4cee-b731-567dcb4dadf6",
      "hosts": [
        "10.64.148.27",
        "10.64.149.19",
        "10.64.151.5"
      ],
      "ports": [
        "6789",
        "6789",
        "6789"
      ],
      "cluster_name": "ceph",
      "auth_enabled": true,
      "auth_username": "eqiad1-cinder",
      "secret_type": "ceph",
      "secret_uuid": "9dc683f1-f3d4-4c12-8b8f-f3ffdf36364d",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6",
      "discard": true,
      "qos_specs": {
        "write_iops_sec": "500",
        "iops_sec": "5000",
        "total_bytes_sec": "200000000"
      },
      "access_mode": "rw",
      "encrypted": false,
      "cacheable": false,
      "driver_volume_type": "rbd",
      "attachment_id": "7557617a-1b44-4c30-b2ea-b3bb40943d54",
      "enforce_multipath": true
    }
  }
}

Then POST https://openstack.eqiad1.wikimediacloud.org:28776/v3/attachments/7557617a-1b44-4c30-b2ea-b3bb40943d54/action with body '{"os-complete": null}', which gets a 204 response.

Then GET https://openstack.eqiad1.wikimediacloud.org:28776/v3/volumes/648504db-18c2-4cee-b731-567dcb4dadf6 with this response:

{
  "volume": {
    "id": "648504db-18c2-4cee-b731-567dcb4dadf6",
    "status": "detaching",
    "size": 10,
    "availability_zone": "nova",
    "created_at": "2021-09-20T22:35:30.000000",
    "updated_at": "2025-10-08T09:34:01.000000",
    "name": "toolsbeta-nfs",
    "description": "First NFS utility volume - managed by tofu",
    "volume_type": "standard",
    "snapshot_id": null,
    "source_volid": null,
    "metadata": {},
    "links": [
      {
        "rel": "self",
        "href": "https://openstack.eqiad1.wikimediacloud.org:28776/v3/volumes/648504db-18c2-4cee-b731-567dcb4dadf6"
      },
      {
        "rel": "bookmark",
        "href": "https://openstack.eqiad1.wikimediacloud.org:28776/volumes/648504db-18c2-4cee-b731-567dcb4dadf6"
      }
    ],
    "user_id": "andrew",
    "bootable": "false",
    "encrypted": false,
    "replication_status": null,
    "consistencygroup_id": null,
    "multiattach": false,
    "attachments": [
      {
        "id": "648504db-18c2-4cee-b731-567dcb4dadf6",
        "attachment_id": "7557617a-1b44-4c30-b2ea-b3bb40943d54",
        "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6",
        "server_id": "19c9ecd1-6fb2-4a2d-954a-c1dc6c956034",
        "host_name": "cloudvirt1046",
        "device": "/dev/sdb",
        "attached_at": "2025-10-08T09:33:43.000000"
      }
    ],
    "migration_status": null,
    "group_id": null,
    "provider_id": null,
    "shared_targets": false,
    "service_uuid": "222b4100-eccf-4b7d-889e-09284e49d963",
    "cluster_name": null,
    "volume_type_id": "bd8c6115-b5e6-4542-920e-c0067299e27a",
    "consumes_quota": true,
    "os-vol-tenant-attr:tenant_id": "toolsbeta",
    "os-vol-mig-status-attr:migstat": null,
    "os-vol-mig-status-attr:name_id": null,
    "os-vol-host-attr:host": "cloudcontrol1007@rbd#RBD"
  }
}

At this point, this shows up in the logs:

2025-10-08 09:34:02.817 1916003 WARNING nova.virt.libvirt.driver [None req-32cb24d9-58c1-4ebf-9a0b-cacc4572fd78 novaadmin admin - - default default] Failed to detach device sdb from instance 19c9ecd1-6fb2-4a2d-954a-c1dc6c956034 from the persistent domain config. Libvirt did not report any error but the device is still in the config.

Afterwards, though, DELETE https://openstack.eqiad1.wikimediacloud.org:28776/v3/attachments/7557617a-1b44-4c30-b2ea-b3bb40943d54 gets issued, to which cinder replies 200 with this body:

{
  "attachments": [
    {
      "id": "330bbdd1-2dd4-4fc9-bde5-25bd99a645f6",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "47770248-1a95-4fe8-9437-7d718db96f6c",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "70ad744c-16be-48aa-bae6-dfcbddd0afdd",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "8b38c7f7-e76b-44d9-8768-127ed2355ee9",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "9011ad55-84ac-47b9-8c24-973a5f3fadbe",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "dd61f254-1df6-4e26-8e79-cd97dc0039ad",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    },
    {
      "id": "fe9292b3-7eaa-4896-a098-731a352d726b",
      "status": "reserved",
      "instance": "a1cb92ab-6083-4465-81a9-a283918f13eb",
      "volume_id": "648504db-18c2-4cee-b731-567dcb4dadf6"
    }
  ]
}

And nova-compute reports success for the RPC, AFAICS:

2025-10-08 09:34:05.093 1916003 DEBUG oslo_messaging.rpc.server [None req-32cb24d9-58c1-4ebf-9a0b-cacc4572fd78 novaadmin admin - - default default] Replied success message with id 12d4d7d6-61d7-4b98-8a37-d70f20dd8f7c and method: detach_volume. Time elapsed: 3.447 _process_incoming /usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py:194
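
For the record, the same attachment API flow can be replayed by hand (a sketch; it assumes a valid keystone token in $OS_TOKEN, and the attachments API needs cinder microversion 3.27+, with the os-complete action needing 3.44+):

# Sketch of the flow above, replayed manually against the cinder API.
BASE=https://openstack.eqiad1.wikimediacloud.org:28776/v3
HDRS=(-H "X-Auth-Token: $OS_TOKEN" -H "OpenStack-API-Version: volume 3.44" -H "Content-Type: application/json")
ATT=7557617a-1b44-4c30-b2ea-b3bb40943d54
# Inspect the attachment record
curl -s "${HDRS[@]}" "$BASE/attachments/$ATT"
# Mark it complete (what nova does after connecting the volume)
curl -s "${HDRS[@]}" -X POST -d '{"os-complete": null}' "$BASE/attachments/$ATT/action"
# Delete it (what nova does on detach)
curl -s "${HDRS[@]}" -X DELETE "$BASE/attachments/$ATT"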

The last response seems the most interesting to me: cinder reports a list of reserved attachments, NOT including the one we just deleted (7557617a-1b44-4c30-b2ea-b3bb40943d54), and all for instance a1cb92ab-6083-4465-81a9-a283918f13eb (that's toolsbeta-nfs-3).

It seems to me that these reserved attachments are what keep the volume in 'reserved'. I was not able to query for said attachments from the openstack volume CLI, and I don't know whether clearing them would actually make the volume work again as expected. Something to try next.
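
As a concrete sketch of that next try (reusing $BASE and $HDRS from the curl sketch above; the attachment ID is one of the reserved ones from the DELETE response):

# Delete one leftover reserved attachment, then re-check the volume status.
curl -s "${HDRS[@]}" -X DELETE "$BASE/attachments/330bbdd1-2dd4-4fc9-bde5-25bd99a645f6"
wmcs-openstack volume show 648504db-18c2-4cee-b731-567dcb4dadf6 -c status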

An issue that might be related: https://github.com/zonca/jupyterhub-deploy-kubernetes-jetstream/issues/40

I wanted to run cinder attachment-list --volume-id 648504db-18c2-4cee-b731-567dcb4dadf6, but so far I have failed miserably to find the right combination of OS_ environment variables.

taavi triaged this task as Medium priority.
taavi changed the subtype of this task from "Task" to "Bug Report".

I wanted to run cinder attachment-list --volume-id 648504db-18c2-4cee-b731-567dcb4dadf6, but so far I have failed miserably to find the right combination of OS_ environment variables.

Typically the per-service commands (cinder/nova/keystone/etc.) are no longer supported, although some of them might still work. I think what you want is:

andrew@cloudcontrol1006:~$ sudo wmcs-openstack volume attachment list --volume-id 648504db-18c2-4cee-b731-567dcb4dadf6 --all-projects

wmcs-openstack is just a wrapper around the 'openstack' CLI that sets up auth beforehand. You can do the same with:

andrew@cloudcontrol1006:~$ sudo openstack --os-cloud novaadmin volume attachment list --volume-id 648504db-18c2-4cee-b731-567dcb4dadf6 --all-projects
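
A variant of the same (assuming the novaadmin entry in clouds.yaml that the wrapper points at) is to select the cloud via the environment; a stray attachment can then be removed individually:

andrew@cloudcontrol1006:~$ sudo OS_CLOUD=novaadmin openstack volume attachment list --volume-id 648504db-18c2-4cee-b731-567dcb4dadf6 --all-projects
andrew@cloudcontrol1006:~$ sudo OS_CLOUD=novaadmin openstack volume attachment delete <attachment-id>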

Sorry, reading more I see that you know all that and are hoping that the cinder command will respond differently. Possible!

Here are those attachments in the database:

mysql:root@localhost [cinder]> select id, instance_uuid, volume_id, attach_status from volume_attachment where volume_id='648504db-18c2-4cee-b731-567dcb4dadf6' and deleted=0;
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| id                                   | instance_uuid                        | volume_id                            | attach_status |
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| 330bbdd1-2dd4-4fc9-bde5-25bd99a645f6 | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| 47770248-1a95-4fe8-9437-7d718db96f6c | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| 5a24744f-fffc-4a93-9cb6-f7a31915bddc | 19c9ecd1-6fb2-4a2d-954a-c1dc6c956034 | 648504db-18c2-4cee-b731-567dcb4dadf6 | attached      |
| 70ad744c-16be-48aa-bae6-dfcbddd0afdd | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| 8b38c7f7-e76b-44d9-8768-127ed2355ee9 | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| 9011ad55-84ac-47b9-8c24-973a5f3fadbe | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| dd61f254-1df6-4e26-8e79-cd97dc0039ad | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
| fe9292b3-7eaa-4896-a098-731a352d726b | a1cb92ab-6083-4465-81a9-a283918f13eb | 648504db-18c2-4cee-b731-567dcb4dadf6 | reserved      |
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+

Let's see if there are other volumes with those leftover reservations!

mysql:root@localhost [cinder]> select id, instance_uuid, volume_id, attach_status from volume_attachment where volume_id!='648504db-18c2-4cee-b731-567dcb4dadf6' and deleted=0 and attach_status='reserved' group by volume_id;
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| id                                   | instance_uuid                        | volume_id                            | attach_status |
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| 9de54f35-cb47-4f3f-ac7a-df833731fddb | 54d71241-5b63-456a-8d0c-5cb245a55c16 | 09981cae-b100-4693-baa6-5afe160182c4 | reserved      |
| 174eb78a-5112-4827-8c6e-1168fb0d9cdc | 7f4d473c-18e7-406f-b0d1-bd0c5fe112a9 | 20cba4c8-4335-408b-be34-15e644b2d615 | reserved      |
| d9af7430-29f7-4db6-8a70-9b5558c3c829 | de63de80-8f02-47e0-9ad2-cfe407ca99be | 26a93554-0c80-441e-9a0b-cddebcda0521 | reserved      |
| 9e4b6ed6-ee95-454b-b820-ab9ad12a0763 | eed19c47-6450-40ac-9cd0-26d5912a791f | 3f90c3f2-158d-4e45-a919-0f048f47c3b6 | reserved      |
| 6b5ef141-8210-47e2-87d8-9f506ffe790f | 28a468c6-0b98-4de8-959f-2d194fb6e9e5 | 40c3ffd6-b9f8-4d2d-940a-2b43a2beb336 | reserved      |
| 152fbddf-7ca7-4d07-9a2d-175c9cb25265 | 56bc21a6-4fd4-4ea8-9cb1-2c9d9f170c15 | 55b9b62d-59cc-44c0-b25f-616391831464 | reserved      |
| 37a8f7c6-770c-4202-8815-69dbc81697d7 | d7c826ce-1b23-4b63-bf2c-280b9396644b | 61a6ef96-c73e-4dd7-aa04-16abeaa9bd3d | reserved      |
| 65f8b04f-9109-40df-8424-cbcd4aeb2ade | c8e67174-5188-4fe6-8f65-4caf997e5fa1 | 74e4019e-1bce-4307-8659-1feb734e3f31 | reserved      |
| 101c6ddf-16d6-48a2-b2ef-c67d122d3854 | 39737a21-257d-4267-9aa6-d2c411501b5f | 9c2b23a6-2d68-45aa-bbdf-6d6438d019a9 | reserved      |
| 6505ada3-518f-4120-ad07-99b64a621471 | 30be0c78-390f-4d43-9b51-01e06889f4ba | a4b68de6-9405-4529-90ea-b889d91fd1cd | reserved      |
| 16175cac-c9c1-4ad0-8675-01299b00684b | b615abe0-9cc2-4cee-a224-c1f0b90e7957 | ac3934f4-fbf5-4064-a583-346f207e21aa | reserved      |
| 44043608-fee3-49f0-91c8-039d01b6026c | 4b932658-2b4d-4157-b62e-1e73faaed8aa | c6c88ead-996b-4f91-9444-fc6964958337 | reserved      |
| be472bd2-5221-4571-bc16-7ea05bae8bba | 457f53c3-fbce-4b3d-ab72-bbb08c2716b7 | dc70d302-161a-4b66-b6cd-4d7458c33e89 | reserved      |
+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
13 rows in set (0.034 sec)

There are :(
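
To see which projects the strays live in (a sketch; it assumes the cinder volumes table carries project_id, which is presumably where the per-project list further down comes from):

mysql:root@localhost [cinder]> select v.project_id, count(*) as stuck from volume_attachment va join volumes v on va.volume_id=v.id where va.deleted=0 and va.attach_status='reserved' group by v.project_id;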

mysql:root@localhost [cinder]> delete from volume_attachment where volume_id='648504db-18c2-4cee-b731-567dcb4dadf6' and deleted=0 and attach_status='reserved';
Query OK, 7 rows affected (0.004 sec)
root@cloudcontrol1006:~# wmcs-openstack volume show 648504db-18c2-4cee-b731-567dcb4dadf6 -c attachments -c status
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field       | Value                                                                                                                                            |
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments | [{'id': '648504db-18c2-4cee-b731-567dcb4dadf6', 'attachment_id': 'e3eb31a7-d284-44c3-93c1-65b60e96b0b8', 'volume_id':                            |
|             | '648504db-18c2-4cee-b731-567dcb4dadf6', 'server_id': '19c9ecd1-6fb2-4a2d-954a-c1dc6c956034', 'host_name': 'cloudvirt1046', 'device': '/dev/sdb', |
|             | 'attached_at': '2025-10-15T20:20:50.000000'}]                                                                                                    |
| status      | in-use                                                                                                                                           |
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
root@cloudcontrol1006:~# wmcs-openstack server remove volume $(wmcs-server-id toolsbeta-nfs-4.toolsbeta.eqiad1.wikimedia.cloud ) 648504db-18c2-4cee-b731-567dcb4dadf6
root@cloudcontrol1006:~# wmcs-openstack volume show 648504db-18c2-4cee-b731-567dcb4dadf6 -c attachments -c status
+-------------+-----------+
| Field       | Value     |
+-------------+-----------+
| attachments | []        |
| status      | available |
+-------------+-----------+

...so that fixed things for that exact volume, but doesn't really explain how we got here in the first place.

@fgiunchedi, I dropped novaadmin.sh in your home directory on cloudcontrol1006; if you source it you can then run cinder commands.

Do you want to explore before I start cleaning up those stray reservation records for other volumes? They are present in the following projects:

cyberbot
deployment-prep
integration
deployment-prep
incubator
globaleducation
osmit

Thank you very much @Andrew for digging into the issue -- appreciate it. I agree that, while the fix seems to work, it doesn't explain how we got there in the first place. If there's more attachment metadata available, maybe we can at least see how old these stuck attachments are?

Having said that, I don't think I need to run cinder myself at this point, and thank you for providing novaadmin.sh. Please feel free to fix the remaining projects/attachments!

mysql:root@localhost [cinder]> select created_at from volume_attachment where deleted=0 and attach_status='reserved';
+---------------------+
| created_at          |
+---------------------+
| 2025-06-20 19:54:22 |
| 2025-06-20 17:57:40 |
| 2025-06-21 00:57:57 |
| 2025-06-20 21:14:57 |
| 2025-06-20 18:21:31 |
| 2025-06-20 20:38:31 |
| 2025-06-20 21:11:16 |
| 2025-06-20 18:01:57 |
| 2025-06-20 20:47:45 |
| 2025-06-20 20:32:16 |
| 2025-06-22 00:04:09 |
| 2025-06-20 20:45:28 |
| 2025-06-20 19:31:59 |
| 2025-06-20 21:16:25 |
| 2025-06-20 19:36:43 |
| 2025-06-20 18:04:42 |
| 2025-06-20 21:54:16 |
| 2025-06-21 01:16:33 |
| 2025-06-21 01:17:53 |
| 2025-06-20 20:35:07 |
| 2025-06-20 19:19:28 |
| 2025-06-20 19:05:55 |
| 2025-06-21 00:59:27 |
| 2025-06-20 19:49:45 |
| 2025-05-21 01:55:57 |
| 2025-06-20 21:03:43 |
| 2025-06-22 00:05:37 |
| 2025-06-20 21:01:23 |
| 2025-06-20 20:40:51 |
| 2025-06-20 21:09:55 |
| 2022-08-16 17:19:54 |
| 2025-06-20 21:56:42 |
| 2025-05-21 02:06:40 |
| 2025-06-20 18:19:12 |
| 2025-06-20 18:23:50 |
| 2025-06-20 18:07:01 |
| 2025-06-20 18:26:10 |
+---------------------+
37 rows in set (0.009 sec)

So, the bad news is that those were created recently. The good news is that nearly all of them were created within a three-day window (2025-06-20 through 2025-06-22; there are also two stragglers from May 2025 and one from 2022). So I'm going to take a leap and assume that something was broken during those three days that is no longer broken.
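
For the record, a quick way to see that clustering (a sketch, same mysql session):

mysql:root@localhost [cinder]> select date(created_at) as day, count(*) from volume_attachment where deleted=0 and attach_status='reserved' group by day order by day;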

I deleted all the remaining 'reserved' attachments using the cinder command.
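
For reference, a sketch of what that cleanup can look like (the column handling is an assumption about the CLI's -f value output; cinder attachment-delete itself exists in python-cinderclient):

# Sketch only: enumerate attachments still in 'reserved' and delete each.
# Assumes novaadmin.sh exports the OS_ credentials.
source ~/novaadmin.sh
for att in $(openstack volume attachment list --all-projects -f value -c ID -c Status | awk '$2 == "reserved" {print $1}'); do
  cinder attachment-delete "$att"  # may need --os-volume-api-version >= 3.27
done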

mysql:root@localhost [cinder]> select * from volume_attachment where deleted=0 and attach_status='reserved';
Empty set (0.020 sec)

Change #1196794 had a related patch set uploaded (by Filippo Giunchedi; author: Filippo Giunchedi):

[cloud/wmcs-cookbooks@main] nfs: log a warning when forcing volumes to available

https://gerrit.wikimedia.org/r/1196794

Change #1196794 merged by Filippo Giunchedi:

[cloud/wmcs-cookbooks@main] nfs: log a warning when forcing volumes to available

https://gerrit.wikimedia.org/r/1196794