
upgrade codfw1dev to wallaby
Closed, ResolvedPublic

Description

Start with cloudservices200[23]-dev.wikimedia.org (T304702). The current Horizon deploy is backwards-compatible with Wallaby (How is this learned?). That leaves the cloudcontrol, cloudnet, and cloudvirt nodes to upgrade.

  • update IRC topic
  • downtime everything in icinga through 14:00 CDT

    aborrero@cumin1001:~ $ sudo cookbook sre.hosts.downtime -r "upgrading openstack" --min 120 lab*

    aborrero@cumin1001:~ $ sudo cookbook sre.hosts.downtime -r "upgrading openstack" --min 120 cloud*
  • dump databases on cloudcontrol2001-dev.wikimedia.org (nova, nova_api, nova_cell0, neutron, glance, placement, keystone):
    1. mysqldump -u root nova > /root/wallabydbbackups/nova.sql
    2. mysqldump -u root nova_api > /root/wallabydbbackups/nova_api.sql
    3. mysqldump -u root nova_cell0 > /root/wallabydbbackups/nova_cell0.sql
    4. mysqldump -u root neutron > /root/wallabydbbackups/neutron.sql
    5. mysqldump -u root glance > /root/wallabydbbackups/glance.sql
    6. mysqldump -u root placement > /root/wallabydbbackups/placement.sql
    7. mysqldump -u root keystone > /root/wallabydbbackups/keystone.sql
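The seven dumps above can be scripted as a loop; shown here as a dry run that only prints each command (remove the `echo` to execute, after a `mkdir -p /root/wallabydbbackups`):

```shell
# Dry-run sketch of the dump checklist above; remove the "echo" to run for real.
for db in nova nova_api nova_cell0 neutron glance placement keystone; do
  echo mysqldump -u root "$db" ">" "/root/wallabydbbackups/${db}.sql"
done
```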

Cloudcontrols:

All open database connections post-upgrade: https://phabricator.wikimedia.org/P10999
Checking haproxy status: echo "show stat" | socat /var/run/haproxy/haproxy.sock stdio | grep DOWN
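The grep above only says that something is DOWN; a small filter over the same `show stat` CSV (field 18 is the status column) also names the proxy/server involved. A sketch, not from the original runbook:

```shell
# Print "proxy/server STATUS" for every haproxy entry whose status mentions DOWN.
# Field 18 of the "show stat" CSV output is the status column.
haproxy_down() { awk -F, '$18 ~ /DOWN/ {print $1 "/" $2 " " $18}'; }

# usage:
#   echo "show stat" | socat /var/run/haproxy/haproxy.sock stdio | haproxy_down
```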

cloudcontrol2001-dev.wikimedia.org:

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • systemctl unmask keystone
  • DEBIAN_FRONTEND=noninteractive apt-get install glance python3-eventlet=0.30.2-1 glance-api glance-common keystone nova-api nova-conductor nova-scheduler nova-common neutron-server python3-requests python3-urllib3 placement-api cinder-volume cinder-scheduler cinder-api python3-oslo.messaging python3-tooz -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get install python3-trove trove-api trove-common trove-conductor trove-taskmanager -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get upgrade -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • systemctl mask keystone
  • puppet agent -tv
  • nova-manage api_db sync
  • nova-manage db sync
  • placement-manage db sync
  • glance-manage db_sync
  • keystone-manage db_sync
  • cinder-manage db online_data_migrations
  • cinder-manage db sync
  • trove-manage db_sync
  • puppet agent -tv
  • nova-manage db online_data_migrations
  • systemctl list-units --failed (should show nothing failed, or only keystone; if keystone has failed, just reset it with systemctl reset-failed)
  • neutron-db-manage upgrade heads

cloudcontrol2003-dev.wikimedia.org:

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • systemctl unmask keystone
  • DEBIAN_FRONTEND=noninteractive apt-get install glance python3-eventlet=0.30.2-1 glance-api glance-common keystone nova-api nova-conductor nova-scheduler nova-common neutron-server python3-requests python3-urllib3 placement-api cinder-volume cinder-scheduler cinder-api python3-oslo.messaging python3-tooz -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get install python3-trove trove-api trove-common trove-conductor trove-taskmanager -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get upgrade -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • systemctl mask keystone
  • puppet agent -tv
  • puppet agent -tv
  • systemctl list-units --failed (should show nothing failed, or only keystone; if keystone has failed, just reset it with systemctl reset-failed)

cloudcontrol2004-dev.wikimedia.org:

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • systemctl unmask keystone
  • DEBIAN_FRONTEND=noninteractive apt-get install glance python3-eventlet=0.30.2-1 glance-api glance-common keystone nova-api nova-conductor nova-scheduler nova-common neutron-server python3-requests python3-urllib3 placement-api cinder-volume cinder-scheduler cinder-api python3-oslo.messaging python3-tooz -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get install python3-trove trove-api trove-common trove-conductor trove-taskmanager -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get upgrade -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • systemctl mask keystone
  • puppet agent -tv
  • puppet agent -tv
  • systemctl list-units --failed (should show nothing failed, or only keystone; if keystone has failed, just reset it with systemctl reset-failed)
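The "nothing failed, or only keystone" check recurs on every cloudcontrol; a filter like this (a sketch, not from the original runbook) prints only the unexpected failures:

```shell
# Print failed units other than keystone; empty output means a plain
#   systemctl reset-failed
# is enough. Feed it the output of:
#   systemctl list-units --failed --plain --no-legend
unexpected_failures() { awk 'NF && $1 != "keystone.service" {print $1}'; }
```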

cloudnets (one at a time please):

Begin with the standby node, as determined with:

$ neutron l3-agent-list-hosting-router cloudinstances2b-gw
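Since the neutron CLI is deprecated, the same determination is possible with the openstack CLI, whose `network agent list --router <router> --long` output adds an HA State column (active/standby). A small filter (column names assumed) to name the active node:

```shell
# Print the host whose L3 agent is "active" for the router. Feed it:
#   openstack network agent list --router cloudinstances2b-gw --long \
#     -f value -c Host -c "HA State"
active_l3_host() { awk '$2 == "active" {print $1}'; }
```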

Standby node (cloudnet2002-dev.codfw.wmnet):

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • DEBIAN_FRONTEND=noninteractive apt-get install -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" neutron-l3-agent python3-oslo.messaging python3-neutronclient python3-glanceclient
  • DEBIAN_FRONTEND=noninteractive apt-get upgrade -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • puppet agent -tv
  • run neutron-db-manage upgrade heads on cloudcontrol2001-dev.wikimedia.org

Active node (cloudnet2004-dev.codfw.wmnet):

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • DEBIAN_FRONTEND=noninteractive apt-get install -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" neutron-l3-agent python3-oslo.messaging python3-neutronclient python3-glanceclient
  • DEBIAN_FRONTEND=noninteractive apt-get upgrade -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • puppet agent -tv

Break Time

Cloudvirts (cloudvirt2001-dev.codfw.wmnet, cloudvirt2002-dev.codfw.wmnet, cloudvirt2003-dev.codfw.wmnet) (start with one test host first):

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • DEBIAN_FRONTEND=noninteractive apt-get install -y python3-libvirt python3-eventlet python3-os-brick python3-os-vif nova-compute neutron-common nova-compute-kvm neutron-linuxbridge-agent python3-neutron python3-oslo.messaging python3-taskflow python3-tooz python3-keystoneauth1 python3-requests python3-urllib3 -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • DEBIAN_FRONTEND=noninteractive apt-get dist-upgrade -y --allow-downgrades -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"
  • puppet agent -tv
  • service neutron-linuxbridge-agent restart
  • service libvirtd restart
  • service nova-compute restart
  • update IRC topic
  • enable puppet on all cloud* hosts

    $ sudo cumin 'cloud*dev*' "enable-puppet 'Upgrading to openstack Wallaby - T304694 - ${USER}'"

update https://phabricator.wikimedia.org/source/operations-puppet/browse/production/modules/openstack/files/victoria/cinder/hacks/backup/chunkeddriver.py.patch to match the current /usr/lib/python3/dist-packages/cinder/backup/chunkeddriver.py file:
https://github.com/openstack/cinder/blob/master/cinder/backup/chunkeddriver.py (matched to the current branch)
https://gerrit.wikimedia.org/r/c/operations/puppet/+/777873
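One way to regenerate such a patch is to diff the pristine upstream file against the locally hacked copy; a hypothetical helper (function name and file paths illustrative):

```shell
# regen_patch <pristine> <modified> <out.patch>
# diff exits 1 when the files differ, which is the expected case here.
regen_patch() { diff -u "$1" "$2" > "$3" || true; }

# usage sketch (paths illustrative):
#   regen_patch chunkeddriver.py.orig \
#     /usr/lib/python3/dist-packages/cinder/backup/chunkeddriver.py \
#     chunkeddriver.py.patch
```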

cloudbackup1001-dev.eqiad.wmnet:

  • puppet agent --enable && puppet agent -tv
  • apt-get update
  • DEBIAN_FRONTEND=noninteractive apt upgrade cinder-backup
  • puppet agent -tv
  • (test from cloudcontrol2004-dev.wikimedia.org) sudo wmcs-cinder-backup-manager

Things to check

  • Check 'openstack region list'. There should be exactly one region, codfw1dev-r. If there is a second region named 'RegionOne' (this happened in codfw1dev), delete it; otherwise scripts that enumerate regions will be confused.
  • Clean up VMs in the admin-monitoring project that leaked during upgrade; delete them.
  • Create a new VM and confirm that DNS and ssh work properly
  • Logs will be extremely noisy about policy deprecations and value checks; this is expected because OpenStack is poised between two different policy systems, and our existing policies are still (noisily) supported in Wallaby.
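The region check in the first bullet can be automated as a filter; anything it prints is a stray region to delete (e.g. with `openstack region delete RegionOne`). A sketch:

```shell
# Print regions other than the expected codfw1dev-r; empty output is healthy.
# Feed it:  openstack region list -f value -c Region
stray_regions() { grep -v '^codfw1dev-r$' || true; }
```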

Event Timeline

Restricted Application added a subscriber: Aklapper.
rook updated the task description.

Change 775278 had a related patch set uploaded (by Vivian Rook; author: Vivian Rook):

[operations/puppet@production] upgrade codfw1dev to wallaby

https://gerrit.wikimedia.org/r/775278

Change 775278 merged by Vivian Rook:

[operations/puppet@production] upgrade codfw1dev to wallaby

https://gerrit.wikimedia.org/r/775278

rook updated the task description.

cloudcontrol2003-dev:

  UNIT              LOAD   ACTIVE SUB    DESCRIPTION
● logrotate.service loaded failed failed Rotate log files

systemctl reset-failed cleared it

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+------+-----------+----------+
| user            | db   | host      | count(*) |
+-----------------+------+-----------+----------+
| event_scheduler | NULL | localhost |        1 |
| root            | NULL | localhost |        1 |
| system user     | NULL |           |       25 |
+-----------------+------+-----------+----------+
3 rows in set (0.001 sec)

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+------+-----------+----------+
| user            | db   | host      | count(*) |
+-----------------+------+-----------+----------+
| event_scheduler | NULL | localhost |        1 |
| root            | NULL | localhost |        1 |
| system user     | NULL |           |       25 |
+-----------------+------+-----------+----------+
3 rows in set (0.001 sec)

echo "show stat" | socat /var/run/haproxy/haproxy.sock stdio | grep DOWN showed nothing

cloudcontrol2004-dev:

  UNIT              LOAD   ACTIVE SUB    DESCRIPTION
● logrotate.service loaded failed failed Rotate log files

systemctl reset-failed cleared it

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+-----------------+---------------+----------+
| user            | db              | host          | count(*) |
+-----------------+-----------------+---------------+----------+
| barbican        | barbican        | 208.80.153.59 |        8 |
| cinder          | cinder          | 208.80.153.59 |       32 |
| designate       | designate       | 208.80.153.59 |       17 |
| event_scheduler | NULL            | localhost     |        1 |
| keystone        | keystone        | 208.80.153.59 |       33 |
| neutron         | neutron         | 208.80.153.59 |       17 |
| nova            | nova            | 208.80.153.59 |       80 |
| nova            | nova_api        | 208.80.153.59 |       26 |
| nova            | nova_cell0      | 208.80.153.59 |       26 |
| placement       | placement       | 208.80.153.59 |       24 |
| root            | NULL            | localhost     |        1 |
| system user     | NULL            |               |       25 |
| trove           | trove_codfw1dev | 208.80.153.59 |        1 |
+-----------------+-----------------+---------------+----------+
13 rows in set (0.005 sec)

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+-----------------+---------------+----------+
| user            | db              | host          | count(*) |
+-----------------+-----------------+---------------+----------+
| barbican        | barbican        | 208.80.153.59 |       16 |
| cinder          | cinder          | 208.80.153.59 |       27 |
| designate       | designate       | 208.80.153.59 |       17 |
| event_scheduler | NULL            | localhost     |        1 |
| keystone        | keystone        | 208.80.153.59 |       34 |
| neutron         | neutron         | 208.80.153.59 |        6 |
| nova            | nova            | 208.80.153.59 |      128 |
| nova            | nova_api        | 208.80.153.59 |       52 |
| nova            | nova_cell0      | 208.80.153.59 |       52 |
| placement       | placement       | 208.80.153.59 |       24 |
| root            | NULL            | localhost     |        1 |
| system user     | NULL            |               |       25 |
| trove           | trove_codfw1dev | 208.80.153.59 |        2 |
+-----------------+-----------------+---------------+----------+
13 rows in set (0.006 sec)

echo "show stat" | socat /var/run/haproxy/haproxy.sock stdio | grep DOWN showed nothing

cloudcontrol2001-dev.wikimedia.org:

added 'python3-eventlet=0.30.2-1' to the apt install, as apt was otherwise trying to install the wrong version and causing conflicts.
nova-manage db sync gave many warnings like:

  warnings.warn(deprecated_msg)
/usr/lib/python3/dist-packages/oslo_policy/policy.py:736: UserWarning: Policy "os_compute_api:os-attach-interfaces":"rule:admin_or_owner" was deprecated in 21.0.0 in favor of "os_compute_api:os-attach-interfaces:create":"rule:system_admin_or_owner". Reason:
Nova API policies are introducing new default roles with scope_type
capabilities. Old policies are deprecated and silently going to be ignored
in nova 23.0.0 release.
. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.


  UNIT                                                             LOAD   ACTIVE SUB    DESCRIPTION
● keystone_sync_keys_to_cloudcontrol2003-dev.wikimedia.org.service loaded failed failed Sync keys for Keystone fernet tokens to cloud>
● logrotate.service                                                loaded failed failed Rotate log files
● wmf_auto_restart_prometheus-rabbitmq-exporter.service            loaded failed failed Auto restart job: prometheus-rabbitmq-exporter

systemctl reset-failed cleared it

same before and after:

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+------+-----------+----------+
| user            | db   | host      | count(*) |
+-----------------+------+-----------+----------+
| event_scheduler | NULL | localhost |        1 |
| root            | NULL | localhost |        1 |
| system user     | NULL |           |       25 |
+-----------------+------+-----------+----------+
3 rows in set (0.002 sec)

mysql:root@localhost [(none)]> select user, db, SUBSTRING_INDEX(host,':',1) as host, count(*) from information_schema.processlist group by user, db, SUBSTRING_INDEX(host,':',1);
+-----------------+------+-----------+----------+
| user            | db   | host      | count(*) |
+-----------------+------+-----------+----------+
| event_scheduler | NULL | localhost |        1 |
| root            | NULL | localhost |        1 |
| system user     | NULL |           |       25 |
+-----------------+------+-----------+----------+

echo "show stat" | socat /var/run/haproxy/haproxy.sock stdio | grep DOWN finds nothing

ran neutron-db-manage upgrade heads after other cloudcontrol servers finished to fix neutron l3-agent-list-hosting-router cloudinstances2b-gw

cloudnet2002-dev.codfw.wmnet and cloudnet2004-dev.codfw.wmnet appeared to have run without issue.

rook updated the task description.

Change 775365 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] openstack::serverpackages::wallaby::bullseye: install python3-eventlet from bpo

https://gerrit.wikimedia.org/r/775365

Change 775365 merged by Vivian Rook:

[operations/puppet@production] openstack::serverpackages::wallaby::bullseye: python3-eventlet from nochange

https://gerrit.wikimedia.org/r/775365

cloudvirt2001-dev.codfw.wmnet:
removed python3-positional as it didn't seem to be installed and did not exist in the repo. Added python3-os-brick for dependency resolution.
https://gerrit.wikimedia.org/r/c/operations/puppet/+/775365
was added for python3-eventlet: the version could have been pinned manually (as on cloudcontrol) to do the install, but puppet would have undone that here, unlike on cloudcontrol. cloudcontrol can likely drop the pinned version on the next update now.

rook reopened this task as Open.
rook claimed this task.
rook updated the task description.

Neutron was detected to be down @ codfw1dev after the upgrade.

This yields:

arturo@nostromo:~ [spicerack] $ cookbook wmcs.openstack.network.tests -d codfw1dev
[..]
[2022-03-31 10:53:05] INFO: ---
[2022-03-31 10:53:05] INFO: --- passed tests: 2
[2022-03-31 10:53:05] INFO: --- failed tests: 17
[2022-03-31 10:53:05] INFO: --- total tests: 19

All the neutron agents are down:

root@cloudcontrol2001-dev:~# source novaenv.sh 
root@cloudcontrol2001-dev:~# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------------------+-------+----------------+---------------------------+
| 228b6925-6b3e-464f-9d23-70e250b928f2 | Linux bridge agent | cloudnet2004-dev  |                   | xxx   | True           | neutron-linuxbridge-agent |
| 2f9bd1b1-e51f-47d4-b527-ccfd6b062f8b | DHCP agent         | cloudnet2004-dev  | nova              | xxx   | True           | neutron-dhcp-agent        |
| 46573e30-a4f0-4424-84c5-e18d7a1d0902 | Linux bridge agent | cloudvirt2003-dev |                   | xxx   | True           | neutron-linuxbridge-agent |
| 4a0e32d8-f231-4e50-9636-414b3e44cd53 | L3 agent           | cloudnet2002-dev  | nova              | xxx   | True           | neutron-l3-agent          |
| 5584e5f9-1e37-430c-b1cd-a3be0a1f1c5b | L3 agent           | cloudnet2004-dev  | nova              | xxx   | True           | neutron-l3-agent          |
| 6be877da-0221-4d44-813a-7e77868a2364 | Metadata agent     | cloudnet2002-dev  |                   | xxx   | True           | neutron-metadata-agent    |
| 73206678-6394-4d0e-9668-2c6cdf28b595 | Linux bridge agent | cloudvirt2002-dev |                   | xxx   | True           | neutron-linuxbridge-agent |
| 865072bb-941d-4d89-bb39-282df7fe7110 | DHCP agent         | cloudnet2002-dev  | nova              | xxx   | True           | neutron-dhcp-agent        |
| 98f75540-ec40-4b32-be19-33dd3c24c5b5 | Linux bridge agent | cloudvirt2001-dev |                   | xxx   | True           | neutron-linuxbridge-agent |
| cf504178-7bfe-4972-b2c6-0872cb829f2a | Metadata agent     | cloudnet2004-dev  |                   | xxx   | True           | neutron-metadata-agent    |
| e4828358-0291-4d00-a493-a866183689ee | Linux bridge agent | cloudnet2002-dev  |                   | xxx   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-------------------+-------------------+-------+----------------+---------------------------+

This is usually caused by the neutron-rpc-server service being in trouble @ cloudcontrols.

root@cloudcontrol2001-dev:~# tail -1 /var/log/neutron/neutron-rpc-server.log
2022-03-31 11:33:35.335 2446864 WARNING oslo_db.sqlalchemy.engines [req-f619729c-d589-475e-957d-24025061c418 - - - - -] SQL connection failed. 10 attempts left.: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack.codfw1dev.wikimediacloud.org' ([Errno -3] Lookup timed out)")

So the neutron server can't contact the DB for whatever reason. This doesn't improve after a neutron server restart.
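A quick way to separate "the name doesn't resolve at all" from "only the service's own resolver path is broken" is to query through the glibc resolver with getent; if this succeeds while neutron still times out, the fault lies inside the service (e.g. the dnspython/eventlet combination suspected later in this task). A sketch:

```shell
# check_fqdn <name>: show IPv4/IPv6 resolution via the system (glibc) resolver.
check_fqdn() {
  getent ahostsv4 "$1" || echo "$1: no IPv4 answer"
  getent ahostsv6 "$1" || echo "$1: no IPv6 answer"
}

# usage:
#   check_fqdn openstack.codfw1dev.wikimediacloud.org
```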

All 3 cloudcontrols show the same mariadb connectivity problem:

aborrero@cumin2002:~$ sudo cumin cloudcontrol200*.wikimedia.org 'tail -1 /var/log/neutron/neutron-rpc-server.log'
3 hosts will be targeted:
cloudcontrol[2001,2003-2004]-dev.wikimedia.org
Ok to proceed on 3 hosts? Enter the number of affected hosts to confirm or "q" to quit 3
===== NODE GROUP =====                                                                                                                                       
(1) cloudcontrol2001-dev.wikimedia.org                                                                                                                       
----- OUTPUT of 'tail -1 /var/log...n-rpc-server.log' -----                                                                                                  
2022-03-31 11:43:21.821 2454403 WARNING oslo_db.sqlalchemy.engines [req-0f132073-40e1-4a3b-b8aa-e7799c8b0dbe - - - - -] SQL connection failed. 4 attempts left.: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack.codfw1dev.wikimediacloud.org' ([Errno -3] Lookup timed out)")
===== NODE GROUP =====                                                                                                                                       
(1) cloudcontrol2004-dev.wikimedia.org                                                                                                                       
----- OUTPUT of 'tail -1 /var/log...n-rpc-server.log' -----                                                                                                  
2022-03-31 11:43:21.464 623713 WARNING oslo_db.sqlalchemy.engines [req-451394bb-a88b-44c1-b63d-b2a801f47bba - - - - -] SQL connection failed. 4 attempts left.: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack.codfw1dev.wikimediacloud.org' ([Errno -3] Lookup timed out)")
===== NODE GROUP =====                                                                                                                                       
(1) cloudcontrol2003-dev.wikimedia.org                                                                                                                       
----- OUTPUT of 'tail -1 /var/log...n-rpc-server.log' -----                                                                                                  
2022-03-31 11:43:21.772 2573180 WARNING oslo_db.sqlalchemy.engines [req-3f176d3b-5744-4863-a7f1-47c4185d1921 - - - - -] SQL connection failed. 4 attempts left.: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack.codfw1dev.wikimediacloud.org' ([Errno -3] Lookup timed out)")
================       

Latest theory by @dcaro is name resolution intermixed with IPv6 connectivity issues.

We confirmed this by replacing the FQDN openstack.codfw1dev.wikimediacloud.org in the neutron config file with the raw IPv4 address. It then gets past connecting to the DB but fails on rabbitmq:

2022-03-31 12:15:13.544 2526134 ERROR oslo.messaging._drivers.impl_rabbit [req-9e8fcdf4-8cd0-452d-822a-7aae3053ae5b - - - - -] Connection failed: failed to resolve broker hostname (retrying in 0 seconds): OSError: failed to resolve broker hostname

Likely for the same DNS+IPv6 reason.

Again, @dcaro pointed at the combo of dnspython/eventlet as being troubled.

root@cloudcontrol2001-dev:~# apt-cache policy python3-eventlet python3-dnspython
python3-eventlet:
  Installed: 0.30.2-1
  Candidate: 0.30.2-1
  Version table:
 *** 0.30.2-1 1002
        500 http://mirrors.wikimedia.org/osbpo bullseye-wallaby-backports-nochange/main amd64 Packages
        100 /var/lib/dpkg/status
     0.26.1-8~wmf1 1001
       1001 http://apt.wikimedia.org/wikimedia bullseye-wikimedia/main amd64 Packages
     0.26.1-7+deb11u1 500
        500 http://mirrors.wikimedia.org/debian bullseye/main amd64 Packages
python3-dnspython:
  Installed: 2.0.0-1
  Candidate: 2.0.0-1
  Version table:
 *** 2.0.0-1 500
        500 http://mirrors.wikimedia.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status

Supported by online comments:

Change 775949 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] OpenStack nova: Fix the regex hack that validates new VM names

https://gerrit.wikimedia.org/r/775949

Change 775949 abandoned by Andrew Bogott:

[operations/puppet@production] OpenStack nova: Fix the regex hack that validates new VM names

Reason:

dropping in favor of a diff-based approach

https://gerrit.wikimedia.org/r/775949

Change 777873 had a related patch set uploaded (by Vivian Rook; author: Vivian Rook):

[operations/puppet@production] add chunkeddriver.py.patch to wallaby

https://gerrit.wikimedia.org/r/777873

Change 778660 had a related patch set uploaded (by Andrew Bogott; author: Andrew Bogott):

[operations/puppet@production] autoinstall: cloudbackup100[1,2] -> Bullseye

https://gerrit.wikimedia.org/r/778660

Change 778660 merged by Andrew Bogott:

[operations/puppet@production] autoinstall: cloudbackup100[1,2] -> Bullseye

https://gerrit.wikimedia.org/r/778660

Change 777873 merged by Vivian Rook:

[operations/puppet@production] add chunkeddriver.py.patch to wallaby

https://gerrit.wikimedia.org/r/777873