
Draft a plan for upgrading kubernetes machines to buster
Closed, Resolved (Public)

Description

Intro

Per https://wikitech.wikimedia.org/wiki/Operating_system_upgrade_policy, by mid 2020 we will begin the deprecation phase of Stretch, with a deadline for removal in early to mid 2021. This task is about identifying our various blockers and drafting a plan for the migration of our kubernetes infrastructure to buster.

Components

The major components are listed below. They are grouped into rather coarse groups, as there is little benefit in listing them one by one (e.g. kube-scheduler + kube-controller-manager etc.).

Calico/CNI

We still haven't upgraded to newer calico versions. This is an unknown; we need to investigate/test more before we have a verdict on a version for this component.

Kubernetes

This is the component expected to have the least friction. It's golang, statically built, and easy to share between our wikimedia repos.

Docker

Buster comes with docker 18.09.1+dfsg1-7.1+deb10u1. We probably want to run extensive tests before using it widely. We've been holding off from upgrading from our current docker version, as it has caused no issues up to now.
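If we decide to keep the current version around while testing, a minimal sketch of an apt pin could look like the following (the docker.io package name and the 18.06.* version pattern are assumptions; substitute whatever we actually run):

  # Hypothetical sketch: hold docker.io at the currently deployed version while
  # 18.09.1+dfsg1-7.1+deb10u1 from buster is being evaluated.
  printf '%s\n' \
      'Package: docker.io' \
      'Pin: version 18.06.*' \
      'Pin-Priority: 1001' \
    | sudo tee /etc/apt/preferences.d/docker-hold.pref
  # Check which version apt would now install.
  apt-cache policy docker.io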

Kernel

Buster comes with a newer kernel (4.19) that includes the patches listed at https://bugzilla.kernel.org/show_bug.cgi?id=198197, so that's great.

iptables

iptables in buster is 1.8.2, but we want to target at least 1.8.3, which is in buster-backports. The rationale for that decision is based on https://github.com/kubernetes/kubernetes/issues/71305. However, https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/#ensure-iptables-tooling-does-not-use-the-nftables-backend much more clearly says to switch to iptables-legacy. Both should be evaluated.
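For the iptables-legacy option, the kubeadm page linked above boils down to flipping the Debian alternatives on each buster node, roughly like this (kube-proxy and docker would then need to repopulate their rules, so this is a sketch of the mechanism, not a rollout plan):

  # Point the iptables tooling at the legacy (x_tables) backend instead of nftables.
  sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
  sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
  # Only relevant if the arptables/ebtables packages are installed.
  sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
  sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy

  # Verify which backend is now active; should report "(legacy)" rather than "(nf_tables)".
  sudo iptables -V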

Event Timeline

> Buster comes with docker 18.09.1+dfsg1-7.1+deb10u1. We probably want to run extensive tests before using it widely. We've been holding off from upgrading from our current docker version, as it has caused no issues up to now.

Note also that we ran into significant performance issues with the new Docker for CI jobs (as part of the Jessie->Stretch migration), and so downgraded it: T236675: Investigate Docker slowness between 18.06.2 and 18.09.7

Now that stretch-backports is end-of-lifed and Stretch is in LTS, there's an additional, officially supported 4.19 kernel in stretch (based on the 4.19 updates for Buster): https://lists.debian.org/debian-lts-announce/2020/08/msg00019.html

Maybe that's helpful to break the update into chunks and first move to 4.19 on stretch, and then to buster for good.
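If we go that route, the intermediate step would presumably look something like the following on a stretch node (the metapackage name is an assumption based on the usual Debian naming; check it against what the linux-4.19 upload from that announcement actually ships):

  # Sketch only: pull the officially supported 4.19 kernel onto a stretch node.
  sudo apt-get update
  sudo apt-get install linux-image-4.19-amd64   # assumed metapackage name, verify first
  sudo reboot
  # After the reboot, confirm the node is on 4.19.x.
  uname -r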

> Buster comes with docker 18.09.1+dfsg1-7.1+deb10u1. We probably want to run extensive tests before using it widely. We've been holding off from upgrading from our current docker version, as it has caused no issues up to now.
>
> Note also that we ran into significant performance issues with the new Docker for CI jobs (as part of the Jessie->Stretch migration), and so downgraded it: T236675: Investigate Docker slowness between 18.06.2 and 18.09.7

Kubernetes passes unconfined as a seccomp profile by default [1], so we probably aren't going to experience this.

[1] https://v1-15.docs.kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
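A quick way to spot-check that on a node (assuming the dockershim runtime and a kubelet from that era, which passes seccomp=unconfined to docker unless a profile is requested) would be something like:

  # List a kubelet-created container and look at the security options docker received.
  docker ps --filter 'name=k8s_' --format '{{.ID}}  {{.Names}}' | head -n 1
  docker inspect --format '{{ .HostConfig.SecurityOpt }}' <container-id>
  # Expected output is along the lines of: [seccomp=unconfined]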

> Now that stretch-backports is end-of-lifed and Stretch is in LTS, there's an additional, officially supported 4.19 kernel in stretch (based on the 4.19 updates for Buster): https://lists.debian.org/debian-lts-announce/2020/08/msg00019.html
>
> Maybe that's helpful to break the update into chunks and first move to 4.19 on stretch, and then to buster for good.

Wow, that's awesome! Thanks for pointing that out.

A couple of things that are happening on the ml-serve nodes:

  1. We are using docker.io as the package name for profile::docker::engine, and it seems that this leads to a failure: the docker service unit tries to start during the docker.io install and doesn't find the LVM volumes configured in /etc/docker/daemon.json:
Mar 18 07:41:00 ml-serve1001 dockerd[20957]: time="2021-03-18T07:41:00.152166085Z" level=warning msg="[graphdriver] WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release"
Mar 18 07:41:01 ml-serve1001 dockerd[20957]: Error starting daemon: error initializing graphdriver: open /dev/mapper/docker-data: no such file or directory

We should probably migrate to overlay2 IIUC, but in this case it seems to me that there is a race condition in puppet: docker.io gets installed before the lvm class in profile::docker::storage. Was docker-engine behaving differently?

  2. The calico packages are not on buster-wikimedia, and I see the point in the description about a possible upgrade. We can help with packaging/testing if needed :)

> A couple of things that are happening on the ml-serve nodes:
>
> 1. We are using docker.io as the package name for profile::docker::engine, and it seems that this leads to a failure: the docker service unit tries to start during the docker.io install and doesn't find the LVM volumes configured in /etc/docker/daemon.json:
> Mar 18 07:41:00 ml-serve1001 dockerd[20957]: time="2021-03-18T07:41:00.152166085Z" level=warning msg="[graphdriver] WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release"
> Mar 18 07:41:01 ml-serve1001 dockerd[20957]: Error starting daemon: error initializing graphdriver: open /dev/mapper/docker-data: no such file or directory
>
> We should probably migrate to overlay2 IIUC, but in this case it seems to me that there is a race condition in puppet: docker.io gets installed before the lvm class in profile::docker::storage. Was docker-engine behaving differently?

After merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/673199 this was solved. What was happening was that while the catalog did include the docker volume_group, it was never applied. It's not yet clear to me how the change fixed that; probably some dependency related to calico, since version 3 doesn't use docker to deploy the calico-node component directly, whereas 2.2.0 does. I am still chasing down that dependency.

As far as the devicemapper stuff goes, yes, we are going to switch to overlay2 eventually, but it's quite a bit of work to do so while maintaining backwards compatibility in our puppet code, with zero gain, which is why it's low priority.
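For reference, the eventual switch would essentially amount to pointing /etc/docker/daemon.json at overlay2, something like the sketch below (not our actual config; the existing devicemapper data under /var/lib/docker would have to be wiped or migrated first, which is part of the backwards-compatibility work mentioned above):

  # Minimal sketch: switch the storage driver and restart docker.
  printf '%s\n' '{' '    "storage-driver": "overlay2"' '}' \
    | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker
  # Should now print: overlay2
  docker info --format '{{ .Driver }}'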

> 2. The calico packages are not on buster-wikimedia, and I see the point in the description about a possible upgrade. We can help with packaging/testing if needed :)

They are now: sudo -E reprepro --ignore=wrongdistribution -C component/calico-future include buster-wikimedia calico_3.17.0-2_amd64.changes. Thankfully they are golang code, so the binaries work fine across distributions.
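For completeness, a quick way to confirm the import is visible in the target distribution (same reprepro environment as the import above):

  # List what buster-wikimedia now carries and filter for the calico packages.
  sudo -E reprepro list buster-wikimedia | grep calico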

Adding a 3): rsyslog needs a rebuild and an upload to buster-wikimedia.

We use the mmkubernetes rsyslog module to send pod logs to logstash, as the default debian build doesn't have it. You will need to rebuild that for buster and upload it. The repo is at https://gerrit.wikimedia.org/r/admin/repos/operations/debs/rsyslog,branches
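A rough sketch of that rebuild, assuming an sbuild-based workflow and the standard gerrit clone URL (both are assumptions, not the documented build procedure):

  # Clone URL assumed from the usual gerrit layout; the link above is the web UI.
  git clone https://gerrit.wikimedia.org/r/operations/debs/rsyslog
  cd rsyslog
  # Rebuild in a buster chroot; extra internal repos may be needed for build deps.
  sbuild --dist=buster --arch=amd64
  # Then import the resulting packages into buster-wikimedia with reprepro,
  # similar to what was done for calico above.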

> We use the mmkubernetes rsyslog module to send pod logs to logstash, as the default debian build doesn't have it. You will need to rebuild that for buster and upload it. The repo is at https://gerrit.wikimedia.org/r/admin/repos/operations/debs/rsyslog,branches

Created T277739 to get consensus about what to do :)

We skipped buster with T300744.