Apply the RBAC for "pod" or "namespace" labels from
https://review.opendev.org/c/zuul/nodepool/+/953479/1/doc/source/kubernetes.rst#60 to our cluster via tofu.
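For context, the RBAC rules could in principle be expressed declaratively with the tofu kubernetes provider. A sketch only, under the assumption that Nodepool authenticates as a "nodepool" service account in a "nodepool" namespace (the full rule set comes from the linked nodepool docs):

```hcl
# Sketch only: assumes a working kubernetes provider and that Nodepool
# uses the "nodepool" service account in the "nodepool" namespace.
resource "kubernetes_cluster_role" "nodepool" {
  metadata {
    name = "nodepool"
  }

  rule {
    api_groups = [""]
    resources = [
      "pods", "pods/exec", "pods/log", "pods/portforward",
      "services", "endpoints", "configmaps", "secrets",
    ]
    verbs = ["get", "list", "create", "delete", "patch", "update", "watch"]
  }

  rule {
    api_groups = [""]
    resources  = ["namespaces", "serviceaccounts"]
    verbs      = ["get", "list", "create", "delete"]
  }
}

resource "kubernetes_cluster_role_binding" "nodepool" {
  metadata {
    name = "nodepool"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.nodepool.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = "nodepool"
    namespace = "nodepool"
  }
}
```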
Managed to create the role and binding in the main (project Zuul) cluster :)
zuuldevopsbot@zuul-bastion-01:~$ kubectl describe clusterrole nodepool
Name:         nodepool
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                               Non-Resource URLs  Resource Names  Verbs
  ---------                               -----------------  --------------  -----
  configmaps                              []                 []              [get list create delete patch update watch]
  crontabs                                []                 []              [get list create delete patch update watch]
  deployments                             []                 []              [get list create delete patch update watch]
  endpoints                               []                 []              [get list create delete patch update watch]
  jobs                                    []                 []              [get list create delete patch update watch]
  pods/exec                               []                 []              [get list create delete patch update watch]
  pods/log                                []                 []              [get list create delete patch update watch]
  pods/portforward                        []                 []              [get list create delete patch update watch]
  pods                                    []                 []              [get list create delete patch update watch]
  replicasets                             []                 []              [get list create delete patch update watch]
  secrets                                 []                 []              [get list create delete patch update watch]
  services                                []                 []              [get list create delete patch update watch]
  namespaces                              []                 []              [get list create delete]
  serviceaccounts                         []                 []              [get list create delete]
  rolebindings.rbac.authorization.k8s.io  []                 []              [get list create delete]
  roles.rbac.authorization.k8s.io         []                 []              [get list create delete]
In the end I needed to resort to provisioners, which is not ideal. My current draft is here: https://gitlab.wikimedia.org/repos/releng/zuul/tofu-provisioning/-/merge_requests/33
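The provisioner approach might look roughly like the following. This is a sketch, not the code from the MR; the manifest path and kubeconfig handling are assumptions:

```hcl
# Sketch only: applies the RBAC manifest with kubectl from wherever tofu
# runs. The manifest path and var.kubeconfig_path are assumptions; see
# the linked MR for the real implementation.
resource "null_resource" "nodepool_rbac" {
  triggers = {
    # Re-run the provisioner whenever the manifest changes.
    manifest_sha1 = filesha1("${path.module}/files/nodepool-rbac.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl --kubeconfig=${var.kubeconfig_path} apply -f ${path.module}/files/nodepool-rbac.yaml"
  }
}
```

The usual caveat applies: provisioners are invisible to the plan/diff cycle, which is part of why this is "not ideal".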
I'm going to add a README following the pattern of the other modules and then promote the MR.
While playing around to learn more about the keystone-webhook-authenticator I managed to find a way to make the Kubernetes provider work.
provider "kubernetes" {
  host                   = "https://[${module.haproxy.haproxy_pooled[0].access_ip_v6}]:6443"
  cluster_ca_certificate = module.kubernetes.active_cluster.kubeconfig.cluster_ca_certificate
  client_certificate     = module.kubernetes.active_cluster.kubeconfig.client_certificate
  client_key             = module.kubernetes.active_cluster.kubeconfig.client_key
  tls_server_name        = "127.0.0.1"
}

data "kubernetes_all_namespaces" "allns" {}

output "all-ns" {
  value = data.kubernetes_all_namespaces.allns.namespaces
}
somebody@deployer:/srv/app$ tofu output all-ns
tolist([
  "default",
  "kube-node-lease",
  "kube-public",
  "kube-system",
])
The magic bits:
- provider.kubernetes.host targets the public IPv6 address of the haproxy directly. This requires IPv6 connectivity from wherever tofu runs, which I have not yet verified for the Digital Ocean GitLab runner where this normally executes. If necessary, we could work around the IPv6 requirement by adding an ssh tunnel based SOCKS5 proxy to the setup and pointing provider.kubernetes.proxy_url at it (e.g. socks5://localhost:1080). That would also let us switch to the k8s-api.svc.zuul.eqiad1.wikimedia.cloud service name that the haproxy module manages.
- provider.kubernetes.tls_server_name is the really magic part. It changes the SNI hostname sent on the connection, which doesn't actually matter in our environment, but it also changes the hostname used to validate the server's TLS certificate, and that is what makes this work.
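If we went the SOCKS5 route described above, the provider block could look something like this. A sketch only: the tunnel on localhost:1080 is an assumption, and whether tls_server_name is still needed depends on the names in the API server's certificate:

```hcl
# Sketch only: assumes a SOCKS5 tunnel is already running locally,
# e.g. `ssh -D 1080 zuul-bastion-01`.
provider "kubernetes" {
  host                   = "https://k8s-api.svc.zuul.eqiad1.wikimedia.cloud:6443"
  proxy_url              = "socks5://localhost:1080"
  cluster_ca_certificate = module.kubernetes.active_cluster.kubeconfig.cluster_ca_certificate
  client_certificate     = module.kubernetes.active_cluster.kubeconfig.client_certificate
  client_key             = module.kubernetes.active_cluster.kubeconfig.client_key
  # tls_server_name may no longer be needed if the server certificate
  # includes the service hostname; if not, it can stay set as before.
}
```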
bd808 opened https://gitlab.wikimedia.org/repos/releng/zuul/tofu-provisioning/-/merge_requests/34
ssh and kubernetes updates
jnuche merged https://gitlab.wikimedia.org/repos/releng/zuul/tofu-provisioning/-/merge_requests/34
ssh and kubernetes updates
bd808 merged https://gitlab.wikimedia.org/repos/releng/zuul/tofu-provisioning/-/merge_requests/33
zuul: add cluster role and binding for Zuul Nodepool