For the new k8s service in Toolforge, evaluate how we will be doing ingress.
Some related docs:
https://kubernetes.io/docs/concepts/services-networking/ingress/
Here is a preliminary proposal for the team to evaluate.
The ingress controller does simple routing based on the Host: header in the HTTP request.
Concrete details and things that we will need to figure out:
The config [0] doesn't look very ugly:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  tls:
  - hosts:
    - "*.toolforge.org"
    secretName: toolforge-wildcard-secret
  rules:
  - host: tool1.toolforge.org
    http:
      paths:
      - backend:
          serviceName: tool1
          servicePort: 80
  - host: tool2.toolforge.org
    http:
      paths:
      - backend:
          serviceName: tool2
          servicePort: 80
  - http:
      paths:
      - backend:
          serviceName: proper-error-message
          servicePort: 80
The TLS secret [1]:
apiVersion: v1
kind: Secret
metadata:
  name: toolforge-wildcard-secret
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
[0] https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
[1] https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
I really want to get to <tool>.toolforge.org (T125589: Allow each tool to have its own subdomain for browser sandbox/cookie isolation), but we will still need to support tools.wmflabs.org/<tool> somehow. This might be something that we continue to do in a layer in front of the Kubernetes ingress though.
We also need to consider whether <tool>.toolforge.org will only be an option for Kubernetes hosted webservices, or if the improved tool isolation will also be available for Grid Engine hosted webservices.
For these reasons, I have a feeling that we will need to have a second layer of reverse proxy in front of the Kubernetes ingress, and that that proxy will also need to be able to inspect or be informed of the state of the Kubernetes cluster at some level.
- and related: should the ingress setup be managed dynamically? (i.e., a maintain-kubeusers.py-type script)
I'm not sure I understand this question. Can you elaborate on the issue you are thinking about here?
The ingress we setup should definitely work with Service objects associated with Deployments/ReplicaSets/Pods. This is basically what our kube2proxy.py hack does today. You can run kubectl get svc --all-namespaces on tools-k8s-master-01 to get an idea about why this has to be a dynamic thing. Piping that through wc -l shows 679 Services on the current k8s cluster, and these can come and go rapidly.
- I still don't have a concrete proposal on how to generate the DNS setup for $toolname.toolforge.org. I guess using the designate API should work.
tools.wmflabs.org and *.toolforge.org can both have A/AAAA records pointing to a single front proxy (which as noted above will probably be another layer in front of a general k8s ingress). The Toolforge k8s cluster does not need to host webservices appearing at arbitrary hostnames/domains so I think we can ignore DNS publication based on Service objects in Kubernetes.
- we could evaluate having a bunch of floating IP addresses and use them for ingress/egress. I hope this is not a big deal to setup.
I think a "normal" setup for an on-prem Kubernetes cluster would be to deploy the Ingress Controller (nginx, traefik, ...) as a DaemonSet using a NodePort Service. Then some additional "edge service" is deployed outside of the Kubernetes cluster (HAproxy, Octavia, our existing Dynamicproxy system might work here too) to hide the unprivileged port from external clients.
Using a pool of IPs is more like the approach of MetalLB. This is a neat thing too, but I think its more than we need for Toolforge. We have a limited use case of exposing HTTP services to the public. We do not need to build out for the complexity of exposing arbitrary ports or protocols to the public.
- I would try creating all the ingress stuff in an ingress namespace, or at least try to share the setup for all the pods. Not sure if this is possible though.
The ingress service itself should definitely be in an isolated namespace. Most of the tutorials I have seen show doing that. RBAC and service accounts are generally involved as well.
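As a rough sketch (the names are just examples), the isolated namespace and its service account would be something like:

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress

The controller's ClusterRole/ClusterRoleBinding would then be bound to that service account so it can watch Ingress, Service and Endpoints objects cluster-wide.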
Side note: we need to update the PodSecurityPolicy in order to be able to deploy the ingress controller:
/var/log/pods/kube-system_kube-controller-manager-toolsbeta-test-k8s-master-1_389fff2e2e6c803f828653a4f18c838f/kube-controller-manager/0.log:{"log":"I0722 11:22:12.775033 1 event.go:258] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"nginx-ingress\", Name:\"nginx-ingress-59c8769f89\", UID:\"e9ee5c80-a74f-40ac-8286-5ade8d6b8c33\", APIVersion:\"apps/v1\", ResourceVersion:\"996756\", FieldPath:\"\"}): type: 'Warning' reason: 'FailedCreate' Error creating: pods \"nginx-ingress-59c8769f89-\" is forbidden: unable to validate against any pod security policy: []\n","stream":"stderr","time":"2019-07-22T11:22:12.775535696Z"}
This is T227290: Design and document how to integrate the new Toolforge k8s cluster with PodSecurityPolicy
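For reference, the kind of PodSecurityPolicy the controller needs would look roughly like the sketch below; the name and the exact capability list are assumptions for illustration, not the contents of the actual patch:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: nginx-ingress-psp        # hypothetical name
spec:
  privileged: false
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - NET_BIND_SERVICE             # only needed if the controller binds low ports
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir

The PSP only takes effect once an RBAC Role granting the "use" verb on it is bound to the controller's service account.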
Change 524759 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: add PSP for nginx-ingress
We probably need to finish T228660: Toolforge: new k8s: issues with the initial coredns setup before continuing in this one.
Change 524809 had a related patch set uploaded (by Bstorm; owner: Bstorm):
[operations/puppet@production] tooforge: actually place the default-psp file on the master server
Change 524809 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] tooforge: actually place the default-psp file on the master server
Change 525074 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[labs/private@master] secrets: toolforge: add default k8s nginx-ingress key pair
Change 525074 merged by Arturo Borrero Gonzalez:
[labs/private@master] secrets: toolforge: add default k8s nginx-ingress key pair
I agree with all your comments @bd808.
My next question/clarification is: the SSL termination for both tools.wmflabs.org and *.toolforge.org is in the external proxy, right? i.e., the kubernetes ingress will see HTTP-only traffic?
That is true with the current setup (termination is at the proxy). If this is a question, then this explains more about what I asked on the patch :)
Client facing TLS would terminate at the front proxy. I think it would be great if we also had TLS between the front proxy and the Kubernetes ingress (and between the ingress and the Pods), but I think we can skip that too if it is hard to work into the initial implementation.
It may be worth it to set up a script that can be used by puppet execs to create kubernetes certs via the certificates API. However, I wouldn't want to block all this on that. The maintain-kubeusers script update will be doing just that in python, and some pieces of that could be abstracted to a script that puppet runs when a file doesn't exist, so here's mentioning T228499 to refer back to this. It seems like it wouldn't be terribly hard to push such things back into the ingress from there once we've got that piece sorted out.
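For the certificates API route, the object such a script would create is roughly this; the name is made up and the request field would carry a real base64-encoded CSR:

apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: nginx-ingress-tls        # hypothetical name
spec:
  request: <base64-encoded PKCS#10 CSR>
  usages:
  - digital signature
  - key encipherment
  - server auth

After kubectl certificate approve nginx-ingress-tls, the signed cert shows up in the object's status.certificate field, ready to be dropped into a kubernetes.io/tls Secret.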
Let me know if I can help with whatever is not working here with the ingress. Is it still some weird routing thing?
Ok, this is my latest attempt to deploy nginx-ingress:
root@toolsbeta-test-k8s-master-2:~# kubectl logs kube-controller-manager-toolsbeta-test-k8s-master-1 -n kube-system | grep nginx
I0730 10:30:42.760807       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"nginx-ingress", Name:"nginx-ingress-768f66f848", UID:"e9b882f1-a02c-428d-ad3f-703c35c6f3b6", APIVersion:"apps/v1", ResourceVersion:"2255373", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-768f66f848-" is forbidden: error looking up service account nginx-ingress/nginx-ingress: serviceaccount "nginx-ingress" not found
There must be an issue somewhere.
Are the service accounts wrong?
root@toolsbeta-test-k8s-master-1:~# kubectl get serviceaccounts -n nginx-ingress
NAME      SECRETS   AGE
default   1         4m14s
root@toolsbeta-test-k8s-master-1:~# kubectl get serviceaccounts -n default
NAME            SECRETS   AGE
default         1         5d14h
nginx-ingress   1         4m28s
so something is wrong in my patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/524759
That was it. Now at least the pod starts with errors :-)
root@toolsbeta-test-k8s-master-1:~# kubectl logs nginx-ingress-768f66f848-c8zq7 -n nginx-ingress
I0730 10:41:13.918550       1 main.go:155] Starting NGINX Ingress controller Version=edge GitCommit=e2337bb7
F0730 10:41:13.925725       1 main.go:269] A TLS cert and key for the default server is not found
So it seems I didn't properly disable TLS.
Ok, so nginx-ingress requires TLS to be present by default even if we don't use it. I'm using the default self-signed certs from the upstream example.
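If I read the upstream example right, the secret it ships is essentially this (name and namespace as in the nginxinc install docs, with their self-signed cert/key as placeholder data):

apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64 self-signed cert>
  tls.key: <base64 key>

The controller is pointed at it via something like its -default-server-tls-secret argument, so its default server can terminate TLS even though we never send it HTTPS traffic.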
Change 524759 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: k8s: add nginx-ingress configuration.
A couple of questions for @Bstorm and @bd808:
I'm evaluating different setups to communicate the outside world with the ingress mechanism and these questions could help decide different approaches.
The existing reverse proxy layers for both Toolforge and Cloud VPS hide the requesting user's IP address from the proxied webservices, so the Kubernetes ingress will only see the IP address of the front proxy. I don't think there is any strong reason to pass the address of the front proxy through to the Kubernetes hosted webservices.
It should be fairly simple to add in an X-Forwarded-For (XFF) header in both reverse proxy layers if we find a compelling need in the future. Today the lack of an XFF header is a feature of the front proxies in that it prevents collection and storage of client IP addresses by Cloud Services hosted webservices. The only downside that I am aware of in the current practice is that it makes blocking poorly behaved clients by IP-range impossible at the individual webservice level. This problem is orthogonal to this task however, so it need not be considered in the Kubernetes ingress solution.
- any opinions on having nginx-ingress directly listening on 80/tcp on the k8s nodes? i.e., listening on a privileged port.
If I remember the things I have read about Kubernetes ingress correctly, the main disadvantage of using a privileged port is that the pod running the ingress would then need to have elevated rights. If we can avoid that it seems like one more small layer of protection against abuse.
After re-reading (ok, skimming) https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ it seems likely that you are trying to evaluate the pros and cons of the NodePort service and hostNetwork options. Based on our needs described in T228500#5350594 to continue having a front proxy which actually terminates the client TLS connections and performs other routing for Grid Engine webservices, I think that the self-provisioned edge configuration which is a variant of the NodePort service seems reasonable. Traffic would then flow something like: client -> {internet} -> [front proxy] -> {tools project network} -> [nginx ingress] -> {k8s internal network} -> [pod]. Inside the pod, the actual webservice should end up seeing headers added at the front proxy helping it understand the hostname it is exposed on (for example tools.wmflabs.org or my-cool-tool.toolforge.org) and that TLS is present (X-Original-URI & X-Forwarded-Proto). The nginx ingress should just pass these headers through to the pod's http server without change.
More out of scope thoughts... if we go with this setup we should consider adding something like lua-resty-upstream-healthcheck to the front proxy to give it active health checks for the pool of ingress nodes rather than using the normal passive checks or (gross!) manually pooling and depooling each NodePort host.
I'll vote for anything that avoids doing manual pooling and depooling. Also, the NodePort setup like you are describing here is used by paws, and it is kind of terrible because of, specifically, this problem (in fact, there's a manual entry for a single node instead of any kind of roundrobin or whatever in paws). Figuring out that weird detail is something we end up doing all over again every time we have to change anything in paws. Let's make a subtask to add that to the proxy--and if we do, maybe in the paws proxy as well as we figure out how to manage ingresses!
I also second the preference for NodePort over hostNetwork for Toolforge webservices.
Ok, I think I have a working setup that may (or may not) be headed in the right direction. First iteration anyway.
Let me try to explain it and show how to configure it as well.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
  namespace: nginx-ingress
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 30000
    port: 30000
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  rules:
  - host: hello.toolsbeta.wmflabs.org
    http:
      paths:
      - backend:
          serviceName: hello-svc
          servicePort: 8081
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello-node
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
Example usage of this setup:
aborrero@toolsbeta-test-k8s-lb-01:~ $ curl hello.toolsbeta.wmflabs.org
Hello World!
Awesome! I suspect this implies a fair bit of work on the webservice/toollabs side, since we'll need to kubectl edit/PATCH/kubectl apply the resource as new webservices register/deregister. For example, the submission from webservice could annotate the launched service with the needed tool info, to be read by a service that maintains the ingress config (and then the proxy just needs to "know" what's running on k8s one way or another). The automated updating of the ingress is probably another subtask.
If you have this working, I'm not sure there would be any advantage to the other available options like traefik (other than smaller containers). The other options that don't use "load balancer" service types don't seem to offer much beyond this. It always requires either a service that maintains the ingress config or GCE/AWS type load balancer object and Istio Envoy.
I do have a question on it: What namespace does it run in, and do we need to whitelist the namespace in the docker registry restrictions or to construct a container in our internal registry? The latter might be the better option. What do you think @aborrero?
You refer to the ingress, right? The ingress itself is running in a specific namespace called nginx-ingress. This is using a docker registry nginx container from upstream, but we could cache it in our own registry or whatever (more or less the same for the other k8s components).
The routing information in (5) can be namespaced as required I think. I should do some extended tests (more iterations on this) to be able to have any strong opinion.
Yup! If we can cache it in our registry, that would probably be best (and then test). I did that with the pause container so far (following what we do now). Only the kube-system namespace is exempt from the controller (which is still not deployed on the test cluster) without changes.
Hopefully the Ingress object for a given tool can live in that tool's namespace. The current logic of webservice when starting a job on the k8s backend is to create a Deployment and a Service in the tool's namespace. Adding an Ingress to this set of created/destroyed objects will be trivially easy.
note to self, for next time I get into this again (hopefully next week):
Change 527541 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: ingress: nginx-ingress listen on 8082/tcp
Change 527542 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: ingress: add frontend service
Change 527544 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: haproxy: add proxy redirection for nginx-ingress
I don't think we'd want that. We'd limit ourselves to a very small set of possible tools (because each one would need a different nodeport -- or host port!) vs. the way we are doing it here.

The idea here is that ingress objects handle routing for the tools cluster and are quite deliberately not in the direct control of users, since they are shared objects. Because the ingress recognizes the name coming in and routes it to the right service name (thanks to CoreDNS), there wouldn't be any real issue with ports. Every tool's Service object can use the same port at the service level, and the ingress object(s) get nodeport(s) to the outside. An upstream proxy just needs those node ports and a few k8s worker nodes to watch; everything else happens on the backend. If webservice can add the services to and remove them from the ingress object, that would make it all work. Basically: if they have a Service, they don't need an Ingress.

We might find ourselves needing to use node-affinity for ingresses, though, to keep them on their own nodes so that user pods don't kill them (sketched below). We can't do this like gridengine, because nodeport places the port on every node, as if the cluster were a single host. Using the host network would basically introduce all the limitations of gridengine to k8s instead of phasing them out over time.
Hrm, and as I type that it might be simpler on the security model if ingresses cannot be affected by user actions at all unless they are from their own namespace. The math might not be good though. I need to think about that.
Also, there might be some mechanism I'm missing here in general when it comes to routing this. NodePort will generally use one of the ~2,768 ports in the default 30000-32767 range for each service using it.
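On the node-affinity idea above, the relevant fragment of the ingress controller's Deployment/DaemonSet pod template would be along these lines; the label and taint names here are invented for illustration:

spec:
  template:
    spec:
      nodeSelector:
        toolforge.org/ingressnode: "true"   # hypothetical label on dedicated ingress nodes
      tolerations:
      - key: toolforge.org/ingressnode      # hypothetical matching taint
        operator: Equal
        value: "true"
        effect: NoSchedule

Dedicated ingress nodes would carry that label plus the matching NoSchedule taint so that user pods land elsewhere.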
To put my mind at ease that we aren't going to end up limited to under 3000 tools, and because I want to understand a bit better, which one of these are you currently testing @aborrero? https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md
It seems the answers to many of these questions change depending on which nginx controller we are using.
We are using nginxinc/kubernetes-ingress because it is the default one if you follow the nginx-ingress docs.
Fair enough. I'm concerned we may need to change to the community-supported one at some point (which doesn't need to be now since there are bound to be similarities). Once this is working, we can try stuff and will know more. If the community supported one supports dynamic changes of endpoints (as is suggested by that chart), it may be a better fit for many reasons.
I did a little research to make sure I'm not being unhelpful on this ticket by commenting (and yes, some of my comments were probably useless).
In the course of that, I do think that we strongly should be using the community supported one (https://kubernetes.github.io/ingress-nginx/) because the nginxinc ingress is severely weakened in that it requires resetting the controller every time an ingress resource is changed. They enable dynamic reconfiguration only for the enterprise product (see my github link above). The Kubernetes community supported version implements dynamic updating of backends via lua -- https://kubernetes.github.io/ingress-nginx/how-it-works/#nginx-configuration
That part is absolutely essential for us to succeed, I think, and we should shift gears before going too much deeper with the NGINX Inc. supported version. Also, I think that we can totally create the ingresses with webservice just like @bd808 was saying with the community supported version. I'll make sure that Toolforge users in the RBAC can control ingress resources.
Obviously, please let me know if I'm totally wrong and spewing raving nonsense 🙂
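For the RBAC part, a per-namespace Role/RoleBinding along these lines would probably do; all names here are hypothetical, not what maintain-kubeusers actually writes:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tool-ingress
  namespace: tool-example        # hypothetical tool namespace
rules:
- apiGroups: ["networking.k8s.io", "extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tool-ingress
  namespace: tool-example
subjects:
- kind: User
  name: example                  # hypothetical tool user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tool-ingress
  apiGroup: rbac.authorization.k8s.io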
This is doable with a slightly different config
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  tls:
  - hosts:
    - "*.toolforge.org"
    secretName: toolforge-wildcard-secret
  rules:
  - host: tools.toolforge.org
    http:
      paths:
      - path: /tool1
        backend:
          serviceName: tool1
          servicePort: 80
      - path: /tool2
        backend:
          serviceName: tool2
          servicePort: 80
( Some other proxy still needs to be in front figuring out if the request should be sent to k8s or grid proxy though.)
Great to see this moving forward, btw!
Change 539087 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: update nginx-ingress configuration
Change 527541 abandoned by Arturo Borrero Gonzalez:
toolforge: k8s: ingress: nginx-ingress listen on 8082/tcp
Reason:
Doing https://gerrit.wikimedia.org/r/c/operations/puppet/+/539087 instead
Change 527542 abandoned by Arturo Borrero Gonzalez:
toolforge: k8s: ingress: add frontend service
Reason:
Doing https://gerrit.wikimedia.org/r/c/operations/puppet/+/539087 instead
Note to self:
We may need a simple default web service pod (a simple web page or whatever) always running on k8s, otherwise the external haproxy in front of the cluster won't see any backend. This could be a default route too, one that also works in case someone tries a URL not present in our ingress routing setup.
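A sketch of such a fallback, assuming it lives next to the controller in the nginx-ingress namespace (names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fallback-homepage
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fallback-homepage
  template:
    metadata:
      labels:
        app: fallback-homepage
    spec:
      containers:
      - name: nginx
        image: nginx:stable      # placeholder; would serve a static error/landing page
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: fallback-homepage
  namespace: nginx-ingress
spec:
  selector:
    app: fallback-homepage
  ports:
  - port: 80
    targetPort: 80

This Service could then be referenced either as the controller's default backend or from a catch-all rule like the proper-error-message one in the original proposal, so unknown hostnames get a sensible page instead of a connection error.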
Some thoughts about webservice creating the ingress object.
I see a problem with letting users create their own ingress routing information, because we would then have to enforce that they don't 'hijack' traffic meant for other tools (or completely de-configure the ingress).
I don't know a way to enforce that you are changing the routing specification only for your own URL (be it hostname or path).
There seem to be some mechanisms to allow merging ingresses for the same host (https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types), but it is not clear to me.
Specifically I have this question. Given the following ingress config:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tool1-ingress
  namespace: tool1
spec:
  rules:
  - host: tool1.toolsbeta.wmflabs.org
    http:
      paths:
      - backend:
          serviceName: tool1-svc
          servicePort: 8081
How do we prevent tool2 maintainer from adding this config?:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tool2-ingress
  namespace: tool2
spec:
  rules:
  - host: tool1.toolsbeta.wmflabs.org   # <---- wrong!
    http:
      paths:
      - backend:
          serviceName: tool2-svc
          servicePort: 8081
which the API may accept, but which may result in the ingress not working properly for either tool1 or tool2.
The alternative seems to be to integrate ingress object maintenance in our own privileged script, away from user control.
If we cannot enforce standards like that, running a separate service might be an OK option. It's easy to not give users the right to create ingress and networkpolicy objects. We can namespace their ingresses, but restricting the rules they can make is tougher. webservice is very restrictive in the options it offers right now (it defines the objects for the user), so that is one fix, but we allow access directly to the apiserver (so they can make anything that way).
It's also possible that another webhook admission controller might be able to check the URLs. Actually, I know we can do that. The one I wrote so far checks pods. A little modification will produce one that checks ingresses. That might be the easiest way. I kind of like that because then we could basically ignore this issue for now.
I wonder if we could set up a CRD (a custom object stored in etcd that just describes something) that describes an ingress, which users can interact with and which another service reads and makes ingresses from (but automatically provides the URL based on the namespace -- which I'm not even sure we can do). That would abstract the users away from it while still giving them the ability to maintain it. Honestly, that pattern might really be better for a few things in the future, but I'd rather be researching and testing that after the upgrade.
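For reference, a very rough sketch of what such a CRD could look like; the group, kind and resource names are all invented for illustration:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: toolingresses.toolforge.example   # hypothetical group/name
spec:
  group: toolforge.example
  scope: Namespaced
  names:
    kind: ToolIngress
    plural: toolingresses
    singular: toolingress
  versions:
  - name: v1alpha1
    served: true
    storage: true

A tool would then create a tiny ToolIngress object in its own namespace (say, just naming a Service and port), and a controller we run would translate that into a real Ingress with the hostname derived from the namespace.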
I mean, we can hack the ingress controller (I'm mostly saying that to worry @bd808). However, I do kind of wonder if there isn't something in there that could do a check with a small modification?
If we haven't solved this by November, I can pester people at kubecon with it, but I really hope we have!
The custom admission controller sounds nice, also the CRD thingy.
It seems we have 3 options:
What do you think would be easiest/cheapest? and more future-proof? My knowledge for doing either 2) or 3) would be really limited.
If we open up the ingress object in the API to allow end-user self-management (i.e., option 2), we may allow more complex use cases on Toolforge (users cooking their own complex ingress stuff), which may be interesting in the long run.
Change 539087 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: update nginx-ingress configuration
Change 527544 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: k8s: haproxy: add proxy redirection for nginx-ingress
According to https://github.com/kubernetes/ingress-nginx/issues/875, ingress-nginx can be a DaemonSet if it's a small cluster (<10 nodes). Once it's beyond that, a ReplicaSet is the way to go.
Change 539583 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: ingress: make the nginx-ingress's nginx listen in 8080/tcp
Change 539583 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: k8s: ingress: make the nginx-ingress's nginx listen in 8080/tcp
I only just noticed that the docs suggest the validating webhook mechanism for handling the situation we have up for discussion: https://kubernetes.github.io/ingress-nginx/deploy/validating-webhook/
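For reference, registering such a webhook against ingress objects looks roughly like this; the webhook name, service, namespace and path are placeholders:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-admission
webhooks:
- name: ingressadmission.tools.wmflabs.org   # placeholder qualified name
  failurePolicy: Fail
  clientConfig:
    service:
      name: ingress-admission
      namespace: ingress-admission
      path: /
    caBundle: <base64 CA cert>
  rules:
  - apiGroups: ["networking.k8s.io", "extensions"]
    apiVersions: ["v1beta1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]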
I'll have a proof of concept code ready for our discussion. It's really easy since I already did most of the work for registry-checking pods.
I took the code I used for the registry webhook and adapted it. I still need to change the yaml/service definition, establish the golang module versions and request a repo for it, but the rest of the code is just a copy-paste since this is the actual logic. It would likely need to fail requests that include a "backend" argument, so that people cannot get around the rule-checking by using that optional field in the ingress, I think.
package server

import (
	"encoding/json"
	"fmt"
	"regexp"

	"github.com/sirupsen/logrus"
	"k8s.io/api/admission/v1beta1"
	netv1beta1 "k8s.io/api/networking/v1beta1"
	"k8s.io/apimachinery/pkg/apis/meta/v1"
)

// IngressAdmission type is where the project is stored and the handler method is linked
type IngressAdmission struct {
	Project string
}

// HandleAdmission is the logic of the whole webhook, really. This is where
// the decision to allow a Kubernetes ingress update or create or not takes place.
func (r *IngressAdmission) HandleAdmission(review *v1beta1.AdmissionReview) error {
	// logrus.Debugln(review.Request)
	req := review.Request
	var ingress netv1beta1.Ingress
	if err := json.Unmarshal(req.Object.Raw, &ingress); err != nil {
		logrus.Errorf("Could not unmarshal raw object: %v", err)
		review.Response = &v1beta1.AdmissionResponse{
			Result: &v1.Status{
				Message: err.Error(),
			},
		}
		return nil
	}

	logrus.Debugf("AdmissionReview for Kind=%v, Namespace=%v Name=%v (%v) UID=%v patchOperation=%v UserInfo=%v",
		req.Kind, req.Namespace, req.Name, ingress.Name, req.UID, req.Operation, req.UserInfo)

	matchstr := fmt.Sprintf("%s\\.%s\\.toolforge\\.org", req.Namespace[5:], r.Project)
	hostre := regexp.MustCompile(matchstr)

	for i := 0; i < len(ingress.Spec.Rules); i++ {
		rule := &ingress.Spec.Rules[i]
		if !hostre.MatchString(rule.Host) && req.Namespace != "kube-system" {
			logrus.Errorf(
				"Attempt to incorrect host name in Ingress: %v, namespace: %v",
				rule.Host,
				req.Namespace,
			)
			review.Response = &v1beta1.AdmissionResponse{
				Allowed: false,
				Result: &v1.Status{
					Message: "Ingress host must be <toolname>.<project eg. tools>.toolforge.org",
				},
			}
			return nil
		}
		logrus.Debugf("Found ingress host: %v", rule.Host)
	}

	review.Response = &v1beta1.AdmissionResponse{
		Allowed: true,
		Result: &v1.Status{
			Message: "Welcome to the fantasy zone!",
		},
	}
	return nil
}
I also will need to adapt the tests, etc. I'm just showing that the actual machinery is super-easy (partly because I'm cheating and importing code from k8s, which eliminates a big source of error).
Awesome! I think I get the idea. Perhaps we should continue conversation about this at T234231: Toolforge ingress: decide on how ingress configuration objects will be managed.
BTW I believe the FQDN scheme is <$toolname>.toolforge.org and not <$toolname>.tools.toolforge.org.
I consider this done. We can follow-up in other subtasks. How the ingress works is described here: https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Networking_and_ingress