
Toolforge: evaluate ingress mechanism
Open, Needs Triage, Public

Description

For the new k8s service in Toolforge, evaluate how we will be doing ingress.

Some related docs:
https://kubernetes.io/docs/concepts/services-networking/ingress/

Event Timeline

aborrero created this task. Jul 19 2019, 9:45 AM
aborrero added a comment (Edited). Jul 19 2019, 12:11 PM

Here is a preliminary proposal for the team to evaluate.

  • use an ingress controller based on nginx, using name based virtual hosting [0]
  • use native k8s TLS termination [1]
  • introduce the $tool.toolforge.org naming scheme for this new k8s cluster

The ingress controller makes a simple routing decision based on the Host: header in the HTTP request.

Concrete details and things that we will need to figure out:

  • we would need to use a wildcard SSL certificate for the ingress controller as described in the docs [1]. Probably *.toolforge.org
  • we will need to figure out what to do with missing tools and fake domain names, i.e., bogus.toolforge.org should probably redirect to a pod containing a proper error message/info?
  • we will need to figure out what to do for tools that are shut down (i.e., tool not running).
  • and related: should the ingress setup be managed dynamically? (i.e., a maintain-kubeusers.py-type script)
  • I still don't have a concrete proposal on how to generate the DNS setup for $toolname.toolforge.org. I guess using the designate API should work.
  • we could evaluate having a bunch of floating IP addresses and using them for ingress/egress. I hope this is not a big deal to set up.
  • I would try creating all the ingress stuff in an ingress namespace, or at least try to share the setup for all the pods. Not sure if this is possible, though.

The config [0] doesn't look too ugly:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  tls:
  - hosts:
    - "*.toolforge.org"
    secretName: toolforge-wildcard-secret
  rules:
  - host: tool1.toolforge.org
    http:
      paths:
      - backend:
          serviceName: tool1
          servicePort: 80
  - host: tool2.toolforge.org
    http:
      paths:
      - backend:
          serviceName: tool2
          servicePort: 80
  - http:
      paths:
      - backend:
          serviceName: proper-error-message
          servicePort: 80

The TLS secret:

apiVersion: v1
kind: Secret
metadata:
  name: toolforge-wildcard-secret
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls

[0] https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
[1] https://kubernetes.io/docs/concepts/services-networking/ingress/#tls

Concrete details and things that we will need to figure out:

  • we would need to use a wildcard SSL certificate for the ingress controller as described in the docs [1]. Probably *.toolforge.org
  • we will need to figure out what to do with missing tools and fake domain names, i.e., bogus.toolforge.org should probably redirect to a pod containing a proper error message/info?
  • we will need to figure out what to do for tools that are shut down (i.e., tool not running).

I really want to get to <tool>.toolforge.org (T125589: Allow each tool to have its own subdomain for browser sandbox/cookie isolation), but we will still need to support tools.wmflabs.org/<tool> somehow. This might be something that we continue to do in a layer in front of the Kubernetes ingress though.

We also need to consider whether <tool>.toolforge.org will only be an option for Kubernetes hosted webservices, or if the improved tool isolation will also be available for Grid Engine hosted webservices.

For these reasons, I have a feeling that we will need to have a second layer of reverse proxy in front of the Kubernetes ingress, and that that proxy will also need to be able to inspect or be informed of the state of the Kubernetes cluster at some level.

  • and related: should the ingress setup be managed dynamically? (i.e., a maintain-kubeusers.py-type script)

I'm not sure I understand this question. Can you elaborate on the issue you are thinking about here?

The ingress we set up should definitely work with Service objects associated with Deployments/ReplicaSets/Pods. This is basically what our kube2proxy.py hack does today. You can run kubectl get svc --all-namespaces on tools-k8s-master-01 to get an idea of why this has to be a dynamic thing. Piping that through wc -l shows 679 Services on the current k8s cluster, and these can come and go rapidly.
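As a sketch of what that dynamic piece could look like (hypothetical helper names; a real replacement for kube2proxy.py would watch the API server for change events rather than rebuild from scratch), the shared Ingress spec can be generated from the Service list:

```python
# Sketch only: build the shared name-based virtual hosting Ingress from a
# list of (tool, service, port) tuples, e.g. gathered from the Services
# that `kubectl get svc --all-namespaces` shows. All names are illustrative.

def ingress_rule_for_tool(tool_name, service_name, port, domain="toolforge.org"):
    """One host rule routing $tool.$domain to the tool's Service."""
    return {
        "host": f"{tool_name}.{domain}",
        "http": {
            "paths": [
                {"backend": {"serviceName": service_name, "servicePort": port}}
            ]
        },
    }

def build_ingress_spec(tools):
    """Whole-cluster Ingress object, one rule per running tool."""
    return {
        "apiVersion": "networking.k8s.io/v1beta1",
        "kind": "Ingress",
        "metadata": {"name": "toolforge-ingress"},
        "spec": {"rules": [ingress_rule_for_tool(t, s, p) for t, s, p in tools]},
    }
```

With hundreds of Services coming and going, this generation step would need to run on every change event, not on a timer.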

  • I still don't have a concrete proposal on how to generate the DNS setup for $toolname.toolforge.org. I guess using the designate API should work.

tools.wmflabs.org and *.toolforge.org can both have A/AAAA records pointing to a single front proxy (which as noted above will probably be another layer in front of a general k8s ingress). The Toolforge k8s cluster does not need to host webservices appearing at arbitrary hostnames/domains so I think we can ignore DNS publication based on Service objects in Kubernetes.
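In other words, the DNS side could be as small as two records pointing at the front proxy (203.0.113.10 is a documentation placeholder, not a real address):

```
; sketch: both names resolve to the single front proxy
tools.wmflabs.org.   300  IN  A  203.0.113.10
*.toolforge.org.     300  IN  A  203.0.113.10
```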

  • we could evaluate having a bunch of floating IP addresses and using them for ingress/egress. I hope this is not a big deal to set up.

I think a "normal" setup for an on-prem Kubernetes cluster would be to deploy the Ingress Controller (nginx, traefik, ...) as a DaemonSet using a NodePort Service. Then some additional "edge service" is deployed outside of the Kubernetes cluster (HAproxy, Octavia, our existing Dynamicproxy system might work here too) to hide the unprivileged port from external clients.

Using a pool of IPs is more like the approach of MetalLB. This is a neat thing too, but I think it's more than we need for Toolforge. We have a limited use case of exposing HTTP services to the public. We do not need to build out for the complexity of exposing arbitrary ports or protocols to the public.

  • I would try creating all the ingress stuff in an ingress namespace, or at least try to share the setup for all the pods. Not sure if this is possible, though.

The ingress service itself should definitely be in an isolated namespace. Most of the tutorials I have seen show doing that. RBAC and service accounts are generally involved as well.
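A minimal sketch of that isolation (the nginx-ingress namespace and service account names match what shows up later in this task; RBAC bindings omitted):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
```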

aborrero added a comment (Edited). Jul 22 2019, 11:36 AM

Side note: we need to update the PodSecurityPolicy in order to be able to deploy the ingress controller:

/var/log/pods/kube-system_kube-controller-manager-toolsbeta-test-k8s-master-1_389fff2e2e6c803f828653a4f18c838f/kube-controller-manager/0.log:{"log":"I0722 11:22:12.775033       1 event.go:258] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"nginx-ingress\", Name:\"nginx-ingress-59c8769f89\", UID:\"e9ee5c80-a74f-40ac-8286-5ade8d6b8c33\", APIVersion:\"apps/v1\", ResourceVersion:\"996756\", FieldPath:\"\"}): type: 'Warning' reason: 'FailedCreate' Error creating: pods \"nginx-ingress-59c8769f89-\" is forbidden: unable to validate against any pod security policy: []\n","stream":"stderr","time":"2019-07-22T11:22:12.775535696Z"}

This is T227290: Design and document how to integrate the new Toolforge k8s cluster with PodSecurityPolicy

Change 524759 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: add PSP for nginx-ingress

https://gerrit.wikimedia.org/r/524759

We probably need to finish T228660: Toolforge: new k8s: evaluate DNS setup for coredns before continuing in this one.

Change 524809 had a related patch set uploaded (by Bstorm; owner: Bstorm):
[operations/puppet@production] toolforge: actually place the default-psp file on the master server

https://gerrit.wikimedia.org/r/524809

Change 524809 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: actually place the default-psp file on the master server

https://gerrit.wikimedia.org/r/524809

Change 525074 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[labs/private@master] secrets: toolforge: add default k8s nginx-ingress key pair

https://gerrit.wikimedia.org/r/525074

Change 525074 merged by Arturo Borrero Gonzalez:
[labs/private@master] secrets: toolforge: add default k8s nginx-ingress key pair

https://gerrit.wikimedia.org/r/525074

[..]

I agree with all your comments @bd808.

My next question/clarification is: the SSL termination for both tools.wmflabs.org and *.toolforge.org is in the external proxy, right? i.e., the kubernetes ingress will see HTTP-only traffic?

That is true with the current setup (termination is at the proxy). If this is a question, then this explains more about what I asked on the patch :)

bd808 added a comment. Jul 23 2019, 3:45 PM

My next question/clarification is: the SSL termination for both tools.wmflabs.org and *.toolforge.org is in the external proxy, right? i.e., the kubernetes ingress will see HTTP-only traffic?

That is true with the current setup (termination is at the proxy). If this is a question, then this explains more about what I asked on the patch :)

Client facing TLS would terminate at the front proxy. I think it would be great if we also had TLS between the front proxy and the Kubernetes ingress (and between the ingress and the Pods), but I think we can skip that too if it is hard to work into the initial implementation.

It may be worth setting up a script that puppet execs can use to create kubernetes certs via the certificates API. However, I wouldn't want to block all of this on that. The maintain-kubeusers script update will be doing just that in python, and some pieces of it could be abstracted into a script that puppet runs when a file doesn't exist, so I'm mentioning T228499 here to refer back to this. It seems like it wouldn't be terribly hard to push such things back into the ingress from there once we've got that piece sorted out.

Let me know if I can help with whatever is not working here with the ingress. Is it still some weird routing thing?

aborrero added a comment (Edited). Jul 30 2019, 10:37 AM

Ok, this is my latest attempt to deploy nginx-ingress:

root@toolsbeta-test-k8s-master-2:~# kubectl logs kube-controller-manager-toolsbeta-test-k8s-master-1 -n kube-system | grep nginx
I0730 10:30:42.760807       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"nginx-ingress", Name:"nginx-ingress-768f66f848", UID:"e9b882f1-a02c-428d-ad3f-703c35c6f3b6", APIVersion:"apps/v1", ResourceVersion:"2255373", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-ingress-768f66f848-" is forbidden: error looking up service account nginx-ingress/nginx-ingress: serviceaccount "nginx-ingress" not found

There must be some issue somewhere.

Are the serviceaccounts wrong?

root@toolsbeta-test-k8s-master-1:~# kubectl get serviceaccounts -n nginx-ingress
NAME      SECRETS   AGE
default   1         4m14s
root@toolsbeta-test-k8s-master-1:~# kubectl get serviceaccounts -n default
NAME            SECRETS   AGE
default         1         5d14h
nginx-ingress   1         4m28s

so something is wrong in my patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/524759

That was it. Now at least the pod starts, albeit with errors :-)

root@toolsbeta-test-k8s-master-1:~# kubectl logs nginx-ingress-768f66f848-c8zq7 -n nginx-ingress
I0730 10:41:13.918550       1 main.go:155] Starting NGINX Ingress controller Version=edge GitCommit=e2337bb7
F0730 10:41:13.925725       1 main.go:269] A TLS cert and key for the default server is not found

So it seems I didn't properly disable TLS.

OK, so nginx-ingress requires a TLS cert to be present by default even if we don't use it. I'm using the default self-signed certs from the upstream example.
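For reference, that default server certificate gets wired in as a TLS secret much like the wildcard one earlier in this task (the secret name follows the upstream nginx-ingress example and is an assumption here; base64 payloads elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64 self-signed cert>
  tls.key: <base64 self-signed key>
```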

Change 524759 merged by Arturo Borrero Gonzalez:
[operations/puppet@production] toolforge: k8s: add nginx-ingress configuration.

https://gerrit.wikimedia.org/r/524759

A couple of questions for @Bstorm and @bd808:

  • how important is it for us to preserve the original source IP of the client contacting a tool? I would say it is interesting for logging purposes, but not sure if you already had something in mind
  • any opinions on having nginx-ingress directly listening on 80/tcp on the k8s nodes? i.e., listening on a privileged port.

I'm evaluating different setups to connect the outside world with the ingress mechanism, and these questions could help decide between different approaches.

bd808 added a comment. Jul 31 2019, 9:03 PM
  • how important is it for us to preserve the original source IP of the client contacting a tool? I would say it is interesting for logging purposes, but not sure if you already had something in mind

The existing reverse proxy layers for both Toolforge and Cloud VPS hide the requesting user's IP address from the proxied webservices, so the Kubernetes ingress will only see the IP address of the front proxy. I don't think there is any strong reason to pass the address of the front proxy through to the Kubernetes hosted webservices.

It should be fairly simple to add in an X-Forwarded-For (XFF) header in both reverse proxy layers if we find a compelling need in the future. Today the lack of an XFF header is a feature of the front proxies in that it prevents collection and storage of client IP addresses by Cloud Services hosted webservices. The only downside that I am aware of in the current practice is that it makes blocking poorly behaved clients by IP-range impossible at the individual webservice level. This problem is orthogonal to this task however, so it need not be considered in the Kubernetes ingress solution.

  • any opinions on having nginx-ingress directly listening on 80/tcp on the k8s nodes? i.e., listening on a privileged port.

If I remember the things I have read about Kubernetes ingress correctly, the main disadvantage of using a privileged port is that the pod running the ingress would then need to have elevated rights. If we can avoid that it seems like one more small layer of protection against abuse.


After re-reading (ok, skimming) https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ it seems likely that you are trying to evaluate the pros and cons of the NodePort service and hostNetwork options. Based on our needs described in T228500#5350594 to continue having a front proxy which actually terminates the client TLS connections and performs other routing for Grid Engine webservices, I think that the self-provisioned edge configuration which is a variant of the NodePort service seems reasonable. Traffic would then flow something like: client -> {internet} -> [front proxy] -> {tools project network} -> [nginx ingress] -> {k8s internal network} -> [pod]. Inside the pod, the actual webservice should end up seeing headers added at the front proxy helping it understand the hostname it is exposed on (for example tools.wmflabs.org or my-cool-tool.toolforge.org) and that TLS is present (X-Original-URI & X-Forwarded-Proto). The nginx ingress should just pass these headers through to the pod's http server without change.
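The front-proxy piece of that flow could be sketched as an haproxy fragment like this (hostnames and the 30000 node port are illustrative, not actual config):

```
frontend toolforge_http
    bind *:80
    default_backend k8s_ingress_nodes

backend k8s_ingress_nodes
    balance roundrobin
    option httpchk GET /healthz
    server worker-1 tools-k8s-worker-1:30000 check
    server worker-2 tools-k8s-worker-2:30000 check
```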

More out of scope thoughts... if we go with this setup we should consider adding something like lua-resty-upstream-healthcheck to the front proxy to give it active health checks for the pool of ingress nodes rather than using the normal passive checks or (gross!) manually pooling and depooling each NodePort host.

More out of scope thoughts... if we go with this setup we should consider adding something like lua-resty-upstream-healthcheck to the front proxy to give it active health checks for the pool of ingress nodes rather than using the normal passive checks or (gross!) manually pooling and depooling each NodePort host.

I'll vote for anything that avoids doing manual pooling and depooling. Also, the NodePort setup like you are describing here is used by paws, and it is kind of terrible because of, specifically, this problem (in fact, there's a manual entry for a single node instead of any kind of round-robin in paws). Figuring out that weird detail is something we end up doing all over again every time we have to change anything in paws. Let's make a subtask to add that to the proxy -- and if we do, maybe to the paws proxy as well, as we figure out how to manage ingresses!

I also second the preference for NodePort over hostNetwork for Toolforge webservices.

Ok, I think I have a working setup that may (or may not) be headed in the right direction. First iteration anyway.

Let me try to explain it and show how to configure it as well.

  1. The DNS setup is simple. I have a FQDN $tool.toolsbeta.wmflabs.org pointing to the external load balancer. This could be anything, from $tool.toolforge.org to tools.wmflabs.org. This is the equivalent of our current DynamicProxy (but not doing anything dynamic ATM).
  2. The front external proxy listens on tcp/80. Simple. The backends are the kubernetes worker nodes at tcp/30000 (arbitrary port that I picked and that doesn't change automatically). This is configured in /etc/haproxy/conf.d/k8s-ingress.cfg @ toolsbeta-test-k8s-lb-01.eqiad.wmflabs.
  3. This is one of the key points of the setup. A NodePort service ensures that every node listens on tcp/30000 and will forward connections to the nginx-ingress app (the actual nginx-ingress pod). This is created with a yaml like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
  namespace: nginx-ingress
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress
spec:
  type: NodePort
  ports:
    - name: http
      nodePort: 30000
      port: 30000
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx-ingress
  4. By default, the nginx-ingress pod listens on 80/tcp. This is where nginx is running to handle the redirections. This is deployed using modules/toolforge/files/k8s/kubeadm-nginx-ingress.yaml.
  5. The routing information. This file should be dynamically managed, i.e., there should be an entry per tool running in the k8s backend in Toolforge. The pattern for each entry should be easy to detect. The yaml looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  rules:
  - host: hello.toolsbeta.wmflabs.org
    http:
      paths:
      - backend:
          serviceName: hello-svc
          servicePort: 8081
  6. Each tool has a service in front of it (nginx-ingress works like this apparently), so this point will need to be dynamically generated as well. To reduce confusion with port numbers I used tcp/8081 to listen and tcp/8080 as target. Pretty simple, the yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello-node
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8080
  7. Finally, the tool itself (the final pod). In this example, the tool is called hello. Nothing fancy, it just returns Hello World! and the pod listens on tcp/8080.

Example usage of this setup:

aborrero@toolsbeta-test-k8s-lb-01:~ $ curl hello.toolsbeta.wmflabs.org
Hello World!

Bstorm added a comment. Aug 1 2019, 3:15 PM

Awesome! I suspect that this implies a fair bit of work on the webservice/toollabs side, since we'll need to kubectl edit/PATCH/kubectl apply the resource as new webservices register/deregister. The submission from webservice could annotate the launched service with the needed tool info, to be read by a service that maintains the ingress config (and then the proxy just needs to "know" what's running on k8s one way or another). The automated updating of the ingress is probably another subtask.

Bstorm added a comment. Aug 1 2019, 3:18 PM

If you have this working, I'm not sure there would be any advantage to the other available options like traefik (other than smaller containers). The other options that don't use "load balancer" service types don't seem to offer much beyond this. It always requires either a service that maintains the ingress config, or a GCE/AWS-type load balancer object and Istio Envoy.

Bstorm added a comment. Aug 1 2019, 3:20 PM

I do have a question on it: What namespace does it run in, and do we need to whitelist the namespace in the docker registry restrictions or to construct a container in our internal registry? The latter might be the better option. What do you think @aborrero?

I do have a question on it: What namespace does it run in, and do we need to whitelist the namespace in the docker registry restrictions or to construct a container in our internal registry? The latter might be the better option. What do you think @aborrero?

You refer to the ingress, right? The ingress itself is running in a specific namespace called nginx-ingress. This is using an nginx container from the upstream docker registry, but we could cache it in our own registry or whatever (more or less the same as for the other k8s components).

The routing information in (5) can be namespaced as required I think. I should do some extended tests (more iterations on this) to be able to have any strong opinion.

Bstorm added a comment. Aug 1 2019, 4:14 PM

Yup! If we can cache it in our registry, that would probably be best (and then test). I did that with the pause container so far (following what we do now). Only the kube-system namespace is exempt from the controller (which is still not deployed on the test cluster) without changes.

bd808 added a comment. Aug 1 2019, 10:51 PM

The routing information in (5) can be namespaced as required I think. I should do some extended tests (more iterations on this) to be able to have any strong opinion.

Hopefully the Ingress object for a given tool can live in that tool's namespace. The current logic of webservice when starting a job on the k8s backend is to create a Deployment and a Service in the tool's namespace. Adding an Ingress to this set of created/destroyed objects will be trivially easy.
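If webservice does grow that logic, the per-tool object could be as small as this sketch (names mirror the hello example above; the tool-$name namespace convention and the service port are assumptions, not decided anywhere in this task):

```python
def tool_ingress(tool, domain="toolforge.org", svc_port=8081):
    """Sketch: Ingress manifest for one tool, living in the tool's own
    namespace. Namespace naming ("tool-<name>") is an assumption here."""
    return {
        "apiVersion": "networking.k8s.io/v1beta1",
        "kind": "Ingress",
        "metadata": {
            "name": f"{tool}-ingress",
            "namespace": f"tool-{tool}",
        },
        "spec": {
            "rules": [{
                "host": f"{tool}.{domain}",
                "http": {
                    "paths": [{
                        "backend": {
                            "serviceName": f"{tool}-svc",
                            "servicePort": svc_port,
                        }
                    }]
                },
            }]
        },
    }
```

webservice would then create this alongside the Deployment and Service it already manages, and delete all three on stop.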

note to self, for next time I get into this again (hopefully next week):

  • I will need to re-deploy my testing tool into its own namespace to better test all this stuff
  • the ingress object should be namespaced: $tool-ingress
  • the nginx-ingress pod should not listen in 80/tcp, but other unprivileged port
  • the nginx-ingress-svc object should be added to modules/toolforge/files/k8s/kubeadm-nginx-ingress.yaml

Change 527541 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: ingress: nginx-ingress listen on 8082/tcp

https://gerrit.wikimedia.org/r/527541

Change 527542 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: ingress: add frontend service

https://gerrit.wikimedia.org/r/527542

Change 527544 had a related patch set uploaded (by Arturo Borrero Gonzalez; owner: Arturo Borrero Gonzalez):
[operations/puppet@production] toolforge: k8s: haproxy: add proxy redirection for nginx-ingress

https://gerrit.wikimedia.org/r/527544

Bstorm added a comment (Edited). Aug 2 2019, 2:46 PM

Hopefully the Ingress object for a given tool can live in that tool's namespace. The current logic of webservice when starting a job on the k8s backend is to create a Deployment and a Service in the tool's namespace. Adding an Ingress to this set of created/destroyed objects will be trivially easy.

I don't think we'd want that. We'd limit ourselves to a very small set of possible tools (because each one would need a different nodeport -- or host port!) vs. the way we are doing it here. The idea here is that ingress objects handle routing for the tools cluster and are quite deliberately not in the control of users directly, since they are shared objects. Because the ingress recognizes the name coming in and routes it to the right service name (thanks to CoreDNS), there wouldn't be any real issue with ports. Every tool service object can use the same port at the service level, and the ingress object(s) get nodeport(s) to the outside.

An upstream proxy just needs those node ports and a few k8s worker nodes to watch; everything else happens on the backend. If webservice can add services to and remove them from the ingress object, that would make it all work. Basically: if they have a Service, they don't need an Ingress. We might find ourselves needing to use node affinity for ingresses, though, to keep them on their own nodes so that user pods don't kill them.

We can't do this like gridengine, because nodeport places the port on every node, as if the cluster were a single host. Using the host network would basically introduce all the limitations of gridengine into k8s instead of phasing them out over time.

Bstorm added a comment. Aug 2 2019, 2:51 PM

Hrm, and as I type that, it might be simpler on the security model if ingresses cannot be affected by user actions at all unless they are from their own namespace. The math might not be good, though. I need to think about that.

Bstorm added a comment. Aug 2 2019, 3:10 PM

Also, there might be some mechanism I'm missing here in general when it comes to routing this. NodePort will generally use one of the ~2768 ports in the default 30000–32767 range for each service using it.

To put my mind at ease that we aren't going to end up limited to under 3000 tools, and because I want to understand a bit better, which one of these are you currently testing @aborrero? https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md
It seems the answers to many of these questions change depending on which nginx controller we are using.

To put my mind at ease that we aren't going to end up limited to under 3000 tools, and because I want to understand a bit better, which one of these are you currently testing @aborrero? https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md
It seems the answers to many of these questions change depending on which nginx controller we are using.

We are using nginxinc/kubernetes-ingress because it is the default one if you follow the nginx-ingress docs.

Fair enough. I'm concerned we may need to change to the community-supported one at some point (which doesn't need to happen now, since there are bound to be similarities). Once this is working, we can try stuff and will know more. If the community-supported one supports dynamic changes of endpoints (as that chart suggests), it may be a better fit for many reasons.

I did a little research to make sure I'm not being unhelpful on this ticket by commenting (and yes, some of my comments were probably useless).

In the course of that, I do think that we strongly should be using the community-supported one (https://kubernetes.github.io/ingress-nginx/), because the nginxinc ingress is severely weakened in that it requires resetting the controller every time an ingress resource is changed. They enable dynamic reconfiguration only in the enterprise product (see my github link above). The Kubernetes community-supported version implements dynamic updating of backends via lua -- https://kubernetes.github.io/ingress-nginx/how-it-works/#nginx-configuration
That part is absolutely essential for us to succeed, I think, and we should shift gears before going much deeper with the NGINX Inc. supported version. Also, I think we can totally create the ingresses with webservice, just like @bd808 was saying, with the community-supported version. I'll make sure that Toolforge users can control ingress resources in the RBAC.

Obviously, please let me know if I'm totally wrong and spewing raving nonsense 🙂

Chicocvenancio added a subscriber: Chicocvenancio (Edited). Aug 15 2019, 5:46 PM

I really want to get to <tool>.toolforge.org (T125589: Allow each tool to have its own subdomain for browser sandbox/cookie isolation), but we will still need to support tools.wmflabs.org/<tool> somehow. This might be something that we continue to do in a layer in front of the Kubernetes ingress though.

This is doable with a slightly different config:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: toolforge-ingress
spec:
  tls:
  - hosts:
    - "*.toolforge.org"
    secretName: toolforge-wildcard-secret
  rules:
  - host: tools.toolforge.org
    http:
      paths:
      - path: /tool1
        backend:
          serviceName: tool1
          servicePort: 80
      - path: /tool2
        backend:
          serviceName: tool2
          servicePort: 80

(Some other proxy still needs to sit in front, figuring out whether the request should be sent to the k8s or grid proxy, though.)

Great to see this moving forward, btw!