
Evaluate nginx-controller as an Ingress
Closed, ResolvedPublic

Description

This task is for the evaluation of nginx ingress https://github.com/kubernetes/ingress-nginx as a potential ingress controller.

Specifically, we need to answer the following questions:

  • What is the general architecture?
  • How can we deploy it on bare metal?
  • Do we need to build and maintain docker images ourselves?
  • How can it be configured to proxy various services with easy parametrization?
  • How do we operate on it?
  • Is it easy to collect metrics?
  • How do we collect logs?

Event Timeline

Joe updated the task description.

@aborrero may be able to provide some information from his past work to set up ingress-nginx for Toolforge.

Sharing a bit of our experience at WMCS with ingress-nginx:

What is the general architecture?

Basically, you deploy a kubernetes Deployment running a tailored NGINX that is able to process kubernetes Ingress objects.
Ingress objects (and others, like ConfigMaps) are parsed by the controller to generate an nginx.conf dynamically at runtime, using a series of Lua tricks and other smart hacks.
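For illustration, a minimal Ingress object of the kind the controller turns into nginx.conf server/location blocks might look like this (the hostname, Service name, and port below are hypothetical):

```yaml
# Hypothetical Ingress: routes HTTP traffic for my-tool.example.org
# to a Service named my-tool on port 8000.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-tool
  annotations:
    kubernetes.io/ingress.class: "nginx"  # have ingress-nginx handle this object
spec:
  rules:
    - host: my-tool.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-tool
                port:
                  number: 8000
```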

For this NGINX to be reachable from outside the cluster, you need to consider several other pieces.

In general, it is assumed you are running kubernetes on a vendor cloud (e.g. Google, Amazon, etc). In our experience there is no reference implementation for how this internet <-> nginx-ingress connectivity should be created if you aren't running on a vendor cloud.
Since that is your case, you will most likely need a NodePort kubernetes Service and an external load balancer (we use haproxy).

To "help" or "optimize" this NodePort setup a bit and avoid TCP packets bouncing through layers of kube-proxy inside the k8s cluster network, we have dedicated kubernetes nodes to host the ingress pods (and thus the NodePort).
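As a sketch, the NodePort Service in front of the controller could look like the following (names, labels, and port numbers are assumptions, not our exact setup); the external haproxy would then target the dedicated nodes' IPs on these nodePorts:

```yaml
# Sketch: NodePort Service exposing the ingress-nginx controller pods.
# An external load balancer (e.g. haproxy) points at <node-ip>:30080/30443.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx  # must match the controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # fixed so the haproxy backend config is stable
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```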

More information on this: https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Kubernetes/Networking_and_ingress

How can we deploy it on bare metal?

Nothing special here. ingress-nginx itself is just deployed like any other kubernetes app.
The internet <-> cluster connectivity is another topic, though (see above).

Do we need to build and maintain docker images ourselves?

I don't think so. We do cache the docker image in our Toolforge docker registry, though.

How can it be configured to proxy various services with easy parametrization?

Kubernetes Ingress objects are rather robust and highly customizable. However, I'm pretty sure that if you look hard enough, you can find missing features :-)
Anyway, nginx-ingress supports injecting arbitrary configuration into the NGINX process, AFAIK.
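Concretely, global nginx.conf settings go through the controller's ConfigMap, where each data key maps to an nginx directive; the object name and values below are illustrative:

```yaml
# Sketch: keys in the controller's ConfigMap become global nginx.conf settings.
# The ConfigMap name/namespace depend on how the controller was deployed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: "64m"   # max allowed request body size
  use-gzip: "true"         # enable gzip compression
```

Per-service overrides are then done with annotations on individual Ingress objects, e.g. `nginx.ingress.kubernetes.io/proxy-body-size: "128m"`.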

How do we operate on it?

In general, like any other k8s app: you can scale, downscale, roll out versions, etc. The usual kubernetes kung fu.

Is it easy to collect metrics?

No :-( The metrics offered by ingress-nginx by default are very poor; only a few of them are available. Isn't this why you started using envoy?
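That said, what metrics the controller does have are exposed on a Prometheus endpoint (by default on port 10254 at /metrics), so they can at least be scraped. A minimal static scrape config sketch, with a placeholder target address:

```yaml
# Sketch: Prometheus scrape config for the controller's metrics endpoint.
# 10254 is the controller's default metrics port; the IP is a placeholder,
# in practice you would use kubernetes service discovery instead.
scrape_configs:
  - job_name: 'ingress-nginx'
    static_configs:
      - targets: ['192.0.2.10:10254']
```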

How do we collect logs?

By default: kubectl logs xxxx. These logs are stored by kubernetes as for any other app (etcd / docker metadata files involved).
More elaborate/scalable setups are available, as for any other k8s app (external logging, etc.).
We don't do any of this, though.

My past impression of nginx-ingress was that while it's okay for low-traffic stuff, you would start running into trouble with increased traffic. That is probably mostly due to the lack of metrics, and therefore of insight into what is going on / what exactly causes issues.

Joe claimed this task.
Joe triaged this task as Medium priority.

We went with istio-ingress after some evaluation which wasn't reported here.