
Define the Helm charts and helmfile deployments for Datahub
Closed, ResolvedPublic

Description

We will need to define four separate components in order to run DataHub in Kubernetes:

  • DataHub Metadata Server (GMS)
  • DataHub Frontend
  • MCE Consumer Job
  • MAE Consumer Job

Each of these components will use its own docker image, built using the deployment pipeline.

We also have three setup tasks that pre-populate the MySQL, Elasticsearch, and Kafka data stores. These tasks should be configured not to run in production, but they will be required in a development environment.

All of these components of DataHub are stateless.

Details

Repo                           Branch   Lines +/-
labs/private                   master   +1 -0
operations/deployment-charts   master   +7 -5
operations/deployment-charts   master   +10 -2
operations/deployment-charts   master   +6 -4
operations/deployment-charts   master   +7 -0
operations/deployment-charts   master   +1 -1
operations/deployment-charts   master   +1 -1
operations/deployment-charts   master   +3 -0
operations/deployment-charts   master   +5 -5
operations/deployment-charts   master   +1 -1
operations/deployment-charts   master   +6 -0
operations/deployment-charts   master   +94 -56
operations/deployment-charts   master   +11 -0
operations/deployment-charts   master   +6 -1
operations/deployment-charts   master   +6 -0
operations/deployment-charts   master   +6 -6
operations/deployment-charts   master   +0 -11
operations/deployment-charts   master   +2K -0
operations/deployment-charts   master   +3 -1
operations/deployment-charts   master   +2 -2
analytics/datahub              wmf      +167 -5
analytics/datahub              wmf      +10 -0
operations/deployment-charts   master   +2K -252

Event Timeline


As designed, we asked SRE whether we could deploy on the existing Service Ops main kubernetes cluster (WikiKube). We plan to move to the DSE cluster once it is available.

BTullis moved this task from Next Up to In Progress on the Data-Catalog board.
BTullis added a subscriber: akosiaris.

Beginning work on this.

Change 763246 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Add default chart and helmfile for datahub

https://gerrit.wikimedia.org/r/763246

BTullis triaged this task as High priority.Feb 16 2022, 3:33 PM

@akosiaris - would you be able to advise a little on this please, just to get me going?

I'm not sure, but I think I want to create the following in operations/deployment-charts.

  • One chart in charts/datahub
  • Four services beneath helmfile.d/services/ named:
    • datahub-gms
    • datahub-frontend
    • datahub-mce-consumer
    • datahub-mae-consumer

Each of the services should specify chart: wmf-stable/datahub in its default template
...but the image used should be overridden in the values.yaml file in each service, since each uses a different container.

e.g. in helmfile.d/services/values.yaml

main_app:
  image: docker-registry.wikimedia.org/datahub-gms

Am I on the right track so far?
The only thing bothering me about this setup is that I wonder whether we should group all four of the services together somehow.
They're all built from the same codebase, so we would always want to run the same build in all four services. Is there a recommended way to do this, or is it not worth worrying about?

The official helm charts for datahub use a number of subcharts: https://github.com/acryldata/datahub-helm/tree/master/charts/datahub

So perhaps it would be better to define:

  • One chart in charts/datahub
  • Four subcharts in charts/datahub/charts
  • One service in helmfile/services/datahub

I can see a couple of other uses of subcharts in the deployment-charts repository, but it looks like those are generally used for local development.
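As a sketch, the layout I have in mind would look roughly like this (directory names are illustrative, not final):

```
operations/deployment-charts/
├── charts/
│   └── datahub/                 # umbrella chart
│       └── charts/              # subcharts live here
│           ├── datahub-frontend/
│           ├── datahub-gms/
│           ├── datahub-mae-consumer/
│           └── datahub-mce-consumer/
└── helmfile.d/
    └── services/
        └── datahub/             # single helmfile service
```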

Hi,

That could be a valid way forward, however there are others. Let me point out some pros and cons with this approach

I'm not sure, but I think I want to create the following in operations/deployment-charts.

  • One chart in charts/datahub
  • Four services beneath helmfile.d/services/ named:
    • datahub-gms
    • datahub-frontend
    • datahub-mce-consumer
    • datahub-mae-consumer

Each of the services should specify chart: wmf-stable/datahub in its default template
...but the image used should be overridden in the values.yaml file in each service, since each uses a different container.

This would work only if the command run by the image is never to be altered. The moment that command is no longer sufficient, it would require updating either the container image or the chart. The former is not particularly flexible or forward/backward compatible (nor easy to figure out). So we tend to use the latter, which is more auditable.

The two predominant ways of altering the behavior of a command are to pass arguments or to populate environment variables. If a single chart is used, both would require somewhat complex data structures (how else would you differentiate an env var, or even worse a CLI parameter, intended for container X vs container Y?) to allow passing different arguments or environment variables to each of the images.

As a pro, however, it would probably be easier to manage one chart instead of four.

Having four charts, on the other hand, would maximize flexibility as far as chart customization goes, at the cost of having to maintain four charts.

As far as the four services go, I think the answer lies in the deployment patterns of datahub. If all four components need to be deployed in tandem, either we go for one service, or a more involved deployment process will need to be invented (even if it is a simple shell script looping through all four services). I'd suggest the one-service pattern. On the other hand, if it is not only possible but also desirable to deploy the four components separately, then the four services make more sense.

The official helm charts for datahub use a number of subcharts: https://github.com/acryldata/datahub-helm/tree/master/charts/datahub

So perhaps it would be better to define:

  • One chart in charts/datahub
  • Four subcharts in charts/datahub/charts
  • One service in helmfile/services/datahub

I can see a couple of other uses of subcharts in the deployment-charts repository, but it looks like those are generally used for local development.

Yes, that is a valid pattern. With the datahub chart as an umbrella chart, deployment via helmfile becomes one atomic-ish operation. It also probably makes integration tests easier. For what it's worth, feel free to use subcharts; we have a few already and it's a pattern we are OK with.
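For illustration, an umbrella Chart.yaml along these lines could tie the subcharts together. The version numbers and condition flags below are assumptions, not taken from the actual patch; note also that subcharts placed under charts/ are bundled automatically even without an explicit dependencies list, but declaring them lets each one be toggled from values.yaml:

```yaml
# Hypothetical umbrella chart definition (charts/datahub/Chart.yaml)
apiVersion: v2
name: datahub
version: 0.0.1
dependencies:
  - name: datahub-gms
    version: 0.0.1
    condition: datahub-gms.enabled
  - name: datahub-frontend
    version: 0.0.1
    condition: datahub-frontend.enabled
  - name: datahub-mce-consumer
    version: 0.0.1
    condition: datahub-mce-consumer.enabled
  - name: datahub-mae-consumer
    version: 0.0.1
    condition: datahub-mae-consumer.enabled
```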

Thanks @akosiaris - I'll go for this subcharts pattern with one service and see where I get.

Change 764375 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Add a set of charts for datahub

https://gerrit.wikimedia.org/r/764375

Change 763246 abandoned by Btullis:

[operations/deployment-charts@master] Add default chart and helmfile for datahub

Reason:

Mistakenly added

https://gerrit.wikimedia.org/r/763246

I have a blocker on this and I can't seem to work out the right way to get past it.
Currently, when I run helm lint charts/datahub/ I get an error from the wmf.networkpolicy.egress helper function.

[ERROR] templates/: template: datahub/charts/datahub-mce-consumer/templates/networkpolicy.yaml:37:8: executing "datahub/charts/datahub-mce-consumer/templates/networkpolicy.yaml" at <include "wmf.networkpolicy.egress" (.Files.Get "default-network-policy-conf.yaml" | fromYaml)>: error calling include: template: datahub/templates/_helpers.tpl:52:21: executing "wmf.networkpolicy.egress" at <.networkpolicy.egress.dst_ports>: nil pointer evaluating interface {}.egress

The following four symlinks are present in my working copy.

walk.go:74: found symbolic link in path: /home/btullis/wmf/deployment-charts/charts/datahub/default-network-policy-conf.yaml resolves to /home/btullis/wmf/deployment-charts/common_templates/0.4/default-network-policy-conf.yaml
walk.go:74: found symbolic link in path: /home/btullis/wmf/deployment-charts/charts/datahub/templates/_helpers.tpl resolves to /home/btullis/wmf/deployment-charts/common_templates/0.4/_helpers.tpl
walk.go:74: found symbolic link in path: /home/btullis/wmf/deployment-charts/charts/datahub/templates/_ingress_helpers.tpl resolves to /home/btullis/wmf/deployment-charts/common_templates/0.4/_ingress_helpers.tpl
walk.go:74: found symbolic link in path: /home/btullis/wmf/deployment-charts/charts/datahub/templates/_tls_helpers.tpl resolves to /home/btullis/wmf/deployment-charts/common_templates/0.4/_tls_helpers.tpl

I realized that I had to declare the helpers at the level of the parent chart, otherwise they would be duplicated with identical names, which didn't work.

So it seems to be getting as far as the datahub-mce-consumer subchart and loading the wmf.networkpolicy.egress function, but then at line 37 either the .Files.Get "default-network-policy-conf.yaml" isn't working, or the resulting import has the wrong scope somehow.

I have tried various combinations for the location of the default-network-policy-conf.yaml symlink, and I have also tried omitting the symlink and putting in a fully qualified file name: e.g.

(screenshot attachment omitted)

I now see that it might have been smarter to answer here, as this is a pretty good problem description; sorry for that.

Copied from gerrit:

The problem here is that you don't treat each subchart as an individual chart (which needs to be done). E.g.:

  • Each subchart needs the symlinks to the _*helpers.tpl files and default-network-policy-conf.yaml
  • Each subchart needs a default values.yaml

This:

[...] executing "ingress.gateway" at <.Values.ingress.enabled>: nil pointer evaluating interface {}.enabled

is returned because the ingress stanza is not defined at subchart level, but referenced in the helpers.

I'd suggest you make sure that "helm lint" is happy for every subchart before running it at umbrella level.
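To make the "treat each subchart as an individual chart" point concrete, here is a sketch of repeating the shared-template symlinks in every subchart. It uses a throwaway directory with illustrative relative paths; the real files live under common_templates/ in the deployment-charts checkout:

```shell
set -e
base=$(mktemp -d)
mkdir -p "$base/common_templates/0.4"
touch "$base/common_templates/0.4/_helpers.tpl" \
      "$base/common_templates/0.4/default-network-policy-conf.yaml"
for sub in datahub-frontend datahub-gms datahub-mce-consumer datahub-mae-consumer; do
  mkdir -p "$base/charts/datahub/charts/$sub/templates"
  # .Files.Get is scoped to the chart being rendered, so each subchart
  # needs its own copy (symlink) of the network policy defaults ...
  ln -s ../../../../common_templates/0.4/default-network-policy-conf.yaml \
        "$base/charts/datahub/charts/$sub/default-network-policy-conf.yaml"
  # ... and of the shared helper templates
  ln -s ../../../../../common_templates/0.4/_helpers.tpl \
        "$base/charts/datahub/charts/$sub/templates/_helpers.tpl"
done
# Every symlink should now resolve within the temp tree
test -e "$base/charts/datahub/charts/datahub-gms/default-network-policy-conf.yaml"
```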

Feel free to follow up here or on gerrit if something comes up

Many thanks @JMeybohm for those clues.
I have now updated my WIP patch and helm lint is happy at all levels, although there is still a lot of work to do on them.

I have a couple more questions, sorry.

  1. How should I go about specifying that these deployments should be able to access the 'jumbo-eqiad' kafka cluster?

The default values.yaml that was produced by create_new_service.sh contains:

# Add here the list of kafka-clusters (by name) that the service will need to reach.
kafka:
  allowed_clusters: []

...but helm lint complains about executing "wmf.networkpolicy.egress.kafka" at <index $clusters $c>: error calling index: index of untyped nil if I put anything in there.

I'm guessing that this is related to fixtures somehow, but have you any guidance about where I should be defining them or referring to existing definitions?

  2. The next question relates to service ports and prometheus exporters for two of the deployments. Namely, the prometheus JMX exporter is the only service port opened by these deployments, but it runs as part of the single process within the main_app. Do I need to change the deployments.yaml or similar for this type of deployment? Should I try to run the JMX exporter in a sidecar instead?
  1. How should I go about specifying that these deployments should be able to access the 'jumbo-eqiad' kafka cluster?

The default values.yaml that was produced by create_new_service.sh contains:

# Add here the list of kafka-clusters (by name) that the service will need to reach.
kafka:
  allowed_clusters: []

...but helm lint complains about executing "wmf.networkpolicy.egress.kafka" at <index $clusters $c>: error calling index: index of untyped nil if I put anything in there.

I'm guessing that this is related to fixtures somehow, but have you any guidance about where I should be defining them or referring to existing definitions?

I don't see helm lint failing with the current patch set. Maybe I'm missing something?
You could take a look at mwdebug or tegola-vector-tiles; they configure some allowed clusters.

  2. The next question relates to service ports and prometheus exporters for two of the deployments. Namely, the prometheus JMX exporter is the only service port opened by these deployments, but it runs as part of the single process within the main_app. Do I need to change the deployments.yaml or similar for this type of deployment? Should I try to run the JMX exporter in a sidecar instead?

I'm not sure I understand correctly, maybe you can give concrete examples which chart requires which port to be accessible if I fail to answer your question :-)
The JMX exporters are fine within the main process of the app (IIUC it's the proper way to run them in order to get some insights about the JVM). There are two ways to tell prometheus to scrape the JMX exporter ports:

  • Add an annotation prometheus.io/port: "<JMX exporter port>" to the deployments
  • Add a container port with a name suffix -metrics (see _containers.tpl for an example)

Unfortunately the templates/scaffolding do not implement an easy way to provide either of those via .Values. I'd go with patching the deployment.yaml and adding the annotation, so as not to have to deal with merging updates of helper templates in the future.
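The two scrape options above might look roughly like this in a deployment manifest. Port 4318 is the JMX exporter port used later in this task; the container and port names here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Option 1: tell prometheus which port to scrape via annotation
        prometheus.io/port: "4318"
    spec:
      containers:
        - name: datahub-mce-consumer
          ports:
            # Option 2: a container port whose name ends in -metrics
            - name: jmx-metrics
              containerPort: 4318
```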

Thanks again @JMeybohm

I don't see helm lint failing with the current patch set. Maybe I'm missing something?

Apologies, I didn't explain myself very well. You're right, the patchset that I uploaded (patchset 3) passed linting tests, but it did not yet contain any reference to the kafka clusters to which I need to connect.

I've now uploaded another patchset (patchset 4) where I have added the kafka_brokers: dictionary directly into the values.yaml for each subchart. They now pass linting again and all specify these IPs as allowed for egress.

(screenshot attachment omitted)

I guess I was just wondering whether this is the right way ™ or whether there is something else that I should have done to avoid repeating myself.
I started looking at the files in the .fixtures directory, but I can't really see how they should be used. Are they automatically loaded because of their name, or are they explicitly referenced somewhere?
Should I copy the existing .fixtures directory for the parent chart to each of the subcharts, or is this unnecessary?

Regarding the monitoring...

The JMX exporters are fine within the main process of the app (IIUC it's the proper way to run them in order to get some insights about the JVM). There are two ways to tell prometheus to scrape the JMX exporter ports:

  • Add an annotation prometheus.io/port: "<JMX exporter port>" to the deployments
  • Add a container port with a name suffix -metrics (see _containers.tpl for an example)

That makes perfect sense, thanks.
So given that these two charts (datahub-mce-consumer and datahub-mae-consumer) don't need any other kind of inbound traffic, should I be configuring the deployment.yaml file to tell it to be a Headless Service?

Many thanks and apologies for all of the questions. Any insights or suggestions most welcome.

p.s. I've also tagged @Gehel who has kindly offered to help share his K8S knowledge as well.

I've now uploaded another patchset (patchset 4) where I have added the kafka_brokers: dictionary directly into the values.yaml for each subchart. They now pass linting again and all specify these IPs as allowed for egress.

I guess I was just wondering whether this is the right way ™ or whether there is something else that I should have done to avoid repeating myself.

That's indeed a bit repetitive. What you could do about that is define the kafka and kafka-broker stanzas in the parent chart's values.yaml and then pass them as references to the subcharts, e.g.


kafka: &kafka
  allowed_clusters:
    - jumbo-eqiad
# Kafka brokers also enable additional networkpolicy templates
kafka_brokers: &kafka-brokers
  jumbo-eqiad:
    - 10.64.0.175/32
    - 2620::861:101:10:64:0:175/128
    ...

datahub-frontend:
  kafka: *kafka
  kafka_brokers: *kafka-brokers
datahub-gms:
  kafka: *kafka
  kafka_brokers: *kafka-brokers
...

I started looking at the files in the .fixtures directory, but I can't really see how they should be used. Are they automatically loaded because of their name, or are they explicitly referenced somewhere?
Should I copy the existing .fixtures directory for the parent chart to each of the subcharts, or is this unnecessary?

The fixtures are used by CI for testing and linting the charts. With how CI currently works, I doubt it's possible to have fixtures in subcharts (because CI will only "see" the parent chart). So you would need to put fixtures into the parent chart, overriding subchart values where needed.

So given that these two charts (datahub-mce-consumer and datahub-mae-consumer) don't need any other kind of inbound traffic, should I be configuring the deployment.yaml file to tell it to be a Headless Service?

If your deployments don't receive traffic at all (apart from prometheus scraping), you actually don't need any service at all. Prometheus will scrape on a per-pod basis and will discover pods automatically.

Many thanks and apologies for all of the questions. Any insights or suggestions most welcome.

No worries, happy to help! :-)

I'm significantly further forward I think, but I'm now at the point where I need guidance in terms of setting up:

  • ingress
  • TLS
  • secrets

Current progress is at patchset 12 here: https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/764375

Current Situation

One parent chart: datahub
Four subcharts:

  • datahub-frontend
  • datahub-gms
  • datahub-mce-consumer
  • datahub-mae-consumer

helm lint passes at all levels.

I have pared down the templates/_containers.tpl and templates/_volumes.tpl for each subchart to only what is required, given that:

  • They will never use PHP
  • They will never emit statsd traffic

Three of the four charts use integrated prometheus exporters, and I have added the prometheus.io/port: "4318" annotation to templates/deployment.yaml

I have also pared down deployment.yaml for all subcharts to the best of my ability.

For the two consumer charts, I have removed templates/service.yaml

I can run helm install --dry-run --debug --generate-name . from the charts/datahub directory and it generates a valid configuration.

Most of the subcharts use global configuration options to generate environment variables, which is as per the datahub helm charts from linkedin: https://github.com/acryldata/datahub-helm

Our shared egress helpers for the networkpolicy configuration use the YAML anchor technique suggested by @JMeybohm above.

Desired Situation

datahub-frontend

  • Ingress: yes from the internet
  • TLS: Yes with datahub.wikimedia.org as the service name

datahub-gms

  • Ingress: yes, from the production network and analytics VLAN only
  • TLS: Yes, with datahub.discovery.wmnet as the service name

For all charts, I need to work out how to create the secrets, such as the database password and the database encryption key.

Observations

If I try to enable ingress on any of the services, I see errors such as this from the istio integration module:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1beta1", unable to recognize "": no matches for kind "Gateway" in version "networking.istio.io/v1beta1", unable to recognize "": no matches for kind "VirtualService" in version "networking.istio.io/v1beta1"]
helm.go:84: [debug] [unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1beta1", unable to recognize "": no matches for kind "Gateway" in version "networking.istio.io/v1beta1", unable to recognize "": no matches for kind "VirtualService" in version "networking.istio.io/v1beta1"]

Any advice at this point gratefully received.

Change 767506 had a related patch set uploaded (by Btullis; author: Btullis):

[analytics/datahub@wmf] Override the location of the pidfile for datahub-frontend

https://gerrit.wikimedia.org/r/767506

Change 767506 merged by Btullis:

[analytics/datahub@wmf] Override the location of the pidfile for datahub-frontend

https://gerrit.wikimedia.org/r/767506

Change 767782 had a related patch set uploaded (by Btullis; author: Btullis):

[analytics/datahub@wmf] Add new containers for the backend setup tasks

https://gerrit.wikimedia.org/r/767782

Change 767782 merged by Btullis:

[analytics/datahub@wmf] Add new containers for the backend setup tasks

https://gerrit.wikimedia.org/r/767782

I believe that my deployment-charts CR is now at a stage where it should be merged, so that I can begin working with it on the staging cluster.

https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/764375

It now works in a development environment, so with minikube configured I can run the following from the charts/datahub directory

kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=datahub
helm dep up prerequisites
helm install prerequisites ./prerequisites
helm install datahub ./

As per the original set of charts, from which I took inspiration, we can then access the web UI by running:

kubectl port-forward $(kubectl get pods --selector=app=datahub-frontend -o name) 9002:9002

Then open a browser to http://localhost:9002

Log in with username datahub and password datahub for now. We will be adding LDAP/CAS-SSO to this soon.

I've added the production data backend host names and IP addresses (where currently known) to the helmfile.d/services/datahub/values.yaml file.

Currently there is still some way to go on configuring ingress, egress, and TLS. However, I think that I need to start working with a real K8S cluster in order to progress with this.

BTullis renamed this task from Define the Kubernetes Deployments for Datahub to Define the Helm charts and helmfile deployments for Datahub.Mar 4 2022, 11:35 AM
BTullis updated the task description. (Show Details)

Moving this to in review whilst T303049: New Service Request: DataHub is being handled by the Service Ops team.

Change 769037 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Update the linting requirements to allow for local dependencies

https://gerrit.wikimedia.org/r/769037

Change 769037 merged by jenkins-bot:

[operations/deployment-charts@master] Update the linting requirements to allow for local dependencies

https://gerrit.wikimedia.org/r/769037

Change 769050 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Update helm linting again to allow local dependencies

https://gerrit.wikimedia.org/r/769050

Change 769050 merged by jenkins-bot:

[operations/deployment-charts@master] Update helm linting again to allow local dependencies

https://gerrit.wikimedia.org/r/769050

Change 764375 merged by Btullis:

[operations/deployment-charts@master] Add helm charts and a helmfile configuration for datahub

https://gerrit.wikimedia.org/r/764375

Change 776896 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Remove the egress rules from datahub-fronted to mysql

https://gerrit.wikimedia.org/r/776896

Change 776896 merged by Btullis:

[operations/deployment-charts@master] Remove the MySQL specific details from datahub-frontend

https://gerrit.wikimedia.org/r/776896

Change 776906 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Increment the chart version and allow version range matching

https://gerrit.wikimedia.org/r/776906

Change 776906 merged by Btullis:

[operations/deployment-charts@master] Increment the chart version and allow version range matching

https://gerrit.wikimedia.org/r/776906

Change 776950 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Apply kafka broker templates correctly in staging

https://gerrit.wikimedia.org/r/776950

Change 776950 merged by Btullis:

[operations/deployment-charts@master] Apply kafka broker templates correctly in staging

https://gerrit.wikimedia.org/r/776950

Change 776954 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Define the DATHUB_SECRET value

https://gerrit.wikimedia.org/r/776954

Change 776954 merged by Btullis:

[operations/deployment-charts@master] Define the DATHUB_SECRET value

https://gerrit.wikimedia.org/r/776954

Change 777348 had a related patch set uploaded (by JMeybohm; author: JMeybohm):

[operations/deployment-charts@master] Copy all helmfile-defaults to each subchart namespace

https://gerrit.wikimedia.org/r/777348

Change 777348 merged by jenkins-bot:

[operations/deployment-charts@master] Copy all helmfile-defaults to each subchart namespace

https://gerrit.wikimedia.org/r/777348

Change 777365 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Update the chart to address issues with secrets and CI

https://gerrit.wikimedia.org/r/777365

Change 777365 merged by jenkins-bot:

[operations/deployment-charts@master] Update the chart to address issues with secrets and CI

https://gerrit.wikimedia.org/r/777365

Change 777419 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Add the networkpoliy for the setups as a pre-install hook

https://gerrit.wikimedia.org/r/777419

Change 777752 had a related patch set uploaded (by Btullis; author: Btullis):

[labs/private@master] Add a dummy datahub_encryption_key value

https://gerrit.wikimedia.org/r/777752

Change 777419 merged by jenkins-bot:

[operations/deployment-charts@master] Add the networkpolicy for the setups as a pre-install hook

https://gerrit.wikimedia.org/r/777419

Change 777810 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Disable the use of SSL/TLS in datahub's MySQL connection in staging

https://gerrit.wikimedia.org/r/777810

Change 777810 merged by jenkins-bot:

[operations/deployment-charts@master] Disable the use of SSL/TLS in datahub's MySQL connection in staging

https://gerrit.wikimedia.org/r/777810

Change 777818 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Update the public ports for TLS for the datahub service

https://gerrit.wikimedia.org/r/777818

Change 777818 merged by jenkins-bot:

[operations/deployment-charts@master] Update the public ports for TLS for the datahub service

https://gerrit.wikimedia.org/r/777818

Change 777831 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Update the port number for the datahub-gms service using TLS

https://gerrit.wikimedia.org/r/777831

Change 777831 merged by jenkins-bot:

[operations/deployment-charts@master] Update the port number for the datahub-gms service using TLS

https://gerrit.wikimedia.org/r/777831

Change 778249 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Correct the GMS port number that is in use

https://gerrit.wikimedia.org/r/778249

Change 778249 merged by jenkins-bot:

[operations/deployment-charts@master] Correct the GMS port number that is in use

https://gerrit.wikimedia.org/r/778249

Change 778257 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Bump datahub version to use the containers with wmf-certicates

https://gerrit.wikimedia.org/r/778257

Change 778257 merged by jenkins-bot:

[operations/deployment-charts@master] Bump datahub version to use the containers with wmf-certicates

https://gerrit.wikimedia.org/r/778257

Change 778308 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Enable SSL/TLS for accessing the datahub-gms service

https://gerrit.wikimedia.org/r/778308

Change 778308 merged by jenkins-bot:

[operations/deployment-charts@master] Enable SSL/TLS for accessing the datahub-gms service

https://gerrit.wikimedia.org/r/778308

Change 779031 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Add the codfw LDAP server to the DataHub JAAS configuration

https://gerrit.wikimedia.org/r/779031

Change 779031 merged by jenkins-bot:

[operations/deployment-charts@master] Add the codfw LDAP server to the DataHub JAAS configuration

https://gerrit.wikimedia.org/r/779031

Marking this task as complete. There are still some minor tweaks to the charts to support the deployments, but the bulk of the work is done.

Change 779077 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Add a volume for the jaas-ldap configuration for datahub

https://gerrit.wikimedia.org/r/779077

Change 779077 merged by jenkins-bot:

[operations/deployment-charts@master] Add a volume for the jaas-ldap configuration for datahub

https://gerrit.wikimedia.org/r/779077

Change 779837 had a related patch set uploaded (by Btullis; author: Btullis):

[operations/deployment-charts@master] Ensure that the datahub consumers use TLS where required

https://gerrit.wikimedia.org/r/779837

Change 779837 merged by Btullis:

[operations/deployment-charts@master] Ensure that the datahub consumers use TLS where required

https://gerrit.wikimedia.org/r/779837

Change 777752 abandoned by Btullis:

[labs/private@master] Add a dummy datahub_encryption_key value

Reason:

No longer required

https://gerrit.wikimedia.org/r/777752