
Migrate analytics/refinery/source release jobs to Docker
Closed, Resolved · Public

Description

For analytics/refinery/source, we have a couple of jobs to automate releases to Archiva. They should be migrated to Docker containers.

Potentially, we could use the same mechanism for other Maven-based repositories, which @Gehel suggested a while back.

Event Timeline

fdans lowered the priority of this task from High to Medium. Dec 3 2018, 5:56 PM

The job has to be refactored and generalized so we can use it on other maven repositories. mwdumper is one such use case ( T213874#4883586 ).

I haven't started looking at it yet; I am dealing with some other tasks on Continuous-Integration-Infrastructure (Slipway).

For clarity, this needs the owner of the job (not sure which bit of Analytics?) to commit to re-writing it into stretch- or buster-compatible code running inside a Docker container on our CI infrastructure, or to be OK with us deleting it and them running it somewhere else.

Ah, I don't think we were aware there was action needed on our part, sorry! Moving this out of radar for re-grooming.

@JAllemandou I guess we should pair on it? :) I am not familiar with those jobs, but it should not be too complicated. A potentially interesting outcome is that the work could be reused for other Maven-based repositories.

The jessie boxes this tries to run on have now been disabled. You will need to port this job for it to work again, sorry.

Thanks @Jdforrester-WMF for the warning.
@hashar I'm sorry for missing your ping :) I'm interested in pairing on that indeed! I also think the job shouldn't be too complicated.
Let's see how we can organize on this next week.

@Jdforrester-WMF ok...seems a bit harsh! This job is essential for weekly deployments of Hadoop jobs. I know this task has been around for a while, but AFAIK Analytics Eng was given no timeline or urgency about it. @Nuria @JAllemandou we need to do this ASAP.

Thanks @Jdforrester-WMF for the warning.
@hashar I'm sorry for missing your ping :) I'm interested in pairing on that indeed! I also think the job shouldn't be too complicated.
Let's see how we can organize on this next week.

+1 :)

I guess my main trouble is that I don't have much idea of what this job is achieving or what we could do to fit it in a Docker container. But a few pairing sessions should be able to solve that.

The jessie boxes this tries to run on have now been disabled. You will need to port this job for it to work again, sorry.

Sorry, but we need to have those jobs enabled again. We got no warning this work was needed, and without them, to be clear, we cannot deploy any software to Hadoop; they are pretty critical. Let's please restore those jobs and be aware that we need much better communication about changes such as these. The best time for us to pick up a project we were not counting on is not when a significant part of the team has a reduced schedule.

cc @Jdforrester-WMF cc @greg

The jessie boxes this tries to run on have now been disabled. You will need to port this job for it to work again, sorry.

Sorry but we need to have those jobs enabled again. We got no warning this work was needed

That's not true, sorry.

https://www.mediawiki.org/wiki/Scrum_of_scrums/2020-02-05#Analytics

https://www.mediawiki.org/wiki/Scrum_of_scrums/2020-02-12#Analytics

https://www.mediawiki.org/wiki/Scrum_of_scrums/2020-02-19#Analytics

Also mentioned in IRC repeatedly by me (and mentioned at All Hands).

and without them, to be clear, we cannot deploy any software to Hadoop; they are pretty critical. Let's please restore those jobs and be aware that we need much better communication about changes such as these. The best time for us to pick up a project we were not counting on is not when a significant part of the team has a reduced schedule.

I can re-enable the boxes for now, but it'll be brief; they were meant to be deleted at the end of December and we negotiated an extension. The deadline is set by WMCS, not us.

Sounds like a communication breakdown! Do any Analytics Engineers go to SoS? Not sure, I don't think so. None of those links mention a deadline. We were aware of this task; however, it would have been nice to have a deadline mentioned and negotiated here.

Sounds like a communication breakdown! Do any Analytics Engineers go to SoS? Not sure, I don't think so. None of those links mention a deadline. We were aware of this task; however, it would have been nice to have a deadline mentioned and negotiated here.

Sorry. Negotiated in TechMgmt and reflected in grandparent task, T236576. Fair point that Analytics have other things to focus on than the intricacies of SRE/WMCS decommissioning work. :-)

Hey y'all,

tl;dr: Sorry about the miscommunication. The deprecated instance will be restored but we need to move quickly to migrate it.

Next thing for us, to prevent this happening again: how can we make sure that communication voids don't happen in the future? We normally use Scrum of Scrums to let other teams (e.g. SRE, CPT, Product teams) know that we're waiting on them (or when they're waiting on us). Looks like in this case just pinging on the task itself would have been preferred.

Let's work on a time to pair on this (see @hashar's comments above) in the near future.

JAllemandou raised the priority of this task from Medium to High. Mar 23 2020, 4:48 PM

The deprecated instance will be restored but we need to move quickly to migrate it.

Thank you. What is the timeframe?

Next thing for us, to prevent this happening again: how can we make sure that communication voids don't happen in the future?

Since there is a task, we would have expected a deadline to be communicated on the task rather than elsewhere; let's proceed going forward assuming that communication about issues happens in Phab.

From the call with @JAllemandou: we have investigated what the job is doing and made good progress.

Maven does most of the magic via release:prepare and release:perform.
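
For reference, a minimal sketch of those two goals run directly (assuming mvn is on the PATH and the pom.xml has an <scm> section configured):

# sketch: prepare bumps the pom.xml versions, commits and tags
mvn --batch-mode release:prepare
# perform checks out the tag and deploys the artifacts (to Archiva in our case)
mvn --batch-mode release:perform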

The job requires an ssh credential in order to push commits to Gerrit. In Jenkins it's stored in the credential store and exposed to the build using the ssh-agent plugin (which spawns an ssh-agent for us). We can keep the agent on the host and expose the agent socket into the container by using a volume mount:

docker run --rm -it \
   -v "$SSH_AUTH_SOCK":/ssh-agent \
   --env SSH_AUTH_SOCK=/ssh-agent \
   docker-registry.wikimedia.org/releng/java8:latest release:perform

The Archiva credentials are stored in the Jenkins configfiles plugin, which fetches credentials from the credentials plugin and forges a Maven settings.xml file holding them. Again, we can expose that file from the host to the container via a volume mount and use mvn --settings to have Maven read it. If we configure the Jenkins plugin to write the file to archiva-credentials/settings.xml, we can thus do:

docker run \
  -v "$WORKSPACE/archiva-credentials":/archiva-credentials \
  docker-registry.wikimedia.org/releng/java8:latest \
  -s /archiva-credentials/settings.xml release:perform
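
For illustration only, roughly what such a settings.xml could contain when written by hand for testing; the server id and username here are made up and would have to match the <distributionManagement> entries in the refinery pom.xml and the real Archiva account:

mkdir -p "$WORKSPACE/archiva-credentials"
cat > "$WORKSPACE/archiva-credentials/settings.xml" <<'EOF'
<settings>
  <servers>
    <!-- hypothetical server id; must match the pom's distributionManagement -->
    <server>
      <id>archiva.releases</id>
      <username>archiva-deploy</username>
      <password>********</password>
    </server>
  </servers>
</settings>
EOF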

Additional notes:

  • The Maven release plugin has a release goal (release:perform) and a dry-run mode which uses -DdryRun=true release:prepare (see the sketch after this list). Maybe we would need a tick box to switch between those, or they could be two different jobs.
  • There is an update-jars job which runs a script from the repository: ./bin/update-refinery-source-jars. That is to update the analytics/refinery repository.
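
As sketched below (not the final job definition), the dry-run variant could reuse the same container, assuming the image entrypoint is mvn as in the examples above:

# -DdryRun=true makes release:prepare simulate the commits and the tag without pushing
docker run --rm \
  -v "$WORKSPACE/archiva-credentials":/archiva-credentials \
  docker-registry.wikimedia.org/releng/java8:latest \
  -s /archiva-credentials/settings.xml -DdryRun=true release:prepare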

Change 583092 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] jjb: migrate refinery release job to Docker

https://gerrit.wikimedia.org/r/583092

The whole process is extensively described at https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Deploy/Refinery-source

The release is done via the Jenkins Maven Release plugin, which asks for a release and a development version. They have to be manually filled in at https://integration.wikimedia.org/ci/job/analytics-refinery-release/m2release/ , which then triggers the build with three extra parameters. For the last build:

MVN_RELEASE_VERSION  0.0.119
MVN_DEV_VERSION      0.0.120-SNAPSHOT
MVN_ISDRYRUN         (checkbox)

mvnrelease.png (588×612 px, 45 KB)

The deprecated instance will be restored but we need to move quickly to migrate it.

Thank you. What is the timeframe?

From the grandparent task:
"All instances in the integration project need to upgrade as soon as possible. Instances not upgraded by 2019-12-31 may be subject to deletion unless prior arrangements for an extended deadline has been approved by the Cloud VPS administration team."

That ^ was from WMCS. But we negotiated an extension until the end of Q3. If we ask nicely again we *might* be able to get another extension, but truly, this must be done ASAP.

One nit: On the screen above, we tick the Specify custom SCM tag box and modify the default value by hand

One nit: On the screen above, we tick the Specify custom SCM tag box and modify the default value by hand

I have noticed that in the refinery documentation:

Change refinery-x.y.z to vx.y.z in the "SCM tag" input text-box and update the number. Example: refinery-0.0.40 is bad, v0.0.40 is good

Looks like we should be able to save you that manual step. The plugin (since version 2.2.0, refinery uses 2.5.1) has the option tagNameFormat to change the tag format!

https://maven.apache.org/maven-release/maven-release-plugin/prepare-mojo.html

tagNameFormat:

Format to use when generating the tag name if none is specified. Property interpolation is performed on the tag, but in order to ensure that the interpolation occurs during release, you must use @{...} to reference the properties rather than ${...}. The following properties are available:

groupId or project.groupId - The groupId of the root project.
artifactId or project.artifactId - The artifactId of the root project.
version or project.version - The release version of the root project.

Type: java.lang.String
Since: 2.2.0
Required: No
User Property: tagNameFormat
Default: @{project.artifactId}-@{project.version}

So I guess we can change it:

- @{project.artifactId}-@{project.version}
+ v@{project.version}
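
For what it's worth, since tagNameFormat is a user property, the same effect could probably also be obtained without touching pom.xml by passing it on the command line (sketch only):

# override the tag format for this invocation; @{...} is interpolated by the release plugin
mvn --batch-mode -DtagNameFormat='v@{project.version}' release:prepare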

Looks like we should be able to save you that manual step.

<3 @hashar

Bah, forget me, the refinery pom.xml already has the proper configuration. I guess the Jenkins Maven Release plugin does not recognize that option and always offers the default of @{project.artifactId}-@{project.version}.

I guess we can add build parameters MVN_RELEASE_VERSION and MVN_DEV_VERSION to allow one to override the version if need be, but otherwise leave them unset and let Maven figure out the next version.
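
A rough sketch of how those optional parameters could be mapped onto the release plugin properties (releaseVersion and developmentVersion are standard user properties of release:prepare; when the Jenkins parameters are empty, Maven figures out the next versions itself):

MVN_ARGS=""
# only pass the properties when the Jenkins parameters are set
[ -n "${MVN_RELEASE_VERSION:-}" ] && MVN_ARGS="$MVN_ARGS -DreleaseVersion=$MVN_RELEASE_VERSION"
[ -n "${MVN_DEV_VERSION:-}" ] && MVN_ARGS="$MVN_ARGS -DdevelopmentVersion=$MVN_DEV_VERSION"
mvn --batch-mode $MVN_ARGS release:prepare release:perform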

Change 583356 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] docker: bump java and add ssh to java8

https://gerrit.wikimedia.org/r/583356

Change 583356 merged by jenkins-bot:
[integration/config@master] docker: bump java and add ssh to java8

https://gerrit.wikimedia.org/r/583356

Change 583374 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] dockerfiles: java8: add ssh host key for Gerrit

https://gerrit.wikimedia.org/r/583374

Change 583374 merged by jenkins-bot:
[integration/config@master] dockerfiles: java8: add ssh host key for Gerrit

https://gerrit.wikimedia.org/r/583374

I have tried the job, and exposing the SSH agent socket inside the container does not work. The agent runs on the host as the jenkins-slave user and is only readable by that user. When Docker volume-mounts the file inside the container, it carries the UID and file permissions. The container has the process running as nobody, which cannot access the socket ...

Change 583392 had a related patch set uploaded (by Hashar; owner: Hashar):
[operations/puppet@production] contint: add acl package for file permissions tweak

https://gerrit.wikimedia.org/r/583392

So I have played a bit more with the job. Sharing the ssh agent from the host into the container does not work: the agent checks whether the client connecting has the same uid and refuses the connection otherwise:

Start an agent in foreground with debug mode:

jenkins$ ssh-agent -a agent.sock -d

Load a key and grant read/write access to the nobody user:

jenkins$ SSH_AUTH_SOCK=agent.sock ssh-add id_rsa
jenkins$ setfacl -m user:65534:rw agent.sock

As the nobody user, the identity can not be fetched:

nobody$ SSH_AUTH_SOCK=agent.sock ssh-add -l
error fetching identities for protocol 2: communication with agent failed
The agent has no identities.

The debug log in the ssh-agent shows:

uid mismatch: peer euid 65534 != uid 2947

:-(

Change 583392 abandoned by Hashar:
contint: add acl package for file permissions tweak

https://gerrit.wikimedia.org/r/583392

Change 583392 restored by Hashar:
contint: add acl package for file permissions tweak

Reason:
Turns out I actually need setfacl :)

https://gerrit.wikimedia.org/r/583392

Instead of using ssh-agent and doing Gerrit access over ssh, I have switched to pushing over https, with the Gerrit HTTP credential stored in Jenkins in .netrc format.

Jenkins then creates a temporary file that has that content and exposes the filename in the NETRC_FILE environment variable. Then it is all about using:

docker run -v "$NETRC_FILE":/nonexistent/.netrc

The credentials for Archiva are in the config file provider; the file is owned by the Jenkins user with only user and group access. To grant access to the nobody user, I went with applying a custom file ACL granting it read access, then bind-mounting the file:

setfacl -m user:65534:r archiva-credentials.xml
docker run -v "$PWD/archiva-credentials.xml":/archiva-credentials.xml
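
Side note: the extra ACL entry is easy to miss afterwards; ls only hints at it with a trailing '+' while getfacl shows the full list:

ls -l archiva-credentials.xml     # mode string ends with '+' when an ACL is present
getfacl archiva-credentials.xml   # lists the extra entry granting read access to uid 65534 (nobody)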

And surprisingly, it works! The first build managing to run maven is https://integration.wikimedia.org/ci/job/analytics-refinery-maven-release-docker/31/console though it fails for some reason.

Change 583683 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] docker: java8 should checkout in a branch

https://gerrit.wikimedia.org/r/583683

Change 583683 merged by jenkins-bot:
[integration/config@master] docker: java8 should checkout in a branch

https://gerrit.wikimedia.org/r/583683

I have instructed the Maven release plugin to use https with:

mvn -Dproject.scm.developerConnection=scm:git:https://maven-release-user@gerrit.wikimedia.org/r/analytics/refinery/source

The build itself seems to work until release:prepare attempts to push. The console log shows:

[INFO] Checking in modified POMs...
[INFO] Executing: /bin/sh -c cd /src && git add -- pom.xml refinery-core/pom.xml refinery-spark/pom.xml refinery-tools/pom.xml refinery-hive/pom.xml refinery-job/pom.xml refinery-camus/pom.xml refinery-cassandra/pom.xml
[INFO] Working directory: /src
[INFO] Executing: /bin/sh -c cd /src && git rev-parse --show-toplevel
[INFO] Working directory: /src
[INFO] Executing: /bin/sh -c cd /src && git status --porcelain .
[INFO] Working directory: /src
[INFO] Executing: /bin/sh -c cd /src && git commit --verbose -F /tmp/maven-scm-140771911.commit pom.xml refinery-core/pom.xml refinery-spark/pom.xml refinery-tools/pom.xml refinery-hive/pom.xml refinery-job/pom.xml refinery-camus/pom.xml refinery-cassandra/pom.xml
[INFO] Working directory: /src
[INFO] Executing: /bin/sh -c cd /src && git symbolic-ref HEAD
[INFO] Working directory: /src
[INFO] Executing: /bin/sh -c cd /src && git push ssh://maven-release-user@gerrit.wikimedia.org:29418/analytics/refinery/source refs/heads/master:refs/heads/master
[INFO] Working directory: /src
[INFO] ------------------------------------------------------------------------

I went with a terrible attempt at being able to override the property: https://gerrit.wikimedia.org/r/#/c/analytics/refinery/source/+/583714/

Finally a successful build! https://integration.wikimedia.org/ci/job/analytics-refinery-maven-release-docker/38/console

I have built it using the unmerged change https://gerrit.wikimedia.org/r/#/c/analytics/refinery/source/+/583714/ and maven crafted its commits on top of it, made a v0.0.120 tag and pushed. That has caused Gerrit to consider my pending change to be merged and to close the change.

But at least the job seems to be working now!

Change 584923 had a related patch set uploaded (by Hashar; owner: Hashar):
[analytics/refinery/source@master] document -DdeveloperConnection

https://gerrit.wikimedia.org/r/584923

The job seems to be working fine. @JAllemandou is going to use the job this week and confirm that everything works as expected.

The doc at https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Deploy/Refinery-source will need some minor updates. Notably:

  • the version is no longer needed; Maven figures it out
  • the parameter names have changed (GIT_PROJECT → ZUUL_PROJECT).

Change 583392 merged by Dzahn:
[operations/puppet@production] contint: add acl package for file permissions tweak

https://gerrit.wikimedia.org/r/583392

@hashar noted, thanks for the quick turnaround and help

Change 584984 had a related patch set uploaded (by Hashar; owner: Hashar):
[analytics/refinery@master] Skip fetching commit hook when it is already present

https://gerrit.wikimedia.org/r/584984

Change 584998 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] docker: container for refinery jar updater

https://gerrit.wikimedia.org/r/584998

A couple of things I'd like to add:

  1. Having CI upload to the git repo is problematic from a security point of view. Simply put, compromise of a single CI job run means arbitrary code insertion into the repo and then arbitrary code execution on the next run. Yes, in the context of the CI run, but by then we are talking about being one hop away from local root (a kernel exploit). You can assume that will be chained, as chained exploits are the norm these days, not the exception.
  2. Exposing a socket for communication between the host and the container is problematic. The container can now talk to a process on the host and attempt to exploit it, allowing it to escape the container. From that point on it is one hop away from local root, as above.
  3. Do I understand correctly that setfacl -m user:65534:rw "$SSH_AUTH_SOCK", which is listed in https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/583392/, has been run manually? That it is not documented or added in any configuration repo? Essentially meaning that this isn't reproducible?
  4. POSIX ACLs are difficult to inspect with standard tools and thus generally unused. Yes, ls has support for displaying their existence, and getfacl and setfacl exist, yet their use is not widespread. The reason is that a single '+' in ls output is very easily overlooked, making it likely that it will take a long time for someone to notice why something works (or is broken).
  5. Messing with the default permissions of the ssh-agent socket is not prudent. There is a reason it's restricted to the same user. Having arbitrary code talk to that socket (yes, please assume that arbitrary code will talk to it) violates the internal assumptions of ssh-agent.

I would urge avoiding the above pattern and instead pushing the result of the build to a different repo that is not under CI, to mitigate point 1 above. I would also recommend populating a file and environment variables with the credentials instead of the ssh-agent trickery, to mitigate points 2, 3, 4 and 5 above.

Change 584923 merged by jenkins-bot:
[analytics/refinery/source@master] document -DdeveloperConnection

https://gerrit.wikimedia.org/r/584923

I am working on it with @Joal providing the java/maven/refinery expertise :]

A couple of things I'd like to add:

  1. Having CI upload to the git repo is problematic from a security point of view. Simply put, compromise of a single CI job run means arbitrary code insertion into the repo and then arbitrary code execution on the next run. Yes, in the context of the CI run, but by then we are talking about being one hop away from local root (a kernel exploit). You can assume that will be chained, as chained exploits are the norm these days, not the exception.
  2. Exposing a socket for communication between the host and the container is problematic. The container can now talk to a process on the host and attempt to exploit it, allowing it to escape the container. From that point on it is one hop away from local root, as above.
  3. Do I understand correctly that setfacl -m user:65534:rw "$SSH_AUTH_SOCK", which is listed in https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/583392/, has been run manually? That it is not documented or added in any configuration repo? Essentially meaning that this isn't reproducible?
  4. POSIX ACLs are difficult to inspect with standard tools and thus generally unused. Yes, ls has support for displaying their existence, and getfacl and setfacl exist, yet their use is not widespread. The reason is that a single '+' in ls output is very easily overlooked, making it likely that it will take a long time for someone to notice why something works (or is broken).
  5. Messing with the default permissions of the ssh-agent socket is not prudent. There is a reason it's restricted to the same user. Having arbitrary code talk to that socket (yes, please assume that arbitrary code will talk to it) violates the internal assumptions of ssh-agent.

I would urge avoiding the above pattern and instead pushing the result of the build to a different repo that is not under CI, to mitigate point 1 above. I would also recommend populating a file and environment variables with the credentials instead of the ssh-agent trickery, to mitigate points 2, 3, 4 and 5 above.

The previous CI job did use an ssh-agent spawned by the Jenkins agent and loaded with credentials. Until I found out ssh-agent checks the uid of the client, that is what the commit message reflects in https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/585038/ . I then went with a netrc file provided by Jenkins and used setfacl to make it readable by the container, pushing over https.

That being said, all your points equally apply to that netrc file. It is readable by the executed process.

The release itself is being done by the Maven release plugin via the release:prepare and release:perform goals. They craft a first commit to set the version in pom.xml and create a git tag (5fe8cb57c90ed28b4179d7caf561cc5143fec69d) and then a second commit for the next version ( 98fddef2b57a21c03d6d1a0dd7038ea9cb99afcb ).

I guess we could look at passing a parameter to have it not do any git push operation, then use another container or job that would have the writable credentials, and thus promote from there. It is still not ideal though; another build might still have access to the credentials :-/
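
If we explore that route, release:prepare has a pushChanges flag (since plugin version 2.1) that keeps the release commits and tag local, so a separate, credentialed job could push them afterwards; a minimal sketch:

# prepare the release locally without pushing the commits or the tag
mvn --batch-mode -DpushChanges=false release:prepare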

A couple of things I'd like to add:

  1. Having CI upload to the git repo is problematic from a security point of view. Simply put, compromise of a single CI job run means arbitrary code insertion into the repo and then arbitrary code execution on the next run. Yes, in the context of the CI run, but by then we are talking about being one hop away from local root (a kernel exploit). You can assume that will be chained, as chained exploits are the norm these days, not the exception.
  2. Exposing a socket for communication between the host and the container is problematic. The container can now talk to a process on the host and attempt to exploit it, allowing it to escape the container. From that point on it is one hop away from local root, as above.
  3. Do I understand correctly that setfacl -m user:65534:rw "$SSH_AUTH_SOCK", which is listed in https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/583392/, has been run manually? That it is not documented or added in any configuration repo? Essentially meaning that this isn't reproducible?
  4. POSIX ACLs are difficult to inspect with standard tools and thus generally unused. Yes, ls has support for displaying their existence, and getfacl and setfacl exist, yet their use is not widespread. The reason is that a single '+' in ls output is very easily overlooked, making it likely that it will take a long time for someone to notice why something works (or is broken).
  5. Messing with the default permissions of the ssh-agent socket is not prudent. There is a reason it's restricted to the same user. Having arbitrary code talk to that socket (yes, please assume that arbitrary code will talk to it) violates the internal assumptions of ssh-agent.

I would urge avoiding the above pattern and instead pushing the result of the build to a different repo that is not under CI, to mitigate point 1 above. I would also recommend populating a file and environment variables with the credentials instead of the ssh-agent trickery, to mitigate points 2, 3, 4 and 5 above.

The previous CI job did use an ssh-agent spawned by the Jenkins agent and loaded with credentials. Until I found out ssh-agent checks the uid of the client, that is what the commit message reflects in https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/585038/

I am having trouble figuring out from the commit message of that change how this is related (it talks about CSP headers and video). Am I missing something?

Also, should I take from what you say above that ssh-agent sharing is no longer in use? That is, the comment of https://gerrit.wikimedia.org/r/#/c/583392/ is no longer applicable? (despite the content of the change still applying).

I then went with a netrc file provided by Jenkins and used setfacl to make it readable by the container, pushing over https.

/me curious. Which programs use setfacl specifically? Got a link to a change?

That being said, all your points equally apply to that netrc file. It is readable by the executed process.

Not really, or at least not all of my points. A .netrc file is not a gateway that allows communication between the container and the host in the same way that a bind mounted socket is (not to mention that it can be mounted readonly as well). In fact, if the ssh-agent is no longer used, half of my points are no longer applicable.

The release itself is being done by the Maven release plugin via the release:prepare and release:perform goals. They craft a first commit to set the version in pom.xml and create a git tag (5fe8cb57c90ed28b4179d7caf561cc5143fec69d) and then a second commit for the next version ( 98fddef2b57a21c03d6d1a0dd7038ea9cb99afcb ).

I guess we could look at passing a parameter to have it not do any git push operation, then use another container or job that would have the writable credentials, and thus promote from there. It is still not ideal though; another build might still have access to the credentials :-/

My issue isn't with the credentials leaking so much but rather with the fact that those credentials can be used to push commits to the git repo itself (at least that's my reading of the commits you posted above, let me know if I am wrong). That approach creates a feedback loop between CI and the git repo and as such allows attacks with the potential to pollute the repo and execute arbitrary code in multiple places, including the CI infrastructure. A simple workaround is to have it create changes that are reviewed by a human and merged with their approval. That workaround would also mitigate the impact in case those credentials leaked (as a human would have to approve repo-modifying actions).

Mentioned in SAL (#wikimedia-releng) [2020-04-02T13:49:41Z] <hashar> Add Jenkins bot wmf-insecte to #wikimedia-analytics # T210271

Change 585497 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] jjb: irc notification for analytics jobs

https://gerrit.wikimedia.org/r/585497

Change 584984 merged by Joal:
[analytics/refinery@master] Fetch commit hook over https and skip if already present

https://gerrit.wikimedia.org/r/584984

The refinery-source-release job has been successful today. Our deployment doc is updated :)
Still the refinery-update-jars job to go and we're done!

Change 583092 merged by jenkins-bot:
[integration/config@master] jjb: migrate refinery release job to Docker

https://gerrit.wikimedia.org/r/583092

Change 585497 merged by jenkins-bot:
[integration/config@master] jjb: irc notification for analytics jobs

https://gerrit.wikimedia.org/r/585497

Closing, thanks everyone for the prompt responses.

Unfortunately only half of this is done, sorry. :-(

Ah, I see, the update-jars job is not done.

A couple of patches in the refinery repo got merged earlier today.

I still have to test the container (https://gerrit.wikimedia.org/r/#/c/integration/config/+/584998/) and craft the associated JJB job. I have been sidetracked with other duties today though.

Change 589589 had a related patch set uploaded (by Hashar; owner: Hashar):
[integration/config@master] Port analytics-update-jars to Docker

https://gerrit.wikimedia.org/r/589589

Change 584998 merged by jenkins-bot:
[integration/config@master] docker: container for refinery jar updater

https://gerrit.wikimedia.org/r/584998

Change 589589 merged by jenkins-bot:
[integration/config@master] Port analytics-update-jars to Docker

https://gerrit.wikimedia.org/r/589589

Paired with Joseph, and the last bit was completed a minute or so ago (switching over the update-jars job).

We can probably reuse a lot of that logic for other Maven-based repositories.

Thank you a lot @hashar for making us move to the newer system :)
