As a way to better evaluate how we might manage Dockerfiles and base images in a Docker CI world, we should experiment with the simple use case of running malu unit tests via Jenkins within a Docker container. The job could be as simple as spinning up a container using the repo's Dockerfile and executing a generic entrypoint (e.g. make test).
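Such a job could be sketched roughly like this (a hypothetical sketch only; the `malu-ci` image tag and the helper name are invented for illustration, and `make test` is just the example entrypoint mentioned above):

```shell
#!/bin/bash
# Hypothetical sketch of the generic job: build an image from the repo's
# own Dockerfile, then run the test entrypoint in a throwaway container.
set -euo pipefail

run_unit_tests() {
  local repo_dir="$1" tag="malu-ci:${2:-latest}"
  docker build -t "$tag" "$repo_dir"   # image built from the repo's Dockerfile
  docker run --rm "$tag" make test     # generic entrypoint, e.g. make test
}
```

A Jenkins shell step might invoke it as `run_unit_tests "$WORKSPACE" "$BUILD_NUMBER"`.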
Description
Status | Subtype | Assigned | Task
---|---|---|---
Resolved | | dduvall | T150501 Spike: Evaluate experimental Docker based CI w/ scap builds
Resolved | | dduvall | T150504 Define generic job that runs unit tests within a Docker container
Event Timeline
I'd be interested in experimenting with that, actually. For this first pass, though, I'd like to simply use the CloudBees Docker Build Jenkins plugin.
@mmodell, I don't have access to the Phab credential necessary to create the Harbormaster build for this. Can you create one based off the existing build? (https://phabricator.wikimedia.org/harbormaster/plan/9/) It should invoke the differential-docker-test parameterized Jenkins build in the same exact way.
Also, I think I'm going to test this using the malu project instead so that I don't cause too much scap noise.
rGMALU is now tagged with meta-ci-docker-diffs which should trigger differential-docker-test via Plan 11
First successful runs:
https://integration.wikimedia.org/ci/job/differential-docker-test/27/console – run with complete rebuild including download of base image ~ 135 seconds of overhead
https://integration.wikimedia.org/ci/job/differential-docker-test/25/console – run with image build on cached base image ~ 83 seconds of overhead
https://integration.wikimedia.org/ci/job/differential-docker-test/26/console – run with fully cached image (includes npm dependencies) ~ 6 seconds of overhead
I was running into all sorts of permission-related issues using the Jenkins plugin, since it relies on mounting the entire Jenkins home directory and temp directories and then executes docker run with a different --user than it specifies when the image is built. With that setup, it was nearly impossible to run Docker commands before the main entrypoint that would warm the NPM cache in a way that plays nicely with Docker's union-FS-based caching, and without leaving some directories/files owned by root before the main test execution.
I opted instead for a very minimal and completely generalized shell script (it doesn't use the plugin at all, though it does of course require docker) that does the following:
- Builds an image named after the Jenkins build tag based on the repo's Dockerfile.ci
- In this experiment, the pre-build cache warming and environment setup is implemented directly in the repo's Dockerfile.ci (see {D455}) but some of this could just as easily be included in a base image that we maintain for Node.js applications, one that uses our production Node.js packages, etc. Or not; it's flexible.
- The Dockerfile.ci in this case uses COPY to move all repo files into the container instead of relying on a shared volume. This approach introduces more overhead but avoids all of the permissions-related pitfalls, and it keeps the execution completely isolated.
- Runs a new container based off that image, and simply delegates to whatever ENTRYPOINT has been declared in the Dockerfile.ci
- Uses docker diff to identify any files that have been newly created and copies them back to the host workspace so they can be archived as Jenkins artifacts
- Removes the container
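The steps above could be sketched as something like the following (a hypothetical reconstruction, not the actual script; the helper names and the `artifacts` destination are invented, while `BUILD_TAG` and `WORKSPACE` are standard Jenkins environment variables):

```shell
#!/bin/bash
# Hypothetical reconstruction of the generalized CI script described above.
set -euo pipefail

# `docker diff` prints lines like "A /src/junit.xml"; "A" marks added files.
added_paths() {
  awk '$1 == "A" { print $2 }'
}

ci_build_and_test() {
  local image="${BUILD_TAG:?}" workspace="${WORKSPACE:-$PWD}"

  # 1. Build an image named after the Jenkins build tag from Dockerfile.ci
  docker build -t "$image" -f "$workspace/Dockerfile.ci" "$workspace"

  # 2. Run a container, delegating to the ENTRYPOINT declared in Dockerfile.ci
  local container
  container=$(docker create "$image")
  docker start -a "$container"

  # 3. Copy newly created files back to the workspace as Jenkins artifacts
  docker diff "$container" | added_paths | while read -r path; do
    docker cp "$container:$path" "$workspace/artifacts$path" || true
  done

  # 4. Remove the container
  docker rm "$container"
}
```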
Dan gave me a one-hour crash course in that POC and I must say *I am impressed.* It immediately addresses a bunch of caching-related issues we have (huge git repos, package managers).
I am not entirely sold on the sequence of actions and how snapshots are invalidated. Will need a bit more experimentation to get something that is rock solid.
I will try to reproduce it on my local machine, probably using MediaWiki as a test subject. We'll see.
Kudos Dan, and thanks Mukunda for the Harbormaster plan!