We've been developing the Security API as a Node service that talks to a MediaWiki extension. The service is currently architected to connect to a production Wikimedia MySQL database, while using the standard mariadb image from Docker Hub for local development and certain testing environments (via its docker-compose). In setting up a basic CI configuration for the Node service, we've run into difficulties mocking or standing up a test MySQL/MariaDB server so that tests like the swagger/mocha suite run without errors. Are there any general best practices or examples you're aware of around this architecture pattern? I believe many of the existing Wikimedia Node services talk to APIs, which is a bit different from what we're doing here, so I'm not certain how common this need is. Any suggestions or help would be appreciated.
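For reference, the local-development setup described above typically looks something like the following docker-compose sketch. Service names, ports, and credentials here are illustrative assumptions, not the repo's actual file:

```yaml
# Hypothetical docker-compose.yml for local development.
# Names, ports, and credentials are placeholders, not the
# actual security-api configuration.
version: "3.8"
services:
  security-api:
    build: .
    environment:
      MYSQL_HOST: db          # the app reaches the db service by name
      MYSQL_USER: api
      MYSQL_PASSWORD: secret
      MYSQL_DATABASE: security
    depends_on:
      - db
    ports:
      - "8080:8080"
  db:
    image: mariadb:10.6       # standard image from Docker Hub
    environment:
      MARIADB_USER: api
      MARIADB_PASSWORD: secret
      MARIADB_DATABASE: security
      MARIADB_ROOT_PASSWORD: root
```

The CI problem described above is essentially that this `db` container exists in docker-compose but has no equivalent in the Jenkins/Blubber pipeline.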
| Install and run mariadb under the blubber test variant | wikimedia/security/security-api | master | +61 -2 |
| Declined | None | T290917: New Service Request Security API Gateway |
| Resolved | Mstyles | T296346: Create Demo Environment for Security API |
| Open | None | T308789: Determine CI best practices for service which connects to MySQL |
Mentioned In:
- T344818: Allow GitLab CI containers to connect to services
- T339352: Create MySQL container in CI for integration tests
- T337714: Migrate mediawiki/services/ipoid to GitLab
- T305715: Work on mocha/swagger tests to have features appropriately mocked or otherwise passing

Mentioned Here:
- T337714: Migrate mediawiki/services/ipoid to GitLab
- T287211: Figure out the future of (or replacements for) PipelineLib in a GitLab world
Change 800801 abandoned by SBassett:
[wikimedia/security/security-api@master] Install and run mariadb under the blubber test variant
This was an experiment to figure out if this approach was possible via blubber/pipelinelib. And it's not going to be possible without creating a new WMF image.
Hey @jeena -
Thanks for the reply. So the image we're talking about here would only be used for CI and, maybe, local development, though we have a docker-compose for the latter which is at least OK for now. Given that we'd never plan to deploy this proposed image, would a helm test still make sense? GitLab would definitely allow for more flexibility with what we'd like to do, with its CI/CD functionality, but we aren't quite ready to migrate the service there, at least not until the blubber/pipelinelib/image publishing stuff (T287211) has been decided. So a new node12-with-mysql/mariadb image may still make the most sense? I haven't ever built a new image for docker-registry.wikimedia.org, but I can get a (likely wrong) Gerrit changeset posted soon. Do you know if something like that should live under integration/config or releng/dev-images?
I think the reason for helm test would be that you wouldn't have to create a new image: you could just use an existing mariadb helm chart as a subchart. But if you do want to go the route of building a node image with mariadb, then I think integration/config would be the best place for it, and also the easiest way to get it published.
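For context, pulling in an existing mariadb chart as a subchart would roughly mean declaring a dependency in the service chart's Chart.yaml, something like the sketch below. The chart version and repository URL are illustrative examples, not a vetted recommendation:

```yaml
# Sketch of a Chart.yaml declaring a mariadb subchart dependency.
# Version and repository here are examples only.
apiVersion: v2
name: security-api
version: 0.1.0
dependencies:
  - name: mariadb
    version: "11.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: mariadb.enabled   # lets you disable the subchart outside of tests
```

The `condition` field is the usual way to keep the database subchart test-only, so a production deploy of the same chart wouldn't drag in a bundled mariadb.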
What's your timeline? I ask because we are very close to having an image build and production registry publishing system working in GitLab (just waiting on 2 ops/puppet patches in review). I've updated T287211 to better reflect our recent progress during Release-Engineering-Team (GitLab-a-thon 🦊) if that affects your decision on when to migrate, and I believe a GitLab CI service is the best option for what you're wanting to do.
That said, the helm test approach is possible right now within our current PipelineLib based system. It would involve what @jeena already mentioned, namely getting a general mariadb image published to docker-registry.wikimedia.org and developing a chart for (test) deployment to the ci-staging k8s namespace by using PipelineLib's deploy action and deploy.test. Either of us can help with the config and chart development if you want to go that route.
I do think that developing a chart for your application solely for testing is a lot of extra work, but it is the route open to you right now. The GitLab CI way will hopefully be ready soon. :)
Well, we have a product demo scheduled for this Tuesday, June 14th, so that's likely a bit too soon for GitLab :) Anyhow, we can plan for the GitLab option going forward; I was just wishing there was a quick solution for our current gerrit/jenkins/blubber reality. I had hoped we could just create a new node12-devel-mariadb image to use as the image for the test variant within the service's blubber.yaml, so that we could get our basic tests passing in CI (which are currently failing when the service attempts to connect to a non-existent mariadb) — one that kept the final part of the entrypoint as npm test. But it sounds like that's not really possible or advised?
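For reference, the idea above would have amounted to a test variant in blubber.yaml roughly like the following. The node12-devel-mariadb base image named here is the hypothetical image under discussion; it does not exist on docker-registry.wikimedia.org:

```yaml
# Hypothetical blubber.yaml test variant. The base image below is
# the proposed (never published) node + mariadb image, not a real
# entry in docker-registry.wikimedia.org.
version: v4
variants:
  test:
    base: docker-registry.wikimedia.org/node12-devel-mariadb
    node:
      requirements: [package.json, package-lock.json]
    entrypoint: [npm, test]
```

As the abandoned change 800801 showed, installing mariadb inside the test variant at build time wasn't workable under blubber/pipelinelib without a new WMF base image like this.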
@sbassett, is using helm test as part of the pipeline still desirable for this?
GitLab is probably capable of running this at this point given we've implemented pipelinelib functionality there. However, it would still need a test image. The integration/config repo probably still makes sense as the place to add that image.
Hey @thcipriani - There are some changes in-flight for this project/repo and I'm hopeful that it will migrate over to GitLab sooner rather than later. In that case, I believe we could likely leverage a number of different debian-node-mysql images external to Wikimedia's image registry. That would likely be sufficient for anything that needed to run in CI for this code, and would likely be simpler than going down the path of helm, etc. Though I'm not sure if that's recommended or condoned by Release Engineering, SRE, etc. Anyhow, I've added a few more engineers to this task as I believe Anti-Harassment has committed resources for much of this new engineering work.
@thcipriani helm test might be nice, but I think the primary use case is making sure that integration tests in CI for patches to ipoid can verify that e.g. data import and data retrieval works as expected when the application is connected to a MySQL instance.
That use case (integration tests in application container able to connect to a database container) seems like an important one to support and document. What type of support can RelEng offer for this?
> However, it would still need a test image. The integration/config repo probably still makes sense as the place to add that image.
Could RelEng create the test image?
So, I realize my suggestions above (using helm test + publishing a test image via integration/config) are based on the gerrit way of doing this type of testing.
The idea would be:
- Build and publish your mysql image to GitLab's shared registry using kokkuri in gitlab-ci (assuming you have access to what you need for this image at build time).
- Build and run the ipoid image as part of your CI job.
- Use a service in your gitlab-ci.yml to spin up the container from step 1, giving your ipoid image access to it as a networked service for integration testing.
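The steps above would translate to a .gitlab-ci.yml along these lines. Image paths, variable names, and the test command are illustrative assumptions, not the actual ipoid configuration:

```yaml
# Sketch of a GitLab CI job using the services: key to give the
# application container a networked database for integration tests.
# Registry paths, credentials, and the npm script are placeholders.
integration-test:
  image: "$CI_REGISTRY_IMAGE/ipoid:$CI_COMMIT_SHA"   # built in an earlier job
  services:
    - name: "$CI_REGISTRY_IMAGE/mysql:latest"        # image published in step 1
      alias: db                                      # reachable as host "db"
  variables:
    MYSQL_HOST: db
    MYSQL_USER: test
    MYSQL_PASSWORD: test
    MYSQL_DATABASE: ipoid_test
  script:
    - npm run test:integration
```

The `services:` key starts the database container alongside the job container and exposes it over the job's network under the given alias, which is exactly the pairing docker-compose provides locally.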
Does something like that sound like it would work for you all? This pattern would be a new one for us, but it fits a need formerly only available (badly) via helm test in pipelinelib.
That sounds like it would work. In the short term, I don't think we have any integration tests that need access to MySQL for reading/writing. Once we are ready to add those, then we could follow up with the steps you've outlined above. The services: key sounds like it does exactly what we'd want. I think a next step here is T337714: Migrate mediawiki/services/ipoid to GitLab and then we could mark this task as stalled until we decide we need access to a MySQL instance in CI to support integration testing.