
Evaluate how we might run buildpack's `pack` in CI
Closed, DeclinedPublic

Description

See T265685: Set up CI for cloud/toolforge/buildpacks repository for context.

TL;DR: cloud-services-team needs to run pack in CI to verify their builder configuration. However, it needs access to dockerd to function. I suggested one option might be to give it access to the dockerd socket via a bind mount. Before doing that, we'll need to verify that the configuration can't contain anything that would allow for command injection.
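For reference, the bind-mount approach under discussion would look something like the following. This is only a sketch: the app name, builder image, and path are placeholders, not our actual configuration.

```shell
# Illustrative only: expose the host's Docker socket to a CI container
# so that `pack` inside it can drive the host daemon. The builder and
# app names below are placeholders.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  buildpacksio/pack:latest \
  build my-test-app --builder example/builder:latest --path .
```

This is exactly the arrangement whose security implications need evaluating: the container gets full access to the host's Docker daemon.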

Event Timeline

give it access to the dockerd socket via a bind mount

Definitely not! The Docker daemon runs as root and takes instructions through the socket. As far as I am aware, there is no access-control list, and access to the socket grants full access to the daemon and hence root privileges on the host machine.
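To make the risk concrete: anyone who can reach the socket can ask the daemon to start a container with the host's filesystem mounted, which is the well-known one-liner to a root shell on the host.

```shell
# Classic illustration of why socket access equals root on the host:
# mount the host's root filesystem into a container and chroot into it.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# The shell that results is effectively a root shell on the host.
```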

The only sane way to grant such access is by starting a new isolated virtual machine with Docker running inside it and executing the commands inside that VM, similar to how we did it with Nodepool back in the day (we booted a new VM for each job).

The only implementation we have is the one for Fresh, a Docker wrapper to run npm in isolation. Its test suite does require running containers, which has been done by booting Qemu on the CI agent using a custom image that has Docker installed; see:

That is the concept we should use (an empty Docker daemon in a dedicated, disposable, isolated VM). The current implementation is fine for a single project (Fresh), but it should be considered a proof of concept and is definitely not sustainable at scale.
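As a rough sketch of that concept (the image name and resource sizes here are hypothetical, not the Fresh setup verbatim): boot a throwaway VM from a base image with Docker pre-installed, using QEMU's snapshot mode so every run starts from the same clean state.

```shell
# Hypothetical disposable-VM job: -snapshot makes all disk writes
# ephemeral, so each CI run gets a pristine Docker daemon.
qemu-system-x86_64 \
  -m 2048 -smp 2 \
  -snapshot \
  -drive file=ci-docker-base.qcow2,format=qcow2 \
  -nographic
# The job then runs its Docker-dependent commands inside the guest,
# and the whole machine is thrown away afterwards.
```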

Off the top of my head, we have a few other use cases that would require CI jobs to have access to a Docker daemon:

  • build an image based on a Dockerfile in order to verify it builds properly
  • running our docker-pkg tool to ensure image definitions do result in valid images
  • leverage Docker Compose to bring up a testing environment
  • Puppet acceptance tests with Beaker

There are definitely needs, but we don't have the infrastructure for it yet.
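The first use case on that list, for instance, amounts to a CI step along these lines (a sketch; the tag naming is arbitrary):

```shell
# Hypothetical CI step: prove the Dockerfile builds from scratch,
# then clean up so the agent does not accumulate images.
set -e
docker build --pull --no-cache -t ci-verify:candidate .
docker image rm ci-verify:candidate
```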

give it access to the dockerd socket via a bind mount

Definitely not! The Docker daemon runs as root and takes instructions through the socket. As far as I am aware, there is no access-control list, and access to the socket grants full access to the daemon and hence root privileges on the host machine.

Ok! :) I'm not aware of ACL support builtin either. I have seen some articles about setting up proxies.

The only sane way to grant such access is by starting a new isolated virtual machine with Docker running inside it and executing the commands inside that VM, similar to how we did it with Nodepool back in the day (we booted a new VM for each job).

That seems like one good way to secure it. Other ideas I can think of (perhaps in combination):

  1. Run a dedicated group of Docker instances (in the integration project or elsewhere) that are only used for building images.
  2. Limit socket access to only CI agent instances.
  3. Set up a proxy in front of the Docker socket that enforces a degree of access control, allowing only the API calls necessary for building.
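For option 3, the core of such a proxy is an allowlist over Docker API method/path pairs. A minimal sketch of the decision logic follows; the permitted endpoints here are an assumption about what image building needs, not a vetted list (a full `pack` build would likely need more).

```shell
# Hypothetical allowlist for a Docker-socket proxy: permit only the
# API calls an image build plausibly needs, deny everything else.
allowed() {
  method=$1; path=$2
  case "$method $path" in
    'POST /build'*)              return 0 ;;  # image builds
    'POST /images/create'*)      return 0 ;;  # pulling base images
    'GET /images/'*)             return 0 ;;  # inspecting results
    'GET /_ping'|'GET /version') return 0 ;;  # daemon health checks
    *)                           return 1 ;;  # e.g. run/exec/delete
  esac
}

allowed POST /build && echo "build: allowed"
allowed DELETE /containers/abc || echo "container delete: denied"
```

Off-the-shelf HAProxy-based socket proxies take the same shape: a deny-by-default rule set with per-endpoint toggles.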

Off the top of my head, we have a few other use cases that would require CI jobs to have access to a Docker daemon:

  • build an image based on a Dockerfile in order to verify it builds properly
  • running our docker-pkg tool to ensure image definitions do result in valid images
  • leverage Docker Compose to bring up a testing environment
  • Puppet acceptance tests with Beaker

There are definitely needs, but we don't have the infrastructure for it yet.

It does seem like the list of needs is growing. Maybe we can talk more about options/ideas at our offsite next week.

Declining this due to the aforementioned security blockers and lack of follow-up. Feel free to re-open at any time.