
Evaluate Docker as a container deployment tool
Closed, DeclinedPublic

Description

Docker promises a convenient, developer-friendly interface to regular Linux container functionality:

  • low-overhead process isolation & security improvements
    • generic support for running a service as an unprivileged user, not persisting changes, dropping capabilities & other lock-down features
    • memory and I/O metering via cgroups
  • tools for creating and distributing fairly efficient overlay-based images
    • uses aufs by default, can use btrfs
    • can stack overlays for quick updates (to support container-based deploys)
    • the public registry is not very secure, so we'll need to run and maintain our own (https://github.com/docker/docker-registry, runnable via docker run registry)
  • linking features to manage local container dependencies
  • fairly good integration with other tools (largely owing to its popularity):
    • systemd can run docker images natively
    • several configuration management systems, including Ansible and SaltStack, provide modules to spawn and orchestrate docker instances
    • service orchestration systems like CoreOS fleet, Kubernetes or Apache Mesos can dynamically spawn and integrate entire docker instance clusters
    • boot2docker provides a very lightweight platform for running docker instances on Windows and OS X
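
As an illustration of the overlay-image and lock-down points above, here is a minimal Dockerfile sketch; the image, path, and service names are hypothetical, and the layering behavior is as described in the bullets rather than a tested build:

```dockerfile
# Hypothetical sketch; image and path names are illustrative.
# Each instruction below adds an overlay layer on top of the base image,
# so a code change only rebuilds the top layers.
FROM wikimedia/base:latest

# Install the service on top of the shared, centrally maintained base layer
COPY . /srv/someservice
WORKDIR /srv/someservice
RUN npm install --production

# Run as an unprivileged user instead of root
USER nobody
EXPOSE 7231
CMD ["/usr/bin/npm", "start"]
```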

Compared to other Linux container solutions like LXC or Rocket, it lacks:

Docker could potentially help us with the following use cases:

  • improve isolation in continuous integration with low overhead: run tests as a non-root user with limited capabilities
  • staging, canary and production deployment:
    • improve service isolation / security with low-enough overheads to make this reasonable in production
      • safely share hardware between different services
    • deploy the exact same code that was tested earlier
    • ability to gradually start using newer software for individual services, for example iojs
  • development and labs
    • provide a simple & low-overhead way to set up a few services for development and integration testing

However, there are general downsides with container-based deployment systems that we need to consider:

  • Using init scripts from packages inside the container would require root, which we don't want to grant. This can typically be worked around by starting services with a custom start command specified in the Dockerfile, but doing so mostly loses the integration work package maintainers have already performed.
  • To ensure timely security updates, we'll need to make sure that all images are based on our own, properly maintained base image. Security updates to system libraries like OpenSSL will need to be rolled out to this base image and all derived images in an automated manner.
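
The base-image update could be automated roughly along these lines; this is a hedged command sketch only, with hypothetical image, registry, and directory names, and it assumes a working Docker daemon and registry:

```shell
# Hypothetical sketch: after patching the base image (e.g. for an OpenSSL
# update), rebuild and push every derived image so the fix reaches all
# services. Names and paths are illustrative.
docker build -t registry.example.org/base:latest base/
for svc in someservice otherservice; do
    docker build --no-cache -t "registry.example.org/$svc:latest" "services/$svc/"
    docker push "registry.example.org/$svc:latest"
done
```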

Event Timeline

GWicke raised the priority of this task to Needs Triage.
GWicke updated the task description. (Show Details)
GWicke added subscribers: fgiunchedi, mark, Joe and 6 others.
Restricted Application added a subscriber: Aklapper. Mar 20 2015, 9:56 PM
GWicke set Security to None. Mar 20 2015, 10:07 PM
GWicke added subscribers: hashar, Krinkle.
GWicke updated the task description. (Show Details) Mar 20 2015, 10:14 PM
GWicke updated the task description. (Show Details) Mar 20 2015, 10:25 PM
GWicke updated the task description. (Show Details) Mar 21 2015, 5:01 PM
ori added a subscriber: ori. Mar 21 2015, 5:10 PM

Using init scripts from packages inside the container would require root

How come?

You mention that systemd can run docker images, which is interesting. How does that work, exactly? Can systemd be made to invoke the init script somehow?

GWicke added a comment. (Edited) Mar 21 2015, 7:15 PM
In T93439#1138212, @ori wrote:

Using init scripts from packages inside the container would require root

How come?

Standard init scripts expect to be run as root (they are executed by sysvinit, which runs as root), and then drop privileges themselves.
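
For context, this is the typical shape of the start stanza in such a script; the service name, user, and paths below are hypothetical. The script itself runs as root, and start-stop-daemon switches to the service user before executing the daemon:

```shell
# Sketch of a typical sysvinit start stanza; all names are illustrative.
# The script runs as root; start-stop-daemon drops to the service user
# before executing the daemon binary.
start-stop-daemon --start \
    --chuid someservice:someservice \
    --pidfile /var/run/someservice.pid \
    --exec /usr/bin/someservice -- --config /etc/someservice/config.yaml
```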

How does that work, exactly? Can systemd be made to invoke the init script somehow?

Most Docker applications aren't started using an init script that executes as root and then drops privileges. Normally, there is no init system running inside the container either. Instead, they follow a one-task-per-container paradigm, where the actual entry point is executed directly as an unprivileged user chosen by systemd or Docker. For example, here is the line responsible for starting a service-runner based nodejs service from the service-runner Dockerfile:

CMD ["/usr/bin/npm", "start"]

To execute this as nobody and without capabilities under Docker, you'd use something like:

docker run -u nobody --cap-drop ALL -p 7231:7231 wikimedia/someservice:0.4.7

Using systemd-import, introduced in systemd 219, the same should go roughly like this (untested):

systemd-import pull-dkr wikimedia/someservice:0.4.7
systemd-nspawn -u nobody --drop-capability=all -p 7231:7231 -M someservice:0.4.7

Under the hood, this is using systemd-nspawn and machinectl. There are more ways to do this, for example as described in this blog post.
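
As a further illustration of the systemd integration mentioned in the description, a container started via the Docker CLI can also be managed by a plain systemd unit; below is a minimal, untested unit-file sketch reusing the image name from the example above, with the service name and paths being illustrative:

```ini
# Hypothetical unit file; service and image names are illustrative.
[Unit]
Description=someservice container
Requires=docker.service
After=docker.service

[Service]
# Remove any stale container of the same name, then run the image unprivileged
ExecStartPre=-/usr/bin/docker rm -f someservice
ExecStart=/usr/bin/docker run --name someservice -u nobody --cap-drop ALL -p 7231:7231 wikimedia/someservice:0.4.7
ExecStop=/usr/bin/docker stop someservice

[Install]
WantedBy=multi-user.target
```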

GWicke updated the task description. (Show Details) Mar 21 2015, 8:03 PM
GWicke updated the task description. (Show Details) Mar 21 2015, 8:05 PM
GWicke updated the task description. (Show Details) Mar 21 2015, 8:07 PM
GWicke updated the task description. (Show Details) Mar 24 2015, 9:01 PM
fgiunchedi triaged this task as Medium priority. Apr 2 2015, 9:48 AM
greg moved this task from To Triage to Backlog (Tech) on the Deployments board. Apr 2 2015, 5:08 PM
GWicke updated the task description. (Show Details) Apr 13 2015, 3:32 PM
Restricted Application added a subscriber: Matanya. Aug 6 2015, 4:38 AM
GWicke closed this task as Declined. Aug 8 2017, 10:36 PM

With all the ongoing work around T170453, this task has lost its usefulness. Much of the information is out of date, and is now covered in other tasks in more detail. Closing this task for that reason.