Using Docker for OE/Yocto builds

Posted by Cliff Brake on 2016-04-21

Why Docker?  When using OE to build software for products, we often run into the scenario where software needs to be built with the same version of OpenEmbedded over the course of several years.  Production builds need to be predictable.  We've also observed that old versions of OE often break as new Linux distro releases come out; this is simply a consequence of the complexity of building toolchains.  Additionally, for predictable builds you really don't want the build OS changing underneath you.  This requirement automatically rules out rolling or unstable distributions such as Arch Linux, Debian Unstable, and Gentoo as production build machines.  Finally, having developers debug OE build issues across varying workstation distributions is frustrating and time consuming.

There are several options:

  • keep a golden build machine around with a fixed OS
  • do builds in cloud on a machine with a fixed OS
  • build in a systemd-nspawn, or chroot
  • build in a virtual machine (Virtualbox, etc)
  • use Docker

I have used all of the above and recently started experimenting with Docker.  As an example, I set up a container image named cbrake/oe-build and provided an example of how to use a Debian-based container for OE builds.
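If you want to try the image, something along these lines should work (assuming the image is still published on Docker Hub; availability today is not guaranteed):

```shell
# Fetch the prebuilt image from Docker Hub and confirm its Debian base.
# (Image name is from the article; whether it is still published is an assumption.)
docker pull cbrake/oe-build
docker run --rm cbrake/oe-build cat /etc/debian_version
```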

Docker requires a bit of a mind-shift from virtual machines.  The first aha moment with Docker comes when you realize that data and build directories should be mapped from the host into the container, so that the container does not really hold any state.  There are no SSH keys or similar secrets in the container.  This allows the container to be immutable and easily replicated across any number of machines.  Anything that needs to change goes in your build directory and is managed using Git.  User-specific information lives on your host machine.
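A minimal sketch of this pattern: the build tree and any user-specific files are bind-mounted in from the host, and the container is thrown away after use.  The mount points and the builder home directory here are assumptions, not details from the image itself:

```shell
# Run a throwaway, stateless build container.  All mutable state
# (the OE build tree, the user's git identity) lives on the host and
# is mapped in; --rm discards the container afterwards.
# (~/oe/build and /home/builder are illustrative paths.)
docker run --rm -it \
    -v ~/oe/build:/build \
    -v ~/.gitconfig:/home/builder/.gitconfig:ro \
    -w /build \
    cbrake/oe-build \
    /bin/bash
```

Because nothing of value is stored in the container itself, the same image can be pulled onto any developer workstation or build server with identical results.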

The second aha moment with Docker is running commands with Docker instead of working in the context of Docker.  For years, I have run OE builds in a systemd-nspawn.  I'd always have two terminal windows open, one for the container context and one for the host.  With Docker, you typically just work in your host context and spin up a container every time you need to run a command in it (bitbake, etc).  If you define a few shell functions to wrap the docker commands, you hardly even notice that commands are executing in the container.  This might seem impractical, but it actually works quite well.  Of course you can still open a shell and work inside the container if you want, but it's not really necessary or desired once you get the container set up.
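A sketch of such a wrapper follows.  The function name, image name, mount point, and init-script path are all hypothetical; the `DOCKER` variable is overridable (e.g. `DOCKER=echo`) so the wrapper can be dry-run without a Docker daemon:

```shell
# Hypothetical wrapper: spin up a throwaway container for each bitbake
# invocation, mapping the current directory in as the build tree.
DOCKER=${DOCKER:-docker}            # override with DOCKER=echo for a dry run
OE_IMAGE=${OE_IMAGE:-cbrake/oe-build}

dbitbake() {
    $DOCKER run --rm -it \
        -v "$PWD":/build \
        -w /build \
        "$OE_IMAGE" \
        /bin/bash -c "source ./oe-init-build-env >/dev/null && bitbake $*"
}

# Usage (from the host, inside your build directory):
#   dbitbake core-image-minimal
```

From the caller's point of view, `dbitbake core-image-minimal` feels like running bitbake locally; the container exists only for the lifetime of that one command.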

Docker is a neat tool and allows you to have a consistent environment across development and production build machines.  It also encourages a natural separation of immutable containers and transient data that makes it much simpler to manage and distribute containers.
