Why systemd in Embedded Linux Systems?

Recently I was asked why one would use systemd instead of sysvinit in embedded systems.  There are many discussions on this, and most of the reasons people use it on servers and desktops are also valid for embedded systems.  Lennart Poettering’s articles explain very well why you might want to consider systemd.  A few things rank high on my list:


OS Containers for Build Systems

Since I’ve been running Arch Linux on some of my systems, one thing I’ve found useful is systemd-nspawn. systemd-nspawn containers (or chroots on non-systemd systems) give you a quick way to install a Linux distribution that can run inside an existing Linux system.

Some cases where systemd-nspawn containers (referred to as containers in this document) are useful:

  1. At one point, OpenEmbedded would not build with GCC 4.8 (this is no longer the case with recent versions of OE).  So a Debian or Ubuntu OS container was a quick way to get builds going again.
  2. Product build systems may live for many years, and OE will typically break eventually as you upgrade the workstation distribution.  For projects that need a long-lived OpenEmbedded build system, setting it up in a chroot makes a lot of sense.
  3. Someone might be having a compile or build problem with a distribution you don’t currently have installed.  With containers, you can quickly set up a test distribution to reproduce problems.
  4. I’ve had cases where I need an older version of Qt for a project, but my workstation includes a newer version.  Again, setting up an OS container is sometimes simpler than getting two versions of Qt to dwell together peaceably in the same distribution.
  5. Backing up or replicating your entire build system is very easy — simply rsync the OS container directory to another machine.

So the solution is to select a relatively stable, long-lived distribution to host your product builds. Debian is a good choice.  Because the container is simply a directory in the host workstation filesystem, you can use host workstation tools (editors, git, etc) directly in the container filesystem.  The only thing you need to use the chroot for is the actual building.  If you make sure the user ID is the same between your workstation and nspawn container, then permissions are seamless — you can easily access files in the container from the context of your host workstation.
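As a quick sanity check (the paths, user name, and uid below are made-up examples, with a scratch directory standing in for the container), you can compare the uid recorded in the container’s /etc/passwd against the host account:

```shell
# Hypothetical layout: /tmp/container-root stands in for the nspawn
# container directory; the user name and uid are examples only.
mkdir -p /tmp/container-root/etc
echo 'cbrake:x:1000:100::/home/cbrake:/bin/bash' > /tmp/container-root/etc/passwd

# uid recorded in the container's passwd file
container_uid=$(awk -F: '$1 == "cbrake" {print $3}' /tmp/container-root/etc/passwd)
echo "$container_uid"   # → 1000

# compare against the host account (id -u prints the current numeric uid)
if [ "$container_uid" = "$(id -u)" ]; then
    echo "uids match - container files will be seamlessly accessible"
else
    echo "uids differ - create the container user with a matching --uid"
fi
```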

To set up an nspawn container:

  1. Install debootstrap.  On arch systems, this needs to be obtained from the AUR.
  2. host: sudo debootstrap --arch=amd64 wheezy ~/debian-wheezy/
  3. host: sudo systemd-nspawn -D ~/debian-wheezy/
  4. container: apt-get update && apt-get install ssh
  5. container: edit /etc/ssh/sshd_config, and set port to something other than 22 (23 in this example)
  6. container: /etc/init.d/ssh start

(The systemd-nspawn man page gives examples for setting up other distributions.)

To set up a user in your container:

  1. host: id
  2. will return something like: uid=1000(cbrake) gid=100(users) …
  3. container: adduser --uid 1000 --gid 100 cbrake
  4. host: ssh-copy-id -p 23 localhost (will copy public key to container)

Now, on the host system, you can simply “ssh -p23 localhost” any time you want to log into the container.  Soft links between the project workspace on the host system, and the container can also make shifting between the two easier.
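As a sketch of the soft-link idea (all paths here are made-up examples, with a scratch directory standing in for the container root), you can link a workspace inside the container tree into your host home directory so the same files are reachable from both contexts:

```shell
# Example paths only: container-root stands in for ~/debian-wheezy
rm -rf /tmp/nspawn-demo
mkdir -p /tmp/nspawn-demo/container-root/home/cbrake/oe  # workspace in the container
mkdir -p /tmp/nspawn-demo/host-home                      # stands in for the host $HOME

# From the host, link the container's workspace into the host home directory
ln -s /tmp/nspawn-demo/container-root/home/cbrake/oe /tmp/nspawn-demo/host-home/oe

readlink /tmp/nspawn-demo/host-home/oe   # → /tmp/nspawn-demo/container-root/home/cbrake/oe
```

Because the container is just a directory on the host, the host-side link resolves normally; links pointing the other way (from inside the container out to host paths) would not resolve once chrooted.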

An alternative way to start the container once it’s set up is:

sudo systemd-nspawn -D ~/debian-wheezy /sbin/init

It’s also handy to make the shell prompt in the container slightly different from the host OS so that you can easily tell the difference.  To accomplish this, add the following to ~/.profile in the container OS:

export PS1="[\u@wheezy \w]\$ "

To create a service that starts your container, put something like the following in /lib/systemd/system/debian-wheezy.service:

[Unit]
Description=Debian Wheezy

[Service]
ExecStart=/usr/bin/systemd-nspawn -D /scratch/debian-wheezy/ /sbin/init 3

[Install]
WantedBy=multi-user.target

Hopefully this gives you a quick overview of how OS containers can be set up and used in your OpenEmbedded build systems.


Running a reboot cycle test shell script with systemd

One of the easiest ways to stress test an embedded Linux system is to continuously reboot the system. Booting is a difficult activity for a Linux system (similar to waking up in the morning). The CPU is maxed out. There are a lot of things happening in parallel. Software is initializing. There is a lot of filesystem activity. Often, if there is an instability in the system (and especially the filesystem), continuously rebooting will expose it.

Now that we use systemd for most new systems, we need to run a simple script X seconds after the system boots to increment the boot count and reboot. This is the classic domain of a shell script. However, for a simple task like this, it’s a little cumbersome to create a separate shell script and then call it from a systemd unit. The following is a way to embed the script directly in a systemd unit.

Put the following in /lib/systemd/system/cycletest.service:

[Unit]
Description=Reboot system after 30s

[Service]
ExecStart=/bin/sh -c "\
test -f /cycle-count || echo 0 > /cycle-count;\
echo 'starting cycletest';\
sleep 30;\
expr `cat /cycle-count` + 1 > /cycle-count;\
systemctl reboot"

[Install]
WantedBy=multi-user.target

To install and start the script:

  • systemctl daemon-reload
  • systemctl enable cycletest.service (enable the service to start on reboot)
  • systemctl start cycletest.service (start the service, should reboot in 30s)

Mounting a UBIFS partition using systemd

Systemd is becoming the de facto system and service manager for Linux, replacing the SysV init scripts.  The Angstrom distribution has supported systemd for some time now. Recently, I needed to mount a UBIFS filesystem in one of my projects.  The main application is being started with systemd, so it seemed like a good fit to also use systemd to mount a data partition needed by the application. Systemd can use entries from /etc/fstab, but one additional wrinkle in this system is that I also wanted to run the UBI attach in a controlled way. This can be done with a kernel command line argument, but there are times in this system where we will want to format the data partition, so this requires a detach/attach operation.

The resulting systemd units look something like the following (the UBI volume name and mount point are examples from this system; adjust them to match yours).

data.mount:

[Unit]
Description=Mount data partition
Requires=data-attach.service
After=data-attach.service

[Mount]
What=ubi0:data
Where=/data
Type=ubifs

data-attach.service:

[Unit]
Description=Attach data ubi partition

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ubiattach /dev/ubi_ctrl -m 6
ExecStop=/usr/sbin/ubidetach /dev/ubi_ctrl -m 6

Then add the following to my-application.service so the application only starts once the data partition is available:

[Unit]
Requires=data.mount
After=data.mount
The only real problem I ran into is the “After” statement in the data.mount unit.  It turns out that the mount will run before the attach operation is finished unless this is included.

The RemainAfterExit seems to be required so that ExecStop can be run when the unit is stopped.

(There may be better ways to do all this, so comments are welcome!)

One of the benefits of systemd is that everything is very controlled.  If the dependencies are specified properly, there are no race conditions.  Additionally, if you need to manage units (services, mounts, etc) from a program, it is much easier to check the states as everything is very consistent.  For example, if you want to query the state of a unit, the systemctl show <unit> command will return easily parsed output (as shown below):

After=data-attach.service systemd-journald.socket -.mount
Description=Mount data partition
InactiveExitTimestamp=Thu, 01 Jan 1970 00:00:17 +0000
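A program or script can pull a single property out of that key=value output with standard tools.  The sample below is canned (on a real system you would pipe from systemctl show data.mount instead):

```shell
# Canned sample of `systemctl show` style key=value output
sample='ActiveState=active
SubState=mounted
Description=Mount data partition'

# Extract one property by key
echo "$sample" | sed -n 's/^ActiveState=//p'   # → active
```

systemd can also do this filtering for you: systemctl show -p ActiveState <unit> prints just the requested property.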

If the data-attach service is stopped, systemd automatically unmounts the data partition first — very nice!

Hopefully this example illustrates how to do simple tasks in systemd.  It appears that instead of having a complex script or program to initialize a system, the systemd “way” is to create a number of small units that get connected with dependencies.  This seems like a much more controlled and flexible approach.