Rebasing a set of changes with Git

One of the common things we do during Linux kernel development is move a series of patches from one kernel version to a similar version (say Linux 4.1 to 4.1.12).  This is required as new stable releases of a particular kernel version come out.  One approach is to merge, but then your changes are mixed in with upstream commits and are more difficult to manage.  Git rebase offers a convenient way to move a set of patches.  In the following example we have a series of changes we made (or patches we applied) on top of the 4.1 kernel.
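
The post walks through a concrete example; the general shape of the operation looks something like this (the branch and tag names below are illustrative, not taken from the post):

# patches were applied on branch my-changes on top of the v4.1 tag;
# replay everything after v4.1 onto the v4.1.12 stable tag
git rebase --onto v4.1.12 v4.1 my-changes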

Continue reading “Rebasing a set of changes with Git”

Modifying the BusyBox config in OpenEmbedded

Recently, I needed to enable the eject command in BusyBox for an OpenEmbedded (Yocto) based project.  Below is a fairly painless way to do this:

  1. bitbake -c menuconfig busybox (enable the eject command in the config and save)
  2. bitbake -c diffconfig busybox (this generates a config fragment, note the fragment file location)
  3. recipetool appendsrcfile -w [path to layer] busybox [path to fragment generated in step #2]
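
For reference, the same sequence as shell commands (the layer and fragment paths here are hypothetical):

bitbake -c menuconfig busybox        # enable the eject command and save
bitbake -c diffconfig busybox        # prints the path of the generated fragment
recipetool appendsrcfile -w ../meta-mylayer busybox /path/to/fragment.cfg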

Continue reading “Modifying the BusyBox config in OpenEmbedded”

Best practices for using CMake in OpenEmbedded/Yocto

I’ve already written about using autotools and qmake in OE.  With recent projects, we’re using CMake to build most C/C++ components.  Like any good tool, CMake has a learning curve, but it seems to do its job quite well.  Below are examples of a CMake build file and a corresponding OE recipe.

Continue reading “Best practices for using CMake in OpenEmbedded/Yocto”

Library sizes for C vs C++ in an embedded Linux system

Is the size of the libraries required for C++ (vs C) a concern in Embedded Linux systems?  Most Embedded Linux systems likely include some C++ code, so this is probably not even a decision we need to make in many cases.  However, often there is a need for a small initramfs that is used as part of the boot process (perhaps for software updates) before switching to the main root file system.  In this case, it makes sense to keep the initramfs as small as possible, and we might be concerned here with the size of C++ libraries.
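
As a rough way to gauge the cost, you can look at the size of the C++ runtime libraries in a built root file system (the paths below are illustrative and will vary by image):

# compare the C++ runtime pieces against the base C library
ls -lh rootfs/usr/lib/libstdc++.so* rootfs/lib/libgcc_s.so* rootfs/lib/libc.so*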

Continue reading “Library sizes for C vs C++ in an embedded Linux system”

Why systemd in Embedded Linux Systems?

Recently I was asked why one would use systemd rather than sysvinit in embedded systems.  There are many discussions on this, and really most of the reasons people use it for servers and desktops are also valid for embedded systems.  Lennart Poettering’s articles explain very well why you might want to consider systemd.  A few things that rank high on my list:

Continue reading “Why systemd in Embedded Linux Systems?”

Setting up an OpenEmbedded Package Feed Server

With the BEC OE build template, you can easily set up an opkg feed server that serves up packages from your build directory.  This allows you to install new packages during development without generating and reflashing an entire image.  To use:

On workstation:

  • edit local.sh and define MACHINE_IP to point to your target machine, then re-run "source envsetup.sh".  Alternatively, you can export the MACHINE_IP variable in your environment.
  • run: oe_setup_feed_server
  • run: oe_feed_server

On target system:

  • opkg update
  • opkg install <some package>

Opkg will install dependencies, so if you have a complex package, this is so much easier than copying over an ipk file manually and then discovering you need six other packages as dependencies.

OpenEmbedded Build Template

How does one set up an OpenEmbedded/Yocto/Poky/Angstrom build? There are many options. Some include:

(I’m sure there are many others, feel free to add in comments …)

Over the past several years, we’ve supported a number of customers using OpenEmbedded to develop products based on various SoCs. We also try to keep builds going for a number of different systems so that we can continuously evaluate the state of OpenEmbedded and related Embedded Linux technologies. We needed a standard way to set up builds for customers (some don’t have a lot of experience with OE and Git) that is simple and consistent. What we ended up with is the BEC OpenEmbedded build template.

The goal is to have a quick entry point into OpenEmbedded that includes the necessary layers for a number of different machines, and automates a number of routine tasks such as installing images to an SD card, setting up a development feed server, etc. The build template is only updated when the build is stable and tested on a number of machines, so it provides a series of stable snapshots of OpenEmbedded and associated layers.

This build template currently tracks the master branches of all the layers used. This gives us a platform to track the latest OE changes. With most projects, the ability to use the features in the latest versions of software outweighs the stability benefits of OpenEmbedded release branches. There are times when the OpenEmbedded project goes through invasive changes (such as the systemd integration) and using the master branches is not practical; in that case we simply use the last stable snapshot that builds and works. In most cases, if there are issues, we simply report or fix them and wait a week.

Perhaps the most controversial decision is the use of Git submodules for including OpenEmbedded layers. The Internet is full of rants against Git submodules. For heavy developer use, submodules may not be optimal. However, from a user perspective, Git submodules provide a simple mechanism for including external repositories in a project. If most of the submodules (OE layers) won’t be touched (the typical OE user scenario), submodules work very well. The fact that Git locks down submodules to a specific commit ensures you are getting exactly what you think you are getting (vs a branch that may have been rebased, modified, etc.). If the Git hash matches, you can be pretty sure the source is the same as the last time you built it. This is an important factor in production build systems where you want to be sure of what you are building. Google repo is another option under consideration, but there are still some trade-offs to work through.
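
As a small illustration of the pinning behavior (the repository URL shown is hypothetical):

git clone --recursive https://example.com/oe-build-template.git
cd oe-build-template
git submodule status    # lists each layer with the exact commit it is pinned to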

In the end, build systems are very personal, and must be customized for your product and development team. The number one requirement for an Embedded Linux build system is that you can get repeatable builds with a single command. There must be no manual steps where human error can be introduced. This is just one way to accomplish that goal.

OS Containers for Build Systems

Since I’ve been running archlinux on some of my systems, one thing I’ve found useful is systemd-nspawn. systemd-nspawn containers (or chroots on non-systemd systems) give you a quick way to install a Linux distribution that can run inside an existing Linux system.

Some cases where systemd-nspawn containers (referred to as containers in this document) are useful:

  1. At one point, OpenEmbedded would not build with GCC 4.8 (this is no longer the case with recent versions of OE).  So a Debian or Ubuntu OS container was a quick way to get builds going again.
  2. Product build systems may live for many years, and OE will typically break eventually as you upgrade the workstation distribution.  For projects that need a long-lived OpenEmbedded build system, setting it up in a chroot makes a lot of sense.
  3. Someone might be having a compile or build problem with a distribution you don’t currently have installed.  With containers, you can quickly set up a test distribution to reproduce problems.
  4. I’ve had cases where I need an older version of Qt for a project, but my workstation includes a newer version.  Again, setting up an OS container is sometimes simpler than getting two versions of Qt to dwell together peaceably in the same distribution.
  5. Backing up or replicating your entire build system is very easy — simply rsync the OS container directory to another machine.
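
A minimal sketch of that last point, assuming the container lives in ~/debian-wheezy (the destination host and path are hypothetical; run as root so file ownership is preserved):

sudo rsync -a --numeric-ids ~/debian-wheezy/ root@backup-host:/srv/containers/debian-wheezy/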

So the solution is to select a relatively stable, long-lived distribution to host your product builds. Debian is a good choice.  Because the container is simply a directory in the host workstation filesystem, you can use host workstation tools (editors, git, etc.) directly on the container filesystem.  The only thing you need the chroot for is the actual building.  If you make sure the user ID is the same between your workstation and the nspawn container, then permissions are seamless — you can easily access files in the container from the context of your host workstation.

To set up a nspawn-container:

  1. Install debootstrap.  On Arch systems, this needs to be obtained from the AUR.
  2. host: sudo debootstrap --arch=amd64 wheezy ~/debian-wheezy/
  3. host: sudo systemd-nspawn -D ~/debian-wheezy/
  4. container: apt-get update && apt-get install ssh
  5. container: edit /etc/ssh/sshd_config, and set port to something other than 22 (23 in this example)
  6. container: /etc/init.d/ssh start

(The systemd-nspawn man page gives examples for setting up other distributions.)

To set up a user in your container:

  1. host: id
  2. will return something like: uid=1000(cbrake) gid=100(users) …
  3. container: adduser --uid 1000 --gid 100 cbrake
  4. host: ssh-copy-id -p 23 localhost (will copy public key to container)

Now, on the host system, you can simply run "ssh -p23 localhost" any time you want to log into the container.  Soft links between the project workspace on the host system and the container can also make shifting between the two easier.
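
For example, a link on the host that points into the container tree keeps the build directory easy to reach from the host side (paths are hypothetical):

ln -s ~/debian-wheezy/home/cbrake/oe-build ~/oe-build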

An alternative way to start the container once it’s set up is:

sudo systemd-nspawn -D ~/debian-wheezy /sbin/init

It’s also handy to make the shell prompt in the container slightly different from the one on the host OS so that you can easily tell the difference.  To accomplish this, add the following to ~/.profile in the container OS:

export PS1="[\u@wheezy \w]\$ "

To create a service that starts your container, put something like the following in /lib/systemd/system/debian-wheezy.service

[Unit]
Description=Debian Wheezy
[Service]
ExecStart=/usr/bin/systemd-nspawn -D /scratch/debian-wheezy/ /sbin/init 3
KillMode=process
[Install]
WantedBy=multi-user.target

Hopefully this gives you a quick overview of how OS containers can be set up, and used in your OpenEmbedded build systems.

Git submodules can now track branches

As of version 1.8.2, Git submodules can now track branches instead of specific commits.  This is good news, as in many cases this is exactly the behavior we want.  Git submodules are still not as flexible as Google repo, but since submodules are built into Git, they are a good solution in many cases.

The "git submodule update --remote" command is the key to tracking branches with submodules.  The following is from the Git man pages:

--remote
This option is only valid for the update command. Instead of using the superproject’s recorded SHA-1 to update the submodule, use the status of the submodule’s remote tracking branch. The remote used is branch’s remote (branch.<name>.remote), defaulting to origin. The remote branch used defaults to master, but the branch name may be overridden by setting the submodule.<name>.branch option in either .gitmodules or .git/config (with .git/config taking precedence).

This works for any of the supported update procedures (--checkout, --rebase, etc.). The only change is the source of the target SHA-1. For example, submodule update --remote --merge will merge upstream submodule changes into the submodules, while submodule update --merge will merge superproject gitlink changes into the submodules.

In order to ensure a current tracking branch state, update --remote fetches the submodule’s remote repository before calculating the SHA-1. If you don’t want to fetch, you should use submodule update --remote --no-fetch.

So, if you already have a Git submodule set up, it’s a simple matter to run git submodule update --remote to update the submodule to the latest on the master branch.  If you want a different branch, simply edit .gitmodules:

[submodule "meta-bec"]
   path = meta-bec
   url = git@github.com:cbrake/meta-bec.git
   branch = test

Now, if you run git submodule update --remote, Git will update the meta-bec submodule to the latest on the test branch.
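
The branch can also be set from the command line rather than by editing the file by hand, using the meta-bec submodule from the example above:

git config -f .gitmodules submodule.meta-bec.branch test
git submodule update --remote meta-bec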

This functionality is purely a convenience feature in the submodule update command.  In the actual repository, Git still stores submodules pinned to a particular commit.  The same thing could be accomplished with something like git submodule foreach "git fetch && git checkout test".  The branch option in .gitmodules functions more as documentation and convenience.  It is very handy to be able to look at .gitmodules and quickly determine that submodule X is tracking branch Y.  Normally, this would have to be documented elsewhere, or figured out in some other way.  Also, for build systems where you want the build to always track the production branches of various projects, update --remote gives you a convenient way to update the build tree.

A quick way to share files from any directory

Did you ever need a quick way to share files from a directory on your computer?  Or perhaps transfer a large file to another person?  With nodejs and express, you can easily set up a temporary web server that allows users to browse and access a list of files in a directory.  For convenience, I created a simple github project that can be cloned into any directory, and then a server started in a matter of seconds.  Yes, you could upload files to a server, or share them with a file sharing service, but if you can expose a random port on your computer to the person who needs the files, then this is faster, and does not require any intermediate steps.  Check out https://github.com/cbrake/http-file-server for more information.

OpenEmbedded Source Mirrors

When using OpenEmbedded for product development, there are several reasons you may want to consider setting up a source mirror for your OpenEmbedded build:

  • over time, sources disappear from download locations
  • various servers for source packages may be off-line at the time a build is run
  • some servers may be very slow, which slows down your build
  • occasionally the checksums of a source package will change

For a production build system, you want the build to be reliable and consistent, which means not depending on third-party web sites/servers for a clean build to complete.  Fortunately, OpenEmbedded makes it easy to set up a source mirror with the PREMIRRORS variable. When bitbake tries to fetch source code, it tries PREMIRRORS first, then the upstream source, and then MIRRORS.  There are several advantages to using PREMIRRORS rather than MIRRORS for your source mirror:

  • your source mirror will be used first, thus slow web sites are not an issue
  • if the checksums of the package change, the build will not fail because it's still using the original source package from the mirror.  You are guaranteed to always be using the same source package.

Setting up a source mirror is as simple as copying the contents of your downloads directory to a web server, and then populating the following variable in local.conf:

PREMIRRORS_prepend = "\
     git://.*/.* http://my-server/sources/ \n \
     ftp://.*/.* http://my-server/sources/ \n \
     http://.*/.* http://my-server/sources/ \n \
     https://.*/.* http://my-server/sources/ \n"
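
A sketch of the copy step, assuming the downloads directory is build/downloads and a hypothetical path on the server:

# bitbake's *.done stamp files are not needed on the mirror
rsync -av --exclude '*.done' build/downloads/ user@my-server:/var/www/sources/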

The Poky reference manual has more details.

Persistent device names for USB serial ports

Currently, my workstation has two 8-port USB<->RS232 devices, one dual-port USB<->RS422/RS485 adapter, and several single-port adapters such as the very useful BUBII.  So with around 20 USB->serial devices, figuring out which /dev/ttyUSBx entry corresponds to which port is not really practical.  However, with udev in Linux, you can easily give static names to each device.  This is especially convenient with FTDI devices because each FTDI device has a serial number.  In devices such as the 8-port RS232 adapter, there are four 2-port FTDI chips.  I could not find a serial number in an adapter with a Prolific IC, so I would avoid those until that is sorted.

udevadm can be used to discover the serial number of an FTDI device:

udevadm info --attribute-walk -n /dev/ttyUSB0|grep serial 

After the serial number is known, rules can be created in /etc/udev/rules.d/99-usb-serial.rules.  Serial ports can then be accessed using convenient names such as /dev/ttyUSB_beagle.
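
A hedged example of such a rule for an FTDI FT232-based adapter (the serial number here is a placeholder; 0403/6001 is the standard FTDI vendor/product ID):

SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A12345", SYMLINK+="ttyUSB_beagle"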

A quick way to set up an OpenEmbedded feed server

During development with OpenEmbedded (oe-core, meta-oe, meta-angstrom), I often find it useful to set up a feed server so that packages can quickly be installed on the target system without manually copying them over or building a new image.  One way to do this is to copy your deploy/ipk directory to an existing web server (perhaps Apache running on your workstation), or to configure Apache to point at your OE build directory, etc.  But it might be more convenient if your build system could directly create an opkg feed with no extra configuration.  In the example below, we use node.js + express.js to create a feed server (basically just a web server that serves up the ipk files).

tools/feed-server/app.js:

// nodejs script to start a http server with the feed from this directory
// default is port 4000
var express = require('express')
var app = express()
app.use('/', express.static(__dirname + '/../../build/tmp-angstrom_next-eglibc/deploy/ipk/'))
app.use('/', express.directory(__dirname + '/../../build/tmp-angstrom_next-eglibc/deploy/ipk/'))
console.log("feed server started on port 4000")
app.listen(4000)

The express.directory function is used to create a directory listing that can be browsed.  With most other web servers, at least this much code is required just for configuration.  With node.js, this is the code to create an entire server from scratch!  This node.js app can be started with a bash function in the environment:

function oe_feed_server()
{
  cd $OE_BASE
  bitbake package-index
  node tools/feed-server/app.js
  cd -
}

The package-index target is used to rebuild the index files that list the available packages.

On the target system, the /etc/opkg configuration files must be modified to point to the feed server, and then you can run:

opkg update; opkg install <package>
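
For example, a feed configuration file on the target might look something like the following (the file name, IP address, and architecture names are placeholders; port 4000 matches the server above, and there is typically one line per architecture directory under deploy/ipk):

# /etc/opkg/dev-feed.conf
src/gz dev-all http://192.168.1.100:4000/all
src/gz dev-armv7a http://192.168.1.100:4000/armv7a
src/gz dev-beaglebone http://192.168.1.100:4000/beaglebone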

An example build environment with this integrated is located at:

https://github.com/cbrake/oe-build-core

https://github.com/cbrake/oe-build-core/commit/23352b9a43c60d67070abe5ac001aba9a9ac5cc4

https://github.com/cbrake/oe-build-core/commit/6baceb8b1e4477ccfd03aa553a5fcf501b398196

It might be argued that it is just as easy or easier to set up a more conventional web server.  However, the benefit of node.js is that it is a full-blown programming environment.  You can quickly extend it to provide a web interface, perhaps add functionality to automatically push updated opkg configuration files to the target system that point to your feed server, etc.  It is much more than just a web server.