OpenEmbedded: configuring openssh to allow a blank password

Noticed the following when browsing around in the OpenEmbedded sources the other day:

ROOTFS_POSTPROCESS_COMMAND += "openssh_allow_empty_password ;"

This allows a blank password for development, which is convenient for running ssh/scp commands to the device.  The above can be placed in an image recipe.

The command modifies the PermitEmptyPasswords config option in /etc/ssh/sshd_config or /etc/default/dropbear.
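
The net effect (my sketch of the resulting configuration; see the openssh_allow_empty_password implementation in the OE metadata for the exact commands) is something like:

# /etc/ssh/sshd_config
PermitEmptyPasswords yes

# /etc/default/dropbear (dropbear's -B option allows blank-password logins)
DROPBEAR_EXTRA_ARGS="-B"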

Setting the root password in an OpenEmbedded image

During development, a blank root password is often used for the embedded Linux target system.  However, when deploying an embedded Linux system, there is often a requirement to set the root password to something non-obvious.  One way to do this is to boot the system and change the password using the passwd command, then copy the password hash from the /etc/shadow file into the below line in your image recipe.

ROOTFS_POSTPROCESS_COMMAND += "\
sed 's%^root:[^:]*:%root:password_hash_from_etc_shadow:%' \
< ${IMAGE_ROOTFS}/etc/shadow \
> ${IMAGE_ROOTFS}/etc/shadow.new;\
mv ${IMAGE_ROOTFS}/etc/shadow.new ${IMAGE_ROOTFS}/etc/shadow ;"

The ROOTFS_POSTPROCESS_COMMAND is useful for simple modifications like this to the rootfs image.
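
If you would rather generate the hash on the development machine than pull it from a booted target, something like the following works (mkpasswd is part of the whois package on Debian/Ubuntu; the openssl variant produces an older MD5-crypt hash):

mkpasswd -m sha-512 (prompts for a password, prints a SHA-512 crypt hash)
openssl passwd -1 (prompts for a password, prints an MD5-crypt hash)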

Running a reboot cycle test shell script with systemd

One of the easiest ways to stress test an embedded Linux system is to continuously reboot the system. Booting is a difficult activity for a Linux system (similar to waking up in the morning). The CPU is maxed out. There are a lot of things happening in parallel. Software is initializing. There is a lot of filesystem activity. Often, if there is an instability in the system (and especially the filesystem), continuously rebooting will expose it.

Now that we use systemd for most new systems, we need to run a simple script X seconds after the system boots to increment the boot count, and then reboot. This is the classic domain of a shell script. However, for a simple task like this, it’s a little cumbersome to create a separate shell script and then call this script from a systemd unit. The following is a way to embed the script directly in a systemd unit.

Put the following in: /lib/systemd/system/cycletest.service

[Unit]
Description=Reboots unit after 30s

[Service]
StandardOutput=syslog+console
ExecStart=/bin/sh -c "\
test -f /cycle-count || echo 0 > /cycle-count;\
echo 'starting cycletest';\
sleep 30;\
expr `cat /cycle-count` + 1 > /cycle-count;\
systemctl reboot;\
"

[Install]
WantedBy=multi-user.target

To install and start the script:

  • systemctl daemon-reload
  • systemctl enable cycletest.service (enable the service to start on reboot)
  • systemctl start cycletest.service (start the service, should reboot in 30s)
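
To check progress, or to stop the test:

  • cat /cycle-count (display the current boot count)
  • systemctl stop cycletest.service (stop the test on the running system)
  • systemctl disable cycletest.service (keep it from starting at the next boot)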

Bitbake has a new way of displaying build status

Now instead of displaying a scrolling log, bitbake will display a simple output that lists which tasks it is working on at the moment:

Currently 4 running tasks (185 of 3093):
0: gmp-native-5.0.5-r0 do_configure (pid 22919)
1: lzo-native-2.06-r1 do_configure (pid 27103)
2: expat-native-2.1.0-r0 do_compile (pid 7463)
3: ncurses-native-5.9-r10.1 do_compile (pid 9820)

This really allows for a clear view of how the parallel threads option (BB_NUMBER_THREADS) in bitbake works.
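
For reference, this option lives in conf/local.conf; a typical setting for a quad-core build machine might be:

BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"

(PARALLEL_MAKE is the related knob that controls how many jobs each make invocation runs in parallel.)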

Mounting a UBIFS partition using systemd

Systemd is becoming the de facto system and service manager for Linux, replacing the SysV init scripts.  The Angstrom distribution has supported systemd for some time now. Recently, I needed to mount a UBIFS filesystem in one of my projects.  The main application is being started with systemd, so it seemed like a good fit to also use systemd to mount a data partition needed by the application. Systemd can use entries from /etc/fstab, but one additional wrinkle in this system is that I also wanted to run the UBI attach in a controlled way. This can be done with a kernel command line argument, but there are times in this system where we will want to format the data partition, so this requires a detach/attach operation.

The resulting systemd units are:

data.mount

[Unit]
Description=Mount data partition
Requires=data-attach.service
After=data-attach.service
[Mount]
What=ubi1:data
Where=/data
Type=ubifs

data-attach.service

[Unit]
Description=Attach data ubi partition

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ubiattach /dev/ubi_ctrl -m 6
ExecStop=/usr/sbin/ubidetach /dev/ubi_ctrl -m 6

Then add the following to my-application.service:

Wants=data.mount

The only real problem I ran into is the “After” statement in the data.mount unit.  It turns out that the mount will run before the attach operation is finished unless this is included.

The RemainAfterExit setting seems to be required so that ExecStop can be run when the unit is stopped.

(There may be a better way to do all this, so comments are welcome!)

One of the benefits of systemd is that everything is very controlled.  If the dependencies are specified properly, there are no race conditions.  Additionally, if you need to manage units (services, mounts, etc) from a program, it is much easier to check the states as everything is very consistent.  For example, if you want to query the state of a unit, the systemctl show <unit> command will return easily parsed output (as shown below):

...
Before=umount.target
After=data-attach.service systemd-journald.socket -.mount
Description=Mount data partition
LoadState=loaded
ActiveState=active
SubState=mounted
FragmentPath=/lib/systemd/system/data.mount
UnitFileState=static
InactiveExitTimestamp=Thu, 01 Jan 1970 00:00:17 +0000
...
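
To pull out a single property, systemctl show also accepts a -p option, which is handy when querying unit state from a script:

systemctl show -p ActiveState data.mount

This prints just the ActiveState=active line.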

If the data-attach service is stopped, systemd automatically unmounts the data partition first — very nice!

Hopefully this example illustrates how to do simple tasks in systemd.  It appears that instead of having a complex script or program to initialize a system, the systemd “way” is to create a number of small units that get connected with dependencies.  This seems like a much more controlled and flexible approach.

A Review of Graphical Application Solutions for Embedded Linux Systems

One of the decisions we face when building Embedded Linux systems is what components to use. With Open Source software, there is often more than one good option. Graphical libraries are no exception. In this article, we’ll examine GTK+, Qt, EFL, Android, and HTML/Javascript.

There are many factors that go into a choice like this, but some of them are:

  • Does the application need to run on Windows or MacOS?
  • Does the GUI need to be viewed remotely over a network?
  • Are dynamic effects (think iPhone) desired?
  • Does the application need to run on low-end CPUs (ones without a GPU)?

Putting some effort into selecting the right GUI technology is important for the following reasons:

  • Most of the development effort for the product will likely be in the graphical application, so it makes sense to maximize the productivity of the application developers.  Typically there might be 3-10 application developers on a project for every system software developer.
  • You want a technology that will scale with new revisions and improvements of the product.
  • You want a technology that will be supported and improved long term.

With Open Source software, things are always changing.  Leading technologies change.  This can be illustrated by following Intel’s support for their open source software platforms.  While some of this may be driven by politics, we can also see clear technical and licensing reasons why these shifts were made.

  • Moblin (GTK+), became public around 2007, merged into MeeGo in 2010
  • MeeGo (Qt), announced in 2010-02, cancelled 2011-09
  • Tizen (HTML/Javascript, EFL), announced in 2011-09

We can see similar shifts in other companies:

  • Nokia started with GTK+ as the GUI technology for their tablet computers, and then shifted to Qt
  • Samsung has been using GTK+ on top of DirectFB, and now is moving toward EFL/HTML/Javascript for some phones
  • Many phone manufacturers are producing Android products
  • Palm moved from a proprietary GUI to HTML in webOS

GTK+

GTK+ is part of the GNOME desktop project and is perhaps the most used graphical library for desktop Linux applications, and in the past it has been very popular in embedded systems.  Nokia invested heavily in GTK+ with its early tablet products (N770, N800, N900, etc).  However, with the advent of the iPhone and faster processors with GPUs, everything has changed.  The standard is now dynamic GUIs with sliding effects, etc.  The Clutter project is a library that can be used to build dynamic GUIs and fits in with the GNOME stack.  GTK+ supports Windows and MacOS, but probably not as well as Qt does.

Qt

Qt is a very mature project that is also extensively used for desktop projects, and recently it has been used on some of Nokia’s phones.  Qt was originally developed by the Norwegian company Trolltech. Originally, Qt was only offered under either a proprietary license or the GPL.  This meant that if you wanted to write a proprietary application using Qt, you had to purchase Trolltech’s commercial license. This factor alone probably made GTK+ a much more popular solution for many years for embedded Linux GUIs, including most cell phone stacks.  In 2008, Trolltech was acquired by Nokia, and shortly after that, Qt was offered under the LGPL license, giving it the same license as GTK+.

One of Qt’s compelling features, introduced in Qt 4.7, is its QML (or Qt Quick) technology.  This allows you to write declarative GUIs in a Javascript-like syntax with many automatic bindings.  There is good support for dynamic operations like sliding effects, and the performance is reasonable, even on low-end systems without a GPU.

In the future, Qt 5.0 will require OpenGL, and hence be relegated to high-end ARM CPUs with a GPU, or to desktop systems.

Qt’s cross platform support is excellent, and provides good native support for Linux, MacOS, and Windows.

Recently, Nokia has made efforts to set up Qt as more of a community project, instead of retaining exclusive control over it.

EFL

EFL (Enlightenment Foundation Libraries) is a project that started out as the Enlightenment window manager and grew into a set of general purpose libraries.  It claims to be more efficient than GTK+ and Qt, and to work with or without hardware acceleration.  Recently, EFL seems to have garnered the commercial interest of Samsung, Intel, and others involved in the Tizen project.  According to a presentation by Carsten Haitzler (one of EFL’s founders and developers), Samsung was using GTK+ and DirectFB, but switched to EFL after seeing the performance.  Perhaps the most compelling story for EFL is its high performance over a range of hardware capabilities (from simple phones with low-end processors, to high-end smartphones running OpenGL).  Parts of EFL are also used in the commercial GUI FancyPants.

Android

Android is an interesting GUI solution, especially as many developers have experience working on Android applications.  Android now seems to be used in many applications where Windows CE was used in the past, probably due to its polished application development tool-set.

HTML/Javascript

Application development with HTML and Javascript is one of the more interesting developments because many embedded systems are headless (don’t have a local display). Couple this with the fact that many users now have smartphones or tablets readily available, and it may make sense to simply use an external device for displaying the UI.  There is often a requirement for accessing the UI of a device remotely, and in this case, HTML/Javascript works very well.  If the UI needs to be displayed locally, then a fairly powerful CPU is required (an ARM Cortex-A8, etc) to run a modern web browser.  If there is no physical display on the device, and the UI is accessed remotely on a computer or mobile device, then a less powerful CPU is required because the embedded device does not actually have to do any of the rendering.  HTML/Javascript also has the benefit that it is a very popular technology, thus there are many experienced developers.

Summary

Each of the above technologies has benefits and drawbacks.  Understanding your project’s requirements, and what each solution offers is key to making the best decision.

A Linux Kernel Tracing Tutorial

The Linux kernel has a fairly extensive tracing infrastructure that is quite useful for debugging.  There are a number of things you can do with tracing, but the focus of this article will be the traditional printk type debugging we often end up doing to trace initialization issues with a driver.  The kernel source documents this infrastructure in Documentation/trace/ (see ftrace.txt in particular).

In this example, I am working on a new audio driver.  The typical experience with a new driver is that you install it and nothing happens, because something is not registered correctly with the Linux driver model.  So, the first thing I do is start with the platform_device_add() function in my driver’s init function.  To observe the kernel activity around the kernel platform code, I can do the following:

cd /sys/kernel/debug/tracing/
echo 0 > tracing_on (keep trace from filling up until we set filter)
echo function_graph > current_tracer
echo platform* > set_ftrace_filter
echo 1 > tracing_on
cat trace_pipe (leave running in a different shell)
<insmod my driver>

After executing the above, we see the following.  For this example, trace_pipe is preferred because reading it consumes the trace data, so only new information is shown.

0) + 30.518 us   |  platform_device_alloc();
0)               |  platform_device_add() {
0)   0.000 us    |    platform_uevent();
0) + 30.518 us   |  platform_uevent();
0)   0.000 us    |  platform_uevent();
0) + 30.518 us   |    platform_match();
0) + 30.518 us   |    platform_match();
0)   0.000 us    |    platform_match();
0)   0.000 us    |    platform_match();

...

0) + 30.518 us   |    platform_match();
0)   0.000 us    |    platform_match();
0)   0.000 us    |    platform_match();
0)   0.000 us    |    platform_match();
0)   0.000 us    |    platform_match();
0) ! 3936.767 us |  }
0) + 30.518 us   |  platform_uevent();
0) + 30.518 us   |  platform_device_alloc();

From the above, I can conclude that platform_match() is not succeeding, because I would expect more activity.  At this point I chose to add a trace_printk():

diff --git a/drivers/base/platform.c b/drivers/base/platform.c
index 7a24895..f9ce0c7 100644
--- a/drivers/base/platform.c
+++ b/drivers/base/platform.c
@@ -662,6 +662,8 @@ static int platform_match(struct device *dev, struct device_driver *drv)
        struct platform_device *pdev = to_platform_device(dev);
        struct platform_driver *pdrv = to_platform_driver(drv);

+       trace_printk("pdev->name = %s, drv->name = %s", pdev->name, drv->name);
+
        /* Attempt an OF style match first */
        if (of_driver_match_device(dev, drv))
                return 1;

Now, if I re-run the trace, I see the following:

 0)               |      /* pdev->name = soc_audio, drv->name = davinci_emac */
 0)   0.000 us    |    }
 0)               |    platform_match() {
 0)               |      /* pdev->name = soc_audio, drv->name = snd-soc-dummy */
 0)   0.000 us    |    }
 0)               |    platform_match() {
 0)               |      /* pdev->name = soc_audio, drv->name = soc-audio */
 0)   0.000 us    |    }
 0)               |    platform_match() {
 0)               |      /* pdev->name = soc_audio, drv->name = omap-pcm-audio */
 0)   0.000 us    |    }
 0) ! 4241.943 us |  } /* platform_device_add */

From the above, it looks like we have a simple mismatch between “soc_audio” and “soc-audio.”  After fixing this problem and re-installing the module, we now have:

 0)               |    platform_match() {
 0)               |      /* pdev->name = soc-audio, drv->name = snd-soc-dummy */
 0)   0.000 us    |    }
 0)               |    platform_match() {
 0)               |      /* pdev->name = soc-audio, drv->name = soc-audio */
 0)   0.000 us    |    }
 0) + 91.553 us   |    platform_drv_probe();
 0) ! 4241.943 us |  } /* platform_device_add */

Now we can see that the names match, and the probe function is now being called.  At this point, we may want to turn on tracing of some additional functions to try to determine what is happening next.

echo "platform* snd* mydriver*" > set_ftrace_filter

And the result:

 0)               |      /* pdev->name = soc-audio, drv->name = snd-soc-dummy */
 0)   0.000 us    |    }
 0)               |    platform_match() {
 0)               |      /* pdev->name = soc-audio, drv->name = soc-audio */
 0) + 30.517 us   |    }
 0)               |    platform_drv_probe() {
 0)               |      snd_soc_register_card() {
 0) + 30.518 us   |        snd_soc_instantiate_cards();
 0) ! 17852.78 us |      }
 0) ! 17883.30 us |    }
 0) ! 22125.24 us |  } /* platform_device_add */

With the above additional information, we can continue to learn more about the flow through the kernel.

While all of the above could have been done with printks, it would have been more time consuming.  The kernel function tracing capabilities allow us to quickly get a high-level view of the flow through the kernel without manually adding a bunch of printk statements.  The kernel tracing features are completely contained in the kernel, without requiring additional user-space utilities, which makes them very convenient to use in embedded systems.  The low overhead is also important in resource-constrained embedded systems.
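
When a debugging session is finished, tracing can be returned to its default state using the same files we set above:

echo 0 > tracing_on
echo > set_ftrace_filter (clear the filter)
echo nop > current_tracer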

The easy way to get serial terminal in Linux

When doing embedded Linux development, most of us spend our time tethered to a target system with a serial cable, which is used for a serial console.  Minicom is the de facto serial terminal software for Linux.  However, Minicom is a little fussy in that you typically have to set it up for each port you want to use.  This is no big deal, but it is yet another hurdle for new users.  And with 8-port USB-to-serial adapters, I have a lot of ports to set up.

Just recently, I discovered that screen can be used as a serial terminal program:

screen /dev/ttyUSB0 115200

A few notes on using:

  • to exit screen: Ctrl-a k
  • to write a hardcopy of the screen image: Ctrl-a h
  • to get help: Ctrl-a ?

All the neat features of screen are too numerous to list here, but one more that I’ll point out is the scrollback/copy feature (activated by Ctrl-a [ ).  This allows you to scroll back, and the navigation works much like vi.  What could be nicer?
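
Another screen feature that pairs well with serial consoles is named sessions, which let you detach from a console and come back to it later (the session name below is arbitrary):

screen -S console0 /dev/ttyUSB0 115200 (start a named session)
Ctrl-a d (detach, leaving the session running)
screen -r console0 (re-attach later)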

Verizon UML290 and Sprint U600 USB Modems in Embedded Systems

Recently I tested support for the Verizon UML290 and Sprint U600 USB Cellular modems in an embedded Linux system.  Both modems support 3G and 4G networks, but only the 3G modes were tested due to lack of 4G coverage at the testing location.

Fortunately, both modems function very similarly to previous modems, so with the drivers available in the Linux kernel, and standard pppd support in OpenEmbedded, they worked fine.

The Verizon UML290 modem provides a challenge in that it must be manually switched between 4G and 3G modes.  Typically this is done automatically by the vzaccess program Verizon supplies with the modem, which runs on Windows.  The solution for this system was to manually set the modem to 3G mode as detailed on the following page:

http://www.evdoinfo.com/content/view/3492/64/

It appears that some embedded systems such as the Cradlepoint routers have implemented automatic 3G/4G switching support for the UML290, so this is no doubt possible with a little effort.

The Sprint U600 modem appears to default to 3G, or to switch automatically inside the modem.

The same pppd scripts can be used with both modems:

# /etc/ppp/peers/verizon_um290
user a
password v
connect "/usr/sbin/chat -v -f /etc/ppp/peers/verizon_um290_chat"
defaultroute
usepeerdns
ttyACM0
921600
local
debug
-detach

# /etc/ppp/peers/verizon_um290_chat
# expect/send pairs: reset the modem, dial #777, then wait for CONNECT
'' 'ATZ'
'OK' 'ATDT#777'
'CONNECT' ''

To initiate a connection:

pppd call verizon_um290

With Verizon cellular modems, it appears that port 22 is often blocked, so if you need to access a remote device via ssh, you may need to run ssh on a higher port number (see the sketch at the end of this article).  With 4G networks, it appears that the networking setup may be different, in that a public IP address may not be assigned.  From the above evdoinfo.com page, we find the following text:

This fix will also work for users looking to use their device for remote based applications because it assigns a public facing IP address (3G ONLY). With eHRPD you’re assigned a private IP in either 3G or 4G mode, which has prevents UML290 users from accessing remote applications.

Perhaps the Rev A HDR Service mode will also work in 4G mode, but it seems that as cellular networks become more complicated, there will be more issues to deal with in using USB cellular modems for remote access in embedded systems.
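
If you do need ssh access to a device on a network that blocks port 22, a minimal sketch (assuming OpenSSH on the device; the port number here is arbitrary) is to move sshd to a higher port and connect accordingly:

# /etc/ssh/sshd_config on the device
Port 2222

Then, from the remote machine:

ssh -p 2222 root@<device-address>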

Git and Distributed Development

This is part of an ongoing series of articles on the Git version control system.

The “many repository” paradigm has been partly driven by the distributed development paradigm.  Git solves the problem of multiple developers working in multiple repositories very well.  Because we want to use and customize projects like the Linux kernel, U-boot, and OpenEmbedded in our projects, we naturally find ourselves in the situation where we need to manage multiple repositories.  Yes, you can check the Linux kernel into your company Subversion repository, but you are much better off long term if you bite the bullet and implement your own Git infrastructure.

As we consider the product development process, we need to consider the life cycle of a product.  Most products live for at least several years, and will go through several software iterations.  If we can update the software components we use, then we can add value to the product in the form of new or updated drivers to support new peripherals, new libraries, performance improvements, etc.  But we are dealing with millions of lines of source code, so we must have an efficient way to deal with software projects of this size.  Figure 2 below illustrates how you might organize a typical project.  Notice that we can pull updates from the U-boot and kernel source trees at any time in the development process, and merge the changes with our own modifications.  We might have an outside team working on an application, and we can then easily synchronize the repositories when it makes sense.
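
A sketch of that update flow, using the kernel as an example (the remote URL and branch name are placeholders for whatever upstream tree you track):

git remote add upstream git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
git fetch upstream (download upstream history without touching our branches)
git merge upstream/master (merge upstream changes with our own modifications)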

There are many other design flows possible.  Once you have the ability to support multiple branches and repositories easily, it becomes trivial to implement a staging/testing repository for QA processes, maintenance repositories for supporting old releases, etc.

Even at the personal developer level, Git’s distributed capabilities offer many advantages.  Each Git workspace is actually a full Git repository.  This means you can check changes in locally, re-organize your changes, track changes while off-line, etc.  For this reason, many developers are now using git-svn when they need to work with Subversion repositories.  With git-svn, you have all the benefits of using git locally, and yet you can easily synchronize with Subversion repositories.  And this leads us to our next topic: cheap branches (coming soon).

How do modern USB chargers work?

As we help customers design products, we often try to leverage the latest cell phone practices and technologies.  One of these is USB charging.  There has been a push in recent years to standardize on USB chargers for cell phones.  There are a number of organizations involved, including the USB-IF and OMTP, both of which publish charging specifications.

There is also a Chinese standard titled the “Telecommunications Industry Standard of the PRC” that has been instrumental in influencing these standards.

The fundamental problem is that a battery powered device needs to know whether it is plugged into a computer USB port or a dedicated charger.  A standard USB port on a computer is only specified to provide 500mA.  Typically a dedicated charger that plugs into a wall outlet provides more than 500mA so that the battery can be charged more quickly.  To differentiate between a computer USB port and a dedicated charger, the dedicated charger shorts the D+ and D- USB signals together with a resistance of less than 200 ohms (specified by the USB-IF).

One of the goals of these standards is that any charger can be used with any battery powered device.  The next question is how to handle the case where every USB charger has a different rated current.  How can a device that charges its battery at 1.7A be compatible with a charger that only outputs 0.7A?  One theoretical solution would be for the device to query the current capacity of the charger and then draw only that much current.  It turns out a much simpler approach is used.  The USB-IF provides the following chart in the above specification:

[Chart showing the charger Required Operating Range, taken from http://www.usb.org/developers/devclass_docs/Battery_Charging_V1_2.zip]

As long as the charger outputs power in the Required Operating Range, the battery powered device must be able to use whatever power is available to charge its battery.  If the device uses more current than the charger can supply, the charger simply goes into a constant current mode and the battery charges slower than it would with a higher capacity charger.  Thus we have a very simple scheme where we can theoretically use any charger with any device.

As a simple test, I charged a Nokia N900 phone (which came with a 1.2A charger) with a smaller LG charger that is rated for 0.7A.  I monitored the LG charger a couple times during charging to make sure it was not getting hot.  It seemed to charge the N900 battery just fine.

When purchasing after-market USB chargers, it is sometimes difficult to determine if they have the USB D+ and D- lines shorted together.  In one case I purchased a car USB charger that did not work with my phones, but was able to “fix” it by disassembling it and putting a solder blob between the D+ and D- signals on the USB connector.

While shorting the D+ and D- pins usually makes a charger work, it is nice to know if the charger is compliant with the USB-IF specification and is designed to work in the constant current mode without shutting down, catching fire, etc.

Git and Why Multiple Repositories

This is part of an ongoing series of articles on the Git version control system.

This article discusses the trend in software configuration management toward multiple repositories, rather than one large repository.  In the past, when many companies used Subversion or comparable systems, there was typically one huge company repository (I’ve seen them reach tens of GB in size) that held a hierarchical tree of all the company source code.  This worked fairly well, and is comfortable in that the organization is very similar to a file system directory structure.  However, this model is not very flexible in that it does not have a consistent way to re-use components between different projects.  Some people simply copy source code.  Some have a “common” project that is included in all their other projects.  Subversion externals can be used.  With Git, typically a separate repository is created for each software component.  There are perhaps several reasons for this, but one is that Git simply does not scale to huge multi-gigabyte repositories.  However, this turns out to be a blessing in disguise, as I think it is a better model in many cases.  What we end up with is more of a catalog of software components rather than a rigid hierarchy.

There is much emphasis these days on modular, re-usable software components (Object Oriented Programming, plugins, etc.).  Why not keep things modular at the repository level?  Another example of this type of organization is the Internet itself.  It is not a hierarchy of information, but rather a flat system that is organized by hyperlinks.

One of the benefits of organizing your source code this way is that it encourages clean boundaries between software components.  Each software component needs to stand on its own without being propped up by header files located in an unrelated source tree 3 levels up in the directory hierarchy.  This type of organization forces us to make better use of standard build system practices.

How do you implement this type of repository infrastructure?  Build systems such as OpenEmbedded, Gentoo, and the Android build system manage this fairly well.  However, Git also includes a feature named “submodules” that provides a mechanism for one repository to reference source code from another.  What you end up using really depends on your needs, and what you are trying to accomplish.
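
For the Git submodule route, a minimal sketch (the URL and path are hypothetical):

git submodule add git://example.com/libfoo.git libs/libfoo (record libfoo as a component of this repository)
git submodule update --init (fetch submodule contents after a fresh clone)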

A screencast is also available that covers this topic.

Git Overview Screencast

This screencast (use Firefox to view the screencast) provides an overview of the Git version control system.  There are 3 features of Git that are especially interesting:

  1. many repositories (vs. one large repository)
  2. distributed development
  3. cheap branches

The fundamental driver for better tools is increasing system complexity.  More and more we are required to manage and integrate more third party software, work in distributed teams, and more effectively re-use the software we do have.  Git is a tool that helps you accomplish these goals.

C++ callbacks to member functions

How to properly do callbacks to C++ member functions is something that has intrigued me for some time now.  There are a number of solutions, none of which I really liked.  But now with std::tr1::function and std::tr1::bind, there appears to be a clean solution.

In the past, if you wanted a callback to a C++ member function a typical approach was to create a static function wrapper as described here.  However, this is ugly, and a step backwards from the capabilities of plain old C.

std::tr1::function is a function wrapper that can be used to create function objects for a number of scenarios.  We are interested in representing class member functions.

#include <tr1/functional>

struct X {
	int foo(int i) { return i + 1; }	// example body so this compiles
};

int main()
{
	// the function object takes the object pointer as an explicit argument
	std::tr1::function<int (X*, int)> f;
	f = &X::foo;

	X x;
	f(&x, 6);	// calls x.foo(6)
}

f() is now a simple function representation that can be passed around, etc.  However, what we don’t like is the reference to the struct X type, which makes it much less generic.  This is where the std::tr1::bind function comes in.  With bind, we can dynamically bind one function to another, and re-arrange the arguments, etc.  So to make a generic portable function that points to X::foo(), we can do something like the following.

#include <tr1/functional>

struct X {
	int foo(int i) { return i + 1; }	// example body so this compiles
};

int main()
{
	using namespace std::tr1::placeholders;	// needed for _1

	std::tr1::function<int (int)> f;

	X x;
	f = std::tr1::bind(&X::foo, &x, _1);

	f(6);	// calls x.foo(6)
}

Now, f() is a very generic function that could be used in various interfaces without specifically referencing struct X.

A more complete example of how callbacks might work is given below:

#include <iostream>
#include <tr1/functional> 

struct B
{
  std::tr1::function <void ()> _callback;

  void reg_callback(std::tr1::function <void ()> callback)
  {
    _callback = callback;
  }

  void unreg_callback()
  {
    _callback = NULL;
  }

  void process() {
    // simply calls callback
    if (_callback)
      _callback();
  }
};

struct A
{
  B * _b;

  A(B * b) :
    _b(b)
  {
    std::tr1::function<void()> callback;
    callback = std::tr1::bind(&A::callback, this);
    b->reg_callback(callback);
  }

  ~A()
  {
    _b->unreg_callback();
  }

  void callback() {
    std::cout << "A::callback called\n";
  }

  void process() {
    _b->process();
  }
}; 

int main()
{
  B * b = new B();
  A * a = new A(b); 

  // the following is example where object b calls
  // back into a method
  // in object a
  a->process();

  // now delete object a, and verify the callback
  // in B does not crash or still get called
  delete a;

  // now b should not execute the callback
  b->process();

  delete b;
}

In the above example, the callback in the object a will be called once.

Yocto and OpenEmbedded

Recently, I attended an Embedded Linux summit hosted by the Linux Foundation to discuss the Yocto project. Overall, I thought the meeting was very positive and constructive. Having met and discussed at length the methods and goals of the Linux Foundation with some of their people, I’m impressed with their approach. They are there to help Linux succeed in the Embedded space in any way they can.

It is my understanding that the Yocto project has the goal of making Linux much more accessible and usable in embedded systems, and of improving efficiency. While the OpenEmbedded project has served many of us well for many years, we can all readily see that there are some deficiencies. These can be overcome by experienced developers, but there are a number of areas that can obviously be improved. Not only are we concerned with making it easier for new users to adopt embedded Linux; there are also areas where we can drastically improve the efficiency of implementing embedded Linux systems for those who are experienced. It was stated at one point that any tools implemented must be useful to the new developer as well as the experienced developer.

It should be noted that building an embedded Linux system is an inherently complex undertaking, and although we can improve tools and processes to make it more efficient and somewhat easier, in the end it is still a very complex problem, and it will require considerable skill to be an effective embedded Linux developer. There is no substitute for experience and developer skill. Just as we would not slap a fancy GUI on top of a robotic surgery instrument and tell a novice to have at it, it is still going to require considerable engineering skill to be effective in developing embedded Linux systems. But if we improve the base technologies and tools, we will spend less time doing the same things over and over, and will have more resources for implementing new things.

One example of the pain experienced in a typical OpenEmbedded system is getting Oprofile to run. Oprofile requires that an ARM system be built with frame pointers (OE defaults to building without them), and that you have symbols in your binaries. Figuring out how to build a reasonably sized image with the needed symbols might be a 1-2 day process for me. Then there is the issue of kernel symbols, etc. I’m sure many other developers go through the same steps, but as many of us are paid to develop products on tight schedules, we don’t have the time to polish the Oprofile integration and tools.

As an extension of this, the application development story with OpenEmbedded is not all that great. Yes, we can generate a binary SDK, but again it may take a couple days of messing around to figure out the process, and get all the necessary libraries, etc. Then you still have the problem that application developers want Eclipse/Qt Creator integration, etc. Again, this can all be done, but takes time, and many people are doing the same things over and over again.

The Yocto project seems to have two primary thrusts:

  1. stabilize a subset of the OpenEmbedded metadata (called Poky) and make it very high quality.
  2. improve the tools associated with the build process and with general embedded Linux development.

One thing the OpenEmbedded project has historically lacked is industry funding and ongoing developer commitment. The OE project is characterized by a rotating group of developers: when someone needs OE for a project, they jump in and do a lot of work on OE, then disappear again once their need is met. We have very few developers who work on the project consistently for long periods of time. This has been positive in some ways, in that we have a very large number of contributors and a very open commit policy. There are so many patches coming in right now that we can’t even keep up with processing them. Week after week, we have a healthy number of committers and changesets. The community behind OE is rather astounding, and it is amazing how OE succeeds almost in spite of itself, as a self-organizing project without much organization.

In the past, the OpenEmbedded and Poky projects have existed as two independent trees, and things were shared back and forth manually. This works fairly well, as recipes are self-contained and can generally just be copied from one tree to another. However, some of the core recipes are fairly complex, and if they start to diverge, sharing information can get more difficult.

It seems the vision that is emerging is that Poky could become the core component of OpenEmbedded. Poky would contain a very controlled subset of core recipes that must remain at a very high quality level. OpenEmbedded could then expand on this (through the use of layers) to provide the breadth and scope it has traditionally provided.

We may bemoan the lack of quality in the OpenEmbedded project, as it has thousands of recipes and many of them have rotted, etc. But as a consultant helping people implement products, I still find considerable value in this. For example, one of the products I support uses the HPLIP drivers from HP. Yes, the recipe is broken every time I go to use the latest version, but with a little work I can get it working again. Having something to start with provides value. The same is true for the geos mapping libraries. Very few people are going to use geos, so it will never be in a Poky-type repository, but some of us do use geos, so having a common place like OpenEmbedded to put recipes like this is very important. Using Poky as the core of OpenEmbedded seems like a win-win. We are relieved of some of the burden of maintaining core components (like compilers, the C library, tool integration, SDK generation, etc), but we can still have a very open culture, and provide the wide scope of platform, library, and application support that we have historically provided.

Richard Purdie is poised to become the Linus Torvalds of Yocto, and if OpenEmbedded chooses to base on Poky, then the Torvalds of the OpenEmbedded core as well. I am personally fine with this, as I don’t know anyone else who has contributed so much time and has the same level of skill in the OE/Poky area. He has proven over time to be levelheaded and able to deal with difficult situations. Also, as a Linux Foundation fellow, he is positioned to be company neutral, which is very important for someone in this position.

Yocto is using a “pull” model similar to the Linux kernel for accepting changes. It is planned to put maintainers in place for various subsystems. With the goal of providing a very high quality subset of embedded Linux technologies, this makes a lot of sense. If OpenEmbedded chooses to base on Poky, there is no reason OpenEmbedded can’t continue to use the push model for its layer, which has worked well in the past. But, as we see patches languishing in patchwork, perhaps a pull model might actually be more efficient at processing incoming patches so they don’t get lost. This is also an area where layers might help, in that we could have dedicated people committed to processing patches for each layer. One problem with the OpenEmbedded project is that, with its thousands of recipes, the structure is very flat, and it is fairly difficult to divide it up into areas of responsibility.

So going forward, it seems that if OpenEmbedded can somehow use Poky as a core technology (just like it uses bitbake, etc), then we could have the best of both worlds. With Poky, we get strong industry commitment and dedicated people working on making the core pieces and the tools better (50 engineers are already dedicated). This is something those of us developing products don’t really have time for, and we are not currently funded to do. With OpenEmbedded, we can still keep our broad involvement and our vast recipe and target collection. Yes, the quality of OpenEmbedded will be lower, but it provides a different type of value. Poky provides the known stable core, and OpenEmbedded is just that: it is “open”, and provides the broad menu of components that can be used. Over time this could evolve into something different, but for now it seems a good place to start.

The last thing to consider is the OpenEmbedded brand. It is recognizable from the name what the project is (unlike Yocto and Poky). It has had broad exposure for many years. It has historically been very vendor neutral, with very little corporate direction or influence. From this perspective, it seems the Yocto project can benefit greatly from an association with the OpenEmbedded project. This topic was discussed at the summit, and there was general consensus on the strong awareness of the OpenEmbedded brand, as well as an openness to how branding for the Yocto core build system might look.