
Yocto and OpenEmbedded

Recently, I attended an Embedded Linux summit hosted by the Linux Foundation to discuss the Yocto project. Overall, I thought the meeting was very positive and constructive. Having met and discussed at length the methods and goals of the Linux Foundation with some of their people, I’m impressed with their approach. They are there to help Linux succeed in the Embedded space in any way they can.

It is my understanding that the Yocto project has the goal of making Linux much more accessible and usable in embedded systems, and of improving efficiency. While the OpenEmbedded project has served many of us well for many years, we can all readily acknowledge that it has some deficiencies. These can be overcome by experienced developers, but there are a number of areas that can obviously be improved. Not only are we concerned with making it easier for new users to adopt Embedded Linux; I think there are also areas where we can drastically improve the efficiency of implementing embedded Linux systems for those who are experienced. It was stated that any tools implemented must be useful to both the new developer and the experienced developer.

It should be noted that building an embedded Linux system is an inherently complex undertaking. Although we can improve tools and processes to make it more efficient and somewhat easier, in the end it is still a very complex problem, and being an effective embedded Linux developer will still require considerable skill. There is no substitute for experience and developer skill. Just as we would not slap a fancy GUI on top of a robotic surgery instrument and tell a novice to have at it, it is still going to require considerable engineering skill to be effective in developing Embedded Linux systems. But if we improve the base technologies and tools, we will spend less time messing with these and doing the same things over and over, and will have more resources available for implementing new things.

One example of the pain experienced in a typical OpenEmbedded system is getting OProfile to run. OProfile requires that an ARM system be built with frame pointers (OE defaults to omitting them), and that you have symbols in your binaries. Figuring out how to build a reasonably sized image with the needed symbols might be a 1-2 day process for me. Then there is the issue of kernel symbols, etc. I’m sure many other developers go through the same steps, but as many of us are paid to develop products on tight schedules, we don’t have the time to polish the OProfile integration and tools.
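
To give a flavor of what is involved, below is a rough sketch of the kind of configuration this takes, assuming the DEBUG_BUILD mechanism and the -dbg packages found in the OE metadata (variable and package names should be verified against your tree); it is a starting point, not a polished OProfile setup:

$ cat >> conf/local.conf <<'EOF'
# DEBUG_BUILD selects DEBUG_OPTIMIZATION, which drops -fomit-frame-pointer
# and adds -g, so binaries are built with frame pointers and debug info
DEBUG_BUILD = "1"
EOF
# then add the corresponding -dbg packages (for example busybox-dbg) for the
# binaries you want to profile to your image, and rebuild the image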

As an extension of this, the application development story with OpenEmbedded is not all that great. Yes, we can generate a binary SDK, but again it may take a couple days of messing around to figure out the process, and get all the necessary libraries, etc. Then you still have the problem that application developers want Eclipse/Qt Creator integration, etc. Again, this can all be done, but takes time, and many people are doing the same things over and over again.
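
As a concrete example, one common way to generate a binary SDK with OpenEmbedded is the meta-toolchain recipe; the recipe name and output location below may differ depending on your metadata and configuration:

$ bitbake meta-toolchain
# the resulting toolchain/SDK tarball lands under tmp/deploy/ (the exact
# subdirectory depends on your configuration); unpack it on the application
# developer's machine and source the environment setup script it contains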

The Yocto project seems to have two primary thrusts:

  1. stabilize a subset of the OpenEmbedded metadata (called Poky) and keep it at a very high quality level.
  2. improve the tools associated with the build process and with general embedded Linux development.

One thing the OpenEmbedded project has historically lacked is industry funding and ongoing developer commitment. The OE project is characterized by a rotating group of developers. When someone needs OE for a project, they jump in and do a lot of work on OE, then they disappear again once their need is met. We have very few developers who work on the project consistently for long periods of time. This has been positive in some ways in that we have a very large number of contributors and a very open commit policy. There are so many patches coming in right now that we can’t even keep up with processing them. Week after week, we have a healthy number of committers and changesets. The community behind OE is rather astounding, and it is amazing how OE succeeds almost in spite of itself as a self-organizing project without much formal organization.

In the past the OpenEmbedded and Poky projects have existed as two independent trees, and things were shared back and forth manually. This works fairly well as recipes are self contained, and generally can simply be copied from one tree to another. However, some of the core recipes are fairly complex, and if they start to diverge, then sharing information can get more difficult.

It seems the vision that is emerging is that Poky could become the core component of OpenEmbedded. Poky would contain a very controlled subset of core recipes that must remain at a very high quality level. OpenEmbedded could then expand on this (through the use of layers) to provide the breadth and scope it has traditionally provided.

We may bemoan the lack of quality in the OpenEmbedded project as it has thousands of recipes, and many of them have rotted, etc. But as a consultant helping people implement products, I still find considerable value in this. For example, one of the products I support uses the HPLIP drivers from HP. Yes, the recipe is broken every time I go to use the latest version, but with a little work I can get it working again. Having something to start with provides value. The same is true for the geos mapping libraries. Very few people are going to use geos, so it will never be in a Poky-type repository, but some of us do use geos, so having a common place like OpenEmbedded to put recipes like this is very important. Using Poky as the core of OpenEmbedded seems like a win-win. We are relieved of some of the burden of maintaining core components (like compilers, the C library, tool integration, SDK generation, etc.), but we can still have a very open culture and provide the wide scope of platform, library, and application support that we have historically provided.

Richard Purdie is poised to become the Linus Torvalds of Yocto, and if OpenEmbedded chooses to base itself on Poky, of the OpenEmbedded core as well. I am personally fine with this, as I don’t know anyone else who has contributed so much time and has the same level of skill in the OE/Poky area. He has proven over time to be levelheaded and able to deal with difficult situations. Also, as a Linux Foundation fellow, he is positioned to be company neutral, which is very important for someone in this position.

Yocto is using a “pull” model similar to the Linux kernel for accepting changes. It is planned to put maintainers in place for various subsystems. With the goal of providing a very high quality subset of embedded Linux technologies, this makes a lot of sense. If OpenEmbedded chooses to base on Poky, there is no reason OpenEmbedded can’t continue to use the push model that has worked well in the past for its layer. But, as we see patches languishing in patchwork, perhaps a pull model might actually be more efficient at processing incoming patches so they don’t get lost. This is also an area where layers might help, in that we would have dedicated people committed to processing patches for each layer. One of the problems with the OpenEmbedded project and its thousands of recipes is that the structure is very flat, and it is fairly difficult to divide it up into areas of responsibility.

So going forward, it seems that if OpenEmbedded can somehow use Poky as a core technology (just like it uses bitbake, etc.), then we could have the best of both worlds. With Poky, we get strong industry commitment and dedicated people working on making the core pieces and the tools better (50 engineers are already dedicated). This is something those of us developing products don’t really have time for, and we are not currently funded to do it. With OpenEmbedded we can still keep our broad involvement and vast recipe and target collection. Yes, the quality of OpenEmbedded will be lower, but it still provides a different type of value. Poky provides the known stable core, and OpenEmbedded is just that: it is “open”, and provides the broad menu of components that can be used. Over time this could evolve into something different, but for now it seems a good place to start.

The last thing to consider is the OpenEmbedded brand. It is recognizable from the name what it is (unlike Yocto, and Poky). It has had broad exposure for many years. It has historically been very vendor neutral with very little corporate direction or influence. From this perspective, it seems the Yocto project can benefit greatly from having an association with the OpenEmbedded project. This topic was discussed at the summit, and there was general consensus on the strong awareness of the OpenEmbedded brand, as well as an openness to how branding for the Yocto core build system might look.


OpenEmbedded srctree and gitver

Recently an OpenEmbedded class named srctree became usable.  The srctree.bbclass enables operation inside of an existing source tree for a project, rather than using the fetch/unpack/patch idiom.  The srctree.bbclass in combination with the OpenEmbedded gitver.bbclass and git submodules provides a very interesting way to build custom software with OpenEmbedded.

One of the classic problems with OpenEmbedded is how application and kernel developers are supposed to use it.  While OpenEmbedded excels at automating image builds, it is less friendly when used as a cross development tool.  Historically there have been several options for iterative development:

  1. develop in the working directory: cd <tmp>/work/arm…/<my recipe>; ../temp/run.do_compile …
  2. a variation of #1: bitbake -c devshell <my recipe>  (a short sketch of this follows the list)
  3. manually set up an environment that uses the toolchain generated by OE.  As an example, see this script in the BEC OE template.
  4. a variation of #3: export an SDK that includes a toolchain and libs
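
For reference, a rough sketch of option 2 looks like the following (the recipe name my-app is just a placeholder):

$ bitbake -c devshell my-app
# a new shell opens in the recipe's source directory with the cross
# compiler, CFLAGS, and related variables set up for the target
$ make
$ exit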

While the above solutions work OK, the process can be a little cumbersome.  Unless your OE recipe pulls software directly from the tip of an SVN repository, you may have to manually update the recipe after you make changes, create patch files, etc.  There is also the problem that if your recipe fetches the latest from SVN, it drastically slows down recipe parsing, as bitbake has to check the repository for a new version every time the recipe is processed.

The optimal solution would be to simply check a software component out of a version control system and build it directly using OpenEmbedded.  Icing on the cake would be if the generated package automatically extracted version information from the version control system.  This would facilitate iterative development for software components that need to be cross compiled.

Although srctree can be used with any directory of source code, it really works best with a git repository.  The gitver.bbclass provides a GITVER variable which is a (fairly) sane version, for use in ${PV}, extracted from the ${S} git checkout, assuming it is one (text from recipe).  gitver uses the ‘git describe’ command to extract the last tag and uses that for the version.

The best way to illustrate the use of these tools is an example:

The easiest way to try this is to clone the above project into your openembedded/recipes directory:

$ cd openembedded/recipes
$ git clone https://github.com/cbrake/autotools-demo.git
$ cd autotools-demo
$ git describe
1.1
$ git tag -l
1.0
1.1

Notice that git describe simply returns the latest tag.  The recipe can be located in the same directory as the source code and has the following contents:

# recipe to build the autotools-demo in the working tree

inherit srctree autotools gitver

PV = "${GITVER}"

Can’t get much easier than that!  If you build the recipe, you end up with a package:

$ bitbake autotools-demo
$ ls tmp/deploy/glibc/ipk/armv5te/autotools-demo_1.1-r0.6_armv5te.ipk

Now, what happens if you make changes and commit them?

$ cd .../autotools-demo
$ (make a change and commit)
$ git describe
1.1-1-gfbc1ecc (notice the count and hash automatically appended)
$ (make another change and commit)
$ git describe
1.1-2-g7ad3715 (notice the count is now 2)

If we bitbake the recipe now, we end up with a package named:

tmp/deploy/glibc/ipk/armv5te/autotools-demo_1.1-2-g7ad3715-r0.6_armv5te.ipk

The gitver class in OpenEmbedded automatically takes care of creating a usable PV (package version) that always increments.

So in summary, srctree and gitver give developers a convenient way to handle custom components that change often in an Embedded Linux build without increasing parse times, requiring manual tweaks to version numbers, or creating a separate workspace for each version of the application that is built.  As practices such as continuous integration become more common, OpenEmbedded features like this are increasingly needed.  An added benefit is that the OpenEmbedded recipe can now be stored in the same location as the source code.  Perhaps in the future, most applications will include an OpenEmbedded recipe as part of their source code, and git submodules could be used to simply pull in the components you want to use.
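
As a rough sketch of that idea, and assuming your recipe collection or overlay is itself a git repository, the demo project above could be pulled in as a submodule (the paths here are only illustrative):

$ cd openembedded/recipes
$ git submodule add https://github.com/cbrake/autotools-demo.git autotools-demo
$ git commit -m "add autotools-demo as a submodule"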

2017-11-08 update: A tool named devtool is now the preferred way to do much of the above.


Qt Creator for C/C++ development

Recently, I’ve been evaluating Qt Creator for general C/C++ development.  I’m currently involved in the development of a rather large C++ application that is approaching 200,000 lines of code and 1000 source modules.  In the past, I’ve typically used Vim for editing, and Eclipse as a gdb front-end when needed.  Qt Creator is a rather new tool developed by Nokia’s Qt team.  What initially attracted my attention was that one of the reasons the project was started was that no existing tool effectively handled the Qt codebase, which is quite large.  Things I like about Qt Creator:

  • it works fairly well with any C++ Make-based project.  This includes projects built with autotools as well as the Qt qmake build system.
  • easy to import existing projects (see the sketch after this list)
  • it is very fast.  Indexing 200,000 lines of code happens in around a minute or so.
  • provides a Vim compatibility mode that actually works
  • provides fast methods for finding files, symbols, and seeing where symbols are used
  • did I mention yet that it is fast?
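
For projects that do not use qmake, one way to import them is Qt Creator’s generic project support, which is driven by a few plain file lists.  Below is a rough sketch of generating those lists by hand for a hypothetical project named myproject (the .creator, .files, and .includes file names come from Qt Creator’s generic project feature; check them against the version you are running):

$ cd ~/src/myproject
$ find . -name '*.cpp' -o -name '*.h' -o -name '*.c' > myproject.files
$ find . -type d > myproject.includes
$ touch myproject.creator
# open myproject.creator in Qt Creator to get indexing and code navigation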

I also recorded a screencast that demos Qt Creator with a large project (can be viewed in Firefox).  As always, I’m interested in what others find interesting in this or other tools.  A future effort will be to use Qt Creator to build and remotely debug ARM binaries; I am interested in what others have done in this regard.

If you do try Qt Creator, I recommend the latest pre-release snapshot.


Installing OMAP3 images on a SD card

This article and screencast are a continuation of the last couple of posts describing the BEC OE build template.  Again, the purpose of a build system is to automate tedious manual tasks, and in doing so, we end up documenting how the build works.  Having a good build system is important during product development so that you have an easy, repeatable process for building and deploying images.  One of these tasks is setting up an SD card and installing images to the SD card that an OMAP system can boot from.  This screencast goes over the script and makefile targets that are used to automate this process.

In summary, the steps to set up an SD card and install an image using the build template are listed below (a rough sketch of what these steps do under the hood follows the list):

  1. cd <build template directory>
  2. determine the SD card device name (/dev/sdX)
  3. unmount any file systems mounted on the SD card
  4. sudo ./scripts/omap-sd-partitions.sh /dev/sdX  (make sure you have the right device!!!)
  5. make omap-install-boot
  6. make omap-install-<image name>-rootfs
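
Roughly speaking, the partitioning script and the two install targets boil down to something like the following.  The device name, mount points, and file names are placeholders, so double-check them against your setup (especially the device!) before running anything as root:

$ sudo mkfs.vfat -F 32 -n boot /dev/sdX1       # small FAT boot partition
$ sudo mkfs.ext3 -L rootfs /dev/sdX2           # rest of the card for the root filesystem
$ sudo mount /dev/sdX1 /mnt/boot
$ sudo cp MLO /mnt/boot/                       # copy MLO first so the OMAP boot ROM can find it
$ sudo cp u-boot.bin uImage /mnt/boot/
$ sudo umount /mnt/boot
$ sudo mount /dev/sdX2 /mnt/rootfs
$ sudo tar -C /mnt/rootfs -xjf <image name>.tar.bz2
$ sudo umount /mnt/rootfs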

Previous articles in this series:


Creating a Custom OpenEmbedded Image

In this article and screencast, we’ll demonstrate how to create a custom Linux OS image using the OpenEmbedded build system.  This demonstration builds on the earlier article about using the BEC OE build template.  The OpenEmbedded build system is similar to Linux distributions in that you can select from a wide array of components to install.  One of the big differences is that you can select these components when building the OS image, instead of after you have installed the “standard” image.  This screencast demonstrates how to set up a custom image (can be viewed with Firefox).

A related article also covers this procedure.

In summary, the steps include:

  1. Create an image recipe (typically in your metadata overlay).  An example is located here.  An existing image can be referenced for simplicity; a minimal sketch also follows this list.
  2. Figure out what packages need to be added.  This can be done by browsing the recipe tree to see what is available.
  3. Bitbake the recipe.
  4. Look at what packages were generated.  In this example they are found in the build/angstrom-2008.1/tmp/deploy/glibc/ipk/armv7a directory.
  5. Add the relevant package names to your image recipe.
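
As a starting point, a minimal image recipe might look something like the sketch below.  The package names are only examples, and the recipe path assumes an images directory in your overlay, so adjust both for your project:

$ cat > recipes/images/my-demo-image.bb <<'EOF'
# minimal custom image recipe (sketch)
inherit image

IMAGE_INSTALL += "task-base dropbear i2c-tools"
EOF
$ bitbake my-demo-image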

As the OpenEmbedded project includes over 6000 recipes, it gives you a big head start in including standard components in your Embedded Linux build.  Leveraging these existing components is why we use Linux, and having a way to easily build and deploy these components to your target hardware is what OpenEmbedded is all about.


OpenEmbedded Build Template

Setting up an OpenEmbedded build is a fairly simple process if you carefully follow the instructions.  There are also a number of scripts available that automate the setup, such as the OpenEmbedded Tools for Eclipse, the Angstrom setup scripts, and the KaeilOS Openembedded Manager, and I’m sure there are many more (feel free to add them in the comments section).  As we have helped a number of clients use OpenEmbedded in commercial projects, we have the following requirements for a build template:

  1. The build needs to be completely automated such that only one step is required to build images.
  2. We need to be able to lock down versions of various components.
  3. Needs to be as simple as possible.

As we continue to think about creating a build template, the primary reason for creating one is to get repeatable builds with very little effort.  The build template serves as documentation: how did we build the 1.10 release?  A good build system allows any developer to create any release on just about any machine at any time.  A good build system does not depend on the “golden build machine.”  We have been using makefile wrappers around OpenEmbedded for several years now to accomplish much of this.  These wrappers were typically stored in a Subversion repository with the rest of the project files.  However, now that most open source projects have moved to Git repositories, there are significant advantages to using Git for the build system.  Git has a feature called submodules which allows you to embed a git repository inside a source tree, always pointed at a particular commit.  In concept, this is perfect.  In practice, Git submodules are sometimes difficult to use, but considering the functionality provided, I think it is a reasonable trade-off.
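
To make this concrete, here is a rough sketch of how submodules pin component versions in a build template (the repository URLs and paths are only illustrative):

$ git submodule add git://git.openembedded.org/bitbake bitbake
$ git submodule add git://git.openembedded.org/openembedded openembedded
$ git commit -m "pin bitbake and openembedded to known-good commits"

# later, on a fresh clone of the build template:
$ git submodule init
$ git submodule update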

The use of Git encourages a different development paradigm.  Typically when people use Subversion within a company, there is one repository that stores everything.  Subversion is actually quite good at this.  I’ve run into a number of companies who have repositories many GB in size that have been running for many years.  The stability is remarkable.  Git does not scale in this way, so people tend to set up a separate repository for each component, application, etc.  While there may be some disadvantages to this model, there are also significant advantages, and we can learn some of these by observing Open Source Software (OSS) development.  Just as we strive for modularity in software development where each class can stand on its own and be generally useful, OSS maintains this dynamic at the project level.  Each OSS project moves at its own pace, largely decoupled from other projects.  This dynamic forces a level of organization that might be neglected otherwise.  Likewise, in any project, if each major component is stored in its own source code repository, there will also tend to be less coupling, less willy-nilly including of header files from other components, etc.  It helps people think in a more modular manner at the component level.  A component may be the Linux kernel, the bootloader, each application in the system, etc.  A similar contrast is storing documents in a hierarchical filesystem directory structure versus storing documents in a Wiki, where they live in a flat structure and are organized by cross linking.  Yes, there are times to use a directory structure, but the flexibility of a linked organization system (the essence of the Internet) is hard to beat.  Git submodules (or similar concepts) provide a similar linking method of organization.

With this in mind, we look at a build system as a way to collect a number of independent components and build them in a repeatable fashion.  While this may seem obvious to anyone somewhat familiar with OpenEmbedded, our vision is to extend this concept to easily facilitate product development so that we can move quickly and efficiently, yet in a controlled and reliable way.  The first step is a new build template that is available here.  The following video provides a short overview of how to use the build template (can be viewed in Firefox).

The functionality provided so far is fairly simple and provides a way to quickly set up a build environment.  With Git submodules, the versions of Bitbake and the OpenEmbedded metadata are locked down.  Submodules are also used to provide an array of components to choose from, and the user initializes only the submodules they are interested in using.  The first example of this is providing a choice between the recipe metadata from the OpenEmbedded project and the modified version from Gumstix.  In the future, examples will be provided for doing custom application and kernel development using the OE srctree and gitver classes.  Feedback is welcome; please let me know if you have questions or suggestions.


MeeGo Review

As we evaluate various technologies that might be applicable in embedded systems, MeeGo is the subject of this article.  MeeGo is a collaboration between Intel and Nokia, and is replacing the Moblin and Maemo efforts.  For this review, MeeGo was installed to a USB flash disk and booted on an Asus Eee PC.  This was quite trivial to do, and the instructions on the MeeGo website are very good.  The following video (can be viewed in Firefox) provides a quick overview of the MeeGo Netbook UI.

Overall, MeeGo looks interesting.  Hopefully with the collaboration between Intel and Nokia, there will emerge a number of components that become “standards” for embedded systems, such as ways to manage wireless networks, cell phone radios, etc.  It would also be nice to have better options for implementing GUIs on embedded devices with small screens.  While the MeeGo user interface is nice, it is not a radical departure from desktop applications.  Maemo (used on the Nokia N900) is more advanced in that it has widgets optimized for a small screen.  With Maemo, all applications run in a true full screen mode, which is generally desired for devices with smaller screens, or industrial devices where we want to keep user confusion to a minimum.  MeeGo applications seem to run full screen, but you can still drag them around, and some applications like the terminal do not start full screen.

It seems there is considerable interest in providing distributions based on MeeGo.  Novell has announced it plans to provide MeeGo based distributions for netbooks, and Intel claims there are a number of other companies planning to build on top of MeeGo.


Gumstix Overo review

Based on the interest and number of embedded modules currently available, it appears that the OMAP3 CPU from TI will be very popular in the general purpose embedded Linux market.  One of the OMAP3 modules available is the Overo from Gumstix.  As the company name suggests, this module looks about like a stick of gum, but smaller, as shown in the photo below.  The Gumstix module provides a lot of functionality in a very small package.  It is reasonably priced, and is an excellent way to quickly create a product that has advanced functionality such as a high powered CPU (OMAP3), high speed USB, DSP, 3-D graphics acceleration, etc.  For an example of the OMAP3 performance compared to previous generations of ARM processors, see this article.

Overo size comparison to a US dime

The module contains all the core components, and can then be attached to a custom baseboard using the two high density connectors shown in the above photo.  There are 70 signals on each of the two connectors.  BEC provides a spreadsheet of the Overo signals to assist in designing a custom baseboard.

Due to the high level of component integration, the Overo has very few components on the module.  As shown in the below photo, there are really only 3 major visible components.  The flash and RAM are stacked on top of the CPU.  The PMIC contains all the required power management circuitry, as well as audio and USB interfacing functionality.

Major components on the Gumstix Overo

The Overo module must be used with a baseboard.  Gumstix provides several development baseboards.  For production, a custom baseboard is typically created.  The Gumstix baseboards are reasonably priced, so they could be used for prototyping or low volume production if they have the required functionality.  Below is a photo of the Summit baseboard that provides DVI video out, audio, USB host and OTG, and an expansion connector with a number of signals.

Gumstix Summit baseboard (overo is not installed)

Below is the Palo baseboard from Gumstix that can be used to interface with an LCD display.  The Palo board contains a resistive touch screen controller, and the circuitry required to interface with an LCD.

Palo baseboard with Overo installed
Display connected to Palo baseboard.  Note the tape applied to the display to keep from shorting to components on baseboard.

The Overo does not include an Ethernet controller, so you typically use a USB-Ethernet adapter for development.  The photo below shows a typical development setup.

Typical USB-Ethernet connection through a USB hub.

One of the things about the OMAP3 that makes it very attractive for embedded development is the amount of software support, and the number of development and production solutions available.  The OMAP3 is very well supported by the OpenEmbedded project, and TI is very active in making contributions to various open source projects.  This greatly increases the quality and availability of advanced software functionality needed to support a complex system like the OMAP3.

Size comparison of several OMAP3 systems
WIFI and BT antennas connected to Overo module

There are many options for software development with OMAP3 systems.  Below is a photo of a very simple Qt application.  GTK+ and Enlightenment are other popular GUI toolkits.  With 256MB each of flash and RAM, the Overo has more than enough memory for most embedded applications, and provides plenty of headroom for adding features in future product revisions.  The Overo has the CPU processing capability to run advanced GUI applications with 3-D effects, as well as advanced web application frameworks like web2py that previous generations of ARM CPUs could not effectively run.

A simple Qt application that controls LEDs and reads user switch

The Gumstix Overo provides an excellent value for developing advanced products.  Especially for low volume products, when you compare the effort and time required to develop a full custom OMAP3 solution versus a simple Overo baseboard, using a module can provide significant advantages in terms of time to market, and development cost.


The Go language for embedded systems

As one of the things I do is evaluate new technologies for embedded systems, the Go language from Google is an interesting development.  I have been looking for a better “system” language for some time.  What I mean by a better system language is one that is nearly as efficient as C, does not require a large runtime, has a fast start-up time, and yet supports modern language features.  It is interesting that the constraints for very high performance server applications are often similar to those found in embedded systems, in that applications must make very efficient use of memory and CPU cycles.  This is why C++ is still used a lot by Google and is also popular in embedded systems, but is not as common in applications, such as business applications, where CPU power is plentiful compared to the resources available to implement the application.  One area of divergence is the focus on multi-core for high end server applications, but in the next few years, multi-core CPUs such as the Atom and ARM Cortex-A9 will likely be common in low power embedded systems.

There are other options.  Vala is very nice to use, is efficient, and offers advanced language features and extensive language bindings.  Vala is being used to implement some of the http://freesmartphone.org software stack.  C/C++ are the old standby languages, but are rather tedious to program in compared to some of the newer languages, in that source and header files are separate and memory management is manual.

My interest in Go at this time is for ARM systems, so I ran a few experiments.  From what I can tell, there does exist an ARM Go compiler that is able to produce ARM code, but it currently lacks support for EABI soft floating point.  The lack of floating point support is pretty much a show stopper for now, but it is at least a start.  The following is an example:

export GOARCH=arm
cd $GOROOT/src/
./make-arm.bash

cd
(create the following source file: hello.go)

==================
package main
import "fmt"

func main() {
        fmt.Printf("Hello, world\n");
}
=================

5g hello.go   (5g is the ARM compiler)
5l hello.5
scp 5.out root@n900:  (copy to ARM system)
ssh root@n900
?:~# ./5.out
Hello, world

(modify with floating point code)

==================
package main
import "fmt"

func main() {
        var f = 1.5;
        fmt.Printf("Hello, world\n");
        fmt.Printf("float f = %f\n", f);
}

=================

(now run on ARM system)
?:~# ./5.out
Segmentation fault

As you can see, floating point does not work as expected.  I’m not sure yet if gccgo can be configured to compile ARM binaries.  The other area where Go appears to need a lot of work is bindings to various libraries.  Though it does not appear to be ready for embedded ARM systems at this time, Go will be an interesting language to watch over the next few years.


OpenEmbedded development activity

Ever since I have been sending out weekly change logs, I have been impressed by the consistent amount of development activity in the OpenEmbedded project.  Every week there are consistently over a dozen developers making changes.  Developers come and go, but the contribution level always seems healthy.  While this pace of development leads to some churn and issues, it is also required to keep up with all the new developments in the OSS space.  Not having to wait 2 years for the next big release is one of the big advantages of using Linux and OpenEmbedded.  Having access to the latest technology provides a competitive advantage for many products, and makes dealing with the occasional issues that come up an acceptable trade-off.  Coupled with a flexible and consistent build system, OpenEmbedded is the vehicle to build advanced products.  Yes, it’s hard.  Yes, you had better budget for some system software development time.  Yes, you should get some help if you’re not an embedded Linux expert.  Yes, you can start with some canned vendor “BSP” that seems to work and is seemingly the low-cost way to go, but eventually you’ll hit a brick wall where you need some bit of functionality that is not there, and then you’ll start up the exponential curve of dumping lots of time into trying to make something work with little forward progress.  It is much better to count the cost up front and do things right.


OMAP3 Resume Timing

One of the most common power management modes for ARM processors is the suspend mode.  In this mode, peripherals are shut down when possible, the SDRAM is put into self-refresh, and the CPU is placed in a low power mode.  A useful bit of information is to know how soon the system can respond to a resume (wake) event.

It turns out the answer to this question depends on where you want to do the work.  In the first test, I created a simple application that continuously toggled a GPIO.  This is an easy way to determine with a scope when the application is running.  I then measured the time between the wake event and when this application’s signal started toggling.  It was a consistent 180 ms.
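
The test application itself is not shown here; as a rough sketch of the same idea, a GPIO can be toggled from user space through the sysfs interface, assuming it is enabled in the kernel (the GPIO number is board specific, and a small C program would give a faster toggle rate than a shell loop):

root@overo:~# echo 168 > /sys/class/gpio/export
root@overo:~# echo out > /sys/class/gpio/gpio168/direction
root@overo:~# while true; do
> echo 1 > /sys/class/gpio/gpio168/value
> echo 0 > /sys/class/gpio/gpio168/value
> done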

For the next test, I simply toggled a GPIO in omap3_pm_suspend() as the first thing after resume.  This tells me roughly how fast I can execute code in the kernel after a wake event.  This turned out to be 250 µs.

So the good news is we can run kernel code very quickly after a resume event (250 µs), but it currently takes a long time until user space processes start running again (180 ms).  The 180 ms can likely be reduced with some work, but this at least gives us a baseline.  180 ms is fairly quick from a human perspective (it seems pretty much instant), but for industrial devices that are responding to real-world events, 180 ms can be a long time.


Linux kernel stats, and long term advantages

I just read an interesting interview with Greg Kroah-Hartman.  According to Greg, the kernel gains 11,000 lines, loses 5,500 lines, and sees 2,200 lines modified every single day.  This rate of change is something that few organizations have the resources to match.  It is interesting that Google chose to use the Linux kernel in the Android project even though they implemented most other parts of the system.  Another interesting thing to consider is the consequences of the Linux kernel license, which requires many drivers to be released as GPL.  While initially this may be viewed as a disadvantage, it leads to some interesting long-term implications.  The obvious effect is that most Linux driver code is maintained in the mainline kernel source.  This leads to other effects.  One is that things are allowed to change, and get better.  The USB stack in both Linux and Windows has been re-written several times, but the difference is that in Linux the old software does not need to be maintained, as most drivers are maintained in the kernel and can simply be modified along with core changes.  Another advantage is that drivers tend to be much simpler as common code is merged.  As a result, a Linux driver tends to be about 1/3 the size of a driver in other operating systems.  Yes, getting driver code accepted in the mainline kernel can be difficult at times, but it is clearly the best long-term option.


Linux PM: OMAP3 Suspend Support

This article provides an overview of the Linux kernel support for the suspend state in the TI OMAP3.  Power management has always been one of the more difficult parts of a system to get right.  The OMAP3 power management is quite extensive.  There are many levels of very granular control over the entire system.  Initially, we are going to focus on a simple power state where we want to put the CPU in a low power mode (basically not doing anything), but resume very quickly once an event occurs and continue running applications where they left off.  This is typically called “Suspend”.

Power States

We must first match up terminology between the Linux kernel and the OMAP documentation.  The Linux kernel supports several suspend power states (a quick way to check which states a given kernel exposes is shown after the list):

  • PM_SUSPEND_ON: system is running
  • PM_SUSPEND_STANDBY: system is in a standby state.  This is typically implemented such that the CPU remains in a mostly static state.  After all the peripherals have been shut down and the memory put into self refresh, the CPU executes an instruction that puts it into a low power state.  After the next wake event occurs, the CPU resumes operating at the next instruction.  The CPU state is preserved in this case.
  • PM_SUSPEND_MEM: This goes one step beyond STANDBY in that the CPU is completely powered down and loses its state.  Any CPU state that needs to be saved is stored to SDRAM (which is in self refresh), or some other static memory.  When the system resumes, it has to boot through the reset vector.  The bootloader determines that the system was sleeping, and re-initializes the system, enables the SDRAM, restores whatever state is needed, and then jumps to a vector in the kernel to finish the resume sequence.  This mode is fairly difficult to implement as it involves the bootloader, and a lot of complex initialization code that is difficult to debug.
  • PM_SUSPEND_MAX: From browsing the kernel source code, it appears this state is used for “Off” or perhaps suspend to disk.
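
Which of these states a particular kernel build actually exposes can be checked from user space.  On an OMAP3 kernel with suspend support enabled, the output looks something like the following (the exact list depends on the kernel configuration):

root@overo:~# cat /sys/power/state
standby mem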

The OMAP3 documentation uses the following terms:

  • SLM: Static Leakage Management.  Includes “suspend” functionality.
  • Standby: the OMAP3x retains internal memory and logic (uses 7 mW).
  • Device Off: system state is saved to external memory (uses 0.590 mW).  This could theoretically map to the PM_SUSPEND_MEM state.
  • WFI: Wait for Interrupt.  This is an ARM instruction that puts the CPU in the standby state.

Based on the source code in the Linux kernel, there is currently only support for the Standby state:

static int omap3_pm_enter(suspend_state_t unused)
{
	int ret = 0;

	switch (suspend_state) {
	case PM_SUSPEND_STANDBY:
	case PM_SUSPEND_MEM:
		ret = omap3_pm_suspend();
		break;
	default:
		ret = -EINVAL;
	}

	return ret;
}

What kernel branches/versions support PM

Kevin Hilman has been maintaining an OMAP3 power management branch at http://git.kernel.org/?p=linux/kernel/git/khilman/linux-omap-pm.git.  Bits are being merged upstream.  Some of the things that don’t seem to be merged yet are SmartReflex support, and a number of other details.  But, it seems that basic suspend/resume support is available in the upcoming 2.6.32 kernel.

More Details on the Standby Mechanism

Due to the granularity and complexity of the OMAP3 power management, relevant pieces of documentation are scattered throughout the OMAP35x Technical Reference Manual.  The following were helpful:

  • Table 3-17: details the MPU subsystem operation power modes
  • Table 3-18: Power Mode Allowable Transitions.  Standby mode can be entered from active mode only by executing the wait for interrupt (WFI) instruction.
  • SDRC, “Self Refresh Management”.  The OMAP3 can automatically put the SDRAM in self refresh mode when the CPU enters the idle state.
  • MPU Power Mode Transitions: the ARM core initiates entry into standby via software only (CP15 WFI).

The omap34xx_cpu_suspend function is called to put the CPU into standby mode.  The actual instruction that stops the CPU is the ARM WFI (wait for interrupt) instruction.  This is a little different than previous generations of ARM CPUs that used coprocessor registers.

Testing

With the 2.6.32-rcX kernels, suspend/resume appears to work at least from the console:

root@overo:~# echo mem > /sys/power/state
PM: Syncing filesystems ... done.
Freezing user space processes ... (elapsed 0.00 seconds) done.
Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done.
Suspending console(s) (use no_console_suspend to debug)

<after key press on serial console ...>

omapfb omapfb: timeout waiting for FRAME DONE
CLIFF: going into suspend
Wake up daisy chain activation failed.
CLIFF: returning from suspend
Successfully put all powerdomains to target state
Restarting tasks ...
done.
root@overo:~#

The CLIFF messages are debug statements placed in the low-level suspend code to make sure we were getting to that point.  The console is disabled early in the suspend process, so we don’t get the debug messages until after resume.  The no_console_suspend kernel option can be used to leave the console enabled during suspend, but that feature does not seem to work.

Summary/Future work

This covers the basics of Linux Suspend/Resume support for the OMAP3 CPU.  There are many more options and details such as configuring wake sources, dynamic power management while running, etc.


Integrated CAN solutions for Linux

I just received an email notification from EMS (http://ems-wuensche.com/) that support for their CAN controllers is now in mainline Linux kernels.  The EMS PCI products are supported in 2.6.31, and the CPC-USB product will be supported in 2.6.32.  I’ve used various Linux CAN stacks in the past, but none were as well integrated as the SocketCAN solution that has been merged into the mainline Linux kernel.  Though I’ve not used the EMS products yet, the fact that support is in the mainline kernel source provides a good indication that they know how to do things correctly, and that the software is of good quality.

The Microchip MCP2515 is another option for interfacing CAN with a Linux system, but it uses the SPI bus, so it typically requires a custom hardware design.  The availability of a USB adapter provides an easy way for CAN to be used with just about any existing Linux system, including small embedded systems.  From looking at the kernel source code, it appears that the EMS solutions use the SJA1000 CAN controller, which has been an industry standard controller for many years.

Message buffering is an important consideration when selecting a Linux CAN solution.  Often in a system, the Linux computer is responsible for data processing, the user interface, and other CPU intensive operations.  The real-time control is often relegated to a separate microcontroller.  The CAN bus is then just a convenient and robust way to get data from the control part of the system to the data processing part.  The amount of hardware buffering in the CAN controller determines how quickly the system needs to respond to process the CAN data so that the controller has space to receive new data.  The SJA1000 provides a 64-byte receive FIFO, or roughly space for 4 CAN messages.  This provides a little more buffering than the 2 messages in the MCP2515.  Dealing with full bandwidth CAN data in a Linux system can be a challenge with small amounts of buffering, as you typically have to apply the Linux RT patch and spend some time tuning it.  Another upcoming CAN solution that looks nice is the TI High End CAN Controller (HECC) found in the new Sitara CPUs.  This controller provides 32 hardware mailboxes, which should provide even more buffering and makes it even better suited for operating systems like Linux.  As with any technology, there are tradeoffs, so it’s nice to have options.

One area where the EMS solution may be very useful is prototyping and development.  The ideal development flow when developing an Embedded Linux product is to do as much development as possible on a PC.  The EMS product allows you to easily implement SocketCAN functionality on a PC so you can talk to the rest of your system directly from a PC.  The final embedded solution may be based on a more cost effective part like the MCP2515 or the TI HECC, but being able to interface to the rest of the system directly from a PC has the potential to improve development efficiency.  This is part of the “Big Win” with Embedded Linux, where the same technology scales from the embedded system, to the PC, to the server.
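
Because SocketCAN exposes CAN interfaces as ordinary network devices, the same commands work on the PC and on the target.  Here is a rough sketch of bringing up an interface and watching traffic; the interface name and bitrate depend on your hardware, iproute2 needs to be built with CAN support, and candump comes from the SocketCAN userspace tools:

$ sudo ip link set can0 type can bitrate 250000
$ sudo ip link set can0 up
$ candump can0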


Notification at the end of builds

I do quite a few OpenEmbedded project builds during the course of a week.  This process usually takes 3-5 minutes.  That is just enough time to get distracted doing something else and forget about the build, until an hour later you realize that you were supposed to send out a release email once the build was finished and uploaded.  It occurred to me that it would be nice if my computer played a distinct sound at the end of the build.  With Linux this is incredibly easy:

  • wget http://upload.wikimedia.org/wikipedia/commons/7/76/Ding_Dong_Bell.ogg
  • sudo mv Ding_Dong_Bell.ogg /usr/local/
  • sudo aptitude install cplay
  • echo "cplay /usr/local/Ding_Dong_Bell.ogg" | sudo tee /usr/local/bin/bell   (a plain sudo echo with a > redirect would not work here, because the redirect runs as the regular user, not root)
  • sudo chmod 775 /usr/local/bin/bell

Now, when I have a long build, I simply do something like:

bitbake my-image; bell

There is probably something better than cplay that does not start a UI — any suggestions?