
Container-ize Your Build Environment: Advantages of Docker For Firmware Development

Akbar Dhanaliwala - EOC 2021 - Duration: 36:27

Docker is a relatively new technology that has revolutionized the web world, but is underutilized in the embedded systems industry. However, Docker's light-weight virtualization technology is perfectly suited for solving many of the biggest pain points firmware developers face today. In this talk we'll briefly go over how containerization works (vs. traditional virtual machines), the benefits of containerization for firmware development and automated testing, and walk through what developing with a Docker container actually looks like.


SimonSmith
Score: 0 | 4 years ago | 1 reply

Interesting presentation on a new topic for me. I guess it's not for me as I use IAR on Windows. We have a short document for development tool setup (so slight variation is possible), but any formal builds for testing and release are built on a CI server.

nwaters
Score: 1 | 4 years ago | 1 reply

You can install IAR in a Windows Docker image. The limitations are:

  1. There is no GUI, so you can't use the graphical debugger.
  2. The debugger can't talk to an emulator via serial interfaces, you've got to use a network interface.
  3. You've got to use a network license.

But, you can definitely containerize IAR for CI/CD builds and running automated unit tests with CSPY.bat without shelling out the extra money for IAR BX.

To do a headless install of IAR you have to prerecord the install using InstallShield command-line options.

E.g.:

        .\setup.exe /r /f1"C:\IAR_install_iss\setup.iss"
            /f2"C:\EWARM_8302_18209_install_media\ew\install.log"

Then you can run the silent install from that recording like:

        .\setup.exe /s /f1"C:\IAR_install_iss\setup.iss" 
            /f2"C:\EWARM_8302_18209_install_media\ew\install.log"

So this is what is in my Dockerfile:

        COPY ./iar_setup.iss /iar_setup.iss
        RUN C:\EWARM_8302_18209_install_media\ew\setup.exe /s \
            /f1"C:\iar_setup.iss" /f2"C:\docker_work\iar_install.log"
Eighth
Score: 1 | 4 years ago | 1 reply

Great answer, just wanted to add a couple things:

  1. You've got to use a network license not just because it's hard to use a hardware-bound one in a container, but also because IAR's terms require it for build-server use. So it's more of a legal restriction.
  2. You can access the graphical interface if you use a Linux container instead of a Windows container. To do this with IAR you need Wine, which may be tricky, but I'm fairly sure it's doable because I've had experience with something similar. You can then pass the GUI from your container to your Windows host using VcXsrv. As an additional benefit, you gain the ability to run your image on Linux and Mac hosts, not just Windows.
  3. To go further on debugging: you'll need to expose your debug probe to the container through a TCP socket. That can be achieved with OpenOCD, JLink Remote Server, and a few other ways, and you'll also need to edit your project's debug settings accordingly. So it's not as out-of-the-box easy as debugging from the host, but still doable (see the sketch after this list).
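
As a minimal sketch of the OpenOCD route (the interface/target configs, port, and ELF path are assumptions for an ST-Link + STM32 setup):

        # on the host: start OpenOCD and let it accept connections from the network
        openocd -f interface/stlink.cfg -f target/stm32f4x.cfg -c "bindto 0.0.0.0"

        # inside the container: attach GDB to the probe over TCP
        arm-none-eabi-gdb _build/app.elf \
            -ex "target extended-remote host.docker.internal:3333"

host.docker.internal resolves to the host from Docker Desktop containers; on a plain Linux host you'd use the host's IP instead.
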
nwaters
Score: 0 | 4 years ago | 1 reply

I've had it in mind to try running IAR with Wine for a long time. I'd be curious to see a blogpost or notes about it if anyone has successfully done it.

Eighth
Score: 0 | 4 years ago | 1 reply

Well, this blog post is in Russian and pretty short, but I think it may still be a good starting point.

To me it looks like he first installs the dependencies IAR requires into his Wine prefix using the winetricks tool. In the second line he runs the IAR installer, and in the third he starts the IDE.

nwaters
Score: 0 | 4 years ago | 2 replies

Hmm... I'm wondering how he got the license manager working. Maybe the light command line network license manager "just works" from Wine like it does in a Windows Container.

barnberg
Score: 1 | 3 years ago | no reply

Second confirmation that it just works, even including building from the IDE.
One caveat: make sure to start the container with a specific hostname (-h) rather than using the automatic hostname. Otherwise, you may run out of network licenses.

Eighth
Score: 1 | 4 years ago | no reply

Well, I don't think I can muster enough time to write a thorough guide right now, but I've performed a quick spontaneous test, and it seems that once you get through the installation quirks, the license manager kinda just works. I didn't go beyond launching the License Wizard, though.

Screenshot

There are IAR and the IAR License Manager, both launched from a container through Wine, with the GUI forwarded through VcXsrv. Windows 10, Docker for Windows, WSL2 backend.

urbite
Score: 0 | 4 years ago | 2 replies

Nice work - really excellent intro to Docker. I have a Docker use case for an intro class on FPGA coding, where the students' computing platforms could be a mix of any and all OSes. With Docker, only the Docker tooling would need to be installed on the students' PCs; all the heavy-lifting FPGA tools would live in the Docker image.
Does Docker handle GUI interfaces via an RDP or VNC server such as Remmina? This would be required for graphical viewing of the post-simulation waveform data. I assume all that is needed is to add commands to the Dockerfile to install the RDP/VNC server, along with the startup commands. In that case, how is networking handled, since the IP of the Docker instance is needed for the remote GUI?
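
For what it's worth, a minimal sketch of the VNC route in a Linux container (package names assume a Debian-based image; the display number, resolution, and image/tool names are placeholders):

        # Dockerfile: install a virtual framebuffer plus a VNC server
        RUN apt-get update && apt-get install -y xvfb x11vnc

        # container startup: run the GUI tool on the virtual display, serve it over VNC
        Xvfb :1 -screen 0 1920x1080x24 &
        DISPLAY=:1 waveform_viewer &
        x11vnc -display :1 -forever

        # host: publish the VNC port so a client can connect to localhost:5900
        docker run -p 5900:5900 fpga-tools-image

Publishing the port with -p sidesteps needing the container's internal IP.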

Eighth
Score: 0 | 4 years ago | no reply

I guess for FPGA you're using Windows software? Let me recommend this video. It covers running Windows software in Linux containers and forwarding the GUI of such software to another machine (i.e. the host). Performance may be an issue, so prior testing is needed.

I'm talking about using a Linux container because Windows containers have severe issues with GUI, to put it mildly.

akbarSpeaker
Score: 0 | 4 years ago | no reply

Hi,
You can set up a GUI interface with a Docker container, but the devil is in the details, e.g. Windows container vs. Linux container, etc.
TBH, I don't have a lot of experience setting up remote GUIs with Docker, but I'm fairly confident that the use-case you're describing is possible.

MaciejDrozd
Score: 0 | 4 years ago | 1 reply

To be able to reproduce my build at a given SW version (a given git commit), I also need a well-determined environment version. If I save my Docker environment's version (e.g. as a hash) in the build assets, then my build should be completely deterministic, I hope. But to be able to build anything, I first need a Docker image. How should that be managed? How do you link the SW version with the Docker version?

Eighth
Score: 0 | 4 years ago | no reply

You can store the name and version of the image you're using for builds somewhere along with your code. In a plain .txt file, for example.
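
As a sketch, you can go a step further and record the image's immutable digest rather than a tag (the image name is assumed, and this requires the image to have been pushed to or pulled from a registry):

        # record the digest of the image used for this build
        docker inspect --format='{{index .RepoDigests 0}}' devenv-nrf52:1.0 > devenv-image.txt

        # later, rebuild with exactly that image
        docker run --rm -v "$PWD":/app "$(cat devenv-image.txt)" make

Unlike a tag, a digest can't be re-pointed at new content, so committing devenv-image.txt next to the source ties the environment to the commit.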

nibbles
Score: 1 | 4 years ago | 1 reply

I would add that with Linux (or Docker Toolbox for Windows/Mac) you can mount USB devices into Docker containers and run hardware tests that way. Then you get a truly self-contained build pipeline.
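
On a Linux host this can be as simple as one flag (the device path, image, and test script are placeholders):

        # expose the DUT's serial port to the container for hardware-in-the-loop tests
        docker run --rm --device=/dev/ttyUSB0 devenv-nrf52 ./run_hw_tests.sh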

CarlesMarsal
Score: 0 | 4 years ago | no reply

Where can you get Docker Toolbox for Windows? On the Docker website it appears as deprecated, and no working link seems to be there.

franV
Score: 0 | 4 years ago | no reply

hi Akbar. I totally agree with you.
In my company we used to have a single build machine shared between all the developers, taking more than 30 minutes to build (pretty slow machine), and a queueing system to get access to it. If there was any build error, it was all wasted. Access to the machine was also restricted, so not all developers had direct access to it. Sometimes the whole process broke because someone updated packages on the build machine.
We then moved the whole solution to Docker and it's heaven:

  • we can either use our own computers, or spawn an instance online using Kubernetes; builds now compile in minutes
  • we now have a single command to build (in parallel) 7 different processes that need completely different build requirements
  • the same Docker images are used to run git CI pipelines and run all the tests
  • we can literally trigger this solution on any machine (dev-OS agnostic), independently of internet connectivity
  • any update needed for the base image is automatically documented in the Dockerfiles (no more unwelcome changes on the build machine).
Taki
Score: 0 | 4 years ago | no reply

Hi Akbar, Thank you for your great talk!
I was really glad I'm not the only one creating containerized dev environments for embedded systems.
Following your example, I containerized my CMake and GCC dev environment. I think portability is very important for maintainability. In my experience, the dev environment has to be kept working to support and update the product, and sometimes we need to reproduce a previous setup for prototyping. Containerized environments can easily reproduce ancient environments as well. Using a Windows container and a large local binary installer, I was also able to create an armcc and MDK-ARM environment in a Docker container. The build process was a nightmare, but it is very helpful for me.
I couldn't explain the benefits of containerized environments to my colleagues well, but thanks to your kind instruction, I can explain it better now (of course, I won't just re-share this video directly).
Thanks again!

malreverse
Score: 0 | 4 years ago | 1 reply

EXCELLENT presentation! One question that I may have missed...when you executed the build_app and clean_app, I'm assuming the output messages are from the commands? Thanks again!

akbarSpeaker
Score: 0 | 4 years ago | no reply

Yes, the output is from the commands. To recap, "build_app" and "clean_app" are aliases for this:

        alias build_app='docker run -it --rm -v $PWD:/app devenv-nrf52 /bin/bash -c '"'"'mkdir -p _build;cd _build;cmake .. -G Ninja;cmake --build .'"'"

        alias clean_app='docker run -it --rm -v $PWD:/app devenv-nrf52 /bin/bash -c '"'"'cd _build;ninja -t clean'"'"

The output you're seeing is what you would see if you ran

        cmake .. -G Ninja;cmake --build .

and

        ninja -t clean

inside of the container.
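
If the nested quoting in those aliases is hard to read, an equivalent shell-function form (my rewording, not from the talk) avoids it:

        build_app() {
            docker run -it --rm -v "$PWD":/app devenv-nrf52 /bin/bash -c \
                'mkdir -p _build; cd _build; cmake .. -G Ninja; cmake --build .'
        }

        clean_app() {
            docker run -it --rm -v "$PWD":/app devenv-nrf52 /bin/bash -c \
                'cd _build; ninja -t clean'
        }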

KHamilton
Score: 0 | 4 years ago | 1 reply

Nice example. What network configuration does the docker container get?

akbarSpeaker
Score: 0 | 4 years ago | no reply

Hi,
Networking is kind of a complex topic when it comes to Docker. Here's a good resource for getting started:
https://docs.docker.com/network/
The default is that you get the bridge network, which means (and I'm vastly oversimplifying here) that the container can talk to the internet but the internet can't talk to the container.
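
If you do need inbound connections, the usual escape hatch is publishing ports (image and ports here are placeholders):

        # map host port 8080 to container port 80 on the default bridge network
        docker run --rm -p 8080:80 some-server-image

There's also --network host on Linux hosts, which drops the network isolation entirely.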

pmalcolm
Score: 0 | 4 years ago | 1 reply

Akbar, great talk! Thank you.
My company has been using Docker for a couple of years now, but we are struggling with large image sizes and the resulting disk usage on our servers. I would be interested in your thoughts or ideas on keeping Docker images and footprints small when the toolchains we use in embedded are typically so large.
One thing we have experimented with is placing our toolchain into a separate image (which is rarely updated) and mounting it via the '--volumes-from' option, roughly as sketched below. This allows the size of our other images to remain relatively small. Maybe there are other ideas we are overlooking?
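
A rough sketch of that pattern (image names assumed; the toolchain image needs to declare its tool directory as a VOLUME for --volumes-from to share it):

        # a stopped container whose only job is to own the toolchain volume
        docker create --name toolchain-holder toolchain-image

        # build containers borrow the toolchain instead of baking it into their image
        docker run --rm --volumes-from toolchain-holder -v "$PWD":/app builder-image make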

akbarSpeaker
Score: 0 | 4 years ago | no reply

Hi!
Managing image size is also a challenge for us. One strategy (although by no means a cure-all) is to uninstall packages you don't need in the same layer in which you install your toolchain (it's very important that the uninstall happens in the same layer!). Often these toolchains come with a lot of cruft that we don't actually need to build our projects. If you take a look at this "minimal" Cortex-M build environment we put together, you can see what I mean on line 8:
https://github.com/lagerdata/devenv-dockerfiles/blob/master/Dockerfile.cortexm-minimal
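
The general shape of that trick, as a sketch (package names are illustrative):

        # install, purge what we don't need, and clean apt caches in ONE layer,
        # so the deleted files never get committed into any layer of the image
        RUN apt-get update \
            && apt-get install -y --no-install-recommends gcc-arm-none-eabi \
            && apt-get purge -y --auto-remove some-unneeded-package \
            && rm -rf /var/lib/apt/lists/*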

jvillasante
Score: 0 | 4 years ago | 1 reply

Very nice presentation, thanks!
As a developer I would love to create a Docker image with the toolchain I need and then run it mounted to my local file system. I can then fire up my trusted Emacs on the mounted volume for development and build the project with Docker. My question: how can I make specific information in the Docker image (e.g. platform-specific headers) available to my local Emacs instance?
The way I do this now is kind of ugly. Is there a better way of doing it?
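
(For concreteness, one way this is sometimes done - the header path and image name here are assumptions - is to copy the headers out of the image once so the editor can index them locally:

        # create a stopped container from the image, copy the headers out, clean up
        cid=$(docker create devenv-nrf52)
        docker cp "$cid":/usr/lib/arm-none-eabi/include ./toolchain-headers
        docker rm "$cid"

Pointing the editor's include path at ./toolchain-headers keeps completion in sync with whatever is in the image.)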

nibbles
Score: 0 | 4 years ago | no reply

Other editors like VSCode and the JetBrains IDEs support using a running Docker container as a "remote" system. You can edit files, build and debug as if the files were on your local system. I think you could combine this with mounting your source folders into Docker. I bet there is a way to do the same with Emacs. These editors essentially use SSH to access files etc.

marckarasek
Score: 1 | 4 years ago | 4 replies

Remember that Docker on Windows and VirtualBox DO NOT play well together. Docker for Windows will break VirtualBox, and it is a pain to get it back...

Joffrey
Score: 0 | 4 years ago | 1 reply

That's true, Mark. From memory, I couldn't have VirtualBox and Docker running at the same time. It was necessary to disable Hyper-V before running VirtualBox, and Docker would not run without Hyper-V enabled. Although, I don't know if this is still a problem these days...
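
For reference, the toggle people typically used (from an elevated prompt; each change requires a reboot):

        rem disable Hyper-V so VirtualBox can use VT-x directly
        bcdedit /set hypervisorlaunchtype off

        rem re-enable Hyper-V for Docker Desktop
        bcdedit /set hypervisorlaunchtype auto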

marckarasek
Score: 1 | 4 years ago | no reply

It still is. I fell into this trap about a month ago. I wanted to test a Docker image I had created under a VirtualBox VM, so I installed Docker for Windows. It took me a day to recover, and in googling I found out this has been an issue since at least 2016-7. Which IMO is unacceptable.

nibbles
Score: 0 | 4 years ago | no reply

Another advantage of using Docker Toolbox (and VirtualBox) on Windows is that you can forward USB ports into VirtualBox and from there into containers. This makes it possible to run hardware tests in Docker even on your Windows PC, and the same setup can be used for continuous integration.
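
A sketch of the VirtualBox half (the VM name, device node, and the 0x1366 SEGGER vendor ID are assumptions for a J-Link probe):

        # attach any J-Link to the docker-machine VM named "default"
        VBoxManage usbfilter add 0 --target default --name jlink --vendorid 1366

        # inside that VM, hand the resulting device node to the container
        docker run --rm --device=/dev/ttyACM0 devenv-nrf52 ./run_hw_tests.sh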

urbite
Score: 0 | 4 years ago | no reply

VMware seems to play well with Docker. I just fired up the Docker tutorial image, which serves up a web page, all while an Ubuntu 16 VM was running under VMware. It's only in the past year or so that VMware has been able to run simultaneously with WSL 2.

akbarSpeaker
Score: 1 | 4 years ago | no reply

I didn't know that Docker for Windows and VirtualBox do not play well together, thank you for bringing this up!

LeeT
Score: 1 | 4 years ago | 2 replies

Lots of really good information. It felt like you were talking really fast, I was getting winded just listening.

malreverse
Score: 0 | 4 years ago | no reply

Same.

MatthewZaleski
Score: 0 | 4 years ago | no reply

I found he was talking at the pace I prefer since I speed up podcasts regularly. There are options for video playback here to speed up/slow down the video if you don't like the presenter's pacing.

Joffrey
Score: 0 | 4 years ago | no reply

I still have to take the plunge with Docker. There are several issues that still make its use difficult for some embedded development. If you are working on Windows machines, tools like Puppet offer a simple and powerful way to manage your development environments and deployments. Puppet is not like Docker; it only manages software packages and as such does not provide that "isolation/abstraction" layer. However, it still serves a useful purpose for us right now. As Akbar says, it is only a matter of time before tool vendors embrace Docker as it becomes more widespread.

marckarasek
Score: 0 | 4 years ago | no reply

There is one added benefit you did not go over: if you are the one providing the toolchain/tools to your customer, Docker makes it easier to deploy them.

jsolano
Score: 0 | 4 years ago | no reply

Agree completely on using Unity on the hardware. It needs few resources, and direct interaction with the hardware (without mocking) is great.

RyanMac
Score: 2 | 4 years ago | no reply

Nice job in presenting use of containers with an embedded focus! Many embedded developers may not be familiar with the concept since it came from the web/server world but it is so useful. Anybody who has helped a new developer get their build environment setup (or written instructions on how to setup a build environment) along with getting teams to update their build environments should be able to see the immense benefits.

CMiller
Score: 1 | 4 years ago | no reply

Excellent presentation, Akbar; thank you! I hope you will consider a future talk at next year's EOC on integrating Docker with Continuous Integration/Delivery tools--especially Jenkins. I always get stymied without learning-by-example.

Steve_Wheeler
Score: 1 | 4 years ago | 1 reply

Very nice presentation. Thank you. This is introductory-level information that I never quite had clear before.

burak.seker
Score: 0 | 4 years ago | no reply

+1

nwaters
Score: 2 | 4 years ago | 1 reply

You said "You cannot add windows executables to a docker container". But you can create windows containers and add windows utilities to them. I've installed IAR EWARM (via install shield command line interface) to a Windows docker image and set it up to use a network license. This has allowed me to containerize IAR EWARM builds. The big shortfall is that you can't easily run WIndows and Linux containers in parallel. You have to set the daemon to run one or the other and it takes several minutes to make the transition on my system.

akbarSpeaker
Score: 4 | 4 years ago | no reply

Hi, yes, I should have been clearer on that point. If you're running a linux/amd64 container, that container can run on Linux/macOS/Windows, assuming you have Docker Desktop installed. But for Windows containers, the host OS version needs to match the container OS version (https://hub.docker.com/_/microsoft-windows-base-os-images), which limits running these containers to Windows machines only. You may be able to get around this by using a virtual machine running Windows on non-Windows machines, but I've never tested that, so I can't confirm it.
So to recap: you CAN run Windows executables in a Docker container, but that container is limited to Windows host machines.
Thank you for bringing up the point about Windows containers!

DaveN
Score: 2 | 4 years ago | 1 reply

Thanks for a great talk! A couple of things you might want to emphasize:

  • use specific versions in the Dockerfile; fetching the latest version of the ARM tools, for example, may lead to really nasty surprises when trying to reproduce the container later (see the sketch after this list)
  • containers may not be so helpful for anything requiring a JTAG/SWD dongle; I have, for example, lots of projects we must support where the device only has drivers for 32-bit Windows (or worse). For these projects we use a VM - though it would be great not to have to...
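
A sketch of what pinning looks like in practice (the base tag and version string are illustrative, not recommendations):

        FROM ubuntu:20.04
        # pin the toolchain to an exact package version instead of "latest"
        RUN apt-get update \
            && apt-get install -y --no-install-recommends \
                gcc-arm-none-eabi=15:9-2019-q4-0ubuntu1 \
            && rm -rf /var/lib/apt/lists/*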

Thanks again for a great talk!

akbarSpeaker
Score: 1 | 4 years ago | no reply

Hi Dave,
Yes, I agree, I should have been explicit about pinning the versions of the tools I was installing into the Docker container. One of the huge benefits of Docker is the ability to freeze your build environment in time!
And agreed re: build tools that only work on 32-bit Windows machines. Hopefully, as more and more people start using Docker, vendors will realize that they need to support toolchains that work inside a Docker container... hopefully :-)

daleonpz
Score: 3 | 4 years ago | 1 reply

Great talk! You gave a simple and easy-to-understand overview of Docker. But I think you should have mentioned two important embedded-world use cases for Docker:
1) Maintainability of your toolchain: for instance, if you develop a project for a client with gcc-4.8, with Docker you can keep offering support for that project for years, because everything you need to build and test that project lives in a Docker image. One Docker image per project.
2) CI/CD: using Docker with CI/CD tools such as gitlab-ci is getting more popular for testing, building, and deploying new firmware (a sketch follows below).
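
For example, a minimal .gitlab-ci.yml sketch (the registry path and build commands are assumptions):

        build:
          image: registry.example.com/myproject/devenv:gcc-4.8
          script:
            - mkdir -p _build && cd _build
            - cmake .. -G Ninja
            - cmake --build .
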
I just wanted to add that.
Great talk Akbar ;)

akbarSpeaker
Score: 1 | 4 years ago | no reply

Hi Dale,
Both of these points are spot on, thank you for mentioning them! Using Docker to capture a snapshot of a build environment in time is especially important for embedded systems that may need to be supported for years or even a decade or more.
And 100% yes re: Docker + CI/CD. Getting your build environment into a Docker container makes using CI/CD tools from GitLab/GitHub/Bitbucket, etc. super easy.

marckarasek
Score: 0 | 4 years ago | no reply
This post has been deleted by the author
marckarasek
Score: 1 | 4 years ago | no reply

You also do not have to build via the command line. You can run the Docker container with /bin/bash, keep it up, and edit the code. So your tools are there, you compile with Docker, and you edit under the host (see the sketch below).
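
A sketch of that workflow (image name assumed):

        # start a long-lived container with the source mounted from the host
        docker run -d --name devenv -v "$PWD":/app devenv-nrf52 sleep infinity

        # drop into it whenever you want to build; keep editing on the host as usual
        docker exec -it devenv /bin/bash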

marckarasek
Score: 0 | 4 years ago | no reply

I agree that you need to use a specific version of the tools, even if that is just a SHA-1 for a specific commit. Sometimes the tools do not have regular releases and you need a stable point for your development.

piotr_zdunek
Score: 1 | 4 years ago | 2 replies

Thank you for your talk! I have some comments, hope you find them useful!

  • Your Dockerfile could use a lot of improvements. For example, you actually increase the image size with the RUN apt-get remove ... at the end. See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ for more ideas.
  • It's a good idea to put the scripts for running Docker in the repo to share with other devs.
  • use docker run -w <dir> ... to set a custom workdir (see the sketch after this list).
  • there are also other Docker registries, like ghcr.io.
  • there are Windows-only containers, which can be used for Windows-based development.
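
A sketch of the -w flag in use (image name assumed):

        # run ninja with /app/_build as the working directory inside the container
        docker run --rm -v "$PWD":/app -w /app/_build devenv-nrf52 ninja
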
Erwin
Score: 0 | 4 years ago | no reply
This post has been deleted by the author
akbarSpeaker
Score: 1 | 4 years ago | no reply

Hi Piotr,
Thank you so much for watching and for your feedback, these are great!
-You are absolutely right, the "RUN apt-get remove" at the end shouldn't be there, as it just adds another cached layer rather than actually reducing the size of the image.
-Yes, I agree regarding adding scripts to share with devs. This, by the way, is one of Lager's great features (e.g. lager devenv create). With Lager, commands that are run inside of a container are stored in the repo and can easily be run using shortcuts (e.g. lager exec build). Lager also gives you cross-platform scripting with the same command (e.g. it will work on Windows even without WSL2).
-Great point re: setting a custom workdir.
-I hadn't heard of ghcr.io, thanks for sharing!
-Very interesting re: Windows-only containers.

Erwin
Score: 0 | 4 years ago | no reply

Also from my side, a big thumbs up for this great talk. I'd heard a lot about the benefits of using Docker but didn't find enough time to dive deeper into it until now! Thanks for giving that compact explanation of how Docker works and how to use it!
