#snappy #snapcraft #development

Snapcraft Build Environments

30 July 2018


After a week away from my computer, I want to organize my thoughts on the progress made towards build VMs with this write-up, since the forum post can be a bit overwhelming if you are only casually trying to keep up to date.

The reason this feature work exists, for those not up to speed, is that we want a very consistent build environment, one in which anyone building a project can expect a predictable outcome: a working snap (or a non-working one, if it really doesn’t work). In short, we want to avoid the “works for me” situation as much as possible.

One of the reasons to choose virtualization over containers is that we want to isolate ourselves even from the kernel running on the host system, providing a consistent experience for things like the ability to use build-snaps.

This is an overview of the current state of the development of this functionality.

Nothing you are about to see here has made it into the released product yet.

Working on a project


Let’s try to snap up a very simple hello world project written in C and driven by a Makefile. The snapcraft.yaml for this project looks like the following:

name: make-hello
version: 0.1
summary: say hello to the world
description: |
  This is a basic make snap. It just prints a hello world.
confinement: strict
grade: devel
base: core18

apps:
  test:
    command: test

parts:
  make-hello:
    source: .
    plugin: make
    build-packages: [gcc, libc6-dev]

After this you just need to run snapcraft to get a resulting snap. Given that we set the base to core18, a virtual machine will be set up so that the entire lifecycle of creating that snap happens inside that environment.
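As a sketch, the whole cycle from the project directory might look like this (the exact .snap file name, including the architecture suffix, is an assumption):

```shell
# Run the full lifecycle inside the core18 build VM
# (assumes a snapcraft build with base/VM support is installed).
snapcraft

# Try the result on the host; --dangerous is needed for unsigned local snaps.
sudo snap install --dangerous make-hello_0.1_amd64.snap
```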

Here’s a video of what it looks like:

As you can see, there is essentially no contamination from the build in the project directory (keen eyes may notice a snap directory there; that is a bug).

You may have noticed, too, that creating the image took some time: that is the first boot (a ticker hinting at this is in the works) and the initial environment setup taking place (as can be seen in the video).


The first boot took some time; if that were the case every time you ran snapcraft, I don’t believe you would be fond of iterating on a project. This is why we have optimized it (more to come) with preflight checks and by saving the virtual machine state before tearing it down, so later runs can load from an already booted state.

Here is the same project where I have already run snapcraft pull:
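A sketch of that iteration flow, using the individual lifecycle step commands:

```shell
# First invocation pays the cost of creating and booting the VM.
snapcraft pull     # fetch sources and build-packages inside the VM

# Subsequent runs reuse the saved, already-booted VM state,
# so they start much faster.
snapcraft          # continue through build, stage, prime, and snap
```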


As everything runs inside the virtual machine instance, it becomes really hard to take a glance at what is going on, or to debug an issue we may have. To solve that, we added a few options to the lifecycle-related commands which will drop you into a shell inside the virtual machine instance. These are:

  • --shell, which runs up to the previous lifecycle step and then opens a shell in place of the requested lifecycle step (in the pipeline).
  • --shell-after, which allows you to enter a shell after the lifecycle step has run.
  • --debug, which drops you into a shell if there are errors in any lifecycle step related to the project.

These command options should drop you into the corresponding directory for the part and lifecycle step (in the pipeline).

The shell prompt (PS1) is descriptive with regards to your location within the project, and should make it intuitive to find your way around.
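Hypothetical invocations of these options, using the build step as an example:

```shell
snapcraft build --shell        # run everything up to build, then open a shell
                               # in place of the build step
snapcraft build --shell-after  # run through build, then drop into a shell
                               # in the corresponding part directory
snapcraft --debug              # run the lifecycle; drop into a shell inside
                               # the VM only if a step fails
```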

Let’s see how --shell-after works with our current project, whose last run went through the build step:

As you can see, we can even continue running snapcraft from inside the virtual machine instance from any working directory.

Debugging is a similar experience. Let’s pick up from where we left off and introduce an error that will trigger dropping us into a shell inside the virtual machine:
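On the host, that debugging loop might look roughly like this (the broken Makefile edit is purely illustrative, not what the video does):

```shell
# Introduce a deliberate error: a hypothetical typo in the compiler name.
sed -i 's/gcc/gccc/' Makefile

# The failing build step drops us into a shell inside the VM,
# where we can inspect the part's build directory, fix the problem,
# run snapcraft again, and exit.
snapcraft --debug
```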

In that video, you can observe that after fixing the issue we just ran snapcraft through its entire lifecycle to get the snap created.


Cleaning up is rather easy: just run snapcraft clean. This simply wipes the data directories reserved for the snapcraft project.
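That is, from the project directory:

```shell
snapcraft clean   # wipe the data directories reserved for this project
```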

Looking further

So far I have targets for core18 and a fictitious core16 base. To validate the entire process, though, I also played with creating a Fedora-based disk and running that same make project through it, by hacking up a quick DNF repo handler inside snapcraft. The snap that came out of exercising this process was, as expected, uninstallable, since there is no fedora base (yet, at least).