The Delayed Public License

Say you’re an open source developer and you want to write a mobile app (in my case, for the Ubuntu phone).

You put some effort into it, and you think it would be nice if you could get a little revenue back for your efforts. Mobile stores make it very easy to do this, either from upfront payments, in-app payments (IAP), or ads.

The problem is that if you open source your app, anyone can fork it, strip the payments or ads (or re-code the app to pay themselves instead), and re-upload the stripped app to the store. Which, while it may not seem fair, is something that you explicitly allowed by choosing an open source license.

I don’t like the forced choice between receiving no remuneration for your time and closing your source code. If I die or lose interest, I’d still like the world at large to be able to build upon my code.

Open source developers have been used to receiving no remuneration for their time for quite a while. :) But it’d be nice to change that, or at least allow those who are interested to try to earn some revenue by traditional means.

This isn’t an issue invented by mobile stores. You could always have put IAP or ads in your open source project and then worried about Debian maintainers stripping them out or about forks. It’s just that using IAP or ads before mobile platforms existed was very difficult.

The Delayed Public License

Some Ubuntu app developers were talking about this issue recently, and we came up with the Delayed Public License (DPL) [1]. It’s basically a meta license that switches from all-rights-reserved to a FOSS license after a set amount of time. In plain terms, something like:

All rights reserved. Each code commit becomes licensed under the GPL-3 one year after its publication.

  • You can still host your code in a public repository. Even though the code is public, it wouldn’t be legal for anyone to use it until that year is up. People are still welcome to fork your code, but they’d have to fork from a point at least one year back in the commit history.
  • You would release your app to the store as a closed source app. You can do that despite the DPL, because you own the copyright to the whole project. But people could still get the code from your repository.
  • Unlike a closed source project, you can still accept patches. Though you’d need a CLA of some sort.
  • The FOSS trigger is automatic. So you can’t forget or change your mind. And it’s easy to do; you don’t need some code escrow service.
  • When using the DPL, you obviously aren’t stuck with GPL-3 or a delay of one year. You can choose a different FOSS license and time period to suit your own needs.

While this meta license wouldn’t be OSI approved [2], it still feels open source.

I’d love to hear more legally-binding ways of phrasing the license. Tying each VCS commit to its own publication timer might not be trivial to express legally.


[1] The idea came from a discussion between Michael Zanetti, Stuart Langridge, Sturm Flut, Ted Gould, and me. Credit for the name goes to Sturm Flut.
[2] Which means you have to be careful about picking a hosting provider. Launchpad, for example, will only host your project for free if you use a FOSS license. GitLab would allow a DPL project. I’m not sure about GitHub: you have to pay for private repositories, but I don’t know their policy on a public, non-open-source repository.

Nonogram Database

I am working on a nonogram (aka griddler, paint by number, picross, hanjie, etc) app for Ubuntu Touch.

Along the way, I’ve discovered that there are very few freely distributable puzzles floating around.

In an attempt to change that (for myself and future app developers), I’m collecting the ones I can find in one place.

If anyone knows of any good caches of freely distributable puzzles, please let me know and I’ll add them.

In the meantime, I’m going to pull the puzzle files from the few open source apps I know about and see if I can get members of some popular nonogram sites to release their puzzles under a free license.

Snapifying Normal Ubuntu Packages

I’ve been playing with Ubuntu Snappy and wanted a way to bundle up traditional Ubuntu programs into a snap package.

So I wrote a script to do so! Introducing deb2snap. It isn’t perfect, but it can do some neat stuff already.

Full instructions and examples can be found on the homepage, but to whet your appetite:

./deb2snap fortune
./deb2snap --mir mir_demo_client_fingerpaint
./deb2snap --xmir xfreerdp

Tech Board Nomination

So I’ve been nominated for the Ubuntu Technical Board. Modesty and a touch of impostor syndrome prevent me from strongly endorsing myself. Though if voted in, I would certainly work hard at it. Here’s a bit about where I sit in Ubuntuland for those who are voting, but don’t know me well:

I’ve been involved in Ubuntu for about five and a half years. I’ve worked in Canonical’s OEM team, as well as the Foundations and Desktop teams, and now the Unity team. I’m a core dev and a member of the MIR team (main inclusion reviews, not the Mir display server). I maintain the default Ubuntu backup program (Déjà Dup / duplicity) and the default Ubuntu login screen. Sort of a jack of all trades, master of none kinda guy.

But really, let me talk real quick about the other candidates and why they would be awesome choices.

Martin Pitt is ridiculously talented and a super nice guy; he knows a little bit of everything; he even has his own fan club, for goodness sake. Loïc Minier is equally helpful and experienced, and is nowadays doing great things on the Touch side of things. Adam Conrad has a great eye for detail; I work with him often doing MIR work. Steve Langasek does great work on the Foundations team and always knows where the Ubuntu bodies are buried. Marc Deslauriers is super smart, appropriately cautious, and has a pleasingly big picture view. Kees Cook, Clint Byrum, Stéphane Graber, and Benjamin Drung have all been very helpful and nice when I’ve interacted with them, but I don’t happen to know them personally enough to be able to write a little blurb.

Point is, all of those nominees rock and would be great tech board members.

Universal Emulator Frontend in Ubuntu 12.04

I wanted to set up a system hooked up to my TV that let me play NES or SNES games from the comfort of my couch. It was an interesting project, and I wanted to share my findings.


I have a spare laptop with an NVIDIA card running Ubuntu 12.04. If you also have an NVIDIA card, I highly recommend using the latest experimental NVIDIA drivers. They really increased the performance of and reduced the heat from my laptop.


I ordered two Logitech wireless F710 gamepads. They have a tiny USB dongle that they talk to wirelessly. They work great out of the box, but note that they must be on different USB socket groups. I first tried plugging them into USB sockets right next to each other and one of the gamepads didn’t work. When I put the USB dongles on different sides of the laptop, both gamepads worked again. ::shrug::

I recommend putting a sticker on gamepad 1 so you know which one it is.


I installed XBMC, then used it to download an add-on for its Programs section called “ROM Collection Browser”.

Using the ROM add-on, you can scan your ROM collections for each emulator. Be prepared for it to take a long time to download screenshots and covers if you have a lot of ROMs. The best feature is the ability to mark ROMs as “favorites” so if you have a huge collection, you don’t have to browse through all the crap each time.

XBMC lets you change the navigation bindings so you can use your gamepad.

NES Emulator

I’m used to the fceu family of emulators (gfceu, fceux, etc.). But they did not support binding the direction buttons on my F710 gamepad. Those buttons send “hat” presses instead of simple button presses.

Looking further, I found an NES emulator I had never heard of. Mednafen can not only handle the “hat” presses from my gamepad but also emulate the Game Boy and a few other systems.

Note that the man page shipped with it is not helpful. You’ll need to browse the online documentation.

Press “ALT+SHIFT+1” to set up bindings for gamepad 1 and “ALT+SHIFT+2” to set up bindings for gamepad 2.

Press “F2” to set up a binding to exit the emulator. This is an important theme! Once XBMC launches an emulator, you need a way to quit it with just the gamepad. When it closes, XBMC comes back. But since you don’t want to use an easy-to-accidentally-hit key or a key that a game is likely to use, you have to be careful. Thankfully, the F710 gamepad has a middle button that normally just turns it on. But once the gamepad is on, the button also sends a normal key press. And no game would need to use this special middle button. So make sure to bind the middle power button to the exit command of mednafen.

Also pass “-fs 1” at least once to turn on fullscreen mode. The option is saved, so you only need to give it once.

When you add your NES collection in XBMC, note that the path to the mednafen command is “/usr/games/mednafen”.
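
So the launch command you give XBMC ends up looking something like this (the ROM path here is just an illustration):

/usr/games/mednafen -fs 1 ~/roms/nes/example.nes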

SNES Emulator

I prefer the zsnes emulator for SNES games.

It doesn’t have any weird gotchas. Press “Esc” to bring up its main menu. Use the “Input” menu to set up the gamepads. Use the “Misc” menu to assign the exit button.

And don’t forget to enable full screen.

When you add your SNES collection in XBMC, the path to the zsnes command is the expected “/usr/bin/zsnes”.
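
The launch command is correspondingly simple; something like this (again, the ROM path is made up):

/usr/bin/zsnes ~/roms/snes/example.smc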

Arcade Emulator

I found that the mame emulator works great for arcade games.

Press “Tab” to bring up its main menu. Under “Input (general)”, you can find the close command that is currently bound to “Esc” and replace it with your middle gamepad button. I found that most games needed me to individually set up “Input (this game)” bindings.

When you add your arcade collection in XBMC, note that the path to the mame command is “/usr/games/mame”.
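
A launch command might look like the following (the ROM directory is an illustration; note that mame takes a ROM set name like “pacman” rather than a file path):

/usr/games/mame -rompath ~/roms/arcade pacman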

Using the TV

I did hit one weird problem using the TV. Both mednafen and zsnes, if fullscreen, would switch which monitor was turned on. To stop them from doing that, I had to manually set each emulator’s fullscreen resolution to the size of the TV.


Anyway, that’s “all” it takes. Now you have an awesome emulator station. You can also use the “Advanced Launcher” XBMC add-on to add launchers for Ubuntu games that work well with gamepads, like Jamestown.

To avoid using the mouse or keyboard at all, you can set your user to automatically log in and add XBMC to your startup applications.
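
On 12.04, the automatic login half of that is a small lightdm tweak; roughly like this (the username is a placeholder):

# /etc/lightdm/lightdm.conf
[SeatDefaults]
autologin-user=yourname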

Software Updater Changes in Ubuntu 13.04

Thanks to Matthew Paul Thomas’s specification and work by Dylan McCall and me, there is a proposed patch to change the Software Updater’s details panel.

I’m excited about the work and just wanted to highlight it here. See the difference for yourself:

[Screenshots: old view and new view]

This hasn’t landed yet, but there is enough time before 13.04 that I’m confident it will make it.

A bit about what you’re seeing in that new view:

  • The “Backup” text and icon are being pulled from the .desktop file for the app.
  • “Backup” has subitems because there are some updates that only “Backup” depends on.
  • Updates that are not apps but are part of the base system (i.e. “on the CD but no .desktop file”) are under “Ubuntu base”.
  • For flavors, that should intelligently change to “Xubuntu base” or whatever and use the right metapackage.
  • Notice that the package name is no longer shown, so make sure your package descriptions are high quality!
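
On that last point: the description being shown is the one from your package’s debian/control file. A made-up example of the sort of field that now gets top billing:

Description: Simple backup tool
 Backs up your files to local, remote, or cloud locations on a schedule,
 and restores them when you need them back.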

Ubuntu Maintenance Work

I promised myself I would blog more about the work I do on Ubuntu, but I’ve not done anything glamorous enough recently to talk about. So instead I thought it might be interesting to give a top-level view of some of the behind-the-scenes maintenance work that an Ubuntu developer like me gets up to.

Most Ubuntu developers will already know this stuff, but for those of you who are curious what kind of high-rolling life we lead, read on.

+1 Team

I recently came off a month-long rotation on Ubuntu’s +1 Team. This is a rotating group of people who try to keep all packages buildable and installable.

There are two very helpful web reports for this: the FTBFS report (fails to build from source) and the NBS report (not built from source).

When a package can’t be compiled from source (due to changes in libraries it uses, changes in compilers, incompatibility with less common architectures like ARM, etc.), it shows up on the FTBFS report. Typical cases are a missing build dependency or needing a newer version of a library.

The NBS report is for packages that we have to keep in the archive because some other package needs them, but we no longer have the source package in the archive to match it. For example, if libfoo1 becomes libfoo2, the foo source package no longer builds libfoo1, but we still need that package in the archive until all other packages use libfoo2. Usually, this is as simple as a rebuild of affected packages. When there are no more users of libfoo1, an archive admin can drop it from the archive.
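
If you’re curious how we find the remaining users of libfoo1 (our made-up example), the reverse-depends tool from the ubuntu-dev-tools package is one way:

reverse-depends libfoo1

It lists every package whose dependencies still reference libfoo1; once that list is empty, the package is ready to be dropped.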

This team always likes help, if you’re interested!

MIR Team

The MIR team is responsible for approving packages that need to enter main, usually in order to be included on the Desktop or Server CDs. Inclusion in main additionally means that Canonical is on the hook for supporting the package.

This review is largely focused on making sure that the package will be well maintained. That it doesn’t have a terrible history of security flaws. That it doesn’t needlessly duplicate existing main packages (for example, we’d like to have as few XML parsers that the security team has to look after as possible).

This is a relatively small team, since these reviews don’t need to happen often.

Can’t Log Into Precise? Here’s Help!

So apparently I helped break the login screen for a lot of non-English speakers in Ubuntu 12.04 recently. Sorry!

The basic symptom is that regardless of your system keyboard layout, you would get a “us” layout. Which likely meant you couldn’t enter your password correctly.

To work around this, use the keyboard indicator in the upper right of the login screen to select your correct layout.

I’ve pushed a fixed version of unity-greeter to the archive (0.2.0-0ubuntu4). So once you do manage to log in and update your system, you shouldn’t hit this problem next login.

Vala Autotool Tricks

If you have a project that uses Vala and autotools, you are probably using automake 1.11’s native support for the language.

Overall, automake’s support is great. But there are a few oddities that I’ve worked around and thought I’d share here.

Declaring a preferred valac version

Let’s say your project expects valac-0.14. But sometimes people have more than one valac installed, and valac-0.14 may not be their default valac compiler. The AM_PROG_VALAC macro only looks for ‘valac’ in PATH, so you’ll end up with whatever default compiler the user has.

Instead, create an acinclude.m4 file in your project directory. Copy the AM_PROG_VALAC macro from /usr/share/aclocal-1.11/vala.m4 into it. Then:

  1. Change the name to something like MY_PROG_VALAC
  2. Change the AC_PATH_PROG line to something like:
    AC_PATH_PROGS([VALAC], [valac-0.14 valac], [])
  3. Change the AM_PROG_VALAC call in configure.ac to be MY_PROG_VALAC instead

Now configure will first look for the ‘valac-0.14’ executable and then fall back to plain old ‘valac’.
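
For reference, the end result in acinclude.m4 looks roughly like this. (A trimmed sketch; the real AM_PROG_VALAC body you copy from vala.m4 also does version checking, which you should keep.)

AC_DEFUN([MY_PROG_VALAC],
[AC_PATH_PROGS([VALAC], [valac-0.14 valac], [])
 AS_IF([test -z "$VALAC"],
   [AC_MSG_WARN([No Vala compiler found. You will not be able to compile .vala source files.])])
])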

Stop shipping C code

For a long time, I shipped the valac-generated C code with my release tarballs because I did not want distributors to have to deal with valac churn. But nowadays, especially with valac’s six month release cycle, things are calmer.

Not shipping the C code makes tarballs smaller (over 20% in my case), makes it easier for distros to grab upstream patches (they don’t have to create a C patch from your Vala patch), lets you use Vala conditionals (more about that below), and means you’re not shipping something different from what you yourself are using.

This only requires a simple change. In each source directory that contains Vala code, add the following bits to its Makefile.am (for the purposes of this example, your Makefile.am is creating the program foo):

  1. Separate out all your .vala code files into a separate variable called foo_VALASOURCES. If you want, you can then use this variable in your foo_SOURCES variable, so you don’t have to specify them twice.
  2. Add the following dist-hook:
    dist-hook:
    	cd $(distdir) && \
    	rm $(foo_VALASOURCES:.vala=.c) foo_vala.stamp

This will delete the generated C code and stamp file when creating a dist tarball.
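
Put together, a minimal Makefile.am for our example program foo might look like this (the .vala file names are placeholders):

bin_PROGRAMS = foo

foo_VALASOURCES = \
	main.vala \
	window.vala

foo_SOURCES = $(foo_VALASOURCES)

dist-hook:
	cd $(distdir) && \
	rm $(foo_VALASOURCES:.vala=.c) foo_vala.stamp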

Use Vala conditionals

One problem with shipping the generated C code is that you can’t use #ifdefs in your code. This is because Vala does not have a version of #ifdef that will “fall through” to the C code. Whether a conditional block of code gets used or not is always decided at valac compilation time.

And since your tarball consumers were compiling the C code, you couldn’t use Vala conditionals. To have an optional project dependency, you had to write little C wrapper functions that did use real C #ifdefs and then call them from Vala. Which is a pain.

But now that you’re shipping straight Vala code, everyone compiling your program gets Vala conditionals decided at their own compilation time.

Let’s say you have an optional dependency on libunity:

  1. In your configure.ac, find the block of code that deals with having successfully detected libunity and add the following line:
    AC_SUBST([UNITY_VALAFLAGS], ["--define=HAVE_UNITY"])
  2. In your Makefile.am files, add $(UNITY_VALAFLAGS) to AM_VALAFLAGS
  3. In your Makefile.am files, add a new dependency to each of your stamp files:
    foo_vala.stamp: $(top_srcdir)/config.h
  4. In your Vala code itself, you can now surround your libunity-using code blocks with #ifs:
    #if HAVE_UNITY

The first step declares the HAVE_UNITY symbol if we’re using libunity. If we’re not, that line won’t ever be reached and UNITY_VALAFLAGS will be empty. The only other interesting bit is step three, which ensures that when ./configure is re-run, valac will also be re-run to pick up any new symbol definitions.
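
Here’s the sort of thing that becomes possible, as a little sketch (the desktop ID is a placeholder; this assumes the libunity bindings):

#if HAVE_UNITY
var launcher = Unity.LauncherEntry.get_for_desktop_id ("foo.desktop");
launcher.progress = 0.5;
launcher.progress_visible = true;
#endif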

Backups and Distro Upgrading

tl;dr: I don’t recommend using Déjà Dup to hold your data when you upgrade distros (e.g. from Ubuntu 11.04 to 11.10) without understanding the risks.

I’m the maintainer of the Déjà Dup backup tool that will be included by default in Ubuntu 11.10. So I’m generally biased in its favor. But I am also a cautious person.

My concern stems from the fact that Déjà Dup uses an opaque backup format [1]. Which means that it does not store your data in plain files that you can just copy back into place with the file manager. You’ll need to use Déjà Dup again to restore them [2]. Which is fine if Déjà Dup is working correctly, as it should.

But just from a risk management perspective, I always recommend that people try to have at least one copy of their data in “plain files” format at all times.

So if you back up with Déjà Dup, then wipe your disk to put Ubuntu 11.10 on there, you’re temporarily going down to zero “plain file” copies of your data. And if anything should go wrong with Déjà Dup, you’ll be very sad.

Here are a few recommended ways to upgrade:

  • Use the Ubuntu CD’s built-in upgrade support. It will leave your personal files alone, but upgrade the rest of the system.
  • Use the Update Manager to upgrade your machine. Again, this will leave all your personal files in place.
  • Copy your files to an external hard drive with your file manager and copy them back after install.

In my mind, a backup system’s primary use case is disaster recovery, where going down to zero “plain file” copies of your data is unintentional and an acceptable risk. Intentionally reducing yourself to zero copies seems unnecessary.

Hopefully, all this caution is overblown. I just want people to be aware of the risks.

[1] Déjà Dup uses an opaque format to support a feature set that just can’t be done with plain files:

  • Encryption
  • Compression
  • Incremental backups
  • Assuming little about the backup location (allowing cloud backups)
  • Supporting file permissions, even when backing up to locations that don’t support them

[2] There are technically ways to recover without going through the Déjà Dup interface. They just aren’t very user-friendly.
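
For example (a sketch; the locations are placeholders): since Déjà Dup drives duplicity under the hood, you can ask duplicity to restore directly:

duplicity restore file:///media/backups ~/restored-files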