Limba Project: Another progress report

And once again, it’s time for another Limba blogpost 🙂

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba is distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, which completes the most important features you would expect from a software manager: installing and removing packages, GPG signing of packages, package repositories, package updates, etc. Using Limba is still a bit rough, but most things already work pretty well.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I will go with that format again. Let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOME’s sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently)

First of all: There is no rivalry here and no NIH syndrome involved. Limba and GNOME’s sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use; this includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple individual libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.
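To illustrate the idea (all paths and component names here are hypothetical – Limba’s actual on-disk layout may differ), assembling such a dynamic runtime boils down to stacking the directories of the resolved components into one view, e.g. with OverlayFS:

```shell
# Sketch: compose a runtime view from individually installed components.
# The component paths below are made up for illustration only.

# Join component directories into the lowerdir= option of an overlay mount.
make_lowerdir() {
    opt=""
    for dir in "$@"; do
        if [ -z "$opt" ]; then
            opt="$dir"
        else
            opt="$opt:$dir"
        fi
    done
    printf '%s' "$opt"
}

lower=$(make_lowerdir /opt/limba/SDL2/2.0.3 /opt/limba/libpng/1.6.16)
# The actual mount would need root privileges (or a user namespace):
echo "mount -t overlay overlay -o lowerdir=$lower /runtime"
```

Because each component stays a separate directory, updating one of them only means swapping a single layer, as long as its ABI remains compatible.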

Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you have to provide it yourself (e.g. we will see some applications ship smaller shared libraries with their binaries, because those are not part of the big runtime).

Limba does not have this issue, but with its dynamic runtimes it relies on upstream projects behaving nicely and not breaking ABIs in security updates, so that existing applications keep working even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle for it, and since Neverball has only few dependencies, it was a perfect test subject.

One of the main things I want to achieve with Limba is good integration with continuous-integration systems, so you can automatically get a Limba package built for your application and have it tested against the current set of dependencies. Also, building packages should be very easy, and as fail-safe as possible.

You can find the current Neverball test in the Limba-Neverball repository on GitHub. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact versions a Limba package depends on.

Creating a Limba package is trivial: it boils down to writing a simple “control” file describing the dependencies of the package, and an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build-config schema).
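As a rough sketch of what this looks like (the field names here are illustrative, not the authoritative Limba control-file syntax – check the Limba documentation for the real format), the “control” file mainly lists the dependencies:

```
Format-Version: 1.0

Requires: SDL2 (>= 2.0), SDL2_ttf (>= 2.0)
```

The AppStream metadata file is a standard metainfo XML document, for example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>neverball.desktop</id>
  <name>Neverball</name>
  <summary>A 3D ball-rolling game</summary>
  <project_license>GPL-2.0+</project_license>
</component>
```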

This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21:

Limba-installed Neverball

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum requirement to Linux 4.0+ very soon, since that release contains some OverlayFS improvements and a few other kernel features I am thinking about making use of.
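A small sketch of how to check this requirement on a given machine (plain POSIX shell; the 3.18 threshold is the only Limba-specific detail):

```shell
#!/bin/sh
# Check whether the running kernel is new enough for Limba-installed
# applications (Limba 0.4.2 needs Linux >= 3.18).

# version_ge MAJOR MINOR REQ_MAJOR REQ_MINOR
# succeeds if MAJOR.MINOR >= REQ_MAJOR.REQ_MINOR
version_ge() {
    [ "$1" -gt "$3" ] || { [ "$1" -eq "$3" ] && [ "$2" -ge "$4" ]; }
}

kver=$(uname -r)         # e.g. "3.18.7-200.fc21.x86_64"
major=${kver%%.*}
rest=${kver#*.}
minor=${rest%%[!0-9]*}   # strip everything after the minor number

if version_ge "$major" "$minor" 3 18; then
    echo "Kernel $kver can run Limba-installed applications."
else
    echo "Kernel $kver is too old (need >= 3.18)."
fi
```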

Linux 3.18 is included in most Linux distributions released in 2015 (and, of course, any rolling-release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes, indeed. I expect that we will see some “bigger” Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba at the moment, since there are so few packages available to reuse. I plan to address this: behind the scenes, I am working on a webservice which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, map the available software against CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can’t say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now. Since Limba is still alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don’t think this will happen, though, as Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might still make some changes (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that it is still at an early stage of development – we need those bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I have not yet been in contact with the Muon crew – they are just implementing AppStream, which is a prerequisite for having any support for Limba[1].

Since PackageKit dropped support for plugins, every software manager needs to implement Limba support on its own.


So, thanks for reading this (again too long) blogpost 🙂 There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!

 

[1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well – this might change in a few weeks. Also, Muon is doing pretty well already!

7 Comments

  • Timothy Hobbs commented on 31. March 2015

    I am also working on something similar to what Alex is working on. My project can be read about at http://subuser.org/ . However, I think that rather than comparing yourself to Alex’s work, you should compare Limba to http://zero-install.sourceforge.net/ 😉

    What wasn’t clear to me from my reading is whether Limba sandboxes its applications or not.

    Tim

    • Matthias commented on 31. March 2015

      Subuser looks interesting, although I think Docker is quite “heavy” for this use case… (I need to take a closer look at it, though)

      ZeroInstall has a different scope, primarily because it is not only cross-distro but also cross-OS, supporting Windows and Mac OS X as well. Morphing ZI into something close to Limba/XdgApp could never happen without sacrificing the cross-OS support. Also, using AppStream in ZI the way it is used in Limba would be a bit awkward.

      Limba does not sandbox applications by default yet, simply because proper sandboxing will require changes to the applications themselves. As soon as we have kdbus and Wayland, sandboxing will be implemented properly and used for applications which support it. Limba will likely get another system policy which allows restricting application installations to “sandboxed-only” (so app authors have a motivation to enable sandboxing). This policy will be set by the system administrator or distributor.
      On sandboxing matters, I plan to collaborate with the GNOME people as closely as possible – a lot of work has already been done on their side, but to move this project forward, we need kdbus and likely adjustments in the toolkits as well (implementing this properly is not trivial for complex applications, like an office suite).

      Cheers,
      Matthias

      • Timothy Hobbs commented on 1. April 2015

        What is heavy in your mind about Docker? The disk usage? I believe that the disk-usage problems will be resolved via block/file-level deduplication. Beyond that, I cannot imagine what in Docker is any heavier than any other container system.

        What can Limba do that ZI cannot?

        “Proper sandboxing will require changes on the application itself.” Subuser does not require any changes to the applications. What changes do you imagine necessary?

        As for Wayland and kdbus: of course we are all looking forward to Wayland; however, I think that kdbus is not necessary for this. You could always implement the required features of kdbus in normal D-Bus… Indeed, you will need to do this, because many systems will never have kdbus.

        Tim

        • Sławomir Lach commented on 2. May 2015

          You should know that xdg-app allows installing applications into the user’s home directory, so Btrfs deduplication does not fix the problem.

  • Sławomir Lach commented on 2. May 2015

    What’s the reason for removing backends from PackageKit? Was it security?

    • Matthias commented on 4. May 2015

      What exactly are you referring to here?
      Some PackageKit backends for native package managers have been removed, because they were outdated and unmaintained (e.g. the SMART backend).
      If you are referring to the PackageKit integration which Listaller had: that one had to go, since PackageKit’s maintainer didn’t want plugins to be loaded into the daemon anymore, mainly to increase its stability.
      Security was not an issue in either case.
