Chapter 4. Ecosystem Elements

A group of individual unikernel projects is interesting, but without an ecosystem developing around them, the advancement of this technology would slow to a crawl. That is not the case here: a number of ecosystem projects already support the development and use of unikernels. The following are a handful of the most interesting.

Jitsu

Jitsu demonstrates the remarkable agility of unikernel-based workloads. Jitsu, which stands for “Just-in-Time Summoning of Unikernels,” is actually a DNS server. But unlike other DNS servers, it responds to a DNS lookup request while simultaneously launching the unikernel that will service that address. Because unikernels can boot in milliseconds, it is possible to wait until someone needs a service before starting it. In this case, asking for the IP address of a service actually generates the service itself: by the time the requester sees the response to its DNS query, Jitsu has created the service associated with that IP address.
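
The flow is simple enough to sketch in code. The fragment below is a conceptual illustration only; Jitsu itself is written in OCaml, and the helpers here (unikernel_is_running, boot_unikernel, ip_of) are hypothetical stand-ins for the real DNS and Xen toolstack machinery:

    #include <stdio.h>

    /* Hypothetical helpers standing in for the real DNS and Xen
     * toolstack plumbing; Jitsu itself is written in OCaml. */
    static int unikernel_is_running(const char *name)
    {
        (void)name;
        return 0;                    /* stub: not yet running */
    }

    static void boot_unikernel(const char *name)
    {
        /* In Jitsu, this asks the Xen toolstack to create the VM.
         * Boot takes milliseconds, so it fits inside a DNS request. */
        printf("booting unikernel for %s\n", name);
    }

    static const char *ip_of(const char *name)
    {
        (void)name;
        return "192.0.2.10";         /* stub: address of the new VM */
    }

    /* Conceptual request path: the lookup itself summons the service. */
    static void handle_dns_query(const char *name)
    {
        if (!unikernel_is_running(name))
            boot_unikernel(name);    /* launched on demand */
        printf("DNS reply: %s -> %s\n", name, ip_of(name));
    }

    int main(void)
    {
        handle_dns_query("www.example.org");
        return 0;
    }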

This ability to generate services as quickly as they are needed is a major game changer in our industry. We will see more of this in “Transient Microservices in the Cloud”.

MiniOS

MiniOS is a unikernel base from the Xen Project. Originally designed to facilitate driver disaggregation (basically, unikernel VMs that contain only a hardware driver for use by the hypervisor), MiniOS has been used as the base for any number of unikernel projects, including MirageOS and ClickOS. By itself, MiniOS does nothing. Its value is that, as open source software, it can be readily modified to enable unikernel projects. It leverages the functionality of the Xen Project hypervisor to simplify the task of unikernel development (see “Xen Project Hypervisor” later in this chapter).

Rump Kernels

The Rump Kernel project has facilitated some of the most interesting advances in unikernel development. The concept of Rump Kernels comes from the world of NetBSD. Unlike most operating systems, NetBSD was specifically designed to be ported to as many hardware platforms as possible, so its architecture was always intended to be highly modular, allowing drivers to be easily exchanged and recombined to meet the needs of any target platform. The Rump Kernel project provides these modular NetBSD drivers in a form that can be used to construct lightweight, special-purpose virtual machines. It is the basis for Rumprun (see Chapter 3), a unikernel that can power a wide range of POSIX-like workloads. The RAMP stack described in Chapter 3, for example, was created without any changes to the application code; the bulk of the work was in modifying the compilation configuration so the unikernels could be produced.

Why would the compilation configuration be an issue? Keep in mind that creating a unikernel is a cross-compilation: the target system is not the same as the development system. The development system is a fully functional multiuser operating system, while the production target is a standalone image that will occupy a virtual machine without any operating environment beneath it. Cross-compilation, in turn, requires that the build process make the right choices to create usable output. So the source code may remain unaltered, but the compilation logic can require some work.
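
To make “unchanged source, modified build” concrete, consider the sketch below: the program is ordinary POSIX C with nothing unikernel-specific in it, and the comments show roughly how the Rumprun toolchain would build it. The command names follow the Rumprun documentation (x86_64-rumprun-netbsd-gcc, rumprun-bake, rumprun), but exact flags and target names vary by version, so treat them as illustrative:

    /* hello.c -- ordinary POSIX C; no unikernel-specific code at all.
     *
     * Cross-compile with the Rumprun toolchain instead of the host
     * compiler (command names per the Rumprun docs; details may vary
     * by version):
     *
     *   x86_64-rumprun-netbsd-gcc -o hello hello.c  # cross-compile
     *   rumprun-bake hw_virtio hello.bin hello      # bake a bootable image
     *   rumprun qemu -i hello.bin                   # boot it as a VM
     *
     * The source is untouched; only the build steps change.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from a rump kernel!\n");
        return 0;
    }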

Xen Project Hypervisor

The Xen Project hypervisor was the first enterprise-ready open source hypervisor. Created in 2003, it pioneered a concept called paravirtualization, which has been heavily leveraged by most unikernel efforts to date. Most hypervisors use hardware virtualization—that is, the guest VM sees emulated hardware that looks identical to real hardware. The VM cannot tell that the hardware devices it sees are virtualized, so it employs the same drivers to run the devices that it would on an actual hardware-based server. That makes it easy for the operating system on the guest VM to use the hardware, but it isn’t very efficient.

Consider an emulated network device. The guest operating system on the virtual machine sees a piece of networking hardware (let’s say an NE2000). So it uses its NE2000 software driver to package up the network data to be acceptable to the hardware and sends it to the device. But the device is emulated, so the hypervisor needs to unpack the network data that the guest VM just packaged and then repack it in a method suitable for transport over whatever actual network device is available on the host hypervisor. That’s a lot of unnecessary packing and unpacking. And, in a unikernel, that’s a lot of unneeded code that we’d like to remove.

Xen Project is capable of providing hardware virtualization like any other hypervisor, but it also provides paravirtualization. Paravirtualization starts with the concept that some guest VMs may be smart enough to know that they are running in a hypervisor and not directly on server hardware. In that case, there is no need for fancy drivers and needless packing and unpacking of data. Instead, Xen Project provides a very simple paravirtualized interface for sending and receiving data to and from the virtualized device. Because the interface is simple, it replaces complex drivers with very lightweight drivers—which is ideal for a unikernel that wants to minimize unnecessary code.
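
At its core, a paravirtualized device interface is a shared-memory ring: the guest writes small request descriptors into a buffer both sides can see, and the backend consumes them, with no emulated hardware in between. The C sketch below is a deliberately simplified, hypothetical ring intended only to show the shape of the idea; the real Xen interface adds grant tables, event channels, and memory barriers that are omitted here:

    #include <stdint.h>

    /* A drastically simplified shared ring, in the spirit of Xen's
     * paravirtualized I/O. Real Xen rings add grant tables (to share
     * the pages), event channels (to signal the other side), and
     * memory barriers, all omitted here. */
    #define RING_SLOTS 8             /* must be a power of two */

    /* One descriptor means "send this buffer": there are no emulated
     * NIC registers to program. */
    struct request {
        uint64_t data_addr;          /* guest address of the packet */
        uint32_t len;
    };

    struct shared_ring {
        volatile uint32_t prod;      /* advanced by the guest */
        volatile uint32_t cons;      /* advanced by the backend */
        struct request ring[RING_SLOTS];
    };

    /* Frontend: queue a packet by filling in one descriptor. This is
     * essentially the whole "driver," versus a full emulated NIC. */
    static int ring_put(struct shared_ring *r, uint64_t addr, uint32_t len)
    {
        if (r->prod - r->cons == RING_SLOTS)
            return -1;               /* ring is full */
        struct request *req = &r->ring[r->prod % RING_SLOTS];
        req->data_addr = addr;
        req->len = len;
        r->prod++;                   /* publish (barriers omitted) */
        /* Real code would notify the backend via an event channel. */
        return 0;
    }

    int main(void)
    {
        static struct shared_ring ring;
        return ring_put(&ring, 0x1000, 1500) == 0 ? 0 : 1;
    }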

This is one reason why the Xen Project has been at the forefront of unikernel development. Its paravirtualization capabilities allow unikernels to have a very small and efficient footprint interfacing with devices. Another reason is that the project team has helped foster unikernel innovation. The MirageOS project is in the Xen Project incubator, so that team has had significant influence on the direction of the hypervisor. As a result, the hypervisor team has been consciously reworking the hypervisor’s capabilities so it can handle a future state where 2,000 or 3,000 simultaneous unikernel VMs may need to coexist on a single hardware host server. Currently, the hypervisor can handle about 1,000 unikernels simultaneously before scaling becomes nonlinear. The development work continues to improve unikernel support in each release.

Solo5

Solo5 is a unikernel base project, originating from the development labs at IBM. Like MiniOS, Solo5 is meant to be an interface platform between a unikernel and the hypervisor. Unlike MiniOS, the target hypervisor is KVM/QEMU rather than Xen Project. Where Xen Project leverages paravirtualization to allow the unikernel to talk to the hypervisor, Solo5 contains a hardware abstraction layer to enable the hardware virtualization used by its target hypervisors.
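
To give a feel for how small that interface is, a minimal Solo5 guest looks roughly like the following. The names are modeled on Solo5's public solo5.h header (solo5_app_main, solo5_console_write, SOLO5_EXIT_SUCCESS), but the API has changed across releases, so check the signatures against the version you build with:

    /* A minimal Solo5 guest, modeled on the examples in the Solo5
     * tree. Names follow the public solo5.h header; verify them
     * against the Solo5 release you are building with. */
    #include "solo5.h"

    int solo5_app_main(const struct solo5_start_info *si)
    {
        (void)si;                    /* carries the cmdline and heap info */
        const char msg[] = "Hello from a Solo5 unikernel!\n";
        /* No syscalls and no driver stack: the console is a thin
         * interface provided by the Solo5 layer underneath. */
        solo5_console_write(msg, sizeof(msg) - 1);
        return SOLO5_EXIT_SUCCESS;
    }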

UniK

UniK (pronounced “unique”) is a very recent addition to the unikernel ecosystem, with its initial public release announced in May 2016. It is an open source tool written in Go for compiling applications into unikernels and deploying those unikernels across a variety of cloud providers and embedded devices (for IoT), as well as on developer laptops and workstations. UniK offers a simple, Docker-like command-line interface, making developing with unikernels as easy as developing with containers.

UniK also exposes a REST API for painless integration with orchestration tools, and it includes example integrations with Docker, Kubernetes, and Cloud Foundry. Its architecture is designed for a high degree of pluggability and scalability, with support for a variety of languages, hardware architectures, and hypervisors. Although quite new, this is a project worth watching.

And Much More…

This is far from a definitive list of unikernel ecosystem elements. The reality is that the era of unikernels has just begun, so the development and refinement of new unikernel ecosystem elements is still in its infancy. There is still a large amount of work to be done to properly control unikernels in popular cloud orchestration systems (like OpenStack). Outside of the cloud, plenty of opportunity exists for projects that will deal with unikernel management. For example, there have been demonstrations of Docker controlling unikernels, which could become part of Docker’s supported capabilities before long. And Jitsu makes sense for certain workloads, but how can unikernels be dynamically launched when a DNS server is not the best solution? We can expect that additional solutions will emerge over time.

It is important to understand that unikernels and their surrounding ecosystem are propelled by open source. While it is technically possible to create closed source unikernels, the availability of a wide variety of open source drivers and interfaces makes creating unikernels much simpler. The best illustration is the Rump Kernel project, which heavily leverages mature NetBSD drivers, some of which draw on original BSD code written decades ago. By using established open source libraries, Rump Kernels in particular, and unikernels in general, can spend far less time on the drudgery of making drivers work and more time on innovative unikernel tasks.
