The following blog post is a summary of “Rump Kernels: No OS? No Problem.”
Rump kernels provide all the components necessary to run applications on bare metal without an operating system. Simply put, a rump kernel is a way to run kernel code in user space.
The main goal of rump kernels in NetBSD is to make it as easy as possible to run, debug, examine, and develop kernel drivers in user space: instead of running the entire kernel, the exact same kernel code runs in user space. This makes most of the components (drivers) easily portable to different environments.
Rump kernels are constructed out of components: the drivers are built as libraries, and these libraries are linked into an interface (some application) that makes use of them. So we need not build the entire monolithic kernel, only the required parts.
For example, if we are running a web server, all we need is a TCP/IP stack and sockets; we don’t need a memory manager or file systems. To achieve this goal we need a way to extract the drivers from the kernel code, and we must provide the rump kernel with I/O device access, memory, and so on. This is where the anykernel and the hypercall interface come in.
The anykernel
The anykernel is the core concept in the implementation of the rump kernel. An anykernel allows using “any” driver (or drivers) in any configuration (monolithic/micro/exo). It is analogous to being able to load kernel modules into any place beyond the operating system.
The anykernel is divided into three abstractions:
- base – fundamental routines (allocators, synchronization routines)
- factions – file systems, I/O devices, networking
- drivers – the actual driver code that uses the factions
Consider NFS (Network File System), which is half file system and half network protocol: to construct a rump kernel containing the necessary drivers, we must also build their dependent drivers. In cases where the rump kernel differs from the monolithic kernel, we must use some “glue code” to make sure things run properly, while keeping the glue code minimal so as to assure maintainability in NetBSD.
The hypercall interface
For proper operation the rump kernel requires resources such as I/O functions and memory. These resources are provided by the hypercall interface, which serves as a bridge between the rump kernel and the platform it runs on. So we need some bootstrap code running on the host platform to provide this interface. A hypercall is a software trap from the rump kernel to the platform the rump kernel is running on: a hypercall is to the hypervisor what a syscall is to the kernel.
The rump kernel is always executed by the host platform
In user space this is similar to just running a binary; on Xen (a hypervisor used extensively to test rump kernels) it is just starting a guest domain; on embedded platforms the bootloader loads the rump kernel into memory and we simply jump to the entry point of the rump kernel code. This contrasts with how a monolithic kernel is run, either on hardware or under virtualization; the only difference arises when executing applications, which is not natively possible on rump kernels (though an application layer can be bundled with a rump kernel). We can have different processes communicating with the rump kernel, but either way it is still linked, loaded, and executed by the host platform.
The notion of a CPU core is fictional
Usually the CPU configuration is in our hands for a virtual machine or a kernel running on bare metal, but this is simply not possible for a rump kernel. The number of virtual CPUs is really just the number of threads running; we can map host threads onto rump virtual CPUs for improved performance through caching and locking.
There is no scheduler
Rump kernels use the platform’s thread-scheduling policy; there is no native scheduler running in the rump kernel, so as to avoid the overhead of running a scheduler on top of a scheduler. All the synchronization operations are defined as hypercall interfaces so that they can be optimized further and avoid classic execution problems (spinlocks/deadlocks). Since the host owns the scheduling policy, it is free to schedule/unschedule the running thread as required.
No virtual memory concept
The rump kernel simply runs in the space allocated to it, whether virtual or not. This avoids the cumbersome work of porting the complex memory-management operations, and the memory manager itself, to the rump kernel when they are explicitly not required. There are cases, though, where we might need to implement a few custom alternatives to achieve memory-manager-dependent tasks (e.g., mmap()).
We can avoid the cost of memory locking almost entirely by giving the rump kernel just one virtual processor core, so the locking scheme can be implemented in a single file without modifying the driver code.