# linux-loader

The `linux-loader` crate offers support for loading raw ELF (`vmlinux`) and compressed big zImage (`bzImage`) format kernel images on `x86_64`, and PE (`Image`) kernel images on `aarch64` and `riscv64`. ELF support includes the Linux and PVH boot protocols.

The `linux-loader` crate is not yet fully independent and self-sufficient; much of the boot process remains the VMM's responsibility. See the [Usage](#usage) section for details.
## Supported features

- Parsing and loading kernel images into guest memory.
  - `x86_64`: `vmlinux` (raw ELF image), `bzImage`
  - `aarch64`: `Image`
  - `riscv64`: `Image`
- Parsing and building the kernel command line.
- Loading device tree blobs (`aarch64` and `riscv64`).
- Configuring boot parameters using the exported primitives:
  - `x86_64` Linux boot
  - `x86_64` PVH boot
  - `aarch64` boot
  - `riscv64` boot
## Usage

Booting a guest using the `linux-loader` crate involves several steps, depending on the boot protocol used. A simplified overview follows.

Consider an `x86_64` VMM that:

- interfaces with `linux-loader`;
- uses `GuestMemoryMmap` for its guest memory backend (see the sketch after this list);
- loads an ELF kernel image from a `File`.
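The `create_guest_memory()` helper used in the snippets below is the VMM's own. As a rough sketch, it could be backed by vm-memory's `GuestMemoryMmap::from_ranges` (requires vm-memory's `backend-mmap` feature; the region size and layout here are assumptions):

```rust
use vm_memory::{GuestAddress, GuestMemoryMmap};

impl MyVMM {
    /// Hypothetical helper: back guest memory with a single anonymous
    /// mmap region of 128 MiB starting at guest physical address 0.
    fn create_guest_memory(&self) -> GuestMemoryMmap {
        GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 128 << 20)])
            .expect("Failed to create guest memory")
    }
}
```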
### Loading the kernel

One of the first steps in starting the guest is to load the kernel from a `Read` source into guest memory. Before this step, the VMM must have configured its guest memory. In this example, the VMM specifies both the kernel starting address and the starting address of high memory.
```rust
use linux_loader::loader::elf::Elf as Loader;
use vm_memory::GuestMemoryMmap;
use std::fs::File;

impl MyVMM {
    fn start_vm(&mut self) {
        let guest_memory = self.create_guest_memory();
        let mut kernel_file = self.open_kernel_file();

        let load_result = Loader::load::<File, GuestMemoryMmap>(
            &guest_memory,
            Some(self.kernel_start_addr()),
            &mut kernel_file,
            Some(self.himem_start_addr()),
        )
        .expect("Failed to load kernel");
    }
}
```
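On success, `load` returns a `KernelLoaderResult`; among other fields, it reports the guest address at which the kernel was loaded and the offset just past the end of the image, which the VMM typically consults when deciding where to place the initramfs and boot parameters.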
### Configuring the devices and kernel command line

After the guest memory has been created and the kernel parsed and loaded, the VMM can optionally configure devices and the kernel command line. The latter can then be loaded into guest memory.

```rust
use std::ffi::CString;

impl MyVMM {
    fn start_vm(&mut self) {
        // ...
        let cmdline_size = self.kernel_cmdline().as_str().len() + 1; // including the terminating NUL

        linux_loader::loader::load_cmdline::<GuestMemoryMmap>(
            &guest_memory,
            self.cmdline_start_addr(),
            &CString::new(self.kernel_cmdline().as_str()).expect("Failed to parse cmdline"),
        )
        .expect("Failed to load cmdline");
    }
}
```
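The `kernel_cmdline()` helper above is the VMM's own. linux-loader also ships a command line builder, `linux_loader::cmdline::Cmdline`; a minimal sketch of how such a helper might assemble its contents, assuming a recent release where the constructor and insert methods return `Result`s:

```rust
use linux_loader::cmdline::Cmdline;

// Hypothetical helper: build a guest kernel command line such as
// "console=ttyS0 reboot=k panic=1".
fn build_cmdline() -> Cmdline {
    let mut cmdline = Cmdline::new(4096).expect("Failed to create cmdline");
    cmdline
        .insert("console", "ttyS0")
        .expect("Failed to insert console parameter");
    cmdline
        .insert_str("reboot=k panic=1")
        .expect("Failed to append to cmdline");
    cmdline
}
```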
### Configuring boot parameters

In this phase, the VMM sets up the initial register values without using `linux-loader`. It can also configure additional boot parameters, using the structs exported by `linux-loader`.

```rust
use linux_loader::configurator::linux::LinuxBootConfigurator;
use linux_loader::configurator::{BootConfigurator, BootParams};
use linux_loader::loader::bootparam::boot_params;

impl MyVMM {
    fn start_vm(&mut self) {
        // ...
        let mut bootparams = boot_params::default();
        self.configure_bootparams(&mut bootparams);

        LinuxBootConfigurator::write_bootparams(
            BootParams::new(&bootparams, self.zeropage_addr()),
            &guest_memory,
        )
        .expect("Failed to write boot params in guest memory");
    }
}
```
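The `configure_bootparams()` helper above is also the VMM's own. For illustration, a rough sketch of what it might set, using the standard Linux x86 boot protocol fields of the bindgen-generated `boot_params` struct (the command line address and size here are assumptions):

```rust
use linux_loader::loader::bootparam::boot_params;

impl MyVMM {
    /// Hypothetical helper: fill in the zero page fields the guest kernel
    /// expects per the Linux x86 boot protocol.
    fn configure_bootparams(&self, params: &mut boot_params) {
        params.hdr.type_of_loader = 0xff;   // "undefined" bootloader ID
        params.hdr.boot_flag = 0xaa55;      // boot protocol magic
        params.hdr.header = 0x5372_6448;    // "HdrS" magic
        params.hdr.cmd_line_ptr = 0x20000;  // assumed cmdline guest address
        params.hdr.cmdline_size = 4096;     // assumed cmdline capacity
    }
}
```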
Done!
## Testing

See `docs/TESTING.md`.
## License

This project is licensed under either of:

- Apache License, Version 2.0
- BSD-3-Clause License