FrobOS is turning 10: What can you learn from a 10-year-old?

Stuart Easson
VMware, Inc


The Virtual Machine kernel, and the virtual machine monitor (VMM) in particular, have a difficult but critical task. They need to present a set of virtual CPUs and devices with enough fidelity that the myriad supported guest operating systems (GOS) run flawlessly on the virtual platform. Producing a very-high-quality VMM and kernel requires all low-level CPU and device features to be characterized and tested, both on native hardware and in a virtual machine.

One way to work toward this goal is to write tests using a production operating system, such as Linux. Because an operating system wants to manage CPU and device resources, it ends up hiding or denying direct access to CPU and low-level device interfaces. What is desired is a special-purpose operating system that enables easy manipulation of machine data structures and grants direct access to normally protected features. This article presents the main features of FrobOS, a test kernel and associated tools that directly address this development goal. Written in C, FrobOS provides access to all hardware features, both directly and through sets of targeted APIs. Using a built-in boot loader, a FrobOS test is run by being booted directly on hardware or in a virtual machine provisioned within a hypervisor. In either case, FrobOS scales very well. Tests are simple to run on systems ranging from a 32-bit uniprocessor with 128MB of memory to a 96-thread, 64-bit enterprise server with 1TB of memory.

Despite FrobOS's maturity,1 information about FrobOS has not been widely disseminated. Today, only a handful of teams actively use the system, with the greatest use occurring within the virtual machine monitor team. This article describes FrobOS for the broader development community to encourage its use.

1. Introduction

The virtualization of a physical computer poses a number of difficult problems, but the Virtual Machine kernel (VMX) and virtual machine monitor (VMM) in particular have both a difficult and functionally critical task: presenting a set of virtual CPUs with their associated devices as a coherent virtual computer. This pretense must be executed with enough fidelity that the myriad supported guest operating systems (GOS) run flawlessly on the virtual platform. The task is made more difficult by the requirement that the virtual CPU must be able to change some of its behaviors to conform to a different reference CPU, sometimes based on the host system, sometimes defined by the cluster of which it is a part, and sometimes based on the user's preferences and the GOS's expectations. Despite these conflicting requirements, it must do its work with little or no overhead.

Great software engineering is an absolute requirement for implementing a high-quality virtual machine monitor. The best engineering ultimately rises to greatness only through the rigors of thorough testing. Some testing challenges can be mitigated through focused testing against a high-quality reference model. Unfortunately, this presents two major difficulties: both a high-quality reference model and a vehicle for focused testing are hard to find.

For many purposes, the behaviors one wants in the virtualized CPU are described “in the manual”. Yet there are problems:

  • Vendor documentation often contains omissions, ambiguities, and inaccuracies.
  • Occasionally CPUs do not do what manuals suggest they should.
  • Some well-documented instructions have outputs that are ‘undefined’, a potential headache for a ‘soft’ CPU that is required to match the underlying hardware.
  • Different generations of x86 hardware do not match for undefined cases, even those from the same vendor.

For the cases where reference documents fail to provide a complete answer, the only recourse is to execute the instructions of interest and study what happens in minute detail. These details can be used to make sure the behavior is modeled correctly in the virtual environment. This brings us to the second difficulty: how do you execute the instructions of interest in an appropriate way? Commodity operating systems (COS) are not helpful in this regard. Despite offering development tools, they use CPU protection mechanisms to stop programs from running many interesting instructions.

This article describes FrobOS, a testing tool designed specifically to address the issues described above. The Monitor team uses FrobOS to build their unit tests, performing checks on the virtual hardware that would be difficult or impossible using a COS.

The FrobOS test development process is hosted on Linux and uses familiar tools (Perl, gcc, gas, make, and so on) provided by the VMware® tool chain. The result of building a FrobOS test is a bootable image that can be started from floppy, PXE, CD, or other disk-like device. FrobOS startup is quite efficient, and is limited essentially by the host BIOS and boot device. As a bootable GOS, FrobOS can be run easily in a virtual machine, and virtual machine-specific customizations to the run-time environment can be made at build or test execution time.

This article presents the use of FrobOS on VMware hosted products such as VMware Workstation™. However, it also runs on VMware® ESX®. See the end of this article for links to more information about this important tool.

2. The Problem

Two types of problems need to be addressed to make a high-quality VMM: the generation of reference models, and testing.

  1. Generating reference models. What do you expect your virtual CPU and devices to do? When vendor manuals fail to provide an adequate answer, all that is left is to execute code and see what the CPU does under the conditions of interest.
  2. Testing. In general, testing has a number of issues:
  • Development efficiency: How hard is it to create a test?
  • Run-time efficiency: How much overhead does running a test incur?
  • Environmental accessibility: What can one touch and test before crashing the world?
  • Determinism: If you do the same steps again, do you get the same results?
  • Transparency: Can a test result be interpreted easily?

Commodity operating systems (COS) such as Linux and Microsoft Windows offer tools, documentation, and support for making end-user programs. They seem like a good vehicle for testing. Unfortunately, running a test requires a very different environment from the one used for creating it. When considering an operating system such as Linux or Microsoft Windows for running tests, a number of problems emerge.

  • Typical COS work hard to stop programs from accessing anything that would interfere with fairness, stability, or process protection models.
  • While it is possible to use an operating system-specific extension to gain supervisor privileges, the environment is fragile and very unpredictable.
  • Many desired tests are ‘destructive’ to their run-time environment, requiring the machine to be rebooted before
    testing can continue.
  • The long boot time for a typical COS makes for very low run-time efficiency.
  • Inevitably, the boot of a COS grossly pollutes the lowest level machine state.
  • Scheduling in a GOS makes the exact reproduction of machine state difficult or impossible.

Given these problems, what is needed is a special-purpose operating system that does not protect or serve. It should largely be idle unless kicked into action. It needs to boot quickly, and ideally it does nothing outside the test author's notice.

3. The Idea

Our approach is to create an operating system with a kernel specifically designed for efficient generation and completely unfettered execution of low-level CPU and device tests. Ideally, development for this kernel would use familiar tools in a stable run-time environment. While all levels of x86 hardware and devices need to be accessible, users should not be required to know every detail of the hardware to use features in a normal way. Where possible, our operating system provides high-level programmatic access to CPU features, a set of C language APIs that encapsulate the complexities of the x86 architecture and its long list of eccentricities. Done well, it would perhaps mitigate the inherent pitfalls in ad-hoc coding. Properly realized, the benefits of FrobOS are many, including:

  • Ease of use. FrobOS tests are cross-compiled from Linux, using normal development tools and processes. Running the output in VMware Workstation is automatic, while doing so in VMware ESX requires only a few more command-line parameters. Running natively requires the boot image to be dd'ed onto a floppy disk or, better yet, set up for PXE booting.
  • Efficiency. The early development of FrobOS was driven by the recognition that unit tests being built by Monitor engineers necessarily shared a lot of code—code that was tricky to write—so why not do it once, and do it right? Over time, this library of routines has grown to include the previously mentioned, as well as PCI device enumeration and access, ACPI via Intel’s ACPICA, a quite complete C RTL, an interface to Virtual Performance counters, a complete set of page management routines, and more.
  • Quick results. Shortest runtime has always been a philosophy for FrobOS. Monitor engineers are an impatient bunch, and the net effect is a very fast boot time. In a virtual machine, a FrobOS test can boot, run, and finish in less than 10 seconds. In general, the boot process is so fast it is limited by the BIOS and the boot device, virtual or native.
  • Flexibility. By default, the output of a test build is a floppy disk image that can be booted natively (floppy, CD, PXE…), in VMware Workstation (default), or VMware ESX (in a shell or via remote virtual machine invocation). If needed, the build output can be a hard disk image. This is the default for a UEFI boot, and might be needed if the test generates large volumes of core files.
  • Footprint. A typical FrobOS test run-time footprint is small. The complete test and attendant run-time system fit on a single 1.44MB floppy disk, with room for a core file if the test crashes. FrobOS hardware requirements are small. Currently, it boots in ~128MB, but a change of compile-time constants can reduce it to the size of an L2 cache.
  • Restrictions. FrobOS imposes very few restrictions. All CPLs and CPU features are trivially available. There is no operating system agenda (fairness, safety, and so on) to interfere with test operation.
  • Testing. FrobOS addresses the need for low-level CPU and device testing in a way not seen in a COS. Since FrobOS is cross-compiled from Linux, the programming environment for FrobOS has a familiar feel. Engineers have their normal development tools at hand. The FrobOS kernel, libraries, and tests are built with GCC and GAS. While the majority of code is written in C, the initial bootstrap code and interrupt handlers are assembly coded.
  • Scalability. FrobOS is eminently scalable. It runs on a Pentium III system with less than 128MB of memory, yet offers complete functionality on systems with 96+ CPUs and more than 1TB of RAM. A recent check-in to improve the efficiency of the SMP boot required statistics to be gathered. The results highlighted just how quick this process is: all 80 threads in a server with an Intel® Westmere processor can be booted and shut down by a FrobOS SMP test in approximately 1 second.
  • Booting. For characterization purposes, FrobOS tests can be booted directly on hardware from floppy, CD-ROM, and PXE, and there has been success with USB sticks. There are several suites populated with tests that are expected to function usefully in a non-virtual machine (native environment).

4. Welcome to the Machine

What is FrobOS? Depending on one's specific interest, FrobOS can be seen in a number of different ways. For a test developer, FrobOS is perhaps most accurately described as a GOS construction kit. For someone performing a smoke test of a new build of VMware Workstation, FrobOS is a catalog of unit tests. For an engineer working on enabling x86 instruction set extensions in the monitor, FrobOS is an instruction-level characterization tool, providing many convenient interfaces to low-level details.

Looking around the FrobOS tree within the bora directory, one finds a set of scripts, sources, and libraries. The purpose of the pieces is to build a bootable image designed to execute the test(s) in a very efficient way. The tests are stored in the frobos/test directory. Each test is stored in an eponymously named directory and represents a unit test or regression test for a particular bug or area of the monitor.

At present, FrobOS has three teams developing new tests: the Monitor, SVGA, and Device teams have all generated great results with the platform. Most recently, an intern in the Security team made a USB device fuzzer with great results. He filed several bugs as a result of his work, and added basic USB functionality to FrobOS. Later sections of this article present a real example of a device test, specifically demonstrating testing proper disablement of the SVGA device.

FrobOS tests are defined in the suite.def file. This file can be found, along with the rest of the FrobOS infrastructure, under the bora directory vmcore/frobos. The doc subdirectory contains documentation about FrobOS. The test subdirectory contains the tests and suite.def. The runtime/scripts subdirectory contains scripts, such as frobos-run, for running FrobOS.

Most FrobOS operational functions are controlled with an executive script called frobos-run. Written in Perl, the script uses the catalog of tests defined in suite.def to build each requested test’s bootable image(s) and then, by default, starts the execution of the images in VMware Workstation as virtual machines. As each test is built and run, frobos-run provides several levels of test-specific parameterization. At build time, a handful of options control the compiler’s debug settings (-debug), whether to use a flat memory model (-offset), and which BIOS to use when booting (-efi), and so on.

At run time, frobos-run creates a unique configuration file to control VMM-specific parameters, such as the number of virtual CPUs (VCPUs), memory size, mounted disks, and so on. Additionally, other test or VMM-specific options can be applied via the command line. These options are applied to the current run, and can override settings in suite.def. FrobOS uses GRUB as its boot loader and supports reading parameters from the GRUB command line, so options can be passed to a test to control specific behaviors. Ordinarily, a FrobOS test runs to some level of completion. In normal cases a test can Pass, Fail, or Skip. The final states of Pass and Fail are easily understood. Skip is slightly unusual—it means either frobos-run or the test discovered an environmental issue that would make running the test meaningless. An example might be running a test for an Intel CPU feature on a VIA CPU, or running an SMP test with only one CPU. In the event something happens to terminate the test prematurely, frobos-run inspects the logs generated and notices the absence of Pass messages and Fails the test.

So what do you need to make a FrobOS test? The minimal 32-bit FrobOS test can be assembled from the following four items:

  1. Two additional lines in suite.def, describing how frobos-run should find, build, and run the test. Example: legacymode
  2. A directory in …/frobos/test/ whose name matches the entry in suite.def.
  3. A file that describes the source files required to build the test:

CFILES = main.c

  4. The source file (main.c) containing the test code. Example:

#define ALLOW_FROBOS32
#include <frobos.h>

TESTID(0, "Journal example test");

void
Frobos_Main(void)
{
   EXPECT_TRUE(1 == 1);
}

Assuming a VMware Workstation development tree and tool chain are already established, the test is built and invoked using frobos-run as follows:

frobos-run -mm bt legacymode:example

This invocation produces the following output:

Found 1 matching test…

Building tests….
Launching: legacymode:example (BT) (PID 27523), using 1 VCPU

Random_Init: Using random seed: 0x34a37b99379cfef2

TEST: 0000: Journal example test CHANGE: 1709956
PASS: Test 0000: Journal example Test (1 cases)
Frobos: Powering off VM.
PASS: legacymode:example (BT) (PID 27523) after 5s.

Hostname:   shanghai
Command Line:           legacymode:example -mm bt
Environment:  /vmc/bora:ws:obj
Client:          vmc, synced on 2012/02/07, change number 1709956
Suite spec:  legacymode:example
Monitor modes:           BT
Start time:    Tue Feb  7 12:38:00 2012
End time:     Tue Feb  7 12:38:07 2012
Duration:      0h:00m:07s

Tests run:    1
Passes        1
Skipped Tests:            0
Test Failures:  0
Log file: …/build/frobos/results/shanghai-2012-02-07.5/frobos-runlog

While there is a lot of bookkeeping information, the lines starting “PASS:…” show the test booted and ran successfully. The total time of 7 seconds includes starting VMware Workstation, booting FrobOS, and running the test. If the test were run natively, the lines from “Random_Init:…” to “PASS: legacymode:…” would be identical. Such log lines are copied to com1 as well.

Code reuse is critical for productivity and reliability. FrobOS makes testing across three major CPU modes relatively trivial. The test example is part of the legacymode suite and runs in ‘normal’ 32-bit protected mode. It can be made into a 64-bit test with the addition of one line (#define ALLOW_FROBOS64), seen here in situ:

#define ALLOW_FROBOS32
#define ALLOW_FROBOS64
#include <frobos.h>

TESTID(0, "Journal example test");

One line in suite.def is also needed:


The 64-bit (longmode) version of the test is invoked with the following command:

frobos-run -mm bt longmode:example

While not shown here, the output looks very similar, and as expected the test passes again. To run the test in compatibility mode, use:

#define ALLOW_FROBOS48

By default, frobos-run launches a test three times, once for each of the monitor's major execution modes: Binary Translation (BT), Hardware Execution/Software MMU (HV), and Hardware Execution/Hardware MMU (HWMMU). I used the -mm bt switch to override this since I did not want all that output. Note that -mm is the short form of the --monitorMode option. After your test is ready, frobos-run allows all instances for a particular test to be run using the all pseudo suite:

frobos-run all:example

This runs all the entries in suite.def for the test named example using all three monitor modes. In this case, it runs six tests, the product of the CPU modes and monitor execution modes: (32, 64) x (BT, HV, HWMMU). Because the number of tests can explode quickly, frobos-run is SMP-aware. It knows how many CPUs each test needs (from suite.def) and determines how many are available on the host. Using the -j nn command line option, it schedules multiple tests to run in parallel. As a result, the six tests can be run much more quickly on a 4-way host:

frobos-run all:example -j 4

This results in the following output:

Found 6 matching tests…

Building tests….

Launching: legacymode:example (BT) (PID 16853), using 1 VCPU
Launching: longmode:example (HV) (PID 16855), using 1 VCPU
Launching: legacymode:example (HWMMU) (PID 16856), using 1 VCPU
Launching: legacymode:example (HV) (PID 16858), using 1 VCPU
PASS: legacymode:example (HWMMU) (PID 16856) after 4s.
Launching: longmode:example (HWMMU) (PID 17009), using 1 VCPU
PASS: legacymode:example (BT) (PID 16853) after 5s.
Launching: longmode:example (BT) (PID 17048), using 1 VCPU
PASS: longmode:example (HV) (PID 16855) after 5s.
PASS: legacymode:example (HV) (PID 16858) after 5s.
PASS: longmode:example (HWMMU) (PID 17009) after 3s.
PASS: longmode:example (BT) (PID 17048) after 4s.

Duration:       0h:00m:12s

Tests run:      6
Passes:         6
Skipped Tests:  0
Test Failures:  0
Log file: …/build/frobos/results/shanghai-2012-02-07.9/frobos-runlog


As you can see, frobos-run starts four tests and waits. As each test finishes, frobos-run starts another. Although six tests were run, the total run time is only about twice that of a single test, a net speed increase of approximately 300 percent. Additional host CPUs allow more tests to execute in parallel. Since some tests require more than one CPU, frobos-run keeps track and schedules accordingly. In addition, the frobos-run scheduler is quite sophisticated. By default, the scheduler attempts to keep the host fully committed, but not overcommitted, so tests are scheduled according to their CPU and memory requirements.

The following shows how to create a test that would be tricky, perhaps impossible, to write under an operating system such as Linux or Microsoft Windows. The test touches an unmapped page, generating a page fault. Once the fault is observed, it checks a few important things that should have occurred or been recorded by the CPU as a result of the page-fault exception:

  1. A page fault occurs. (This much is possible in Linux and Microsoft Windows!)
  2. The error code reported for the page fault is correct.
  3. The address reported for the page fault is correct.

#define ALLOW_FROBOS32
#include <frobos.h>

TESTID(1, "Journal example Test 1");

void
Frobos_Main(void)
{
   // MM_GetPhysPage() returns the address of an unmapped physical page
   PA physAddr = MM_GetPhysPage();

   // There is no mapping for physAddr, so this must #PF...
   EXPECT_ERR_CODE(EXC_PF, PF_RW, *(uint8 *)physAddr = 0);
   EXPECT_INT(CPU_GetCR2(), physAddr, "%x");
}

Running the test results in the following:

TEST: 0001: Journal example Test 1 CHANGE: 1709956
PASS: Test 0001: Journal example Test 1 (1 cases)
Frobos: Powering off VM.
PASS: legacymode:example1 (BT) (PID 22715) after 4s.

With a few extra lines in the C source, and two additional entries in suite.def, we can test for the same conditions in both compatibility and 64-bit modes.

/* C source */
#define ALLOW_FROBOS32
#define ALLOW_FROBOS48
#define ALLOW_FROBOS64

/* suite.def */


The frobos-run tool automatically runs the test in the BT, HV, and HWMMU monitor modes.

PASS: legacymode:example1 (HWMMU) (PID 23171) after 4s.
PASS: compatmode:example1 (HWMMU) (PID 23175) after 4s.
PASS: longmode:example1 (HWMMU) (PID 23192) after 4s.
PASS: legacymode:example1 (BT) (PID 23165) after 5s.
PASS: compatmode:example1 (HV) (PID 23167) after 5s.
PASS: legacymode:example1 (HV) (PID 23172) after 5s.
PASS: longmode:example1 (HV) (PID 23168) after 5s.
PASS: longmode:example1 (BT) (PID 23195) after 5s.
PASS: compatmode:example1 (BT) (PID 23177) after 6s.

Duration:       0h:00m:10s

That is a lot of testing for 5 (or 3?) real lines of ‘new’ code.

This test uses a couple of EXPECT macros, wrappers for code to check for a certain expected value or behavior, and a lot more code to catch all sorts of unexpected behaviors. In the event the values sought are not presented, that fact is logged and the test is flagged as a failure. Execution continues unless overridden, since there may be other tests of value yet to run. The EXPECT macros can hide a lot of very complex code. If they cannot perform the task needed, the underlying implementation is available as a try/fail macro set for more generality.

5. So just how much do I have to do?

We saw earlier that making test code start in 32-bit, compatibility, or 64-bit mode is easy. To enable cross-mode testing, the FrobOS runtime library is as processor-mode agnostic as possible. Calling the MM_MapPage() function achieves the same result in 32-bit/PAE or 64-bit modes. The following, slightly more elaborate, code fragment retrieves an unused page, identity-maps it, zeros it, and announces it did so; it works as expected in all processor and paging modes.

PA physAddr = MM_GetPhysPage();
ASSERT(physAddr != NULL);
MM_MapPage(physAddr, physAddr, PTE_P | PTE_RW | PTE_US | PTE_A);
memset(PA_TO_PTR(physAddr), 0, PAGE_SIZE);
Log("zeroed page at %p\n", PA_TO_PTR(physAddr));

I hesitate to claim that every RTL routine achieves complete register size and mode agnosticism, but it is a very good percentage. If the library-provided types, accessors, and the like are used, tests tend to be easily moved to all modes with only a little extra effort.

6. Let’s Get Real

While it is easy for me to say that writing tests in FrobOS is simple, perhaps the point is better amplified with a real example: a FrobOS test written in response to a bug. Let’s dive into device testing. The purpose of the test is to make sure the VMware SVGA device is disabled and invisible when turned off in the virtual machine configuration file, and remains off across a suspend and resume operation.

As discussed earlier, there are three pieces:

  1. The entry in suite.def that includes the configuration option that turns the SVGA device off, specifies which hardware version to use, and so on:

835729-svga-not-present: # Basic testing when SVGA device is removed
    all (-passthru "svga.present=FALSE"
         -passthru "virtualhw.version=8"
         -bits 32 -cpus 1),

  2. The file describing the source files, which is the same as shown previously
  3. The test source


/*
 * Copyright 2012 VMware, Inc.  All rights reserved. -- VMware Confidential
 *
 * main.c --
 *
 *      Test basic functionality with the svga device removed.
 */

#define ALLOW_FROBOS32
#include "frobos.h"
#include "vm_device_version.h"

TESTID(835729, "SVGA-not-present");

/*
 * Frobos_Main --
 *
 *      This is the main entry point for the test.
 *
 * Results:
 *      None
 */
void
Frobos_Main(void)
{
   PCI_Device *pci;

   Test_SetCase("Verify SVGA device not present");

   pci = PCI_Search(PCI_VENDOR_ID_VMWARE, …);
   …

   if (Test_IsDevelMonitor()) {
      Test_SetCase("Suspend resume with no SVGA device");
      …
   }
}

A couple of new features are used here. The (optional) SKIP_DECL makes the test refuse to run (SKIP) if it is booted directly on hardware. The PCI library is used to gain access to the device. The source calls PCI_Init() and searches for the various device IDs that have been used by our SVGA device, and tests to make sure it is not found. The special backdoor call to force a suspend/resume sequence is only supported on a developer build, so we protect the call appropriately. This test reproduces the bug on an unfixed tree, and runs in less than 10 seconds.

The author of the test tells me this test took him 30 minutes to write, including a coffee break. A similar test, if possible at all, would take much longer in any other operating system. Plus, programmers can convert this test to run in compatibility mode and 64-bit mode with the addition of two lines of source and two lines in suite.def. These additional lines are seen at the start of the next and prior examples.

7. What about SMP?

By convention, the x86 architecture distinguishes the first CPU to boot from those CPUs that boot later. The first is called the Boot Strap Processor (BSP), and those that follow are called application processors (APs). Assuming you want to test CPU features in an SMP environment, FrobOS offers APIs to bring APs into the action. Additional CPUs configured in a virtual machine, or those present natively, are booted and then ‘parked’, looping on a shared variable.*

So what do we need to make an SMP FrobOS test? Following in our minimalist vein, here are the source and differences from the simplest possible test:

  • The suite.def file adds an option to enable more CPUs:

smp (-cpus 0)      # use all available CPUs

  • The source file main.c is as follows:

#define ALLOW_FROBOS32
#define ALLOW_FROBOS48
#define ALLOW_FROBOS64
#include <frobos.h>


TESTID(2, "Journal SMP example test");
SKIP_DECL(SK_SMP_2);

static void
SayHello(void *unused)
{
   Log("Hello from CPU %u\n", SMP_MyCPUNum());
}

void
Frobos_Main(void)
{
   int i;

   for (i = 0; i < SMP_NumCPUs(); i++) {
      SMP_RemoteCPUExecute(i, SayHello, NULL);
   }
   SMP_WaitAllIdle();
}

*Using a shared variable might seem like a strange choice; after all, there are other mechanisms, such as Inter-Processor Interrupts (IPIs), that are specifically designed for one CPU to send messages to another. Using an IPI requires both the sender and receiver to have their APICs enabled, and the receiver must be ready and able to receive interrupts. While this is certainly a valid set of test conditions, it is not reasonable to impose those conditions as a limitation on all SMP tests.

I made the test able to run in all three basic modes, and introduced a couple of new features. This particular SKIP_DECL(SK_SMP_2) makes the test skip when there is only one CPU present. In addition, I used SMP_RemoteCPUExecute() to kick each AP into action, while SMP_WaitAllIdle(), as its name suggests, waits for all APs to become idle.

The test simply iterates through all available CPUs, asking each CPU to execute the SayHello() function. Each AP polls for work and executes the function as soon as it can, resulting in the following output:

TEST: 0002: Journal SMP example Test CHANGE: 1709956
Hello from CPU 0
Hello from CPU 6
Hello from CPU 5
Hello from CPU 4
Hello from CPU 2
Hello from CPU 3
Hello from CPU 7
Hello from CPU 1
PASS: Test 0002: Journal SMP example Test
Frobos: Powering off VM.

This was performed on an 8-way virtual machine. As one might hope, the code functions without modification in environments with any number of CPUs. Plus, the same code runs in 32-bit, compatibility, and 64-bit modes.

The SMP environment provided is not overly complex. Once booted into a GCC compatible runtime environment, APs do nothing unless asked. Data is either SHARED or PRIVATE (default). The run-time library provides basic locks, simple CPU coordination routines, and reusable barriers. For almost all runtime functions, the AP can do whatever the BSP can do.

8. There’s More

When frobos-run runs a test in a virtual machine, it has the means to set a number of important options in the configuration file, including the Virtual Hardware version. This allows (or denies, depending on your perspective) guest visibility for certain hardware features, including instruction set extensions, maximum memory, maximum number of CPUs, and so on. If needed, a particular FrobOS test can inspect a variable to determine the specific hardware value and thereby tailor error checking or feature analysis as needed.

9. Conclusion

FrobOS is a great tool for writing low-level x86 and hardware tests on the PC platform, both for virtual machines and native operation. For test authors, it offers a rich run-time environment with none of the blinkers and constraints so typical of COS running on this platform. For product testing, FrobOS offers an ever-growing catalog of directed tests, with extensive logging and failure reporting capabilities. For most test running purposes, frobos-run hides the differences between VMware ESX and VMware Workstation, allowing the test author to check the behavior of either environment.

By its nature, FrobOS is a great characterization tool. The simple run-time model and very low requirements mean almost any x86-compatible machine that can boot from a floppy disk, PXE, USB disk, or CD-ROM can be tested. It is a particularly easy fit for developers running Linux who wish to make or run tests: FrobOS uses the normal tools (make, gcc, gas, and so on) to build the kernel and a large catalog of tests. It has a very capable executive script that hides most of the complexities of building and running the tests: individually, as specific collections, or as a whole.

We are extending the use of FrobOS in several areas, including:

  • Building fuzzers, with three already built (CPU, USB device, and VGA FIFO)
  • Dynamic assemblers, because sometimes you just cannot make up your mind
  • RTPG, a CPU fuzzer
  • UEFI, a new BIOS
  • Coverage, examining who executes what and why

Now, it is your turn. In addition to the documentation and sources in the bora/vmcore/frobos directory, I encourage you to explore the following resources:

  • The output of the frobos-run --help command
  • Existing test and suite.def examples

Join my team in helping better test and understand our hypervisor.


Thanks to present and past authors of the tests, libraries, and tools that have become what we call FrobOS. The list is long, but includes the past and present Monitor team, FrobOS maintainers, and Monitor reliability: Rakesh Agarwal, Mark Alexander, Kelvin Fong, Paul Leisy, Ankur Pai, Vu Tran, and Ying Yu.

Many thanks to Alex Garthwaite and Rakesh Agarwal for their guidance in writing this article, the reviewers without whom there would be many more mistakes of all sorts, and Mark Sheldon for the real SVGA enablement test.

1Perforce spelunking suggests the first check-in was more than 10 years ago.