
How to deploy virtual machines

Virtual machines bring tremendous advantages to MAAS. LXD is our primary VM host technology, so everything about MAAS VMs is optimised for LXD. For reference, VM hosts are called “KVMs” in the MAAS web UI.

If KVMs and LXD VMs are not new to you, feel free to go ahead and set up LXD, create one or more VM hosts, and start deploying virtual machines to cover your workload. The rest of this article provides a little theory about MAAS VM hosts, just in case you need to catch up.

MAAS VM hosting

MAAS VM hosts allow for the dynamic composition of nodes from a pool of available hardware resources (e.g. disk space, memory, cores). You can create virtual machines (VMs) as needed within the limits of your resources, without concern for physical hardware.

This theory section covers the essentials. MAAS currently supports two ways of creating VM hosts and VMs: LXD, the preferred method, and libvirt, which remains supported as a legacy offering.

About VM hosts

A VM host is simply a machine that can run virtual machines (VMs) by dividing its resources (CPU cores, RAM, storage) among the VMs you want to create, based on choices you make when composing each VM. If needed, you can over-commit resources, allocating more than is physically available, so long as the running VMs never try to use more than the host can supply at any one time. Once MAAS has enlisted, commissioned, and allocated a newly-added machine, you can deploy it as a VM host. Alternatively, you can create a VM host from a machine you’ve already got running.
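
Both routes can also be taken from the CLI. Here is a minimal sketch, assuming a MAAS profile named admin; the register_vmhost deploy parameter and the vm-hosts create parameters shown are assumptions to check against your MAAS version:

    # Deploy an already-commissioned machine as an LXD VM host
    # (register_vmhost is assumed to be available in your MAAS release).
    maas admin machine deploy $SYSTEM_ID register_vmhost=true

    # Or register an LXD server you already run as a VM host.
    maas admin vm-hosts create type=lxd power_address=$LXD_SERVER_IP project=default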

VM hosts are particularly useful for Juju integration, allowing dynamic allocation of VMs with custom interface constraints. Alternatively, if you would like to use MAAS to manage a collection of VMs directly, the web UI lets you easily create and manage VMs, logically grouped by VM host.


About VM host storage pools

“Storage pools” are storage resources managed by a VM host. A storage pool is a given amount of storage set aside for use by VMs. A pool can be organised into storage volumes, assigned to VMs as individual block devices.

NOTE: For LXD VM hosts, each VM can be assigned a single block device from the storage pool.
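
If you manage the LXD server yourself, you can create and inspect storage pools with the standard LXD client before (or after) registering the host with MAAS. A minimal sketch, assuming a ZFS-backed pool named maas-pool (both the name and the backend are example choices):

    # Create a 200 GiB ZFS-backed storage pool for MAAS-composed VMs.
    lxc storage create maas-pool zfs size=200GiB

    # Confirm the pool exists and check its usage.
    lxc storage list
    lxc storage info maas-pool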

The MAAS web UI displays information about each VM host’s storage pools, so you can understand your resource usage at a glance.

About LXD VM hosts

About LXD VM host authentication

MAAS 3.1 provides a smoother experience when connecting an existing LXD server to MAAS, guiding the user through the manual steps and increasing connection security through the use of certificates. Currently, each MAAS region/rack controller has its own certificate. To add an LXD VM host to MAAS, the user must either add the certificate of every controller that can reach the LXD server to the trust list in LXD, or use the trust_password (in which case the controller talking to LXD automatically adds its certificate to the trust).

Neither option provides a great user experience: the former process is cumbersome, and the latter is not recommended for production use for security reasons. To improve this, MAAS 3.1 manages a key/certificate pair per LXD host and provides a way for users to retrieve the certificate content, so they can authorise MAAS in LXD.

About on-the-spot certificate creation

As a MAAS user, you want to register an LXD host with MAAS using certificates for authentication, following LXD production deployment best practices. The standard way for clients to authenticate with LXD servers is through certificates; the trust_password is only meant for initial setup.

While prior versions of MAAS support both authentication methods (and automatically add the certificate of the rack talking to LXD when registering the VM host), the user experience is lacking, since there is no control over which certificate is used. In addition, each rack uses a different certificate, making it hard to manage scenarios where multiple racks can connect to the same LXD server.

For these reasons, when adding an LXD host, MAAS 3.1 can generate a secret key and certificate pair specifically for that server and shows the certificate to the user, so that they can add it to the LXD server’s trust list.
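
On the LXD server, adding the MAAS-generated certificate to the trust list is a single command. A minimal sketch, assuming the certificate has been saved as maas-cert.crt:

    # Add the certificate MAAS generated for this LXD server to the trust list.
    # On newer LXD releases the subcommand is `add-certificate` rather than `add`.
    lxc config trust add maas-cert.crt

    # Verify it now appears among the trusted clients.
    lxc config trust list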

About bringing your own certificates

As a MAAS user, you may want to register an LXD host with MAAS by providing the private key for a certificate that is already trusted by the LXD server. For example, if you have already set up a certificate in the server’s trust list for MAAS to use, MAAS should provide a way to import it instead of generating a new one.

With MAAS 3.1, it’s possible to import an existing key/certificate pair for use with a LXD server when registering it with MAAS. MAAS stores the key/certificate instead of generating new ones.

NOTE: The imported key must not have a passphrase; otherwise, MAAS will not be able to use it.
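
A minimal CLI sketch of registering an LXD server with an existing key/certificate pair might look like the following; the certificate and key parameter names are assumptions to be checked against your MAAS version:

    # Register an existing LXD server, supplying a passphrase-less key and its
    # certificate, which is already in the LXD server's trust list.
    maas admin vm-hosts create type=lxd \
        power_address=$LXD_SERVER_IP \
        project=default \
        certificate="$(cat maas-cert.crt)" \
        key="$(cat maas-key.pem)"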

About LXD VM host project summaries

Each LXD VM host provides a “Project” tab that summarises the current state of the LXD KVM.

This tab identifies the project, shows its current resource state, and lets you select existing VMs and perform specific actions on them – as well as compose new VMs on the spot.

About LXD VM host resource details

This tab presents a summary of the LXD VM host’s resource usage.

The only interactive option on this tab allows you to map or unmap resource usage to NUMA nodes.

About VM host settings

VM hosts have several settings. Modify these by selecting the ‘Settings’ tab and editing items directly. Options include a VM host’s address, password, network zone, resource pool, and memory and CPU overcommit sliders.
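
Most of these settings can also be changed from the CLI. A minimal sketch, assuming a VM host ID in $VM_HOST_ID and the parameter names shown (check them against your MAAS version):

    # Adjust a VM host's resource pool, zone, and overcommit ratios.
    maas admin vm-host update $VM_HOST_ID \
        pool=default \
        zone=default \
        cpu_over_commit_ratio=2 \
        memory_over_commit_ratio=1.5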

About VMs and NUMA

MAAS provides extensive optimisation tools for using NUMA with virtual machines. Earlier versions of MAAS guaranteed that machines were assigned to a single NUMA node containing all of the machine’s resources. As of 2.9, MAAS also lets you see how many VMs are allocated to each NUMA node, along with the allocation of cores, storage, and memory. You can quickly spot a VM running across multiple NUMA nodes and optimise accordingly, with instant updates on pinning and allocations. You can also tell which VMs are currently running.

In addition, you can get a bird’s-eye view of the network configuration.

MAAS also shows hugepages information (if they are in use) and prevents overcommit when using them. Hugepages let the kernel map memory in much larger pages, which reduces pressure on a core’s address-translation cache (TLB); but because memory is then reserved and moved in hugepage-sized chunks, optimising their usage can be complex. MAAS helps you make these optimisations by giving you a discrete view of the hugepages associated with your VM, helping you decide whether you need to use them or not.

About support for NUMA, SR-IOV, and hugepages

VM host management has been redesigned to support NUMA/SR-IOV configurations and hugepages from the API/CLI.

Via the CLI, users can see more details about NUMA-bearing VM host resources and configure hugepages when composing VMs.
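
A minimal sketch of what that might look like from the CLI, assuming compose parameters named pinned_cores and hugepages_backed (check these against your MAAS version):

    # Inspect a VM host's resources, including per-NUMA-node breakdowns.
    maas admin vm-host read $VM_HOST_ID

    # Compose a VM pinned to specific cores and backed by hugepages.
    maas admin vm-host compose $VM_HOST_ID \
        cores=4 \
        memory=8192 \
        pinned_cores=0,1,2,3 \
        hugepages_backed=true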

About over-committed resources

Over-committed resources are those allocated beyond what is physically available. Using the sliders on the configuration page, you can control how far MAAS will overcommit CPU and memory. The input fields to the right of the sliders accept floating-point values from 0 to 10, with a default value of 1.

The following shows four theoretical examples of these ratios and how they affect physical resource allocation:

  1. 8 physical CPU cores * 1 multiplier = 8 virtual CPU cores
  2. 8 physical CPU cores * 0.5 multiplier = 4 virtual CPU cores
  3. 32 physical CPU cores * 10.0 multiplier = 320 virtual CPU cores
  4. 128 GB physical memory * 5.5 multiplier = 704 GB virtual memory

Over-committing resources allows a user to compose many MAAS-managed machines without worrying about the physical limitations of the host. For example, on a physical host with four cores and 12 GB of memory, you could compose four libvirt machines, each using two cores and 4 GB of memory. This arrangement over-commits the available physical resources. Provided you never run all four VMs simultaneously, you would have all the benefits of MAAS-managed VMs without over-taxing your host.
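
A minimal CLI sketch of that example, assuming a VM host whose overcommit ratios have been raised enough to permit the allocation:

    # Compose four 2-core / 4 GB VMs on a 4-core / 12 GB host.
    # This relies on the host's overcommit ratios allowing the extra allocation.
    for i in 1 2 3 4; do
        maas admin vm-host compose $VM_HOST_ID cores=2 memory=4096
    done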