Ryan Schachte's Blog
Linux VMs for ARM CPUs using Multipass, Terraform & Ansible
Hacking the Multipass Go lib and Terraform to support Linux VMs on ARM
12 min read
2.19.24

The increasing presence of ARM CPUs in the consumer market, notably driven by Apple’s introduction of its ARM-based M-line processors (M1, M2, etc.), has influenced the computing landscape for better or for worse. This shift has presented frustrating challenges for developers, particularly those using Vagrant for environment orchestration, due to its dependency on VirtualBox—a tool primarily designed for x86 architecture and not fully optimized for ARM. While VirtualBox has initiated support for ARM, particularly for Apple’s silicon, it remains a work in progress, leading developers to seek alternative solutions or adjust their workflows.

With such lacking support for simple, configurable VMs on Apple silicon, I almost gave up on the whole thing… but then I found Multipass! Multipass comes from the folks over at Canonical, who also happen to be responsible for Ubuntu, one of the most popular Linux distributions we have today.

Automation woes with Terraform and Multipass

There are two primary libraries I was interested in to achieve VM automation on ARM:

  1. go-multipass, a Go client SDK that wraps the Multipass binary
  2. The Multipass Terraform provider, which builds on that SDK

Big shoutout to Lars Tobias Skjong-Børsting for continuing development on the fork of go-multipass and pushing the Terraform provider!

While this is great out of the box, the biggest issue I noticed was that I could not easily automate static IP assignments for the development machines. This is incredibly annoying when you’re using Ansible because I assign the IP addresses to different machine groups. If they keep changing every time the VMs are torn down, I’m constantly editing the inventory files.

Let’s take a look at why:

hosts.dev.ini
[master]
node1 ansible_host=192.168.64.97 node_type=master ansible_ssh_common_args='-o StrictHostKeyChecking=no'
 
[workers]
node2 ansible_host=192.168.64.98 node_type=worker ansible_ssh_common_args='-o StrictHostKeyChecking=no'
node3 ansible_host=192.168.64.99 node_type=worker ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Within Ansible, you use this idea of an inventory file to specify which hosts you want to run your playbooks against. As you can see, I have static IPs pre-defined in the hosts file.
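To illustrate why hardcoded IPs are so load-bearing here, the mapping Ansible derives from an INI inventory can be approximated with a rough parser. This is a minimal sketch with names of my own choosing, and it ignores ranges, group children, and most real inventory features; it only pulls `ansible_host` values per group:

```go
package main

import (
	"fmt"
	"strings"
)

// hostsByGroup does a rough parse of an Ansible-style INI inventory,
// mapping each [group] header to the ansible_host values beneath it.
func hostsByGroup(inv string) map[string][]string {
	groups := map[string][]string{}
	current := ""
	for _, line := range strings.Split(inv, "\n") {
		line = strings.TrimSpace(line)
		// A [group] header switches which group subsequent hosts belong to.
		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			current = strings.Trim(line, "[]")
			continue
		}
		for _, tok := range strings.Fields(line) {
			if strings.HasPrefix(tok, "ansible_host=") && current != "" {
				groups[current] = append(groups[current], strings.TrimPrefix(tok, "ansible_host="))
			}
		}
	}
	return groups
}

func main() {
	inv := "[master]\nnode1 ansible_host=192.168.64.97\n\n[workers]\nnode2 ansible_host=192.168.64.98"
	fmt.Println(hostsByGroup(inv)["workers"]) // [192.168.64.98]
}
```

If the VM behind `node2` comes back with a different address, this mapping silently points at the wrong (or a dead) host, which is exactly the failure mode described below.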

When using Multipass out of the box, I noticed that spinning the machines up and down changed the IP addresses and MAC addresses every single time! This non-deterministic behavior broke my entire Ansible workflow because of my reliance on consistent IP addresses in the inventory file. Having to remember to update the hosts file every time I create VMs would add way too much friction to my development experience.

After doing a bit of research, I thought this was probably just an issue with the DHCP server and how IP addresses were being assigned to the machines. It seemed like I could simply modify the entries in /var/db/dhcpd_leases manually for the associated nodes, but that just left me with duplicate entries for nodes with the same name.

Launch machine: multipass launch --name node2

/var/db/dhcpd_leases
{
        name=node2
        ip_address=192.168.64.39
        hw_address=1,52:54:0:1c:76:ab
        identifier=1,52:54:0:1c:76:ab
        lease=0x65d562c2
}
{
        name=node2
        ip_address=192.168.64.38
        hw_address=1,52:54:0:b2:22:ce
        identifier=1,52:54:0:b2:22:ce
        lease=0x65d41109
}
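To see the problem concretely, here is a minimal Go sketch (the function name is my own) that scans dhcpd_leases-style text and reports any names that appear in more than one lease block:

```go
package main

import (
	"fmt"
	"strings"
)

// duplicateLeaseNames scans dhcpd_leases-style text and returns the names
// that appear in more than one lease block, in first-seen order.
func duplicateLeaseNames(leases string) []string {
	counts := map[string]int{}
	var order []string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "name=") {
			name := strings.TrimPrefix(line, "name=")
			if counts[name] == 0 {
				order = append(order, name)
			}
			counts[name]++
		}
	}
	var dupes []string
	for _, name := range order {
		if counts[name] > 1 {
			dupes = append(dupes, name)
		}
	}
	return dupes
}

func main() {
	sample := "{\n\tname=node2\n\tip_address=192.168.64.39\n}\n{\n\tname=node2\n\tip_address=192.168.64.38\n}"
	fmt.Println(duplicateLeaseNames(sample)) // [node2]
}
```

Run against the file above, this flags node2 twice: two lease blocks, two MAC addresses, and no way to know which entry the next boot will honor.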

I also tried various modifications with cloud-init files to pre-generate a network interface with a specific IP, but didn’t have luck with that approach either.

Getting static IPv4 assignments working with Multipass

Fortunately, I came across this article - How to configure static IPs for Multipass - but it was specific to Linux systems. While Mac and Linux are both Unix-based systems, their network interface tooling is wildly different. On an M1 Mac, Multipass relies on the QEMU backend driver, which only supports physical interfaces, whereas the article deals with virtual interfaces.

Luckily for me, I learned you can view the physical network interfaces Multipass supports directly with multipass networks.

$ multipass networks
Name   Type       Description
en0    wifi       Wi-Fi
en4    ethernet   Ethernet Adapter (en4)
en5    ethernet   Ethernet Adapter (en5)
en6    ethernet   Ethernet Adapter (en6)

In my case, I have 4 physical network interfaces available, so I figured I could just leverage en0 as my interface and create an instance locally.

multipass launch --name schachte --network name=en0,mode=manual,mac="52:54:00:4b:ac:bb"

Let’s see if this did anything by invoking ip a on the new VM.

$ multipass exec -n schachte -- ip a
 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:c2:4e:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.42/24 metric 100 brd 192.168.64.255 scope global dynamic enp0s1
       valid_lft 86357sec preferred_lft 86357sec
    inet6 fdf3:de0d:f4a7:1f16:5054:ff:fec2:4e15/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591960sec preferred_lft 604760sec
    inet6 fe80::5054:ff:fec2:4e15/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:54:00:4b:ac:bb brd ff:ff:ff:ff:ff:ff

As you can see, the enp0s2 interface is assigned a MAC address of 52:54:00:4b:ac:bb!

Ok, I knew I was getting somewhere, but I still needed to assign a static IP address to the machine, which can be done via netplan.

Apply a Netplan file manually against the VM
multipass exec -n schachte -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
    version: 2
    ethernets:
        extra0:
            dhcp4: no
            match:
                macaddress: "52:54:00:4b:ac:bb"
            addresses: [192.168.64.90/24]
EOF'

Notice in this case, I’m turning off DHCP and assigning the static IPv4 address of 192.168.64.90 to the node. This can be applied via multipass exec -n schachte -- sudo netplan apply.
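The Netplan document here is really just a small template over two values, the MAC address and the CIDR, so it is easy to generate programmatically. A minimal Go sketch (the function name is mine, not part of go-multipass; "extra0" is just a local label for the matched interface):

```go
package main

import "fmt"

// netplanForStaticIP renders a minimal Netplan document: DHCP disabled and a
// static CIDR address pinned to whichever interface matches the given MAC.
func netplanForStaticIP(mac, cidr string) string {
	return fmt.Sprintf(`network:
    version: 2
    ethernets:
        extra0:
            dhcp4: no
            match:
                macaddress: "%s"
            addresses: [%s]
`, mac, cidr)
}

func main() {
	fmt.Print(netplanForStaticIP("52:54:00:4b:ac:bb", "192.168.64.90/24"))
}
```

Generating the file this way is what makes the later Terraform step mechanical: each node just needs its own (MAC, CIDR) pair.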

Let’s see if this took effect -

$ multipass info schachte
Name:           schachte
State:          Running
Snapshots:      0
IPv4:           192.168.64.41
                192.168.64.90
Release:        Ubuntu 22.04.3 LTS
Image hash:     f1c6fc0bb527 (Ubuntu 22.04 LTS)
CPU(s):         1
Load:           0.23 0.05 0.02
Disk usage:     1.6GiB out of 3.3GiB
Memory usage:   148.4MiB out of 1.9GiB
Mounts:         --

Perfect! I see 192.168.64.90. You can verify connectivity by sending an ICMP request to it via ping.

ping 192.168.64.90
 
PING 192.168.64.90 (192.168.64.90): 56 data bytes
64 bytes from 192.168.64.90: icmp_seq=0 ttl=64 time=27.466 ms
64 bytes from 192.168.64.90: icmp_seq=1 ttl=64 time=1.086 ms
64 bytes from 192.168.64.90: icmp_seq=2 ttl=64 time=1.074 ms

As I’m getting responses and not timeouts, I know that this is looking good.

Terraform and Multipass

Well, that was a lot of manual typing, and my initial goal was to automate my nodes via Terraform; so far, there hasn’t been any Terraform involved. Looking at the provider documentation, there is a glaring problem: no networking options are exposed out of the box. :(


This is problematic, as we need to specify both the network interface to initialize the node with and a static MAC address, so that Multipass doesn’t assign one at random.

Hacking into the go-multipass SDK

As mentioned previously, the Terraform provider uses a Go client SDK under the hood that interfaces directly with Multipass. You can check it out here: https://github.com/larstobi/go-multipass.

The functionality of the lib is pretty straightforward: it takes a set of parameters that are fed into the Multipass binary.

When I was reading this, I noticed a big, glaring problem: the networks feature isn’t supported! This can be added pretty easily with:

launcher.go
type LaunchReqV2 struct {
	Image            string
	CPUS             string
	Disk             string
	Name             string
	Memory           string
	CloudInitFile    string
	MacAddress       string
	NetworkInterface string
}

This gives us the ability to specify an optional MAC address for static MAC address assignments to our VMs as well as a physical network interface available on our dev machines.

Additionally, we can add support for appending the options into the binary when invoked:

launcher.go
if launchReqV2.NetworkInterface != "" {
    if launchReqV2.MacAddress != "" {
        args = append(args, "--network", fmt.Sprintf("name=%s,mode=manual,mac=%s",
            launchReqV2.NetworkInterface,
            launchReqV2.MacAddress,
        ))
    } else {
        // MAC address randomly assigned during VM initialization in this case
        args = append(args, "--network", fmt.Sprintf("name=%s,mode=manual",
            launchReqV2.NetworkInterface,
        ))
    }
}

This is pretty hacky. A more cautious measure would be to pre-validate the list of available network interfaces via the multipass networks command, so we can surface a verbose error early if the interface is invalid.
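That pre-validation could look something like the sketch below. The function names are my own; in real use you would capture the output of `multipass networks` with `exec.Command("multipass", "networks")`, but the parsing is kept as a pure function here so it is easy to test against the tabular output shown earlier:

```go
package main

import (
	"fmt"
	"strings"
)

// parseNetworks extracts interface names from `multipass networks` output:
// a header row followed by rows whose first column is the interface name.
func parseNetworks(out string) []string {
	var names []string
	for i, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if i == 0 {
			continue // skip the "Name Type Description" header
		}
		fields := strings.Fields(line)
		if len(fields) > 0 {
			names = append(names, fields[0])
		}
	}
	return names
}

// validInterface reports whether iface is one of the names Multipass lists,
// so we can fail early with a clear error instead of an opaque launch failure.
func validInterface(iface string, names []string) bool {
	for _, n := range names {
		if n == iface {
			return true
		}
	}
	return false
}

func main() {
	sample := "Name   Type       Description\nen0    wifi       Wi-Fi\nen4    ethernet   Ethernet Adapter (en4)"
	names := parseNetworks(sample)
	fmt.Println(validInterface("en0", names), validInterface("br0", names)) // true false
}
```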

Extending the Multipass provider networking support

Now that we’ve modified the binary that handles interfacing with Multipass under the hood, we need to add support for it on the Terraform provider side.

The first step was to update the schemas that TF will look at when validating input parameters to the resource. Static analysis will complain otherwise and the input parameters will not be known to the resource. In my case, I added the mac_address and network_interface parameters to the resource.

resource_instance.go
"mac_address": {
    MarkdownDescription: "Custom MAC address to assign to the VM instance",
    Type:                types.StringType,
    Optional:            true,
    PlanModifiers: []tfsdk.AttributePlanModifier{
        tfsdk.RequiresReplace(),
    },
},
"network_interface": {
    MarkdownDescription: "Set custom network interface from \"multipass networks\"",
    Type:                types.StringType,
    Optional:            true,
    PlanModifiers: []tfsdk.AttributePlanModifier{
        tfsdk.RequiresReplace(),
    },
},

Additionally, without showing all the granular changes, I wanted to ensure we initialize the call to the Multipass SDK lib with the TF input parameters, which can be done easily in the same file.

resource_instance.go
_, err := multipass.LaunchV2(&multipass.LaunchReqV2{
    Name:             plan.Name.Value,
    Image:            plan.Image.Value,
    CPUS:             cpus,
    Memory:           plan.Memory.Value,
    Disk:             plan.Disk.Value,
    CloudInitFile:    plan.CloudInitFile.Value,
    NetworkInterface: plan.NetworkInterface.Value,
    MacAddress:       plan.MacAddress.Value,
})

This pipes the input parameters from Terraform into the backend lib that interfaces with Multipass to then initialize the VM with a predefined MAC address and network interface at boot time!

VM automation with Terraform

Leveraging our newly modified provider, we can automate this in 2 phases.

main.tf
module "multipass" {
  count  = terraform.workspace == "dev" ? 1 : 0
  source = "./modules/multipass"
 
  instance_names = ["node1", "node2", "node3"]
  ip_addresses   = ["192.168.64.97/32", "192.168.64.98/32", "192.168.64.99/32"]
  mac_addresses  = ["52:54:00:4b:ab:bd", "52:54:00:4b:ab:cd", "52:54:00:4b:ab:dd"]
  cpus           = 1
  memory         = "2G"
  disk           = "3.5G"
  image          = "22.04"
}

In my main.tf, I wanted to ensure that this module was only invoked when a workspace called dev was active. From here, I pre-define the host names, MAC addresses, and IP addresses to use during the initialization step.

Create a dev workspace to use
terraform workspace new dev
terraform workspace select dev

Since Ansible is agentless and uses SSH, I wanted passwordless authentication to the nodes to work automatically. This meant I needed to update each node’s authorized_keys file with a public key so Ansible could connect. For this, I decided to leverage a cloud-init file to automate the setup.

./templates/cloud-init.yaml.tpl
#cloud-config
users:
  - name: schachte
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa <PUBLIC_KEY>

I use this template file to hold any cloud-init values I want interpolated when the plan runs.

In this case, I hardcode the public key. However, cloud-init is highly configurable, and this is a good opportunity to make the public key a variable, along with any other install parameters that are dynamic.
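As a sketch of that parameterization (written in Go purely for illustration, rather than via Terraform's templatefile; the function name is mine), the template collapses to a small format string over the user and key:

```go
package main

import "fmt"

// cloudInitForKey renders the cloud-config above with the user and public
// key as parameters instead of hardcoded values.
func cloudInitForKey(user, pubKey string) string {
	return fmt.Sprintf(`#cloud-config
users:
  - name: %s
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - %s
`, user, pubKey)
}

func main() {
	fmt.Print(cloudInitForKey("schachte", "ssh-rsa AAAA... dev@laptop"))
}
```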

./modules/multipass/main.tf
resource "local_file" "cloudinit" {
  for_each = { for i, name in var.instance_names : name => {} }

  filename = "${path.module}/cloud-init-${each.key}.yaml"
  # templatefile requires a vars map; empty here since the key is hardcoded
  content  = templatefile("${path.module}/templates/cloud-init.yaml.tpl", {})
}

Since the provider doesn’t support inlining the contents of cloud-init directly, I decided to persist an interpolated template to disk for each node I configured. These files are created and destroyed automatically when I apply and destroy the plan.

.rwxr-xr-x schachte staff 854 B Mon Feb 19 18:40:29 2024 cloud-init-node1.yaml
.rwxr-xr-x schachte staff 854 B Mon Feb 19 18:40:29 2024 cloud-init-node2.yaml
.rwxr-xr-x schachte staff 854 B Mon Feb 19 18:40:29 2024 cloud-init-node3.yaml

As you can see above, the generated files end up adjacent to the module files. Now, it’s time to actually create the machines leveraging our work from above.

./modules/multipass/main.tf
resource "multipass_instance" "dev_vm" {
  for_each = { for i, name in var.instance_names : name => {
    ip_address  = var.ip_addresses[i]
    mac_address = var.mac_addresses[i]
  } }
 
  name   = each.key
  cpus   = var.cpus
  memory = var.memory
  disk   = var.disk
  image  = var.image
 
  cloudinit_file    = local_file.cloudinit[each.key].filename
  network_interface = "en0"
  mac_address       = each.value.mac_address
 
  provisioner "local-exec" {
    command = <<-EOT
      multipass exec ${each.key} -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
      network:
        version: 2
        ethernets:
          extra0:
            dhcp4: no
            match:
              macaddress: "${each.value.mac_address}"
            addresses: ["${each.value.ip_address}"]
      EOF'
      multipass exec ${each.key} -- sudo netplan apply
    EOT
  }
}

In the above, you’ll notice we are leveraging the new fields: network_interface and mac_address.

This initial step will create the VMs like we did manually at the beginning. Similar to the article on configuring static IPs for Multipass, I’ve codified the Netplan assignments via a standard command:

  1. Create a new Netplan file that assigns a static IP for the associated MAC address we defined
  2. Apply the new Netplan file on the node directly

I think this could be further automated in the lib, but I decided to keep this piece simple.

Verifying connection with Ansible

Now that the nodes should be running after applying the Terraform plan, we can validate Ansible can connect to them successfully.

Verifying SSH connection with Ansible
ansible all -m ping -i inventory/inventories.dev.ini
 
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
node3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

I have my changes in a WIP branch, but you can view them here:

Feel free to reach out if you have any questions!
