Setup of persistent docker node on ARM M1 MacBook

Callum Smith
6 min read · Dec 2, 2020

Since the launch of the new M1 MacBook Pro, I think many of us who use Macs for our daily development have been excited to take advantage of the octa-core power of the M1 chip, not least to try to curb those lengthy container build times.

There is one problem, though: Docker does not work on the M1 chip.

Docker depends on Go at its core, and the Go team has no intention of disrupting its regimented release schedule to rush in support for the new M1 chip. It has, however, committed to shipping M1 support in version 1.16, due in February 2021; you can track the progress of that release on GitHub. Additionally, the virtualisation layer within macOS has changed on the M1 chips and will likely require some tweaks within the Docker app to fully integrate this new system.

What to do next? Well, when you run Docker on a Mac you are in fact running a virtual machine with a lightweight Linux kernel; that VM runs the Docker stack and provides an interface so you can work with your containers transparently. This gives us an opening: using the new virtualisation framework and a lightweight Linux image, we can run Docker ourselves inside a VM that we manage manually.

Below I’ve compiled some excellent guides and GitHub issues into a single coherent tutorial that should get you up and running with Docker on your M1 Mac while the component pieces of the Docker app are updated for us. Waiting for these things is inevitable when we depend on open source developers to maintain these critical tools.

First off: install Xcode from the App Store. This takes a little while, so it is a good idea to get it going early.

You will also need Homebrew installed. I recommend the stable Homebrew compiled for x86_64 rather than the experimental ARM branch.

arch -arch x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Next you will want to get a disk image, initrd and kernel file for a Linux distribution. In this guide I have used the Ubuntu cloud image; you can choose another distribution, but the process of getting it up and running may differ. Download the three files below, either in the browser or with curl as shown after the links.

Kernel:
https://cloud-images.ubuntu.com/releases/focal/release/unpacked/ubuntu-20.04-server-cloudimg-arm64-vmlinuz-generic
Initrd:
https://cloud-images.ubuntu.com/releases/focal/release/unpacked/ubuntu-20.04-server-cloudimg-arm64-initrd-generic
Disk image:
https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-arm64.tar.gz
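
If you prefer to stay in the terminal, something like the following should fetch all three (the same URLs as above; if Ubuntu has rolled a newer point release since, adjust the file names to match):

cd ~/Downloads
curl -LO https://cloud-images.ubuntu.com/releases/focal/release/unpacked/ubuntu-20.04-server-cloudimg-arm64-vmlinuz-generic
curl -LO https://cloud-images.ubuntu.com/releases/focal/release/unpacked/ubuntu-20.04-server-cloudimg-arm64-initrd-generic
curl -LO https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-arm64.tar.gz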

Collect these files together in a folder; I chose ~/VM/docker-ubuntu-cloud/

mkdir -p ~/VM/docker-ubuntu-cloud
cd ~/VM/docker-ubuntu-cloud
mv ~/Downloads/ubuntu-20.04-server-cloudimg-arm64-vmlinuz-generic vmlinuz.gz
mv ~/Downloads/ubuntu-20.04-server-cloudimg-arm64-initrd-generic initrd
mv ~/Downloads/ubuntu-20.04-server-cloudimg-arm64.tar.gz diskimg.tar.gz

You will want to decompress the kernel file, extract the disk image and tidy up the image tarball.

gunzip vmlinuz.gz
tar xvzf diskimg.tar.gz
rm diskimg.tar.gz

Accept the license agreement for the xcodebuild tool

sudo xcodebuild -license

Clone the vftool command line tool from GitHub, build it with Xcode and copy the binary to your local bin.

git clone https://github.com/evansm7/vftool
cd vftool
xcodebuild
cd ../
cp vftool/build/Release/vftool /usr/local/bin/
source ~/.zshrc

Start the server using vftool

vftool -k vmlinuz -i initrd -d focal-server-cloudimg-arm64.img -m 4096 -a "console=hvc0"

The output will look something like this:

vftool[43462:425638] vftool (v0.1 25/11/2020) starting
vftool[43462:425638] +++ kernel at vmlinuz -- file:///Users/xi0s/VM/ubuntu-cloud-docker/, initrd at initrd -- file:///Users/xi0s/VM/ubuntu-cloud-docker/, cmdline 'console=hvc0', 1 cpus, 4096MB memory
vftool[43462:425638] +++ fd 3 connected to /dev/ttys001
vftool[43462:425638] +++ Waiting for connection to: /dev/ttys001

Connect to your VM using screen, making note of which tty socket is actually connected to your VM. You will need to do this in a new terminal tab/window, separate from the one running your VM.

screen /dev/ttys001

The Ubuntu cloud image is configured to be provisioned by cloud-init by default. We will instead provision the image manually, setting the root password along with SSH host keys and networking. Run the following commands inside your VM to get it ready for the next step.

mkdir /mnt
mount /dev/vda /mnt
chroot /mnt
touch /etc/cloud/cloud-init.disabled
echo 'root:root' | chpasswd
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
cat <<EOF > /etc/netplan/01-dhcp.yaml
network:
  renderer: networkd
  ethernets:
    enp0s1:
      dhcp4: true
  version: 2
EOF
exit
umount /dev/vda

Close your VM (press Ctrl+C inside the terminal window that is running your VM) and expand the disk image so there is room to install and run everything else. You will need to install the qemu package via Homebrew for this step.

arch -arch x86_64 brew install qemu
qemu-img resize focal-server-cloudimg-arm64.img +5G

Now start the server again, this time passing root=/dev/vda so the kernel boots from the disk.

vftool -k vmlinuz -i initrd -d focal-server-cloudimg-arm64.img -m 4096 -a "console=hvc0 root=/dev/vda"

Connect again using screen. Your tty socket may have changed, so be sure to double check it.

screen /dev/ttys001

Now log in using your new root credentials (username root, password root).

Resize the filesystem inside the VM so it uses the newly added space.

resize2fs /dev/vda

Install Docker. Below is the list of commands from the Docker install guide for Ubuntu on ARM (arm64).

apt-get update
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
"deb [arch=arm64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
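
On Ubuntu the docker-ce package should start and enable the daemon for you automatically. If you want to confirm that before moving on, a quick check looks something like this (the second command is only needed if the first does not report "active"):

systemctl is-active docker
systemctl enable --now docker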

Create a user account to log in with and add it to the docker group. Here I use the username docker; the -g docker flag makes docker the primary group and the -m flag ensures a home directory is created.

useradd -g docker -m docker

Log in to that user account and test that Docker is working.

su - docker
docker run hello-world

If you don’t already have an SSH key configured, generate one on your local Mac. You will want to do this in a new terminal window/tab on your local machine.

ssh-keygen

Copy your public key to your clipboard.

cat ~/.ssh/id_rsa.pub | pbcopy

Now go back to your VM screen session and set up the authorized_keys file for the docker user.

mkdir ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Now paste your public key into the file

vi ~/.ssh/authorized_keys

Get the IP address of your running VM.

ip a | grep 192.168

An example output is below:

inet 192.168.64.6/24 brd 192.168.64.255 scope global dynamic enp0s1

From your local machine, now test SSH to your VM (be sure to say yes to the host key prompt). You will need this entry in your known_hosts file for docker commands to work correctly.

ssh docker@192.168.64.6
exit

If you don’t already have it, you will now need to install the Docker client on your local machine. You can do this using brew or by compiling it manually via the instructions on the Docker website.
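
The Homebrew route is probably the quickest. Assuming the x86_64 Homebrew installed earlier, something like this should give you the docker CLI (the docker formula installs only the client, not the Docker Desktop app):

arch -arch x86_64 brew install docker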

Configure docker to connect to your new machine over ssh

docker context create dockervm --docker "host=ssh://docker@192.168.64.6"
docker context use dockervm

Now you can run Docker containers from your local machine, connecting directly to your VM. Let’s test that.

docker run hello-world

Now you can move on to more complex services! Note that, unlike a traditional Docker setup, you won’t be able to use localhost to reach services running on your VM. So if you run an nginx container, expect it to be reachable on the IP address of your VM.

docker run -p 80:80 nginx:latest

You will need to open the IP of your VM in the browser: http://192.168.64.6
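
You can also sanity-check it from the terminal, substituting your own VM IP for the example address:

curl -I http://192.168.64.6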

And that’s about it!

Update: Just a note that the DHCP server does not seem to hand out long leases, so if you turn your VM off overnight it’s likely you will come back to a new IP address. It may be easier to assign a static IP address rather than using DHCP for a more consistent configuration; a rough sketch of one way to do that is below.
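
As a sketch of the static approach (the addresses here are assumptions based on the example 192.168.64.0/24 subnet above; check the gateway and range macOS actually hands out on your machine before copying this), you could replace the DHCP netplan file inside the VM with a static one and re-apply:

rm -f /etc/netplan/01-dhcp.yaml
cat <<EOF > /etc/netplan/01-static.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s1:
      dhcp4: false
      addresses: [192.168.64.6/24]
      gateway4: 192.168.64.1
      nameservers:
        addresses: [192.168.64.1]
EOF
netplan apply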
