Encrypted ZFS Root on Ubuntu Server 20.04 (with USB Unlock)

It’s all nice and dandy to set up unencrypted ZFS on a server, or to set it up with boot encryption. However, what if we want to use a USB drive to unlock the encrypted disk? And no, it’s not as crazy as it seems - the scenarios are actually plentiful.

One scenario is when you have your servers encrypted (as realistically everybody should) but you don’t necessarily want to (or cannot) enter the password yourself. If you can plug in a USB drive with a key and make ZFS use that key, you suddenly have password-less boot without connecting to the server. Once boot is done, you can unplug the USB drive and store it somewhere safe. And this can be done by literally anybody you trust - it doesn’t have to be you.

My favorite scenario is using it with a self-erasing USB drive. You place the encryption key on the small drive and it will be there for every boot. You can use your server as you normally would. However, if power is lost, your key will disappear and the content of the server will not be accessible anymore. When would such a crazy scenario happen, you ask? Well, anybody stealing your server has to unplug it first. And yes, your server is gone, but at least your data is not.

I admit I never had that scenario happen to me - fortunately all my servers are still accounted for. But I did RMA disk drives. And worrying about erasing the data once you can no longer access the drive is a bit too late.

Whatever your case might be, let me guide you through setting up natively-encrypted ZFS with a key on a USB drive.

Once you enter the shell of the installation media, the very first step is setting up a few variables - the location of the disk and the USB drive, followed by the pool and host name. This way we can use them going forward and avoid accidental mistakes. Make sure to replace these values with the ones appropriate for your system.

DISK=/dev/disk/by-id/^^ata-xxx^^
USB=/dev/disk/by-id/^^usb-xxx^^
POOL=^^Ubuntu^^
HOST=^^server^^

Next, let’s sort out the question of the encryption key. The assumption is that the key will be on the first partition of a FAT-formatted USB drive and that we’ll mount it at /tmpusb. While you could create raw key material directly, I personally prefer a passphrase as it makes life easier in case of recovery. If you already have the passphrase on the drive, just skip the last command as it would overwrite it.

mkdir /tmpusb
mount -t vfat -o rw "$USB-part1" /tmpusb
echo -n "^^password^^" > /tmpusb/boot.pwd
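
Since ZFS will read the key file verbatim, a quick sanity check that it contains exactly what you typed (echo -n avoids a trailing newline) doesn’t hurt:

od -c /tmpusb/boot.pwd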

The general idea of my disk setup is to maximize the amount of space available for the pool with a minimum of supporting partitions. If you are installing on an SSD, blkdiscard will trim all the data. You can safely ignore any errors on disks that don’t support it.

blkdiscard $DISK 2>/dev/null

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+127M -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+512M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

To kick off the fun of the installation we need the debootstrap and zfsutils-linux packages.

apt update
apt install --yes debootstrap zfsutils-linux

Now we’re ready to create the system ZFS pool.

zpool create -o ashift=12 -o autotrim=on \
    -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=file:///tmpusb/boot.pwd \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL $DISK-part3
zfs create -o canmount=noauto -o mountpoint=/ $POOL/System
zfs mount $POOL/System
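
Before going further, it doesn’t hurt to confirm the encryption settings landed as intended:

zfs get encryption,keyformat,keylocation,keystatus $POOL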

Assuming UEFI boot, two additional partitions are needed - one for EFI and one for booting. I don’t use a ZFS pool for the boot partition but plain old ext4, as I find potential fixups work better that way.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

Bootstrapping Ubuntu on the newly created pool is next. This will take a while.

debootstrap focal /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/
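
If the installation media has no netplan file to copy, a minimal DHCP sketch will do - note that the file name and the eth0 interface name here are just placeholders, so adjust them to match your system (ip link shows the real name):

cat << EOF > /mnt/install/etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF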

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install /usr/bin/env DISK=$DISK USB=$USB POOL=$POOL bash --login

Let’s not forget to set up the locale and time zone.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

To mount the EFI and boot partitions, we need to do some fstab setup too:

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
cat /etc/fstab

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by the boot environment packages.

apt install --yes zfs-initramfs plymouth grub-efi-amd64-signed shim-signed

Now it’s time to set up boot scripts to ensure the USB drive is mounted before ZFS needs it. I found that initramfs’ init-premount directory is the ideal spot.

cat << EOF > /usr/share/initramfs-tools/scripts/init-premount/tmpusb
#!/bin/sh -e

PREREQ="udev"
prereqs() {
    echo "\$PREREQ"
}

case \$1 in
    prereqs)
        prereqs
        exit 0
    ;;
esac

USB="$USB"
POOL="$POOL"

echo "Waiting for \$USB"
for I in \`seq 1 20\`; do
    if [ -e "\$USB" ]; then break; fi
    echo -n .
    sleep 1
done
echo

sleep 2

if [ -e "\$USB" ]; then
    mkdir /tmpusb
    # Calling mount in the "if" condition keeps "sh -e" from aborting
    # the script before we get a chance to report the error.
    if mount -t vfat -o ro "\$USB-part1" /tmpusb; then
        exit 0
    else
        echo "Error mounting \$USB-part1" >&2
    fi
else
    echo "Cannot find \$USB" >&2
fi
exit 1
EOF
chmod 755 /usr/share/initramfs-tools/scripts/init-premount/tmpusb

cat << EOF > /usr/share/initramfs-tools/scripts/init-bottom/tmpusb
#!/bin/sh -e

PREREQ="udev"
prereqs() {
    echo "\$PREREQ"
}

case \$1 in
    prereqs)
        prereqs
        exit 0
    ;;
esac

if [ -e "/tmpusb" ]; then
    # Ignore a failed umount in case the premount script never mounted it
    umount /tmpusb 2>/dev/null || true
    rmdir /tmpusb
fi
EOF

chmod 755 /usr/share/initramfs-tools/scripts/init-bottom/tmpusb

The first script will wait for the USB drive if needed and mount it at /tmpusb for ZFS to find. The second script is there just for a bit of cleanup.

If the USB drive is not present, this will cause boot to fail. If we want ZFS to ask for the passphrase instead (despite having a file as the keylocation), further customization is needed. Note that these commands might need adjustment and they definitely need to be repeated each time the ZFS package is updated. I might go into the details in some future post, but suffice it to say this is really not a future-proof solution - it’s just the minimum set of changes I could make sed work with.

sed -i 's/load-key/load-key -L prompt/' /usr/share/initramfs-tools/scripts/zfs
sed -i '0,/load-key/ {s/-L prompt//}' /usr/share/initramfs-tools/scripts/zfs
sed -i '/KEYSTATUS=/i \\t\t\t$ZFS load-key "${ENCRYPTIONROOT}"' /usr/share/initramfs-tools/scripts/zfs
sed -i '/KEYSTATUS=/i \\t\t\tKEYLOCATION=prompt' /usr/share/initramfs-tools/scripts/zfs

In lieu of a longer warning, suffice it to say these changes to the zfs script are suitable only for this scenario and don’t really work for anything else.
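
Since these seds are brittle, it’s worth eyeballing the result before rebuilding the initramfs:

grep -n 'load-key' /usr/share/initramfs-tools/scripts/zfs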

Now we get grub started and update our boot environment. Due to a kernel version kerfuffle dating back to Ubuntu 19.10, we need to manually create the initramfs image. This is also a good moment to check that our script made it in.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

lsinitramfs /boot/initrd.img-$KERNEL | grep tmpusb

Grub update is what makes EFI tick.

update-grub 2>/dev/null
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy
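
If you want to double-check that an EFI boot entry was actually created, efibootmgr (assuming it got pulled in as a dependency) can show it:

efibootmgr -v | grep -i ubuntu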

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit the creation of the swap dataset but I personally find it’s good to have, just in case.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/Swap
mkswap -f /dev/zvol/$POOL/Swap
echo "/dev/zvol/$POOL/Swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

This is a good time to install other packages (e.g., openssh-server) and do any setup you might need (e.g., a firewall). If nothing else, set up the root password so you have a way to log in (I personally prefer to create another user and leave root passwordless).
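
For example, installing an SSH server so the machine is reachable right after reboot:

apt install --yes openssh-server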

passwd

As the installation is finally done, we can exit our chroot environment.

exit

And clean up the mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot

Rebooting RB1100HAx4 via Reset Button

One thing that annoyed me about my Mikrotik RB1100HAx4 router was the need to unplug the darn thing when I wanted to reboot it. It does have a reset button, but that button is just there for resetting the configuration. A simple reboot was not part of the repertoire.

Well, that changed with RouterOS 6.47. As of that version there are a few more options under settings - most notably, the action taken on a reset button press can now be configured.

And it’s easy enough.

/system routerboard reset-button
set enabled=yes on-event="/log info message=(\"Reset button\")\r\n/system reboot"
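
You can verify the setting took and watch it fire with something like this (a hedged sketch - menu item names can differ slightly between RouterOS versions):

/system routerboard reset-button print
/log print follow-only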

PS: This works with the vast majority of Mikrotik routers and switches. But not all, so your mileage may vary.

Ubuntu Server 20.04 on UEFI ZFS Without Encryption

With Ubuntu 20.04 Desktop there is a (still experimental) ZFS setup option in addition to the long-standing manual ZFS installation option. For Ubuntu Server we’re still dependent on the manual steps.

Steps here follow my 19.10 server guide but without the encryption steps. While I normally love having encryption enabled, there are situations where it gets in the way. The most notable example is a machine you cannot access remotely to enter the encryption key.

To start with the installation we need to get to the root prompt. Just find Enter Shell behind the Help menu item (Shift+Tab comes in handy) and you’re there.

The very first step is setting up a few variables - disk, pool, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Make sure to replace these values with the ones appropriate for your system. It’s a good idea to use something unique for the pool name (e.g., the host name). I personally also like having the pool name start with an uppercase letter but there is no real rule here.

DISK=/dev/disk/by-id/^^ata_disk^^
POOL=^^Ubuntu^^
HOST=^^server^^
USER=^^user^^

To start the fun we need the debootstrap and zfsutils-linux packages. Unlike with the desktop installation, the ZFS package is not installed by default.

apt update
apt install --yes debootstrap zfsutils-linux

The general idea of my disk setup is to maximize the amount of space available for the pool with a minimum of supporting partitions. If you are planning to have multiple kernels, increasing the boot partition size might be a good idea. The major change compared to my previous guide is partition numbering. While having the partition layout differ from the partition order had its advantages, a lot of partition editing tools would simply “correct” the partition order to match the layout and thus cause issues down the road.

blkdiscard $DISK

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+127M -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+512M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Now we’re ready to create the system ZFS pool.

zpool create -o ashift=12 -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL $DISK-part3
zfs create -o canmount=noauto -o mountpoint=/ $POOL/root
zfs mount $POOL/root
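
A quick optional check that the dataset hierarchy looks as expected:

zfs list -o name,canmount,mountpoint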

Assuming UEFI boot, two additional partitions are needed - one for EFI and one for booting. Unlike what you get with the official guide, I don’t use a ZFS pool for the boot partition but plain old ext4. I find potential fixups work better that way and boot compatibility is better. If you are thinking about mirroring, making it bigger and using ZFS might be a good idea. For a single disk, ext4 will do.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

Bootstrapping Ubuntu on the newly created pool is next. As we’re dealing with a server, you can consider using --variant=minbase rather than the full base system. I personally don’t see much value in that as other packages get installed as dependencies anyhow. In any case, this will take a while.

debootstrap focal /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install /usr/bin/env DISK=$DISK POOL=$POOL USER=$USER bash --login

Let’s not forget to set up the locale and time zone. If you opted for minbase you can either skip this step or manually install the locales and tzdata packages.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by the boot environment packages.

apt install --yes zfs-initramfs grub-efi-amd64-signed shim-signed

To mount the EFI and boot partitions, we need to do some fstab setup too:

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
cat /etc/fstab

Now we get grub started and update our boot environment. Due to a kernel version kerfuffle dating back to Ubuntu 19.10, we need to manually create the initramfs image. As before, boot cryptsetup discovery errors during mkinitramfs and update-initramfs are OK.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

Grub update is what makes EFI tick.

update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy

Since we’re dealing with a computer that will most probably be used without a screen, it makes sense to install OpenSSH server.

apt install --yes openssh-server

I also prefer to allow remote root login. Yes, you can create a sudo user and have root unreachable, but that’s just swapping one security issue for another. A root user secured with a key is plenty safe.

sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
mkdir /root/.ssh
echo "^^<mykey>^^" >> /root/.ssh/authorized_keys
chmod 644 /root/.ssh/authorized_keys

If you’re willing to deal with passwords, you can allow them too by changing both the PasswordAuthentication and PermitRootLogin parameters. I personally don’t do this.

sed -i '/^#PasswordAuthentication yes/s/^.//' /etc/ssh/sshd_config
sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
sed -i 's/^PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
passwd
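
Either way, a quick syntax check of the edited configuration is cheap - sshd -t stays silent if /etc/ssh/sshd_config parses cleanly:

sshd -t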

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit the creation of the swap dataset but I personally find it’s good to have, just in case.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/swap
mkswap -f /dev/zvol/$POOL/swap
echo "/dev/zvol/$POOL/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

If one is so inclined, the /home directory can get a separate dataset too.

rmdir /home
zfs create -o mountpoint=/home $POOL/home

And now we create the user and assign a few extra groups to it.

adduser --disabled-password --gecos '' $USER
usermod -a -G adm,cdrom,dip,plugdev,sudo $USER
chown -R $USER:$USER /home/$USER
passwd $USER

Consider enabling a firewall. While you can go wild with firewall rules, I like to keep them simple to start with. All outgoing traffic is allowed while incoming traffic is limited to new SSH connections and responses to already established ones.

apt install --yes man iptables iptables-persistent

iptables -F
iptables -X
iptables -Z
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p ipv6-icmp -j ACCEPT

netfilter-persistent save
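
To review what got saved (iptables-persistent on Ubuntu writes IPv4 rules to /etc/iptables/rules.v4):

iptables -L -n -v
cat /etc/iptables/rules.v4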

As the install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot

XigmaNAS as a SysLog Server

As it is the machine with the most redundancy, there are many tasks that fall onto my XigmaNAS server. One of those tasks is serving as a syslog destination for other devices.

However, you cannot get that behavior by default as its syslog server will start in secure mode ([-ss](https://www.freebsd.org/cgi/man.cgi?query=syslogd&sektion=8)) without opening any socket. And there is no easy GUI way to make it accept syslog messages.

But here comes the power of postinit actions. Just add the following under System > Advanced > Command scripts as a new PostInit entry:

sed -i -e 's^ -ss^ -a 192.168.0.0/16^' /etc/rc.d/syslogd ; /etc/rc.d/syslogd restart

Your XigmaNAS server will now accept all messages coming from your local network.
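
To test it from a Linux client, util-linux logger can send a message directly - assuming here that the NAS answers at 192.168.1.10 on the default syslog port:

logger --udp --server 192.168.1.10 --port 514 "syslog test from client"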

PS: To troubleshoot settings, you can do the following on the command line:

$ kenv rc.debug=1
$ /etc/rc.d/syslogd restart

Using Mikrotik's Router to Detect Power Outage

Before I had a CyberCard, I still had a need to monitor whether my system was running off UPS power. If my server could detect the power being out and shut down other devices, my battery would keep the server up for longer.

If you have a Mikrotik router with two power supplies and an SSH connection to it, there is a trick you can use - Mikrotik can show you the state of each power supply. If you take care to plug one power supply into the UPS and the other one into a non-UPS outlet, you suddenly have a detector.

ssh -i ~/.ssh/id_rsa admin@router.home "/system health print"
            voltage: 23.6V
            current: 426mA
        temperature: 50C
  power-consumption: 10W
       psu1-voltage: 24.4V
       psu2-voltage: 0V

Even better, the voltage doesn’t drop to 0 V immediately once power is out, so there is a built-in delay. The script is then as easy as detecting 0V in the output. Something like this.

ssh -i ~/.ssh/id_rsa admin@^^router.home^^ "/system health print" \
  | egrep 'psu[12]-voltage' | grep -q '0V' && echo "Do Something!"
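
The full script linked below does a bit more, but a minimal cron-friendly sketch along the same lines might look like this (the router host name and the action taken are placeholders):

#!/bin/sh
# Poll the router's PSU voltages; 0V on either means that outlet lost power.
STATUS=`ssh -i ~/.ssh/id_rsa admin@router.home "/system health print"`
if echo "$STATUS" | egrep 'psu[12]-voltage' | grep -q '0V'; then
    logger -t powerwatch "PSU at 0V - mains power appears to be out"
    # Replace with your own action, e.g.:
    # shutdown -h +5 "Power outage detected"
fi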

PS: If you’re interested in the whole script around this, you can download it here.