ZFS Encryption Speed (Ubuntu 24.04)

Well, another Ubuntu version, another set of encryption performance tests. Here are the results for Ubuntu 24.04 on kernel 6.8 using ZFS 2.2.2. As I've been doing this for quite a few versions now, you can also find older tests for Ubuntu 23.10, 23.04, 22.10, 22.04, 20.10, and 20.04.

Testing was done on a Framework laptop with an i5-1135G7 processor and 64 GB of RAM. Once booted into the installation media, I execute a script that creates a 42 GiB RAM disk hosting all the test data, including six 6 GiB files. Those files are then used in a RAIDZ2 configuration to create a ZFS pool. The process is repeated multiple times to test all the different native ZFS encryption modes in addition to a LUKS-based test, and the whole thing is then repeated again with AES disabled. As before, the test is a simple DD copy of 4 GB files; however, this time I also included FIO tests for sequential and random read/write. One thing absent from the 24.04 round is a 2-core run. Relative performance between a 2-core and 4-core setup has remained about the same over the many years I've been doing this testing, so it doesn't really seem worth the effort anymore.
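
For reference, a single round boils down to something like the sketch below. The pool and dataset names are illustrative and the parameters simplified; this is not a verbatim excerpt of the actual script.

# One illustrative round: RAM disk, six backing files, RAIDZ2 pool, encrypted dataset
mkdir -p /ramdisk
mount -t tmpfs -o size=42g tmpfs /ramdisk
for I in 1 2 3 4 5 6; do truncate -s 6G "/ramdisk/disk$I"; done

zpool create -o ashift=12 TestPool raidz2 /ramdisk/disk[1-6]
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase TestPool/Data    # prompts for a passphrase

dd if=/ramdisk/source.bin of=/TestPool/Data/copy.bin bs=1M    # timed copy of a pre-generated 4 GB file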

Illustration

Since I am testing on the same hardware as before, I expected little to no difference in performance, but I was pleasantly surprised to see it increase across the board by about 20%. Considering 23.10 decreased performance by 10%, it's nice to see that loss recovered with a bit of improvement on top. If you need more disk performance out of your existing hardware, you should really consider upgrading to Ubuntu 24.04.

When it comes to relative performance, nothing has really changed. ZFS native encryption is still more performant than LUKS on writes, while LUKS exhibits slightly higher performance when it comes to reads. CCM modes are still atrocious but might be useful if your processor doesn't have AES support.
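
For context, the modes compared here are just different values of the encryption property set at dataset creation time; a minimal example (the dataset names below are mine, not from the test script):

# GCM is the default (encryption=on maps to aes-256-gcm); CCM is what's left for CPUs without AES acceleration
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase TestPool/Gcm
zfs create -o encryption=aes-256-ccm -o keyformat=passphrase TestPool/Ccm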

Illustration

As I plan to use FIO instead of a simple dd copy going forward, it's as good a time as any to analyze those numbers too. Unsurprisingly, the sequential performance numbers are about the same as those of the simple DD copy. The only outlier seems to be read performance, which drops a bit more than the other readings. My best guess is that this is due to the higher parallel I/O demands FIO makes.
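
The sequential FIO runs amount to something like the following; the exact job parameters behind the published numbers may differ slightly.

# Sequential write and read, 1 MiB blocks, against the test dataset
fio --name=seqwrite --directory=/TestPool/Data --size=4G --bs=1M --rw=write
fio --name=seqread  --directory=/TestPool/Data --size=4G --bs=1M --rw=read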

Illustration

Since I am using FIO, I decided to add random I/O too. I expected the results to be lower, but the numbers still surprised me. Write performance dropped to 50 MB/s without encryption; with encryption, it drops even further to 30 MB/s. Fortunately, real loads are not as unforgiving as FIO, so you can expect much better performance in real life.
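
The random runs differ only in block size and access pattern; again, an illustrative invocation rather than the exact job definition.

# Random 4 KiB write and read
fio --name=randwrite --directory=/TestPool/Data --size=4G --bs=4k --rw=randwrite
fio --name=randread  --directory=/TestPool/Data --size=4G --bs=4k --rw=randread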

In the future, there are a few things I plan to change. First of all, I plan to switch to using FIO instead of DD. While I will probably still collect DD data, it will be there only so one can compare more easily with older tests, not as the main tool. Secondly, I plan to switch LUKS to 4K blocks and not bother measuring the 512-byte sector size at all. Most drives these days have 4K sectors, so any proper LUKS installation should match that sector size; making it the default just makes sense. Performance-wise, 4K sectors are not a huge improvement, but they do bring the LUKS numbers closer to the native encryption.
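
For illustration, the 4K variant is just one extra luksFormat parameter; $DEVICE below is a placeholder for whatever backs the LUKS container.

# LUKS2 with 4096-byte sectors instead of the 512-byte default
cryptsetup luksFormat --type luks2 --sector-size 4096 "$DEVICE"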


PS: Raw data is available in Google Sheets.

AMD processor temperature under Ubuntu 24.04

I often like to check my laptop's temperature when I am doing something that requires a lot of power. I find that knowing the temperature really helps with understanding where the limits lie. However, my old scripts that worked on Intel systems don't work on AMD, so I had to research it a bit.

After a bit of snooping around, I found that all the data lives under /sys/class/hwmon/. It's there that we can find multiple _label files, each describing a temperature source. The one we're after is Tctl. Once we loop over all of these, the THERMAL_SOURCES variable should contain the file path (or paths) for the temperature, expressed in thousandths of a degree Celsius.

# Collect the *_input path for every hwmon sensor labeled "Tctl"
THERMAL_SOURCES=""
for THERMAL_LABEL_FILE in `find /sys/class/hwmon/hwmon?/ -type f -name "temp*_label" -print`; do
    THERMAL_LABEL=`cat "$THERMAL_LABEL_FILE"`
    if [ "$THERMAL_LABEL" = "Tctl" ]; then
        THERMAL_SOURCES="$THERMAL_SOURCES `echo $THERMAL_LABEL_FILE | sed 's/_label$/_input/g'`"
    fi
done

Knowing which file contains the temperature is only the first part. What I like to do next is fold all the temperatures (if multiple sources exist) into a single figure by selecting the maximum value. Then it's just a matter of moving the decimal point around to get a whole-number reading.

TEMP_ALL="$(cat $THERMAL_SOURCES | awk '{print $1}' | sort -n)"
TEMP_MAX="$(echo "$TEMP_ALL" | tail -n 1 | awk '{print int(($1 + 500) / 1000) }')"
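
From there, displaying the reading is just an echo away:

echo "CPU temperature: ${TEMP_MAX}°C"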

Manual Grub Boot for ZFS Root

As I was messing with making my EFI partition larger, I managed to corrupt the system. My best guess is that my new partition sizes weren't properly (re)loaded before I formatted them. Thus, even though both the boot and EFI partitions had all their files properly restored, during boot I would end up dropped at the Grub prompt.

While I do not often end up in such a situation, I already know Grub from my Surface Go adventures. So I did what I had done many times before (gpt2 is my boot partition):

set root=(hd0,gpt2)
linux /vmlinuz-6.8.0-28-generic
initrd /initrd.img-6.8.0-28-generic

This moved the needle a bit by dumping me into the initramfs prompt. At least here it helpfully indicated what the issue was (a corrupted disk). However, it was obvious something was still wrong, as my root ZFS partition was nowhere to be found. Thus, no fsck to fix the issue.

My initial thought was to just import the pool and mount the ZFS filesystem:

zpool import Tank
zfs mount Tank/System
exit

Well, this actually caused the system to crash, as the filesystem wasn't properly overlaid. So I had to either figure out how to reload the root partition from the initramfs prompt or go back to the drawing board.

Thankfully, looking at my other computer's Grub configuration, I noticed the way forward. There I saw that the linux command had an extra ZFS-related argument. Thus, I adjusted my Grub commands accordingly (the example below assumes the root dataset is Tank/System):

set root=(hd0,gpt2)
linux /vmlinuz-6.8.0-28-generic root=ZFS=Tank/System
initrd /initrd.img-6.8.0-28-generic
boot

And this brought my system back to its bootable self.


PS: Since the boot file system was actually readable, I decided to simply copy files to a temporary location, format both boot and EFI partitions, and then copy the data back.

mkdir /mnt/{efi,boot}-copy
rsync -avxAHWX /boot/efi/ /mnt/efi-copy/
rsync -avxAHWX /boot/     /mnt/boot-copy/

umount /boot/efi
umount /boot

DISK1=</dev/disk/by-id/...>
yes | mkfs.ext4 $DISK1-part2
mkfs.vfat -F 32 -n EFI -i 4d65646f $DISK1-part1

mount /boot
mount /boot/efi
rsync -avxAHWX /mnt/boot-copy/ /boot/
rsync -avxAHWX /mnt/efi-copy/  /boot/efi/

rm -rf /mnt/{boot,efi}-copy

[2024-10-05] If you didn't copy all the file permissions, you might need to reinstall Grub too:

grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    --bootloader-id=Ubuntu --recheck --no-floppy