Native ZFS Encryption Speed (Ubuntu 23.04)


Well, Ubuntu 23.04 is here and it’s time for a new round of ZFS encryption testing. New version, minor ZFS updates, and slightly confusing numbers in places.

First, Ubuntu 23.04 brings us to ZFS 2.1.9 on kernel 6.2. It’s a minor bump in ZFS version (up from 2.1.5 in Ubuntu 22.10), but the kernel jump is bigger than anything we’ve had in a while (up from kernel 5.19).

The good news is that almost nothing has changed compared to 22.10. When it comes to either AES-GCM or AES-XTS (on LUKS), the numbers are close enough to the previous ones that the differences might be statistical error. If that’s what you’re using (and you should be), you can stop here.
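If you’re not sure which algorithm your datasets actually use, zfs get will show it; a quick check, assuming a pool named tank:

zfs get -r encryption tank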


However, if you’re using AES-CCM, things are a bit confusing, at least on my test system. For writes, all is good. But when it comes to reads, gremlins seem to be hiding somewhere in the background.

Every few reads, the speed would simply start dropping. After a few slower measurements, it would come back to where it was. I repeated the test multiple times, and it was always the reads that dropped while the writes stayed stable.

While that alone might not be a reason to skip the upgrade if you’re using AES-CCM, you might want to perform a few tests of your own. Mind you, you should be switching to AES-GCM anyhow.
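This is not the methodology behind the numbers above, but a crude way to spot the dips is to write a large file to an AES-CCM dataset and read it back a few times; a minimal sketch, with tank/ccmtest as a hypothetical scratch dataset:

# primarycache=metadata keeps reads from being served out of ARC
zfs create -o encryption=aes-256-ccm -o keyformat=passphrase \
    -o primarycache=metadata tank/ccmtest
dd if=/dev/urandom of=/tank/ccmtest/blob bs=1M count=4096 conv=fdatasync   # write speed
dd if=/tank/ccmtest/blob of=/dev/null bs=1M                                # read speed (repeat a few times)
zfs destroy tank/ccmtest                                                   # clean up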

As always, the raw data I gathered during my tests is available.

Adding Tools to .NET Container

When Microsoft provides you with a container image, it contains everything you need to run a .NET application. And no more. But what if we want to add our own tools?

Well, there’s nothing preventing you from using standard Docker machinery. For example, enriching the default Alpine Linux image just requires creating a Dockerfile with the following content:

FROM mcr.microsoft.com/dotnet/runtime:7.0-alpine
RUN apk add iputils traceroute curl netcat-openbsd

Essentially we tell Docker to use Microsoft’s image as our baseline and to install a few packages. To “execute” those commands, simply use the file to build an image:

docker build --tag dotnet-runtime-7.0-alpine-withtools .

To see if all works as intended, we can simply test it with Docker.

docker run --rm -it dotnet-runtime-7.0-alpine-withtools sh
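From that shell, a quick look confirms the extra tools actually made it in (package names as in the Dockerfile above):

which ping traceroute curl nc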

Once happy, just tag and push it. In this case, I’m pushing it to a local registry.

docker tag dotnet-runtime-7.0-alpine-withtools:latest localhost:5000/dotnet-runtime:7.0-alpine-withtools
docker push localhost:5000/dotnet-runtime:7.0-alpine-withtools

In our .NET project, we just need to change the ContainerBaseImage value and publish it as usual:

<ContainerBaseImage>localhost:5000/dotnet-runtime:7.0-alpine-withtools</ContainerBaseImage>
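Publishing then looks the same as before; for example, assuming the project is named Test.csproj:

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj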

PS: If you don’t have a local registry running, don’t forget to start one:

docker run -d -p 5000:5000 --name registry registry:2
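Once the registry is up and the image has been pushed, a quick query of the catalog endpoint confirms everything landed where expected:

curl http://localhost:5000/v2/_catalog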

Using Alpine Linux Docker Image for .NET 7.0

With .NET 7, publishing a Docker image became trivial. Really, all that’s needed is to add a few entries to the .csproj file.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

With those in place, and assuming we have Docker working, we can then “publish” the image.

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And there’s nothing wrong with this. However, what if you want an image smaller than the 270 MB this method produces? Well, there’s always Alpine Linux. And yes, Microsoft offers an image for Alpine too.

So I changed my project values.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0-alpine</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

And that led me to a dreadful Error/CrashLoopBackOff state. My application simply wouldn’t run, and since the container kept crashing, troubleshooting anything was really annoying. But those familiar with .NET and Alpine Linux might already see the issue. While almost any other Linux is happy with the linux-x64 moniker, Alpine needs the special linux-musl-x64 value due to its different libc implementation. And no, you cannot simply put that in the .csproj, as you’ll get an error saying The RuntimeIdentifier 'linux-musl-x64' is not supported by dotnet/runtime:7.0-alpine.

You need to add it to the publish command line as an option:

dotnet publish -c Release --no-self-contained -r linux-musl-x64 \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And now our application works on Alpine without any issues, with considerable size savings to boot.
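To check how much was actually saved, docker images can list the result (the image is named test, as set in the project file):

docker images test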

Quickly Patching a Failing Ansible Setup

In my network, I use Ansible to configure both servers and clients. And yes, that includes Windows clients too. And it all worked flawlessly for a while. Out of nowhere, one Wednesday, my wife’s Surface Pro started failing its Ansible setup steps with Error when collecting bios facts.

For example:

[WARNING]: Error when collecting bios facts: New-Object : Exception calling ".ctor" with "0" argument(s): "String was not recognized as a valid DateTime."
At line:2 char:21
+ ...         $bios = New-Object -TypeName Ansible.Windows.Setup.SMBIOSInfo
+                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodInvocationException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand
    at <ScriptBlock>, <No file>: line 2

And yes, the full list of exceptions was a bit longer, but they all had one thing in common. They were pointing toward SMBIOSInfo.

The first order of business was to find what the heck was being executed on my wife’s Windows machine. It took some process snooping to figure out that setup.ps1 was the culprit. Interestingly, this was despite ansible_shell_type being set to cmd. :)

On my file system, I found that file in two places. However, you’ll notice that if you delete the one in the .ansible directory, it will be recreated from the one in /usr/lib.

  • /usr/lib/python3/dist-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
  • /root/.ansible/collections/ansible_collections/ansible/windows/plugins/modules/setup.ps1
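If those paths differ on your system, find can track the module down; a quick sketch:

find / -name setup.ps1 -path '*ansible/windows*' 2>/dev/null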

Finally, I was ready to check the script for errors, and it didn’t take me long to find the one causing all the kerfuffle I was experiencing.

The issue was with the following code:

string dateFormat = date.Length == 10 ? "MM/dd/yyyy" : "MM/dd/yy";
DateTime rawDateTime = DateTime.ParseExact(date, dateFormat, null);
return DateTime.SpecifyKind(rawDateTime, DateTimeKind.Utc);

That code boldly assumed the BIOS date uses a slash / as a separator. And that is true most of the time, but my wife’s laptop reported its date as 05.07.2014. Yep, those are dots you’re seeing. Even worse, the date was probably in DD.MM.YYYY format, albeit that’s a bit tricky to prove conclusively. In any case, ParseExact was throwing the exception.

My first reaction was to simply return null from that function and not even bother parsing the BIOS date as I didn’t use it. But then I opted to just prevent the exception, as maybe that information would come in handy one day. So I added a TryParseExact wrapper around it.

DateTime rawDateTime;
if (DateTime.TryParseExact(date, dateFormat, null,
    System.Globalization.DateTimeStyles.None, out rawDateTime)) {
    return DateTime.SpecifyKind(rawDateTime, DateTimeKind.Utc);
} else {
    return null;
}

This code retains the status quo. If it finds the date in either MM/dd/yyyy or MM/dd/yy format, it will parse it correctly. Any other format will simply return null, which is handled elsewhere in the code.
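To confirm the patched module gathers facts again, the setup module can be run directly against the affected machine; a sketch, assuming the host appears in the inventory as surface:

ansible surface -m ansible.windows.setup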

With this change, my wife’s laptop came back into the fold, and we lived happily ever after. The end.


PS: Yes, I have opened a pull request for the issue.

ZFS Root Setup with Alpine Linux

Running Alpine Linux on ZFS is nothing new, as there are multiple guides describing how to do it. However, I found the official setups either too complicated when it comes to the dataset layout or simply not working without legacy boot. What I needed was the simplest way to bring up ZFS on a UEFI system.

First of all, why ZFS? Well, for me it’s mostly a matter of detecting issues. While my main server is reasonably well maintained, the rest of my lab consists of retired computers I stopped using a long time ago. As such, hardware faults are not rare, and more than once disk errors went undetected. Hardware faults will still happen with ZFS, but at least I will know about them immediately, before they corrupt my backups too.

In this case, I will describe my way of bringing up an unencrypted ZFS setup with a separate ext4 boot partition. It requires an EFI-enabled BIOS with Secure Boot disabled, as Alpine binaries are not signed.

Also, before we start, you’ll need the Alpine Linux Extended ISO for the ZFS installation to work properly. Don’t worry, the resulting installation will still be a minimal set of packages.

Once you boot from the ISO, you can proceed with the setup as you normally would, but answer [none] at the question about the installation disk.

setup-alpine

Since no disk was selected, we can proceed with the manual steps next. First, we set up a few variables. While I usually like to use /dev/disk/by-id for this purpose, Alpine doesn’t install eudev by default. To avoid depending on it, I just use the good old /dev/sdX paths.

DISK=/dev/sda
POOL=Tank

Of course, we need some extra packages too. And while we’re at it, we might as well load the ZFS kernel module.

apk add zfs sgdisk e2fsprogs util-linux grub-efi
modprobe zfs

With this out of the way, we can partition the disk. In this example, I use three separate partitions: one for EFI, one for /boot, and one for ZFS.

sgdisk --zap-all             $DISK
sgdisk -n1:1M:+127M -t1:EF00 $DISK
sgdisk -n2:0:896M   -t2:8300 $DISK
sgdisk -n3:0:0      -t3:BF00 $DISK
sgdisk --print               $DISK
mdev -s

While having separate datasets for different directories sometimes makes sense, my installations are usually rather small, so putting everything into a single dataset works just fine. Most of the parameters are the usual suspects, but do note I am using ashift=13 instead of the more common 12. My own testing has shown that this brings slightly better performance on SSD drives. If you are using spinning rust, you can use 12, but 13 will not hurt performance in any meaningful way, so you might as well leave it as is.

zpool create -f -o ashift=13 -o autotrim=on \
    -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=noauto -O mountpoint=/ -R /mnt ${POOL} ${DISK}3
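Before moving on, it doesn’t hurt to confirm the pool came up the way we wanted:

zpool status ${POOL}
zpool get ashift,autotrim ${POOL}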

Next is the boot partition, and this one will be ext4. Yes, having ZFS here would be “purer,” but I will sacrifice that purity for the ease of troubleshooting when something goes wrong.

yes | mkfs.ext4 ${DISK}2
mkdir /mnt/boot
mount -t ext4 ${DISK}2 /mnt/boot/

The last partition to format is EFI, and that has to be FAT32 in order to be bootable.

mkfs.vfat -F 32 -n EFI -i 4d65646f ${DISK}1
mkdir /mnt/boot/efi
mount -t vfat ${DISK}1 /mnt/boot/efi

With all that out of the way, we can finally install Alpine onto our disk using the handy setup-disk script. You can ignore the failed to get canonical path error as we’re going to manually adjust things later.

BOOTLOADER=grub setup-disk -v /mnt

With the system installed, we can chroot into it and continue the rest of the steps from within.

mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK POOL=$POOL ash --login

For grub, we need a small workaround first so it properly detects our pool.

sed -i "s|rpool=.*|rpool=$POOL|"  /etc/grub.d/10_linux

And then we can properly install the EFI bootloader.

apk add efibootmgr
mkdir -p /boot/efi/alpine/grub-bootdir/x86_64-efi/
grub-install --target=x86_64-efi \
  --boot-directory=/boot/efi/alpine/grub-bootdir/x86_64-efi/ \
  --efi-directory=/boot/efi \
  --bootloader-id=alpine
grub-mkconfig -o /boot/efi/alpine/grub-bootdir/x86_64-efi/grub/grub.cfg
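If you want to double-check that the firmware boot entry was created, efibootmgr will list it:

efibootmgr -v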

And that’s it. We can now exit the chroot environment.

exit

Let’s unmount all our partitions.

umount -Rl /mnt
zpool export -a

And, after reboot, your system should come up with ZFS in place.

reboot
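Once the system boots back up, a quick status check should show the pool online and error-free:

zpool status
zfs list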