Linux, Unix, and whatever they call that world these days

Running Comfast CF-953AX on Ubuntu 22.04

Based on a list of wireless cards supported on Linux, I decided to buy the Comfast CF-953AX, as it should have been supported since Linux kernel 5.19. And the HWE kernel on Ubuntu 22.04 LTS brings me right there. With AliExpress being the only source for that card, it took some time to arrive. All that wait, and nothing happened once I plugged it in.

In order to troubleshoot the issue, I first checked my kernel, and it was at the expected version.
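The check itself is presumably just a uname call, along these lines:

```shell
# Print kernel name, release, and build version.
uname -srv
```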

Linux 5.19.0-38-generic #39~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC

Then I checked my devices with lsusb, and there it was.

Bus 002 Device 003: ID 3574:6211 MediaTek Inc. Wireless_Device

However, checking for the network using lshw -C network showed nothing. As always, looking stuff up on the Internet brought a bit more clarity to the issue. Not only was the driver not loaded, but the USB VID:PID combination was also unrecognized. The solution was simple enough: load the driver and teach it the new VID:PID combination.

sudo modprobe mt7921u
echo 3574 6211 | sudo tee /sys/bus/usb/drivers/mt7921u/new_id

Running lshw again found the card.

  *-network
       description: Wireless interface
       physical id: 5
       bus info: usb@2:1
       logical name: wlxe0e1a9389d77
       serial: e0:e1:a9:38:9d:77
       capabilities: ethernet physical wireless
       configuration: broadcast=yes driver=mt7921u driverversion=6.2.0-20-generic firmware=____010000-20230302150956
       multicast=yes wireless=IEEE 802.11

Well, to make the changes permanent, we need to teach udev a new rule:

sudo tee /etc/udev/rules.d/90-usb-3574:6211-mt7921u.rules << EOF
ACTION=="add", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="3574", \
    ENV{ID_MODEL_ID}=="6211", \
    RUN+="/usr/sbin/modprobe mt7921u", \
    RUN+="/bin/sh -c 'echo 3574 6211 > /sys/bus/usb/drivers/mt7921u/new_id'"
EOF

After that, we should update our initramfs.

sudo update-initramfs -k all -u

And that’s it. Our old 22.04 just learned a new trick.

Visual Code Ctrl+. Not Working Under Ubuntu

Every time I reinstall Ubuntu, I am faced with the same issue. Whenever I press Ctrl+. in Visual Studio Code, I get an underscored letter “e” instead of the expected refactoring menu. And it takes me a while to remember the solution: running ibus-setup and removing emoji keyboard shortcuts.

However, there is also a scriptable solution. Just use gsettings and remove hotkeys from the command line.

gsettings set org.freedesktop.ibus.panel.emoji hotkey "[]"
gsettings set org.freedesktop.ibus.panel.emoji unicode-hotkey "[]"

Running Supermicro IPMIView 2.0 on Ubuntu

If you have a Supermicro motherboard, you have probably already downloaded the IPMIView utility. This utility allows access to motherboard sensors but most importantly, it includes KVM functionality, enabling you to access your computer as if it had a screen. Pure magic when it works. However, when using Ubuntu 23.04, I encountered difficulties running it due to the Malformed \uxxxx encoding error.

An internal LaunchAnywhere application error has occured and this application cannot proceed. (LAX)

Stack Trace:
java.lang.IllegalArgumentException: Malformed \uxxxx encoding.
    at java.base/java.util.Properties.loadConvert(Properties.java:678)
    at java.base/java.util.Properties.load0(Properties.java:455)
    at java.base/java.util.Properties.load(Properties.java:381)
    at com.zerog.common.java.util.PropertiesUtil.loadProperties(Unknown Source)
    at com.zerog.lax.LAX.<init>(Unknown Source)
    at com.zerog.lax.LAX.main(Unknown Source)

Nevertheless, since Supermicro provides all the necessary components, bypassing this error is actually quite simple. Just run the main .jar file directly.

cd <directory>
./jre/bin/java -jar IPMIView20.jar

And that’s it.

Ubuntu 23.04 on Framework Laptop (with Hibernate)

I’ve been running Ubuntu on my Framework 13 for a while now without any major issues. However, my initial setup restricted me to deep sleep suspend, which would drain the battery in a day or two if you forget about it. As I needed to reinstall my system anyhow to get Ubuntu 23.04 going, I decided to mix it up a bit.

My setup is simple and has only a few requirements. First of all, full disk encryption is a must. Secondly, ZFS is non-negotiable. And lastly, it would be nice to have hibernation this time round.

When it comes to full disk encryption with ZFS, there is the option of native ZFS encryption. And indeed, I’ve done setups with it before. However, hibernation on top of that was not something I ever managed to get working properly.

For hibernation, I really prefer to have a separate swap partition encrypted using LUKS. And, if you use both LUKS and native ZFS encryption, you get asked for the encryption passphrase twice. Since I’m too lazy for that, I decided to have ZFS on top of LUKS, like in the good old days. Performance-wise, it’s a wash anyhow. Yes, writing is a bit slower in artificial tests, but in reality, the difference is negligible.

Avid readers of my previous installation guides will already know that my personal preferences are really noticeable in these guides. For example, I like my partitions set up a certain way and I will always nuke the dreadful snap system.

Honestly, if you are ok with the default Ubuntu setup, or just uncomfortable with the command line, you might want to stop reading and simply follow the official Framework 13 installation guide. It’s a great guide and the final result is something 99% of people will be happy with.

The first step is to boot into the “Try Ubuntu” option of the USB installation. Once we have a desktop, we want to open a terminal. And, since all further commands are going to need root credentials, we can start with that.

sudo -i

The next step is setting up a few variables - disk, pool name, hostname, and username. This way, we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/<diskid>
POOL=<poolname>
HOST=<hostname>
USER=<username>

For this setup, I wanted 4 partitions. The first two partitions will be unencrypted and in charge of booting. While I love encryption, I decided not to encrypt the boot partition to make my life easier: you cannot integrate the boot partition password prompt with the later data password prompt, which would require typing the password twice (or thrice if you use native ZFS encryption on top of that). Both the swap and ZFS partitions are fully encrypted.

Also, my swap size is way too excessive since I have 64 GB of RAM and I wanted to allow for hibernation under the worst of circumstances (i.e., when RAM is full). Hibernation usually works with much smaller partitions but I wanted to be sure and my disk was big enough to accommodate.

Lastly, while blkdiscard does a nice job of removing old data from the disk, I would always recommend also running dd if=/dev/urandom of=$DISK bs=1M status=progress if your disk was not encrypted before.

blkdiscard -f $DISK
sgdisk --zap-all                     $DISK
sgdisk -n1:1M:+63M -t1:EF00 -c1:EFI  $DISK
sgdisk -n2:0:+960M -t2:8300 -c2:Boot $DISK
sgdisk -n3:0:+64G  -t3:8200 -c3:Swap $DISK
sgdisk -n4:0:0     -t4:8309 -c4:ZFS  $DISK
sgdisk --print                       $DISK

Once the partitions are created, we want to set up our LUKS encryption. Here you will notice I use LUKS2 headers with a few arguments that help with NVMe performance.

cryptsetup luksFormat -q --type luks2 \
    --perf-no_write_workqueue --perf-no_read_workqueue \
    --cipher aes-xts-plain64 --key-size 256 \
    --pbkdf argon2i $DISK-part4

cryptsetup luksFormat -q --type luks2 \
    --perf-no_write_workqueue --perf-no_read_workqueue \
    --cipher aes-xts-plain64 --key-size 256 \
    --pbkdf argon2i $DISK-part3

Creating encrypted partitions doesn’t open them, so we need that as a separate step. Since the swap partition will be the first one to load, I give it the host’s name in order to have a slightly nicer password prompt.

cryptsetup luksOpen $DISK-part4 zfs
cryptsetup luksOpen $DISK-part3 $HOST

Finally, we can set up our ZFS pool with an optional step of setting quota to roughly 80% of disk capacity. Adjust the exact values as needed.

zpool create -o ashift=12 -o autotrim=on \
    -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=off -O mountpoint=none -R /mnt/install \
    $POOL /dev/mapper/zfs
zfs set quota=1.5T $POOL
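The 1.5T value here is just the 80% rule applied to what is presumably a 2 TB drive; a quick back-of-the-envelope check (the capacity figure is an assumption):

```shell
# A 2 TB drive is roughly 1863 GiB of raw capacity; 80% of that:
echo $(( 1863 * 80 / 100 ))   # prints 1490 (GiB), i.e. roughly 1.5T
```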

I used to be a fan of using just a main dataset for everything, but these days I use a more conventional “separate root dataset” approach.

zfs create -o canmount=noauto -o mountpoint=/ $POOL/Root
zfs mount $POOL/Root

And a separate home dataset will not be forgotten.

zfs create -o canmount=noauto -o mountpoint=/home $POOL/Home
zfs mount $POOL/Home
zfs set canmount=on $POOL/Home

With all datasets in place, we can finish setting the main dataset properties.

zfs set devices=off $POOL

Now it’s time to format the swap.

mkswap /dev/mapper/$HOST

And then the boot partition.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot

And finally, the EFI partition.

mkfs.msdos -F 32 -n EFI -i 4d65646f $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

At this point, I also sometimes disable IPv6, as I’ve noticed that on some misconfigured IPv6 networks it takes ages to download packages. This step is both temporary (i.e., IPv6 is disabled only during installation) and fully optional.

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
sysctl -w net.ipv6.conf.lo.disable_ipv6=1

To start the fun, we need the debootstrap package. From this step on, you must be connected to the Internet.

apt update && apt install --yes debootstrap

Bootstrapping Ubuntu on the newly created pool comes next. This will take a while.

debootstrap lunar /mnt/install/

We can use our live system to update a few files on our new installation.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

If you are installing via WiFi, you might as well copy your wireless credentials. Don’t worry if this returns errors - that just means you are not using wireless.

mkdir -p /mnt/install/etc/NetworkManager/system-connections/
cp /etc/NetworkManager/system-connections/* /mnt/install/etc/NetworkManager/system-connections/

At last, we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install \
    /usr/bin/env DISK=$DISK USER=$USER \
    bash --login

With our newly installed system running, let’s not forget to set up locale and time zone.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales
dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends \
    linux-image-generic linux-headers-generic

Followed by the boot environment packages.

apt install --yes \
    zfs-initramfs cryptsetup keyutils grub-efi-amd64-signed shim-signed

Now we set up crypttab so our encrypted partitions are decrypted on boot.

echo "$HOST PARTUUID=$(blkid -s PARTUUID -o value $DISK-part3) none \
      swap,luks,discard,initramfs,keyscript=decrypt_keyctl" >> /etc/crypttab
echo "zfs PARTUUID=$(blkid -s PARTUUID -o value $DISK-part4) none \
      luks,discard,initramfs,keyscript=decrypt_keyctl" >> /etc/crypttab
cat /etc/crypttab

To mount all those partitions, we also need some fstab entries. The last entry is not strictly needed. I just like to add it in order to hide our LUKS encrypted ZFS from the file manager.

echo "UUID=$(blkid -s UUID -o value /dev/mapper/$HOST) \
      swap swap defaults 0 0" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
      /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
      /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "/dev/disk/by-uuid/$(blkid -s UUID -o value /dev/mapper/zfs) \
      none auto nosuid,nodev,nofail 0 0" >> /etc/fstab
cat /etc/fstab

On systems with a lot of RAM, I like to adjust swappiness a bit. This is inconsequential in the grand scheme of things, but I like to do it anyhow.

echo "vm.swappiness=10" >> /etc/sysctl.conf

Now we create the boot environment.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -c -k $KERNEL

And then we can get GRUB going. Do note that we also set up resuming from swap (needed for hibernation) here. Since we’re using secure boot, the bootloader ID HAS to be Ubuntu.

sed -i "s/^GRUB_CMDLINE_LINUX_DEFAULT.*/GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash \
    nvme.noacpi=1 \
    module_blacklist=hid_sensor_hub \
    resume=UUID=$(blkid -s UUID -o value /dev/mapper/$HOST)\"/" \
    /etc/default/grub
update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy

And now, finally, we can install our desktop environment.

apt install --yes ubuntu-desktop-minimal

Once the installation is done, I like to remove snap and banish it from ever being installed.

apt remove --yes snapd
echo 'Package: snapd'    > /etc/apt/preferences.d/snapd
echo 'Pin: release *'   >> /etc/apt/preferences.d/snapd
echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/snapd

Since Firefox is only available as a snap package, we install it from the Mozilla team PPA instead.

add-apt-repository --yes ppa:mozillateam/ppa
cat << 'EOF' | sed 's/^    //' | tee /etc/apt/preferences.d/mozillateamppa
    Package: firefox*
    Pin: release o=LP-PPA-mozillateam
    Pin-Priority: 501
EOF
apt update && apt install --yes firefox

For the Framework Laptop used here, we need one more adjustment, as its audio hardware needs the same quirk as Dell’s headsets. In addition, you might want to mess with the WiFi power save mode a bit.

echo "options snd-hda-intel model=dell-headset-multi" >> /etc/modprobe.d/alsa-base.conf
sed -i 's/wifi.powersave = .*/wifi.powersave = 2/' \
    /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf

Of course, we need to have a user too.

adduser --disabled-password --gecos '' $USER
usermod -a -G adm,cdrom,dialout,dip,lpadmin,plugdev,sudo,tty $USER
echo "$USER ALL=NOPASSWD:ALL" > /etc/sudoers.d/$USER
passwd $USER

I like to add some extra packages and do one final upgrade before dealing with the sleep stuff.

add-apt-repository --yes universe
apt update && apt dist-upgrade --yes

The first portion is setting up the whole suspend-then-hibernate machinery. This will make Ubuntu do a normal suspend first. If the machine stays suspended for 20 minutes, it will briefly wake up and hibernate.

sed -i 's/.*AllowSuspend=.*/AllowSuspend=yes/' \
    /etc/systemd/sleep.conf
sed -i 's/.*AllowHibernation=.*/AllowHibernation=yes/' \
    /etc/systemd/sleep.conf
sed -i 's/.*AllowSuspendThenHibernate=.*/AllowSuspendThenHibernate=yes/' \
    /etc/systemd/sleep.conf
sed -i 's/.*HibernateDelaySec=.*/HibernateDelaySec=20min/' \
    /etc/systemd/sleep.conf

And lastly, the whole sleep setup is nothing if we cannot activate it. Closing the lid seems like a perfect place to do it.

apt install -y pm-utils

sed -i 's/.*HandleLidSwitch=.*/HandleLidSwitch=suspend-then-hibernate/' \
    /etc/systemd/logind.conf
sed -i 's/.*HandleLidSwitchExternalPower=.*/HandleLidSwitchExternalPower=suspend-then-hibernate/' \
    /etc/systemd/logind.conf

It took a while, but we can finally exit our chroot environment.

exit

Let’s clean all mounted partitions and get ZFS ready for next boot.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
umount /mnt/install
zpool export -a

After reboot, we should be done and our new system should boot with a password prompt.

reboot

Once we log into it, we still need to adjust the boot image and GRUB, followed by a hibernation test. If you see your desktop in the same state as you left it after waking the computer up, all is good.

sudo update-initramfs -u -k all
sudo update-grub
sudo systemctl hibernate

If you get Failed to hibernate system via logind: Sleep verb "hibernate" not supported, go into the BIOS and disable secure boot (the Enforce Secure Boot option). Unfortunately, secure boot and hibernation still don’t work together, although there is some work in progress to make that happen in the future. For now, you need to select one or the other.


PS: Just setting HibernateDelaySec as in older Ubuntu versions doesn’t work with the current Ubuntu anymore due to a systemd bug. Hibernation will only happen when the battery reaches 5% of capacity instead of at a predefined time. This was corrected in systemd v253, but I doubt Ubuntu 23.04 will get that update. I’ll leave it in the guide as it’ll likely work again in Ubuntu 23.10.

PPS: If battery life is really precious to you, you can go to hibernate directly by setting HandleLidSwitch=hibernate. Alternatively, you can look into setting mem_sleep_default=deep in GRUB.

PPPS: There are versions of this guide (without hibernation though) using the native ZFS encryption for the other Ubuntu versions: 22.04, 21.10, and 20.04. For LUKS-based ZFS setup, check the following posts: 22.10, 20.04, 19.10, 19.04, and 18.10.

Mixing HDD and SSD in a ZFS Mirror

One of my test bed computers had a ZFS mirror between its internal 2.5" HDD (ST2000LM003) and an external My Passport 2.5" USB 3.0 HDD (WDC WD20NMVW-11W68S0). And yes, having a mirror between SATA and USB is not the most ideal solution to start with, but it does work. In any case, the setup happily chugged along until recently, when the internal drive started having faults. A replacement was in order.

But replacing an old 2 TB drive proved not to be so easy. When it comes to 2 TB 2.5" models, all laptop drives manufactured these days are SMR. While you can use them in a ZFS pool, performance is abysmal during resilvering. Normal use might be OK, depending on the load, but it wouldn’t be as good as CMR. The only way to get a drive equivalent to the one I had was to buy a years-old refurbished one. Not ideal.

But then my eyes went toward cheap 2 TB SSDs. For just $10 more, I could get a (somewhat) faster drive. However, searching the Internet, I noticed that the idea of mixing an HDD and an SSD in the same pool seems to be frowned upon.

And yes, I knew that you won’t get the full benefits of either HDD or SSD when using them together in the same pool, but it seemed like an arbitrary limitation, especially when prices in the 2 TB range are essentially equivalent. Why wouldn’t you use an SSD when a drive needs replacing?

So I ordered myself a cheap SSD and tried to see if there are any downsides to a mixed HDD/SSD setup.

The first test I did was an FIO sequential read/write (fio-seq-RW.fio). With two HDDs in a mirror, I was at 148/99 MB/s for read/write, respectively. After changing the internal drive to an SSD, speeds went to a nearly identical 147/98 MB/s. Adding a single SSD brought no practical difference in this scenario. Based on this test alone, I would have said that while an SSD doesn’t bring a performance improvement, it doesn’t drag things down much either. Having an SMR drive in this setup would bring performance down more than this low-price SSD ever could.
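For reference, a job file in the spirit of the stock fio-seq-RW example might look something like this (block size, file size, and directory are assumptions, not the exact settings used):

```ini
[global]
rw=rw                 ; mixed sequential read/write
bs=1M                 ; large blocks, sequential-friendly
size=1g               ; per-job file size
directory=/pool/fio   ; a directory on the tested pool (assumed path)

[seq-rw]
numjobs=1
```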

The second test I tried was random read/write (fio-rand-RW.fio). Here, the speed with two HDDs was 480/320 KB/s, while the combination of HDD and SSD brought the speed all the way to 4980/3330 KB/s. Essentially a ten-fold increase in performance. If you have virtual machines running on top of ZFS, you will feel the difference.

The third test was just to verify that the previous two tests looked sensible (ssd-test.fio). While the numbers did differ slightly, overall the data looked the same. No improvement when it comes to sequential access (even a slight performance decrease), but huge improvements for random data access.

My conclusion is that, while replacing HDD with SSD might not be the most cost effective approach when it comes to larger pools, there is nothing bad about it as such and, depending on your workload, you might see a healthy improvement. It’s not an appropriate solution when it comes to larger drives, but for pools having up to 2 TB drives, go for it!


PS: For the curious, here is the raw testing data.

Test                  HDD + HDD    HDD + SSD
Sequential Read       148 MiB/s    147 MiB/s
Sequential Write      99.0 MiB/s   98.3 MiB/s
Random Read           480 KiB/s    4985 KiB/s
Random Write          321 KiB/s    3333 KiB/s
SSD Sequential Read   135 MiB/s    122 MiB/s
SSD Sequential Write  28.6 MiB/s   25.8 MiB/s
SSD Random Read       584 KiB/s    11.5 MiB/s
SSD Random Write      572 KiB/s    6641 KiB/s

PPS: No, I don’t want to talk about who hurt me that much that I’m willing to use an external USB as part of a mirrored pool.

Native ZFS Encryption Speed (Ubuntu 23.04)

There is a newer version of this post

Well, Ubuntu 23.04 is here, and it’s time for a new round of ZFS encryption testing. New version, minor ZFS updates, and slightly confusing numbers at some points.

First, Ubuntu 23.04 brings us to ZFS 2.1.9 on kernel 6.2. It’s a minor ZFS version change (up from 2.1.5 in Ubuntu 22.10), but the kernel bump is bigger than what we’ve had in a while (it was kernel 5.19).

The good news is that almost nothing has changed compared to 22.10. For both AES-GCM and AES-XTS (on LUKS), the numbers are close enough to what they were before that any difference might be statistical error. If that’s what you’re using (and you should), you can stop here.


However, if you’re using AES-CCM, things are a bit confusing, at least on my test system. For writes, all is good. But when it comes to reads, gremlins seem to be hiding somewhere in the background.

Every few reads, the speed would simply start dropping. After a few slower measurements, it would come back to where it was. I repeated the test multiple times, and it was always the reads that dropped while the writes stayed stable.

While that might not be a reason to avoid upgrading if you’re using AES-CCM, you might want to perform a few tests of your own. Mind you, you should be switching to AES-GCM anyhow.

As always, the raw data I gathered during my tests is available.

ZFS Root Setup with Alpine Linux

Running Alpine Linux on ZFS is nothing new, as there are multiple guides describing it. However, I found the official setups either too complicated when it comes to datasets, or they simply don’t work without legacy boot. What I needed was the simplest way to bring up ZFS on UEFI systems.

First of all, why ZFS? Well, for me it’s mostly a matter of detecting issues. While my main server is reasonably well maintained, the rest of my lab consists of retired computers I stopped using a long time ago. As such, hardware faults are not rare, and more than once disk errors went undetected. Hardware faults will still happen with ZFS, but at least I will know about them immediately, before they corrupt my backups too.

In this case, I will describe my way of bringing up an unencrypted ZFS setup with a separate ext4 boot partition. It requires a UEFI-capable system with secure boot disabled, as Alpine binaries are not signed.

Also, before we start, you’ll need the Alpine Linux Extended ISO for the ZFS installation to work properly. Don’t worry, the resulting installation will still be a minimal set of packages.

Once you boot from the ISO, you can proceed with the setup as you normally would, but answer [none] at the question about the installation disk.

setup-alpine

Since no disk was selected, we can proceed with the manual steps. First, we set up a few variables. While I usually like to use /dev/disk/by-id for this purpose, Alpine doesn’t install eudev by default. To avoid depending on it, I just use the good old /dev/sdX paths.

DISK=/dev/sda
POOL=Tank

Of course, we need some extra packages too. And while we’re at it, we might as well load ZFS drivers.

apk add zfs sgdisk e2fsprogs util-linux grub-efi
modprobe zfs

With this out of the way, we can partition the disk. In this example, I use three separate partitions. One for EFI, one for /boot, and lastly, one for ZFS.

sgdisk --zap-all             $DISK
sgdisk -n1:1M:+127M -t1:EF00 $DISK
sgdisk -n2:0:896M   -t2:8300 $DISK
sgdisk -n3:0:0      -t3:BF00 $DISK
sgdisk --print               $DISK
mdev -s

While having separate datasets for different directories sometimes makes sense, I usually have rather small installations, so putting everything into a single dataset works fine. Most of the parameters are the usual suspects, but do note I am using ashift=13 instead of the more common 12. My own testing has shown that on SSD drives this brings slightly better performance. If you are using this on spinning rust, you can use 12, but 13 will not hurt performance in any meaningful way, so you might as well leave it as is.
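As a reminder, ashift is the base-2 logarithm of the sector size ZFS will assume, so the two values correspond to:

```shell
echo $(( 2 ** 12 ))   # 4096-byte sectors, the common 4K case
echo $(( 2 ** 13 ))   # 8192-byte sectors, matching many SSD page sizes
```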

zpool create -f -o ashift=13 -o autotrim=on \
    -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=noauto -O mountpoint=/ -R /mnt ${POOL} ${DISK}3

Next is the boot partition, and this one will be ext4. Yes, having ZFS here would be “purer,” but I will sacrifice that purity for the ease of troubleshooting when something goes wrong.

yes | mkfs.ext4 ${DISK}2
mkdir /mnt/boot
mount -t ext4 ${DISK}2 /mnt/boot/

The last partition to format is EFI, and that has to be FAT32 in order to be bootable.

mkfs.vfat -F 32 -n EFI -i 4d65646f ${DISK}1
mkdir /mnt/boot/efi
mount -t vfat ${DISK}1 /mnt/boot/efi

With all that out of the way, we can finally install Alpine onto our disk using the handy setup-disk script. You can ignore the failed to get canonical path error as we’re going to manually adjust things later.

BOOTLOADER=grub setup-disk -v /mnt

With the system installed, we can chroot into it and continue the rest of the steps from within.

mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK POOL=$POOL ash --login

For GRUB, we need a small workaround first so it properly detects our pool.

sed -i "s|rpool=.*|rpool=$POOL|"  /etc/grub.d/10_linux

And then we can properly install the EFI bootloader.

apk add efibootmgr
mkdir -p /boot/efi/alpine/grub-bootdir/x86_64-efi/
grub-install --target=x86_64-efi \
  --boot-directory=/boot/efi/alpine/grub-bootdir/x86_64-efi/ \
  --efi-directory=/boot/efi \
  --bootloader-id=alpine
grub-mkconfig -o /boot/efi/alpine/grub-bootdir/x86_64-efi/grub/grub.cfg

And that’s it. We can now exit the chroot environment.

exit

Let’s unmount all our partitions.

umount -Rl /mnt
zpool export -a

And, after reboot, your system should come up with ZFS in place.

reboot

Qemu on Ubuntu

For a long time, I used VirtualBox as my virtualization software of choice. Not only did I have experience with it, but it also worked flawlessly, whether on Windows or on Linux. That is, until I faced The virtual machine has terminated unexpectedly during startup because of signal 6 error. No matter what, I couldn’t get VirtualBox working on a D34010WYK NUC with Ubuntu 22.04 LTS. Well, maybe it was time to revisit my virtualization platform of choice.

My needs were modest. It had to work without a screen (i.e., some form of remote access), it had to support NAT network interface, and it had to support sharing a directory with host (i.e., shared folder functionality).

When I went over all contenders that could work on top of an existing Linux installation, that left me with two — VirtualBox and QEMU. Since I already had issues with VirtualBox, that left me only with QEMU to try.

I remember using QEMU a lot back in the day before I switched to VirtualBox; it was OK but annoying to set up, and it had no host directory sharing. Well, things change. Not everything (it’s still focused on the command line), but among its features, it now has directory sharing.

To install QEMU, we need a few prerequisites, but actually fewer than I remembered:

apt install -y libvirt-daemon libvirt-daemon-system bridge-utils qemu-system-x86
systemctl enable libvirtd
systemctl start libvirtd
modprobe kvm-intel
adduser root kvm

When it comes to VM setup, you only need to create a disk (I prefer a raw one, but qcow2 and vmdk are also an option):

qemu-img create -f raw disk.raw 20G

After that, you can use qemu-system-x86_64 to boot VM for the first time:

qemu-system-x86_64 -cpu max -smp cpus=4 -m 16G \
  -drive file=disk.raw,format=raw \
  -boot d -cdrom /Temp/alpine-virt-3.17.1-x86_64.iso \
  -vnc :0 --enable-kvm

This will create a VM with 4 CPUs and 16 GB of RAM, and assign it the disk we created before. In addition, it will boot from the CD-ROM containing the installation media. To reach its screen, we can use VNC on port 5900 (the default).

Once the installation is done, we power off the VM, and any subsequent boot should omit the CD-ROM:

qemu-system-x86_64 -cpu max -smp cpus=4 -m 16G \
  -drive file=disk.raw,format=raw \
  -virtfs local,path=/shared,mount_tag=sf_shared,security_model=passthrough \
  -vnc :0 --enable-kvm

And that’s all there is to it. As long as our command is running, our VM is too. Of course, if we want to run it in the background, we can do that too by adding the -daemonize parameter.

Unlike with VirtualBox, we don’t need any extra drivers or guest additions within the guest (at least not for Alpine Linux). We just need to mount the share using the 9p file system:

mount -t 9p -o trans=virtio sf_shared /media/sf_shared

If we want to make it permanent, we can add the following line into /etc/fstab:

sf_shared /media/sf_shared 9p trans=virtio,allow_other 0 0

With this, we have a setup equivalent to the one I used VirtualBox for. And all that with fewer packages installed and definitely less disk space used.

And yes, I know QEMU is nothing new, and I remember playing with it way before I went with VirtualBox. What changed is the support QEMU enjoys within guest VMs and how trouble-free its setup has become.

In any case, I’ve solved my problem.

Start Application Without the X Bit Set

When one plays in many environments, occasional issues are to be expected. For me, one of those issues was starting a Linux application from a shared drive. For reasons I won’t get into now, except to say they were security-related, the executable (aka X) bit was removed. Thus, it wasn’t possible to start the application.

But, as always in Linux, there are multiple ways to skin a cat. For me, the method that did wonders was using the ld-linux dynamic loader. For example, to start the a.out application, one could use the following command:

/usr/lib64/ld-linux-x86-64.so.2 ./a.out

PS: This is for applications (i.e., files with an ELF header). If you want to run a script without the executable bit set, just call the interpreter directly, e.g.:

bash ./a.sh
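As a self-contained illustration (the file here is made up for the demo), the interpreter route works even when the executable bit is missing entirely:

```shell
# Create a small throwaway script and strip its executable bit.
printf '#!/bin/sh\necho hello\n' > /tmp/noexec.sh
chmod -x /tmp/noexec.sh

# Running it directly would fail with "Permission denied",
# but handing it to the interpreter works:
sh /tmp/noexec.sh   # prints: hello
```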

Sleep Until the Next Full Second

For a bash script of mine, I had to execute a certain command every second. While this command lasted less than a second, its duration was not always the same. Sometimes it would be done in 0.1 seconds, sometimes in 0.5, and rarely in more than 1 second (that’s curl download for you). This variance made using a simple sleep command a bit suboptimal.

What I needed was a command that would wait until the next full second. What I ended up with was this:

SLEEP_SEC=`printf "0.%03d" $((1000 - 10#$(date +%N | head -c 3)))`
if [[ "$SLEEP_SEC" == "0.1000" ]]; then SLEEP_SEC="1.000"; fi
sleep $SLEEP_SEC

The first line takes the current nanosecond count and trims it to the first three digits, which are then subtracted from 1000 and prefixed with 0.. This essentially gives a millisecond-precision count until the next full second. For example, if the current nanosecond counter is 389123544, this results in 0.611. And yes, you lose a bit of precision here as the number gets truncated, but the result will be precise enough.
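The same arithmetic with the example value hardcoded in place of $(date +%N):

```shell
NS=389123544                               # pretend this is the current nanosecond count
MS=$(echo $NS | head -c 3)                 # first three digits: 389
printf '0.%03d\n' $(( 1000 - 10#$MS ))     # prints 0.611
```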

If you wonder what 10# is doing here, it’s ensuring that numbers starting with 0 are not misinterpreted by bash as octal.
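A quick way to see what would go wrong without it (the sample value is chosen to trigger the issue):

```shell
# $((089)) would abort: bash treats the leading zero as octal,
# and "089" is not a valid octal number.
echo $(( 10#089 ))   # prints 89, parsed as plain decimal
```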

The if conditional that follows is just to ensure there is at least 1 second of sleep if the previous command took less than 1 ms. Rare occurrence but cheap enough to protect against.

Finally, we send this to the sleep command which will do its magic and allow the script to continue as the next second starts. And yes, it’s not a millisecond precise despite all this calculation as sleep is not meant to be precise to start with. However, it is consistent and it triggers within the same 2-3 milliseconds almost every time. And that was plenty precise for me.