Linux, Unix, and whatever they call that world these days

Speed Test from Command Line

I still like using SpeedTest by Ookla. It’s not the only kid on the block anymore, but old habits die hard. And honestly, it is still a decent speed test.

So, when I wanted to measure Internet speed from my Ubuntu server, I was happy to see it was available. And the installation is trivial:

sudo apt install speedtest-cli

However, on my Ubuntu 24.04 installation this doesn’t really help, since the result is always 0.00. Never mind; the source is available in the now-archived speedtest-cli repo. With one small change, the script works like a charm.

mkdir -p ~/Downloads
cd ~/Downloads
wget https://raw.githubusercontent.com/sivel/speedtest-cli/refs/heads/master/speedtest.py
chmod +x speedtest.py
sed -i '1s/python/python3/' speedtest.py
./speedtest.py
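Once it works interactively, the script is also handy for unattended logging. Its --simple flag (one of several output options listed in its --help, alongside --json and --csv) trims the output to just ping, download, and upload. A crontab sketch, assuming the script stayed in ~/Downloads:

```
# m h dom mon dow  command: log one measurement at the top of every hour
0 * * * * $HOME/Downloads/speedtest.py --simple >> $HOME/speedtest.log 2>&1
```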

Sudo Can Asterisk

With the sudo tool getting its Rust variant, it was bound, sooner or later, to develop an incompatibility that is in the eye of the beholder. That dubious honor fell onto the pwfeedback setting.

The most complete reporting was already done by Brodie Robertson, so watch his video for details. Suffice it to say, I am firmly in the camp of those who believe that the new behavior is correct. Let them have asterisks!

But if you don’t want to wait for the bright future and want this password feedback behavior now, it is really easy to achieve.

echo "Defaults pwfeedback" | sudo tee /etc/sudoers.d/pwfeedback >/dev/null
sudo chmod 440 /etc/sudoers.d/pwfeedback
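A syntax error in a sudoers drop-in can lock you out of sudo entirely, so it is worth validating the file; visudo’s check mode does exactly that without applying anything:

```shell
# -c runs visudo in check-only mode; -f points it at a specific file
sudo visudo -c -f /etc/sudoers.d/pwfeedback
```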

Now you can experience the future today.

Fixing MPV Playback from Samba Share

Running KDE is usually quite trouble-free. However, on my new install I faced an unusual problem: my photos and media on a Samba share simply would not open. I could see text files just fine, but no movies for me.

Interestingly, this behavior seemed limited to MPV and Haruna; other media players were unaffected by whatever afflicted those two. It took a bit of investigation, but I traced the issue to the following line in their .desktop files:

X-KDE-Protocols=ftp,http,https,mms,rtmp,rtsp,sftp,smb,srt,rist,webdav,webdavs

While they both claim to understand the Samba protocol (smb), removing it from the supported protocols actually solved the issue. It seems my Samba server was simply not to their liking. Even after a bit of investigation, I am still puzzled as to why exactly, especially since it all works just fine on another computer. Some dependency is probably missing and, while finding it would be possible, it would also waste my time when I already have a solution.

Just removing smb works like a charm:

sudo sed -i 's/,smb,/,/g' /usr/share/applications/mpv.desktop
sudo sed -i 's/,smb,/,/g' /usr/share/applications/org.kde.haruna.desktop

It’s a temporary fix but Kubuntu 26.04 is just around the corner anyhow.
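As a side note, package updates will eventually overwrite those files. The same edit can be done on per-user copies instead, which KDE prefers over the system-wide ones; a sketch:

```shell
# Patch user-local copies of the .desktop files; skips anything not installed
mkdir -p ~/.local/share/applications
for f in mpv.desktop org.kde.haruna.desktop; do
    [ -e "/usr/share/applications/$f" ] || continue
    cp "/usr/share/applications/$f" ~/.local/share/applications/
    sed -i 's/,smb,/,/g' ~/.local/share/applications/"$f"
done
```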

How Big Should a Linux EFI Be

My “standard” partition scheme is always the same: EFI, Boot, Swap, Data. I have been toying with the idea of swapping EFI and Boot around, but I have stuck with the same order for a while now. However, the sizes have changed over time. The one that changed the most is EFI.

Under Linux, assuming you have a separate Boot partition, you can keep EFI really small. I ran it at 32 MB for a while without observing any negative issues. Until I did.

While most of the time Linux doesn’t really use the EFI partition for much, things get interesting with firmware updates (fwupdmgr update). The only way for Linux (or any other OS, for that matter) to pass files to UEFI is a common partition. That common partition can only be the FAT32 EFI one. And, depending on your system, those files might need over 64 MB.

Translated: your EFI partition needs to be able to hold more than 64 MB. How much more? Sticking to powers of two is an unofficial standard, and thus 128 MB is the next good spot.
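To sanity-check your own system, it helps to see how full the EFI partition actually is; the mount point is /boot/efi on most distributions, with /efi being the other common choice:

```shell
# Show size and current usage of the mounted EFI system partition
df -h /boot/efi 2>/dev/null || df -h /efi
```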

But is that future-proof? Considering BIOS images are 32 MB these days (256 Mbit), it’ll take a while before 128 MB is not enough.

I am almost certain that more space will be needed at some undefined time in the future. However, I won’t lose any sleep over it for at least a couple of years.


PS: If you are dual-booting Windows, I would say doubling the partition might be a good idea, as Windows does have a larger footprint on the EFI partition.

Updating Framework's QMK Firmware

The new BIOS for the Framework 16 also brought a problem: on every boot I got a message that my keyboard firmware was outdated. It was true, but also an annoying warning, as it paused the boot process. A normal person would just update the keyboard, but I had a few customizations and thus could not do the same.

What customizations? Well…

  • Entering QMK firmware by holding CapsLock (keyboard) or NumLock (numpad)
  • Key to mute microphone
  • Disabling airplane mode key
  • Reconfiguring numpad to allow volume and media controls
  • Using brightness as a NumLock signal
  • Simplifying background light setup

Could I live without those? Yes, but I would still prefer not to. So I had to redo my changes on top of v0.3.1, which was simple enough, as I pretty much just cherry-picked them directly. Maybe for the next version I’ll also squash a few of them, but I was lazy this time.
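For reference, flashing with the QMK CLI looks roughly like this; the keyboard names are the ones from my setup, while the keymap name here is a placeholder for whatever your custom keymap is called:

```shell
# Flash each device with its own firmware; the -kb value selects the target
qmk flash -kb framework/numpad -km mykeymap
qmk flash -kb framework/ansi -km mykeymap
```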

A short flash later, my numpad had newer firmware with the same behavior. Happily, I proceeded to flash the keyboard but, in my excitement, I forgot to change my command line from framework/numpad to framework/ansi. Thus, I flashed the numpad firmware onto my keyboard. An annoying mistake, but easily corrected by just reflashing the correct firmware on top of it. Or so I thought.

My keyboard still didn’t work even after the correct firmware had been flashed. Some keys did produce something, but it was never the correct letter. Did I brick my keyboard? Well, fortunately for me, with QMK the answer is almost never yes.

QMK flashing doesn’t erase the full EEPROM when a new version is loaded. So, if you end up corrupting the whole thing by flashing something very incompatible, you cannot just flash new firmware and be OK. What you need is to erase the old EEPROM contents altogether. There are a few tools to do it, but I like this one.

Once EEPROM was erased, flashing my keyboard firmware worked like a charm.

Using ZFS in Docker

I have most of the workloads on my server set up in Docker. It makes for self-documenting configuration, an easier move to another machine, and fewer dependencies on the underlying OS. But one workload I kept on the server itself: reporting.

The way my scripts were written, they required proper ZFS access. Now, I could have adjusted the scripts to loop back via SSH to the OS itself, but there was an easier way.

Docker allows exposing devices and, in the case of ZFS, all you need is to point the container toward /dev/zfs:

    devices:
      - /dev/zfs:/dev/zfs

Now your scripts in the container will have proper ZFS access.
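For context, a minimal compose service built around that snippet might look like this; the image and command are placeholders, and the container still needs the ZFS userland tools inside it to talk to /dev/zfs:

```yaml
services:
  zfs-report:
    image: ubuntu:24.04        # placeholder; install zfsutils-linux in your image
    command: zfs list          # placeholder for your reporting script
    devices:
      - /dev/zfs:/dev/zfs      # the ZFS control device from the host
```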

Forcing a reboot

After messing with my remote server, there came a time for a reboot. Simple enough, but this time it ended in an error:

Call to Reboot failed: Connection timed out

I’ve been using Linux servers for decades now and I had never faced this issue. Fortunately, somebody at Unix StackExchange had.

The solution was to manually enter a few letters into /proc/sysrq-trigger, one letter at a time. Note that this needs a root shell; sudo does not apply to the shell’s own redirection, so use sudo -i first or pipe through sudo tee.

echo s > /proc/sysrq-trigger ; echo u > /proc/sysrq-trigger ; echo b > /proc/sysrq-trigger

This (attempts to) execute three distinct commands:

  • s: syncs all file systems
  • u: remounts all file systems read-only
  • b: reboots the system, in an aggressive manner

If you are curious about what other things you can do, kernel.org has a nice documentation page.
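One caveat: these triggers only work if the kernel permits them. The current policy is a single number; 1 enables everything, 0 disables SysRq, and anything else is a bitmask (described on the same kernel.org page):

```shell
# Read the current Magic SysRq policy
cat /proc/sys/kernel/sysrq
```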

Reconnecting HDMI


Switching my media PC from Windows to Bazzite went awfully uneventfully. No issues whatsoever. At least for a while.

My media computer is connected to the TV courtesy of a Type-C HDMI output. And I verified the heck out of it: the computer’s output worked flawlessly with the TV.

However, once the TV had been off for a while, turning it back on would result in a “No Signal” message. At first I thought it was sleep, but no; the computer was still reachable via network, and it would all start working after a reboot. A lot of troubleshooting later, I found a pattern, courtesy of the DisplayPort status:

cat /sys/class/drm/card1-DP-2/status

If I ran it while the TV was on, I would see a connected status. If I turned the TV off, I would still see connected. But once I turned the TV back on (after some time had passed), the status would change to disconnected. And nothing I tried over the network helped. Well, nothing but one thing.

If you have ever used the Ctrl+Alt+Fx keys, you have seen virtual terminals in action. Most of the time we only do stuff on the first terminal, but the other terminals are sometimes handy too. Switching to another terminal would actually reset the connection every time. And that is something you can do from the command line.

chvt 2
sleep 1
chvt 1

Figuring that out was the first part. While I now knew how to recover the connection to my TV, having to do it every time over the network was annoying. I wanted something automatic.

The first step was creating a script at /usr/local/bin/dp-reconnect:

#!/bin/bash

# Exits the whole script successfully as soon as one DP output is connected
function status() {
    for DP_PATH in /sys/class/drm/card1-DP-*; do
        DP_STATUS=$(< "$DP_PATH/status")
        if [[ "$DP_STATUS" == "connected" ]]; then
            echo "Display $( basename "$DP_PATH" ) connected"
            exit 0
        fi
    done
}

status
echo "No connected displays"

chvt 2
sleep 1
chvt 1

status
exit 1

The next step is to create a service definition at /etc/systemd/system/dp-reconnect.service:

[Unit]
Description=Switch terminal if no DP is connected

[Service]
Type=oneshot
ExecStart=/usr/local/bin/dp-reconnect

And lastly, we create a timer at /etc/systemd/system/dp-reconnect.timer:

[Unit]
Description=Switch terminal if no DP is connected

[Timer]
OnCalendar=*-*-* *:*:00
Persistent=true
AccuracySec=1s

[Install]
WantedBy=timers.target

With all files in place, the only remaining task is to enable the timer.

sudo systemctl enable --now dp-reconnect.timer

With everything in place, if the display gets disconnected, the script will reconnect it within a minute. Not perfect, but lightweight enough not to be a serious hassle.

Enabling Kubuntu Fingerprint Support

After installing Kubuntu on my Framework laptop, I found almost all hardware fully supported. It took me a while to notice that my fingerprint scanner was the exception. Mind you, the hardware itself was supported, but KDE simply didn’t show an interface to use it. Well, we cannot have that.


Fortunately, enabling fingerprint support is as simple as running two commands.

sudo apt install -y fprintd libpam-fprintd
sudo pam-auth-update

The first command installs the software support, while the second allows the use of fingerprints for authentication. However, on Kubuntu you will notice that this sometimes works and sometimes doesn’t. The way the default file is written (/etc/pam.d/common-auth), the first failure results in the module being ignored until a password is entered. If you want slightly more permissive fingerprint handling, you can bump max-tries a bit.

sudo sed -i 's/pam_fprintd.so max-tries=1/pam_fprintd.so max-tries=3/g' /etc/pam.d/common-auth

And there you have it: fingerprint support that allows for some leeway.
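If things still misbehave, fprintd ships command-line tools that talk to the scanner directly, which is handy for ruling KDE out:

```shell
fprintd-enroll         # enroll a finger for the current user (prompts for touches)
fprintd-verify         # check a touch against the enrolled prints
fprintd-list "$USER"   # show which fingers are enrolled
```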

Warning Me Softly

I already wrote about post-quantum cryptography before. If you check the dates, you’ll see that this is not a new topic. However, it’s still quite common to see standard key exchange for SSH sessions. Well, this might be about to change.

With version 10.1, OpenSSH will present you with the following warning:

** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html

In reality, this changes nothing, as everything will continue to work as before. But, knowing human nature, I foresee a lot of people moving to a newer key exchange just to avoid the warning. In no time, projects will face their security review teams. And if the security team doesn’t like warnings, projects will oblige.

My own network is in surprisingly good shape. Most of my SSH connections already use the sntrup761x25519-sha512 key exchange algorithm. However, there are two notable exceptions: Windows and Mikrotik.
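If you want to check where your own machines stand, the client’s supported algorithms are one -Q away, and the post-quantum hybrids are easy to spot by name:

```shell
# List every key exchange algorithm this OpenSSH client supports;
# look for sntrup761x25519-sha512 (or mlkem768x25519-sha256 on newer builds)
ssh -Q kex
```

For a specific server, connecting with -v and watching for the "kex: algorithm:" debug line shows what was actually negotiated.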

Mikrotik I pretty much expected. It took them ages to support Ed25519, so I don’t doubt I will see the warning for a long while before they update their software. I love Mikrotik devices, but boy, do they move slowly.

But Windows 11 came as a surprise. They still advertise curve25519-sha256 at best. I guess all that time spent making start menu worse prevented them from upgrading their crypto. I predict that, as always, when warning starts, Microsoft forums will be full of people saying that warning is wrong and that Windows can do no wrong. Only to eventually be dragged into the future.