ZFS GUID Galore

I am a big fan of ZFS and I have it installed on every Linux/Unix machine I own, including the machine I use for playing with Docker containers. And it was on that machine that I saw a bunch of ZFS snapshots with weird random hexadecimal names. And it wasn’t one snapshot, not two - it was hundreds of them. So I deleted them.
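For the curious, seeing (and removing) them goes something like this; the dataset names below are placeholders matching my pool layout, yours will differ:

zfs list -t snapshot -o name | grep system/root
zfs destroy system/root/DATASET@SNAPSHOT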

Guess what, the next Docker build started complaining:

error creating zfs mount: mount system/root/a4f339f95f920b918bb23290a3e831dc22477bc76ef0d3496224fc424e65ec67:/var/lib/docker/zfs/graph/a4f339f95f920b918bb23290a3e831dc22477bc76ef0d3496224fc424e65ec67: no such file or directory

Well, I guess that sorted out who was to blame for all those weirdly named snapshots. You see, Docker gets smart if it detects ZFS and does a lot of clever things. Unfortunately, those clever things result in a lot of snapshots. And I don’t like people (or software) messing with my ZFS. And obviously Docker didn’t like me messing with it either. :)

Fortunately, the solution is easy enough. One should reconfigure Docker to use the overlay2 storage driver instead of the ZFS one, and a short daemon restart later one can continue playing with Docker without having to deal with ZFS snapshot hell.
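In practice that boils down to something like the following (assuming a systemd-based system and that daemon.json doesn’t already contain other settings you’d be overwriting):

# write the storage driver configuration
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF

# restart the daemon to pick it up
sudo systemctl restart docker

Do note that images and containers created under the old driver won’t be visible after the switch.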

Now if only I could remember this when I reinstall the OS…

Change the Interface's MAC Address

A MAC address should be unique for each network card - whether it’s wireless or wired. It’s this uniqueness that leads some networks to use it to distinguish an authorized user from an unauthorized one. If you ever had time-limited access to a network, chances are your MAC address was used to block you once the time was up.

I will leave it up to you to think of scenarios, but I find randomizing my MAC address a useful option in Linux. Therefore, I created a script to automatically generate a new one. The script randomizes the last three octets of the MAC address while leaving the first three octets (the manufacturer prefix) as they originally were. In effect, this makes your computer present itself to the network as a new adapter from the same manufacturer.

#!/bin/bash

INTERFACE="eth0"

# MAC currently in use, as reported by the interface
CURRENT_MAC=$(/usr/sbin/ifconfig "$INTERFACE" | grep ether | awk '{print $2}')

# permanent (factory) MAC, so the prefix stays authentic even after changes
ORIGINAL_MAC=$(/usr/sbin/ethtool -P "$INTERFACE" | rev | cut -d' ' -f1 | rev)
ORIGINAL_MAC_PREFIX=$(echo "$ORIGINAL_MAC" | cut -d: -f1-3)

# keep the manufacturer prefix, randomize the last three octets
NEW_MAC="$ORIGINAL_MAC_PREFIX$(/usr/bin/hexdump -n3 -e '3/1 ":%02x"' /dev/urandom)"

echo "Current MAC : $CURRENT_MAC"
echo "Original MAC: $ORIGINAL_MAC"
echo "New MAC ....: $NEW_MAC"

# the interface has to be down before its MAC can be changed
sudo /usr/sbin/ifconfig "$INTERFACE" down
sudo /usr/sbin/ifconfig "$INTERFACE" hw ether "$NEW_MAC"
sudo /usr/sbin/ifconfig "$INTERFACE" up
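To confirm the change stuck, a quick look at the interface afterward should show the new address:

/usr/sbin/ifconfig eth0 | grep ether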

VirtualBox Host I/O Cache

My XigmaNAS-based file server is usually quite a speedy beast. Between an 8-disk ZFS RAID-Z2 array and LACP, it can pretty much handle everything I need for my home. Except VirtualBox.

When I tried using its built-in VirtualBox, the guest virtual machine was really slow to install. Disk transfers were in kilobytes per second. And it wasn’t a disk speed problem, as I could copy files in the background in excess of 100 MB/s. After a bit of investigation, the culprit was found in the way VirtualBox writes to disk. Every write is essentially flushed immediately. Combine that with ZFS on spinning rust and you have ridiculously low performance.

There are essentially two ways to solve this. The first one is to make ZFS handle such a synchronous write pattern better. Adding an SSD log device (SLOG) to my pool would do wonders. However, considering this was the only load requiring it, I didn’t want to go through either the cost or the effort of setting up mirrored log devices.
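For completeness, that route would look roughly like this (pool and device names here are placeholders, not my actual setup):

zpool add tank log mirror /dev/ada8 /dev/ada9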

Another fix is much easier and comes at no cost. I just enabled Use Host I/O Cache for my virtual storage controller and the speed went through the roof. Yes, this solution makes host crashes really dangerous as all cached data will be lost. And that’s a few seconds’ worth of important guest file system data with a potential to cause corruption. You should really think twice before turning it on.
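If you prefer the command line over the GUI, the same setting can be flipped with VBoxManage; the VM and controller names below are made up, yours can be found via VBoxManage showvminfo:

VBoxManage storagectl "MyVM" --name "SATA" --hostiocache on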

However, for my VM it proved to be good enough. All data used by that VM lived on network shares to start with, and recovering from a corrupted OS didn’t bother me much as I had scripted the whole setup anyhow.

A low-cost solution for when you can handle the potential for data loss.

Bimil failing with FontFamilyNotFound

While Bimil is primarily a Windows application, I use it regularly on Linux. However, when I tried running it on a freshly installed Linux Mint 19.3, I was greeted with a quick crash.

Running it from the console shone a bit more light onto the situation, as the following line was quite noticeable: [ERROR] FATAL UNHANDLED EXCEPTION: System.ArgumentException: The requested FontFamily could not be found [GDI+ status: FontFamilyNotFound].

As Bimil is a Windows Forms application written in C#, running it on Linux under Mono requires a few extra packages. Most of them are actually taken care of with the installation of mono-complete. However, on Linux Mint, I found one other dependency I was not aware of - Microsoft fonts.

Solution?

sudo apt-get install ttf-mscorefonts-installer
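The package will ask you to accept Microsoft’s EULA during installation. Once it’s done, one way to verify the fonts are visible is through fontconfig:

fc-list | grep -i arial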

Using Let's Encrypt with Certificate Based Authentication

For one of my sites I wanted to use TLS client authentication. It’s easy enough to set up in Apache:

<VirtualHost *:80>
  …
  RewriteEngine On
  RewriteRule (.*) https://%{SERVER_NAME}$1 [R=301,L]
</VirtualHost>

<VirtualHost *:443>
  …
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
  SSLVerifyClient require
  SSLVerifyDepth 1
  SSLCACertificateFile /srv/apache/data/root.crt
  SSLRequire (%{SSL_CLIENT_S_DN_CN} == "Me") \
          || (%{SSL_CLIENT_S_DN_CN} == "Myself") \
          || (%{SSL_CLIENT_S_DN_CN} == "Irene")
  SSLUserName SSL_CLIENT_S_DN_CN
</VirtualHost>

And this worked just fine for 90 days or so. More precisely, it worked until Certbot had to renew my Let’s Encrypt certificate.

Guess what? Let’s Encrypt has no knowledge of my client certificates and thus the TLS handshake fails. The error message is not really helpful as “tls: unexpected message” doesn’t really point you down the correct path. Fortunately, I actually remembered my certificate shenanigans and thus was able to debug it quite quickly. Verifying the issue was just as easy - dropping the client certificate requirement made my renewal work again.

However, dropping the certificate requirement every month or two would not work for me. I wanted something that would work the same as automatic renewal does for my other Let’s Encrypt certificates. And no, you cannot set up the .well-known directory to use different validation within the HTTPS host. With TLS 1.3, you cannot change client certificate requirements once the connection is established. You’ll just get a “Cannot perform Post-Handshake Authentication” error.

But you know where you can play with locations to your heart’s content? In the HTTP section. Instead of just redirecting everything to HTTPS, you want to carve a small hole for Certbot’s verification.

<VirtualHost *:80>
  …
  RewriteEngine On
  RewriteRule (.*) https://%{SERVER_NAME}$1 [R=301,L]
  <Location "/.well-known/">
    RewriteEngine Off
  </Location>
</VirtualHost>

Now Let’s Encrypt verifies renewal requests using plain HTTP, which is not really a security issue as the verification file is completely random and generated anew each time.
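To confirm the renewal path works without burning through real certificates, Certbot’s dry run against the staging environment comes in handy:

sudo certbot renew --dry-run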