Linux, Unix, and whatever they call that world these days

Using Let's Encrypt with Certificate Based Authentication

For one of my sites I wanted to use TLS client authentication. It's easy enough to set up in Apache:

<VirtualHost *:80>
  …
  RewriteEngine On
  RewriteRule (.*) https://%{SERVER_NAME}$1 [R=301,L]
</VirtualHost>

<VirtualHost *:443>
  …
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
  SSLVerifyClient require
  SSLVerifyDepth 1
  SSLCACertificateFile /srv/apache/data/root.crt
  SSLRequire (%{SSL_CLIENT_S_DN_CN} == "Me") \
          || (%{SSL_CLIENT_S_DN_CN} == "Myself") \
          || (%{SSL_CLIENT_S_DN_CN} == "Irene")
  SSLUserName SSL_CLIENT_S_DN_CN
</VirtualHost>


And this worked just fine for 90 days or so. More precisely, it worked until CertBot had to renew my Let's Encrypt certificate.

Guess what? Let's Encrypt has no knowledge of my client certificate and thus the handshake fails. The error message is not really helpful either, as "tls: unexpected message" doesn't exactly point you down the correct path. Fortunately, I remembered my certificate shenanigans and thus was able to debug it quite quickly. Verifying the issue was easy: dropping the client certificate requirement made my renewal work again.

However, dropping certificates every month or two would not work for me. I wanted something that would work the same as the automatic renewal does for my other Let's Encrypt certificates. And no, you cannot just exempt the .well-known directory from client verification on the HTTPS side. With TLS 1.3, you cannot change the client certificate requirements once the connection is established. You'll just get a "Cannot perform Post-Handshake Authentication" error.

But, you know where you can play with locations to your heart's content? In the HTTP section. Instead of just redirecting everything to HTTPS, you want to carve out a small hole for CertBot verification.

<VirtualHost *:80>
  …
  RewriteEngine On
  RewriteRule (.*) https://%{SERVER_NAME}$1 [R=301,L]
  <Location "/.well-known/">
    RewriteEngine Off
  </Location>
</VirtualHost>

Now Let's Encrypt verifies renewal requests using HTTP, which is not really a security issue as the verification file is completely random and generated anew each time.
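
If CertBot is the client doing the renewals, a dry run against the staging environment is a low-risk way to confirm the carve-out works before the next real renewal comes due (assuming an otherwise standard certbot setup):

sudo certbot renew --dry-run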

VPN-only Internet Access on Linux Mint 19.3 via Private Internet Access

Setting up Private Internet Access VPN is usually not a problem these days as a Linux version is readily available among the supported clients. However, such an installation requires a GUI. What if we don't want or need one?

For the setup to work independently of the GUI, one approach is to use the OpenVPN client that's usually installed by default. Also needed are PIA's IP-based OpenVPN configuration files. While this might cause issues down the road if that IP changes, it does help a lot with security as we won't need to poke an unencrypted hole (and thus leak information) for DNS.

From the PIA configuration archive, extract your choice of .ovpn file (usually going with the one physically closest to you will give you the best results). There is no need to extract the .crt and .pem files as the configuration has the certificates embedded.

The rest of the VPN configuration needs to be done from Bash:

sudo cp ~/Downloads/openvpn-ip/^^US\ Seattle^^.ovpn /etc/openvpn/client/pia.conf

echo "auth-user-pass /etc/openvpn/client/pia.login" | sudo tee -a /etc/openvpn/client/pia.conf
echo "mssfix 1400" | sudo tee -a /etc/openvpn/client/pia.conf
echo "dhcp-option DNS 209.222.18.218" | sudo tee -a /etc/openvpn/client/pia.conf
echo "dhcp-option DNS 209.222.18.222" | sudo tee -a /etc/openvpn/client/pia.conf
echo "script-security 2" | sudo tee -a /etc/openvpn/client/pia.conf
echo "up /etc/openvpn/update-resolv-conf" | sudo tee -a /etc/openvpn/client/pia.conf
echo "down /etc/openvpn/update-resolv-conf" | sudo tee -a /etc/openvpn/client/pia.conf

The basic VPN setup is already complete but we still need to set up our login (replacing username and password with the actual values):

unset HISTFILE
echo '^^username^^' | sudo tee -a /etc/openvpn/client/pia.login
echo '^^password^^' | sudo tee -a /etc/openvpn/client/pia.login
sudo chmod 400 /etc/openvpn/client/pia.login

Firewall rules are there to allow data flow only via the VPN's tun0 interface, with only encrypted VPN traffic allowed out on port 1198.

sudo sed -i 's/IPV6=yes/IPV6=no/' /etc/default/ufw
yes | sudo ufw reset
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out on tun0
sudo ufw allow out on ^^eth0^^ proto udp to `cat /etc/openvpn/client/pia.conf \
    | grep "^remote " | grep -o ' [^ ]* '` port 1198
sudo ufw disable
sudo ufw enable
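
Before testing the connection, it doesn't hurt to double-check what ufw ended up with (purely an optional sanity check):

sudo ufw status verbose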

To test the VPN connection, execute:

sudo openvpn --config /etc/openvpn/client/pia.conf

Assuming the test was successful (i.e. it resulted in the Initialization Sequence Completed message), we can further make sure data is actually traversing the VPN. I've found whatismyipaddress.com quite helpful here. Just check if the IP detected is different than the IP you usually get without VPN.
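
The same check can be done from the command line too, assuming curl is installed and you don't mind asking an external service such as ifconfig.me:

curl https://ifconfig.me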

Stop the test connection using Ctrl+C and proceed to configure OpenVPN’s auto-startup. Reboot is there just to test if auto-startup works.

sudo systemctl enable openvpn-client@pia
sudo reboot

This should give you quite secure setup without the need for GUI.

[2020-07-06: Works with Linux Mint 20 too.] [2020-08-07: Added step to disable IPv6.]

Colored CPU usage in TMUX


Those spending a lot of time in the Linux command line know the value of a good terminal multiplexer. Even in the age of GUI terminal windows, using tmux will speed up your work, while the many customizations it offers make for a comfortable environment.

Knowing its flexibility, for my Minecraft server, I wanted the tmux status line to include something I had never tried before - CPU usage. Yes, one could use top, htop, glances, or any of the many other monitoring tools but I didn't want the whole screen occupied by it. I wanted just a small unobtrusive note in the corner.

There are a multitude of ways to retrieve CPU usage but I found vmstat works well for me and thus the first version just placed its parsed output in the bottom-right corner:

set-option -g status-interval 1
set-window-option -g status-right ' #( vmstat 1 2 | tail -1 | awk "{ printf 100-\$15; }" )% '

While this satisfied my original requirements, I started thinking about colors. Could I make it more colorful?

A search immediately found me a few tmux plugins and one of them looked really promising. While using it would be perfectly fine for my use case, I don't like dependencies and thus I tried to find a tmux-only solution for colors.

A bit of testing later, I noticed shell commands (enclosed in #()) are processed before the formatting directives. So, if I manage to return the formatting from the shell, tmux will do further processing and colors will be there. A bit of awk-ing later and this is what I came up with.

set-option -g status-interval 1
set-window-option -g status-right ' #( vmstat 1 2 | tail -1 | awk "{ USAGE=100-\$15; if (USAGE < 20) { printf \"#[fg=green,bright]\"; } else if (USAGE < 80) { printf \"#[fg=yellow,bright]\"; } else { printf \"#[bg=red,fg=white,bright]\"; }; print \" \" USAGE \"% \" }" )'

It's still vmstat-based output but now awk prepends color formatting strings for different usage levels. If usage is lower than 20%, a bright green foreground is used; lower than 80% results in bright yellow text; and anything higher results in a bright red background.

It’s a simple color-coded way of showing CPU usage at a glance.


PS: If you want extra status line text after the CPU percentage, consider adding #[bg=default,fg=default] to stop the color from "bleeding" into the remainder of the line.
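
As a sketch of that, assuming you also want a clock after the CPU readout, the reset would go right after the shell command:

set-window-option -g status-right ' #( vmstat 1 2 | tail -1 | awk "{ USAGE=100-\$15; if (USAGE < 20) { printf \"#[fg=green,bright]\"; } else if (USAGE < 80) { printf \"#[fg=yellow,bright]\"; } else { printf \"#[bg=red,fg=white,bright]\"; }; print \" \" USAGE \"% \" }" )#[bg=default,fg=default] %H:%M '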

LACP Between Mikrotik and XigmaNAS


My Mikrotik RB1100AHx4 network router has only 1 Gbps ports and recently I have started noticing the need for a bit more bandwidth toward my NAS. As this router has no SFP+ ports, there is no easy way to increase speed. However, since my NAS machine has 4 network ports, there is a way forward - LACP.

While there are many ways to aggregate network links, I found 802.3ad link aggregation to be the way to go. Not only do all modern network devices support it, but it also comes with almost no cost in terms of CPU processing. To make it better, it also allows unbalanced links (e.g. the router side is LACP while the NAS is just a single-port setup). While this is not a desired end state, it makes both the initial setup and potential later troubleshooting much easier. While not the focus of my setup, the additional resilience of the link to cable damage is nothing to frown about either.

To create link aggregation on Mikrotik's side, one has to first remove all ports from the existing bridge (Bridge, Ports). Then we can create the new bond interface (Interfaces, Bonding) with mode set to 802.3ad and transmit hash policy set to layer 2 and 3. All interfaces we want to bond together should be listed under slaves. Assuming we want a 3-port bond, it would be something like this:

/interface bonding
add name=bonding-nas slaves=^^ether1^^,^^ether2^^,^^ether3^^ mode=802.3ad transmit-hash-policy=layer-2-and-3

Once the bond interface is created, just add it to a bridge as you would any other interface (Bridge, Ports) and your work on the Mikrotik side is done.
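
For reference, the terminal equivalent would be something along these lines (bridge1 is just a placeholder for whatever your bridge is actually named):

/interface bridge port
add bridge=^^bridge1^^ interface=bonding-nas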


On XigmaNAS, the setup is equally easy. Due to LACP gracefully handling our currently asymmetric link, we can simply connect to the web GUI as we normally would. There, go into Network, Interface Management, LAGG tab. Add a new interface, selecting LACP (Link Aggregation Control Protocol) as the protocol and clicking all the ports you want to participate. Then go back to Network, Interface Management and select the newly created interface (lagg0) from LAN's dropdown menu. Once you save the settings, you can go ahead and reboot your NAS.

Enjoy your bandwidth and resilience increase!

While LACP is nice, don't expect wonders from it. As far as resiliency goes, things are as you would expect them - if you set it up with 3 ports as I did, you can lose 2 cables before connectivity is gone. For bandwidth, things are not as simple. Just having an 802.3ad link makes no difference if you have just a single connection. All packets are still going to travel over only a single cable (mandatory to avoid packet reordering). Since most test tools use just one connection, you will see no improvement.

Even when you are dealing with multiple connections, you depend on the gods of hashing. As your hash space is only N wide (3 for the examples above), you might have multiple connections sharing the same hash and thus the same physical link while the rest of the links stay bored. Only if you have many different connections can you count on approximately equal traffic distribution and full benefits. For my NAS setup this is not really an issue as I can count on multiple connections from either one (SMB3) or multiple machines.

Not a magic bullet, but it's a cheap way to increase bandwidth as long as you are aware of its limitations.


PS: Once the setup is done, you can try setting a fast LACP rate (lacp-rate=1s). This will enable faster detection of link failures when the interface itself is up but there is a problem between devices (e.g. you have a damaged cable or a misbehaving switch in between). In reality, you probably don't need it but you might as well turn it on if your network equipment supports it.

PPS: An asymmetric port setup is beneficial only during deployment. If you set 4-port LACP on one side and 2-port LACP on the other, you are not reaping any benefits from the two extra ports. When you are done with the configuration, you always want the same port count on both network devices.

ZFS Ubuntu Server 19.10 Without Encryption

I already wrote on how to set up ZFS on both desktop and server Ubuntu 19.10. And both guides have two things in common. They use UEFI booting and they both make use of encryption. But what if there is a good reason why we don't want encryption? What if there is no way to enter a password? How do we install the server then? Well, the procedure is quite similar to the one already explained.

Entering a root prompt from within the Ubuntu Server installation is not hard if you know where to look. Just find Enter Shell behind the Help menu item (Shift+Tab comes in handy).

The very first step should be setting up a few variables - disk, pool, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/^^ata_disk^^
POOL=^^ubuntu^^
HOST=^^server^^
USER=^^user^^

To start the fun we need the debootstrap and zfsutils-linux packages. Unlike the desktop installation, the ZFS package is not installed by default.

apt install --yes debootstrap zfsutils-linux

The general idea of my disk setup is to maximize the amount of space available for the pool with the minimum of supporting partitions. If you are planning to have multiple kernels, increasing the boot partition size might be a good idea. The major change compared to my previous guide is the partition numbering. While having a partition layout different than the partition order had its advantages, a lot of partition editing tools would simply "correct" the partition order to match the layout and thus cause issues down the road.

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+127M -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+384M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:BF01 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Without any encryption, we're now ready to create the system ZFS pool.

zpool create -o ashift=12 -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL $DISK-part3
zfs create -o canmount=noauto -o mountpoint=/ $POOL/root
zfs mount $POOL/root
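
Purely as a sanity check, listing the datasets at this point should show the root dataset mounted under /mnt/install:

zfs list -r -o name,mountpoint,canmount $POOL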

Assuming UEFI boot, two additional partitions are needed: one for EFI and one for booting. Unlike what you get with the official guide, here I don't have a ZFS pool for the boot partition but plain old ext4. I find potential fixups work better that way and boot compatibility is better too. If you are thinking about mirroring, making it bigger and using ZFS might be a good idea. For a single disk, ext4 will do.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

Bootstrapping Ubuntu on the newly created pool is next. As we're dealing with a server, you can consider using --variant=minbase rather than the full base system. I personally don't see much value in that as other packages get installed as dependencies anyhow. In any case, this will take a while.

debootstrap eoan /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu-server/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install /usr/bin/env DISK=$DISK POOL=$POOL USER=$USER bash --login

Let's not forget to set up the locale and time zone. If you opted for minbase you can either skip this step or manually install the locales and tzdata packages.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by boot environment packages.

apt install --yes zfs-initramfs grub-efi-amd64-signed shim-signed

To mount EFI and boot partitions, we need to do some fstab setup too:

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
cat /etc/fstab

Now we get grub started and update our boot environment. Due to Ubuntu 19.10 having some kernel version kerfuffle, we need to manually create the initramfs image. As before, treat any boot cryptsetup discovery errors during mkinitramfs and update-initramfs as OK.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL
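
If you want to double-check that the ZFS bits actually made it into the image, lsinitramfs can list its contents (an optional check, assuming the standard image naming):

lsinitramfs /boot/initrd.img-$KERNEL | grep -i zfs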

Grub update is what makes EFI tick.

update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy

Since we're dealing with a computer that will most probably be used without a screen, it makes sense to install OpenSSH Server.

apt install --yes openssh-server

I also prefer to allow remote root login. Yes, you can create a sudo user and have root unreachable but that's just swapping one security issue for another. A root user secured with a key is plenty safe.

sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
mkdir /root/.ssh
echo "^^<mykey>^^" >> /root/.ssh/authorized_keys
chmod 644 /root/.ssh/authorized_keys

If you're willing to deal with passwords, you can allow them too by changing both the PasswordAuthentication and PermitRootLogin parameters. I personally don't do this.

sed -i '/^#PasswordAuthentication yes/s/^.//' /etc/ssh/sshd_config
sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
sed -i 's/^PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
passwd

As fstab won't work properly when you have ZFS starting first, we can place a manual mount in crontab as a workaround until ZFS gets a systemd loader.

( crontab -l ; echo "@reboot mount /boot ; mount /boot/efi" ) | crontab -

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit creation of the swap dataset but I personally find a small one handy.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/swap
mkswap -f /dev/zvol/$POOL/swap
echo "/dev/zvol/$POOL/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

If one is so inclined, /home directory can get a separate dataset too.

rmdir /home
zfs create -o mountpoint=/home $POOL/home

And now we create the user.

adduser $USER

The only remaining task before restart is to assign extra groups to the user and make sure their home directory has the correct owner.

usermod -a -G adm,cdrom,dip,plugdev,sudo $USER
chown -R $USER:$USER /home/$USER

Also consider enabling the firewall:

apt install --yes man iptables iptables-persistent

While you can go wild with firewall rules, I like to keep them simple to start with. All outgoing traffic is allowed while incoming traffic is limited to new SSH connections and responses to the already established ones.

for IPTABLES_CMD in "iptables" "ip6tables"; do
    $IPTABLES_CMD -F
    $IPTABLES_CMD -X
    $IPTABLES_CMD -Z
    $IPTABLES_CMD -P INPUT DROP
    $IPTABLES_CMD -P FORWARD DROP
    $IPTABLES_CMD -P OUTPUT ACCEPT
    $IPTABLES_CMD -A INPUT -i lo -j ACCEPT
    $IPTABLES_CMD -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    $IPTABLES_CMD -A INPUT -p tcp --dport 22 -j ACCEPT
done
iptables -A INPUT -p icmp -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
netfilter-persistent save
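
To review what actually got saved (optional):

iptables -L -n -v
ip6tables -L -n -v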

As install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot

Ubuntu Server 19.10 on ZFS


With Ubuntu 19.10 Desktop there is finally an (experimental) ZFS setup option, or the option to install ZFS manually. However, getting Ubuntu Server installed on ZFS is still full of manual steps. The steps here follow my desktop guide closely and assume you want a UEFI setup.

Entering a root prompt from within the Ubuntu Server installation is not hard if you know where to look. Just find Enter Shell behind the Help menu item (Shift+Tab comes in handy).

The very first step should be setting up a few variables - disk, pool, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/^^ata_disk^^
POOL=^^ubuntu^^
HOST=^^server^^
USER=^^user^^

To start the fun we need the debootstrap and zfsutils-linux packages. Unlike the desktop installation, the ZFS package is not installed by default.

apt install --yes debootstrap zfsutils-linux

The general idea of my disk setup is to maximize the amount of space available for the pool with the minimum of supporting partitions. If you are planning to have multiple kernels, increasing the boot partition size might be a good idea. The major change compared to my previous guide is the partition numbering. While having a partition layout different than the partition order had its advantages, a lot of partition editing tools would simply "correct" the partition order to match the layout and thus cause issues down the road.

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+511M -t1:8300 -c1:Boot   $DISK
sgdisk -n2:0:+128M  -t2:EF00 -c2:EFI    $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Unless there is a major reason otherwise, I like to use disk encryption.

cryptsetup luksFormat -q --cipher aes-xts-plain64 --key-size 512 \
    --pbkdf pbkdf2 --hash sha256 $DISK-part3

Of course, you should then also open the device. I like to use the disk name as the name of the mapped device, but really anything goes.

LUKSNAME=`basename $DISK`
cryptsetup luksOpen $DISK-part3 $LUKSNAME
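
A quick status check confirms the mapping is in place (optional):

cryptsetup status $LUKSNAME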

Finally we're ready to create the system ZFS pool.

zpool create -o ashift=12 -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL /dev/mapper/$LUKSNAME
zfs create -o canmount=noauto -o mountpoint=/ $POOL/root
zfs mount $POOL/root

Assuming UEFI boot, two additional partitions are needed: one for EFI and one for booting. Unlike what you get with the official guide, here I don't have a ZFS pool for the boot partition but plain old ext4. I find potential fixups work better that way and boot compatibility is better too. If you are thinking about mirroring, making it bigger and using ZFS might be a good idea. For a single disk, ext4 will do.

yes | mkfs.ext4 $DISK-part1
mkdir /mnt/install/boot
mount $DISK-part1 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part2
mkdir /mnt/install/boot/efi
mount $DISK-part2 /mnt/install/boot/efi

Bootstrapping Ubuntu on the newly created pool is next. As we're dealing with a server, you can consider using --variant=minbase rather than the full base system. I personally don't see much value in that as other packages get installed as dependencies anyhow. In any case, this will take a while.

debootstrap eoan /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu-server/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install \
    /usr/bin/env DISK=$DISK POOL=$POOL USER=$USER LUKSNAME=$LUKSNAME \
    bash --login

Let's not forget to set up the locale and time zone. If you opted for minbase you can either skip this step or manually install the locales and tzdata packages.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by boot environment packages.

apt install --yes zfs-initramfs cryptsetup keyutils grub-efi-amd64-signed shim-signed

If there are multiple encrypted drives or partitions, a keyscript really comes in handy to open them all with the same password. As it doesn't have negative consequences, I just add it even for a single-disk setup.

echo "$LUKSNAME UUID=$(blkid -s UUID -o value $DISK-part3) none \
    luks,discard,initramfs,keyscript=decrypt_keyctl" >> /etc/crypttab
cat /etc/crypttab

To mount EFI and boot partitions, we need to do some fstab setup too:

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=1 0 1" >> /etc/fstab
cat /etc/fstab

Now we get grub started and update our boot environment. Due to Ubuntu 19.10 having some kernel version kerfuffle, we need to manually create the initramfs image. As before, treat any boot cryptsetup discovery errors during mkinitramfs and update-initramfs as OK.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

Grub update is what makes EFI tick.

update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy

Since we're dealing with a computer that will most probably be used without a screen, it makes sense to install OpenSSH Server.

apt install --yes openssh-server

I also prefer to allow remote root login. Yes, you can create a sudo user and have root unreachable but that's just swapping one security issue for another. A root user secured with a key is plenty safe.

sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
mkdir /root/.ssh
echo "^^<mykey>^^" >> /root/.ssh/authorized_keys
chmod 644 /root/.ssh/authorized_keys

If you're willing to deal with passwords, you can allow them too by changing both the PasswordAuthentication and PermitRootLogin parameters. I personally don't do this.

sed -i '/^#PasswordAuthentication yes/s/^.//' /etc/ssh/sshd_config
sed -i '/^#PermitRootLogin/s/^.//' /etc/ssh/sshd_config
sed -i 's/^PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
passwd

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit creation of the swap dataset but I personally find it's good to have one just in case.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/swap
mkswap -f /dev/zvol/$POOL/swap
echo "/dev/zvol/$POOL/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

If one is so inclined, /home directory can get a separate dataset too.

rmdir /home
zfs create -o mountpoint=/home $POOL/home

And now we create the user.

adduser $USER

The only remaining task before restart is to assign extra groups to the user and make sure their home directory has the correct owner.

usermod -a -G adm,cdrom,dip,plugdev,sudo $USER
chown -R $USER:$USER /home/$USER

Consider enabling the firewall:

apt install --yes man iptables iptables-persistent

While you can go wild with firewall rules, I like to keep them simple to start with. All outgoing traffic is allowed while incoming traffic is limited to new SSH connections and responses to the already established ones.

for IPTABLES_CMD in "iptables" "ip6tables"; do
    $IPTABLES_CMD -F
    $IPTABLES_CMD -X
    $IPTABLES_CMD -Z
    $IPTABLES_CMD -P INPUT DROP
    $IPTABLES_CMD -P FORWARD DROP
    $IPTABLES_CMD -P OUTPUT ACCEPT
    $IPTABLES_CMD -A INPUT -i lo -j ACCEPT
    $IPTABLES_CMD -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    $IPTABLES_CMD -A INPUT -p tcp --dport 22 -j ACCEPT
done
iptables -A INPUT -p icmp -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
netfilter-persistent save

As install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot

[2020-06-12: Increased partition size to 511+128 MB (was 384+127 MB before)]

Adding WinBox to Ubuntu Applications Menu


If you need to run Mikrotik's WinBox under Ubuntu, the solution is wine and the 64-bit WinBox download. It works, as far as I can tell, flawlessly. However, I found dropping to the command line every time I want to run it a bit annoying.

Adding WinBox to activities is a two-step process, the first step being the creation of a winbox.desktop file. In its simplest form it can look something like this:

[Desktop Entry]
Type=Application
Name=WinBox
Exec=wine ^^/home/user/Apps/winbox64.exe^^

Then, to get the application officially registered, we just need to let the system know about it:

sudo desktop-file-install winbox.desktop

And this is all it takes for WinBox to find its home among other applications.
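
If you want the entry to look a bit nicer in the menu, the same file can also carry an icon and a category (the icon path below is just a placeholder for any image you have around):

[Desktop Entry]
Type=Application
Name=WinBox
Exec=wine ^^/home/user/Apps/winbox64.exe^^
Icon=^^/home/user/Apps/winbox.png^^
Categories=Network;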

Installing UEFI ZFS Root on Ubuntu 19.10

There is a newer version of this guide for Ubuntu 20.04.


With Ubuntu 19.10 there is finally an (experimental) ZFS setup option. And frankly, you should use it instead of the manual installation procedure. However, manual installation does offer its advantages - especially when it comes to pool layout and naming. If manual installation is needed, there is a great Root on ZFS installation guide that's part of the ZFS-on-Linux project, but its final ZFS layout is a bit too complicated for my taste. Here is my somewhat simplified version of the same, intended for single-disk installations.

After booting into the Ubuntu desktop installation we want to get a root prompt. All further commands are going to need root credentials anyhow.

sudo -i

The very first step should be setting up a few variables - disk, pool, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/^^ata_disk^^
POOL=^^ubuntu^^
HOST=^^desktop^^
USER=^^user^^

To start the fun we need the debootstrap package. With 19.10, ZFS is available in the main repository so we don't need to add universe as in previous Ubuntu versions.

apt install --yes debootstrap

The general idea of my disk setup is to maximize the amount of space available for the pool with the minimum of supporting partitions. If you are planning to have multiple kernels, increasing the boot partition size might be a good idea. The major change compared to my previous guide is the partition numbering. While having a partition layout different than the partition order had its advantages, a lot of partition editing tools would simply "correct" the partition order to match the layout and thus cause issues down the road.

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+127M -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+512M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Unless there is a major reason otherwise, I like to use disk encryption.

cryptsetup luksFormat -q --cipher aes-xts-plain64 --key-size 512 \
    --pbkdf pbkdf2 --hash sha256 $DISK-part3

Of course, you should then also open the device. I like to use the disk name as the name of the mapped device, but really anything goes.

LUKSNAME=`basename $DISK`
cryptsetup luksOpen $DISK-part3 $LUKSNAME

Finally we're ready to create the system ZFS pool.

zpool create -o ashift=12 -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL /dev/mapper/$LUKSNAME
zfs create -o canmount=noauto -o mountpoint=/ $POOL/root
zfs mount $POOL/root

Assuming UEFI boot, two additional partitions are needed: one for EFI and one for booting. Unlike what you get with the official guide, here I don't have a ZFS pool for the boot partition but plain old ext4. I find potential fixups work better that way and boot compatibility is better too. If you are thinking about mirroring, making it bigger and using ZFS might be a good idea. For a single disk, ext4 will do.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

Bootstrapping Ubuntu on the newly created pool is next. This will take a while.

debootstrap eoan /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

If you are installing via WiFi, you might as well copy your wireless credentials:

mkdir -p /mnt/install/etc/NetworkManager/system-connections/
cp /etc/NetworkManager/system-connections/* /mnt/install/etc/NetworkManager/system-connections/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install \
    /usr/bin/env DISK=$DISK POOL=$POOL USER=$USER LUKSNAME=$LUKSNAME \
    bash --login

Let's not forget to set up the locale and time zone.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by boot environment packages.

apt install --yes zfs-initramfs cryptsetup keyutils grub-efi-amd64-signed shim-signed

Since we're dealing with encrypted data, we should auto-mount it via crypttab. If there are multiple encrypted drives or partitions, a keyscript really comes in handy to open them all with the same password. As it doesn't have negative consequences, I just add it even for a single-disk setup.

echo "$LUKSNAME UUID=$(blkid -s UUID -o value $DISK-part3) none \
    luks,discard,initramfs,keyscript=decrypt_keyctl" >> /etc/crypttab
cat /etc/crypttab

To mount EFI and boot partitions, we need to do some fstab setup too:

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
cat /etc/fstab

Now we get grub started and update our boot environment. Due to Ubuntu 19.10 having some kernel version kerfuffle, we need to manually create the initramfs image. As before, treat any boot cryptsetup discovery errors during mkinitramfs and update-initramfs as OK.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

Grub update is what makes EFI tick.

update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy
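
To confirm the Ubuntu boot entry actually landed in the firmware, efibootmgr can list it (it usually comes along with the GRUB EFI packages):

efibootmgr -v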

Finally we install our GUI environment. It'll take ages.

apt-get install --yes ubuntu-desktop samba

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit creation of the swap dataset but I personally find a small one handy.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/swap
mkswap -f /dev/zvol/$POOL/swap
echo "/dev/zvol/$POOL/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

If one is so inclined, /home directory can get a separate dataset too.

rmdir /home
zfs create -o mountpoint=/home $POOL/home

And now we create the user.

adduser $USER

The only remaining task before restart is to assign extra groups to the user and make sure their home directory has the correct owner.

usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo $USER
chown -R $USER:$USER /home/$USER

As install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot

PS: There are versions of this guide using the native ZFS encryption for other Ubuntu versions: 21.10 and 20.04.

PPS: For LUKS-based ZFS setup, check the following posts: 20.04, 19.04, and 18.10.

Using Different Key for GitHub Push

If your default id_rsa key is different than the one you use for GitHub, it's still possible to use a simple git push regardless. The trick is in adding a mapping to the identity file in ~/.ssh/config:

Host github.com
  User git
  IdentityFile ^^~/.ssh/id_rsa_github^^
  IdentitiesOnly yes

This will ensure all communication with github.com uses the id_rsa_github key.
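
To verify the right key is being picked up, GitHub's SSH test endpoint comes in handy - it should greet you with your GitHub username:

ssh -T git@github.com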

Coloring Make Output

Considering how verbose make output is, it's really easy to miss warnings or errors. What we need is a bit of color. However, make, being such an old program, doesn't support any ANSI coloring. But since both errors and warnings have standardized formats, it's relatively easy to use grep to color them.

For example:

make | grep --color=always -e '^Makefile:[0-9]\+:.*' -e '^'

This will color all lines following the Makefile:number: format in the default highlight color. Extending this to catch only errors is similar, as there is just an extra match for "Stop." at the end of the line:

make | grep --color=always -e '^Makefile:[0-9]\+:.*  Stop\.$' -e '^'

But what if we want to have warnings in yellow and errors in red? Well, then we get to use the GREP_COLOR variable and pipe the output through grep twice:

make | GREP_COLOR='01;31' grep --color=always -e '^Makefile:[0-9]\+:.*  Stop\.$' -e '^' \
     | GREP_COLOR='01;33' grep --color=always -e '^Makefile:[0-9]\+:.*' -e '^'

And yes, the order of greps is important as we first want to capture errors. Matched lines are colored red (01;31) and prefixed with an ANSI escape sequence, thus preventing the second grep from matching them. Lines matching the second grep get similar treatment, just in yellow (01;33).

Instead of remembering this, we can create a new amake function that will do the coloring:

function amake() {
    make "$@" 2>&1 | \
        GREP_COLOR='01;31' grep --color=always -e '^Makefile:[0-9]\+:.*  Stop\.$' -e '^' | \
        GREP_COLOR='01;33' grep --color=always -e '^Makefile:[0-9]\+:.*' -e '^'
}

So, now when we call the function, we can see issues a bit more quickly:

amake