Linux, Unix, and whatever they call that world these days

Systemd Watchdog for Any Service

Making a basic systemd service is easy. Let’s assume the simplest application (not necessarily even designed to be a service) and look into making it work with systemd.

Our example application will be a script in /opt/test/application with the following content:

#!/bin/bash

while true; do
  date | tee /var/tmp/test.log
  sleep 1
done

Essentially, it’s just a never-ending output of the current date.

To make it a service, we simply create /etc/systemd/system/test.service with a description of our application:

[Unit]
Description=Test service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
ExecStart=/opt/test/application
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target

That’s all that’s needed before we can start the service:

sudo systemctl start test

sudo systemctl status test
 ● test.service - Test service
    Loaded: loaded (/etc/systemd/system/test.service; disabled; vendor preset: enabled)
    Active: active (running)
  Main PID: 5212 (application)
     Tasks: 2 (limit: 4657)
    CGroup: /system.slice/test.service
            ├─5212 /bin/bash /opt/test/application
            └─5321 sleep 1

Systemd will start the application and even restart it if it fails. But what if we want something a bit smarter? What if we want a watchdog that restarts the application not only when its process fails but also when some other health check goes bad?

While systemd does support such a setup, the application generally has to be aware of it and call the watchdog function every now and then. Fortunately, even if our application doesn’t do that, we can use the watchdog facilities via the systemd-notify tool.

First we need to change three things in our service definition. One is changing the type to notify, then changing the executable to a wrapper script, and lastly defining the watchdog time.

In this example, if the application doesn’t respond within 5 seconds, it will be considered failed. The new service definition in /etc/systemd/system/test.service can look something like this:

[Unit]
Description=Test service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=^^notify^^
ExecStart=^^/opt/test/test.sh^^
Restart=always
RestartSec=1
TimeoutSec=5
WatchdogSec=^^5^^

[Install]
WantedBy=multi-user.target

Those watching carefully will note we don’t actually solve anything with this; we just move all the responsibility to the /opt/test/test.sh wrapper.

It’s in that script that we first communicate to systemd when the application is ready and later, in a loop, check not only the application PID but also any other condition (e.g. a certain curl response), calling systemd-notify if the application proves to be healthy:

#!/bin/bash

trap 'kill $(jobs -p)' EXIT

/opt/test/application &
PID=$!

/bin/systemd-notify --ready

while true; do
    FAIL=0

    kill -0 $PID 2>/dev/null
    if [[ $? -ne 0 ]]; then FAIL=1; fi

#    curl --silent --fail http://localhost/test/
#    if [[ $? -ne 0 ]]; then FAIL=1; fi

    if [[ $FAIL -eq 0 ]]; then /bin/systemd-notify WATCHDOG=1; fi

    sleep 1
done

Starting the service now gives slightly different output:

sudo systemctl stop test

sudo systemctl start test

sudo systemctl status test
 ● test.service - Test service
    Loaded: loaded (/etc/systemd/system/test.service; disabled; vendor preset: enabled)
    Active: active (running)
  Main PID: 6406 (test.sh)
     Tasks: 4 (limit: 4657)
    CGroup: /system.slice/test.service
            ├─6406 /bin/bash /opt/test/test.sh
            ├─6407 /bin/bash /opt/test/application
            ├─6557 sleep 1
            └─6560 sleep 1

If we kill the application manually (e.g. sudo kill 6407), systemd will pronounce the service dead and start it again. It will do the same if any other check fails.

While this approach is not ideal, it does allow for easy watchdog retrofitting onto an existing application.
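One gotcha worth noting: with Type=notify, systemd by default only accepts notification messages from the main process, and systemd-notify sends them from a short-lived helper process. If the watchdog never seems to get pinged, it may help to loosen this in the service definition (an assumption to verify against your systemd version, not something the setup above strictly requires):

```ini
[Service]
NotifyAccess=all
```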

Setting up Encrypted Ubuntu 18.10 ZFS Desktop

I have already explained how I deal with ZFS mirror setup on Ubuntu 18.10. But what about laptops that generally come with a single drive?

Well, as before, basic instructions are available from the ZFS-on-Linux project. However, they do have a certain way of doing things I don’t necessarily subscribe to. Here is my way of setting this up. As always, it’s best to set up remote access so you can copy/paste, as the steps are numerous.

As before, we first need to get to a root prompt:

sudo -i

Followed by getting a few basic packages ready:

apt-add-repository universe
apt update
apt install --yes debootstrap gdisk zfs-initramfs

We set up the disk essentially the same way as in the previous guide:

sgdisk --zap-all                 /dev/disk/by-id/^^ata_disk^^

sgdisk -a1 -n3:34:2047  -t3:EF02 /dev/disk/by-id/^^ata_disk^^
sgdisk     -n2:1M:+511M -t2:8300 /dev/disk/by-id/^^ata_disk^^
sgdisk     -n1:0:0      -t1:8300 /dev/disk/by-id/^^ata_disk^^

sgdisk --print                   /dev/disk/by-id/^^ata_disk^^
 …
 Number  Start (sector)    End (sector)  Size       Code  Name
    1         1050624        67108830   31.5 GiB    8300
    2            2048         1050623   512.0 MiB   8300
    3              34            2047   1007.0 KiB  EF02

Because we want encryption, we need to set up LUKS:

cryptsetup luksFormat -qc aes-xts-plain64 -s 512 -h sha256 /dev/disk/by-id/^^ata_disk^^-part1
cryptsetup luksOpen /dev/disk/by-id/^^ata_disk^^-part1 luks1

Unlike in the last guide, this time I want a bit of separation. The system dataset will contain the whole system, while data will contain only the home directories. Again, if you want to split it all further, follow the original guide:

zpool create -o ashift=12 -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
    -O xattr=sa -O mountpoint=none rpool /dev/mapper/luks1
zfs create -o canmount=noauto -o mountpoint=/mnt/rpool/ rpool/system
zfs mount rpool/system

We should also set up the boot partition:

mke2fs -Ft ext2 /dev/disk/by-id/^^ata_disk^^-part2
mkdir /mnt/rpool/boot/
mount /dev/disk/by-id/^^ata_disk^^-part2 /mnt/rpool/boot/

Now we can get a basic installation onto our disk:

debootstrap cosmic /mnt/rpool/
zfs set devices=off rpool
zfs list

Before we start using it, we prepare a few necessary files:

cp /etc/hostname /mnt/rpool/etc/hostname
cp /etc/hosts /mnt/rpool/etc/hosts
cp /etc/netplan/*.yaml /mnt/rpool/etc/netplan/
sed '/cdrom/d' /etc/apt/sources.list > /mnt/rpool/etc/apt/sources.list

With chroot we can get the first taste of our new system:

mount --rbind /dev  /mnt/rpool/dev
mount --rbind /proc /mnt/rpool/proc
mount --rbind /sys  /mnt/rpool/sys
chroot /mnt/rpool/ /bin/bash --login

Now we can update our software and perform locale and time zone setup:

apt update

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we install the Linux image and basic ZFS boot packages:

apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs

Since we’re dealing with encrypted data, our encrypted volume should also be unlocked automatically:

apt install --yes cryptsetup

echo "luks1 UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk^^-part1) none luks,discard,initramfs" >> /etc/crypttab
cat /etc/crypttab

And of course, we need to auto-mount our boot partition too:

echo "UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk^^-part2) /boot ext2 noatime 0 2" >> /etc/fstab
cat /etc/fstab

Now we get grub started (do select the WHOLE disk):

apt install --yes grub-pc

And update our boot environment (seeing errors is nothing unusual):

update-initramfs -u -k all

And then we finalize our grub setup:

update-grub
grub-install /dev/disk/by-id/^^ata_disk^^

Finally we get the rest of the desktop system:

apt-get install --yes ubuntu-desktop samba linux-headers-generic
apt dist-upgrade --yes

We could omit creating the swap dataset, but I always find it handy:

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap
mkswap -f /dev/zvol/rpool/swap
echo "/dev/zvol/rpool/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

And now is a good time to move our /home directory too:

rmdir /home
zfs create -o mountpoint=/home rpool/data

Now we are ready to create the user:

adduser -u 1002 ^^user^^
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo ^^user^^
chown -R ^^user^^:^^user^^ /home/^^user^^

Lastly we exit our chroot environment and reboot:

exit
reboot

You will get stuck after the password prompt, as our mountpoint for the system dataset is wrong. That’s easy to correct:

zfs set mountpoint=/ rpool/system
exit
reboot

Assuming nothing went wrong, your system is now ready.

Expanding Ext4 Volume on ZFS

Due to Dropbox’s idiotic decision to limit file system support drastically for no reason other than to piss people off, I have a small ext4 volume hosted on my ZFS pool.

Originally I made it a bit small (only 8 GB) and got Dropbox complaining. Had I created it as a partition, enlarging it would be an annoying task at best. However, having it exposed as a ZFS block volume, the resize was trivial.

First I increased the volsize property and then told ext4 to use that additional space (the resize2fs command):

sudo zfs set volsize=^^16G^^ ^^rpool/data/dropbox^^

sudo resize2fs ^^/dev/zvol/rpool/data/dropbox^^
 resize2fs 1.44.4 (18-Aug-2018)
 Filesystem at /dev/zvol/rpool/data/dropbox is mounted on /home/user/Dropbox; on-line resizing required
 old_desc_blocks = 1, new_desc_blocks = 2
 The filesystem on /dev/zvol/rpool/data/dropbox is now 4194304 (4k) blocks long.
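The same grow-the-device-then-resize2fs dance can be rehearsed on a plain file image, with no ZFS or root privileges needed (the path and sizes below are just for illustration):

```shell
# create an 8 MB ext4 image in a plain file (-F since it's not a block device)
truncate -s 8M /tmp/test.img
mke2fs -F -q -t ext4 /tmp/test.img

# grow the underlying "device", then let ext4 claim the new space
truncate -s 16M /tmp/test.img
resize2fs /tmp/test.img

rm /tmp/test.img
```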

Doesn’t get much easier.

Booting Encrypted ZFS Mirror on Ubuntu 18.10

As I was setting up my new Linux machine with two disks, I decided to forgo my favorite Linux Mint and give Ubuntu another try. Main reason? ZFS of course.

Ubuntu already has a quite decent guide for ZFS setup, but it’s slightly lacking in the mirroring department. So, here I will list steps that follow their approach closely but with slight adjustments, as I want not only an encrypted setup but also a proper ZFS mirror. If you need a single disk ZFS setup, stick with the original guide.

After booting into the installation, we can go for Try Ubuntu and open a terminal. My strong suggestion would be to install the openssh-server package first and connect remotely, because that allows for copy/paste:

passwd
 Changing password for ubuntu.
 (current) UNIX password: ^^(empty)^^
 Enter new UNIX password: ^^password^^
 Retype new UNIX password: ^^password^^
 passwd: password updated successfully

sudo apt install --yes openssh-server

Regardless of whether you continue directly or connect via SSH (the username is ubuntu), the first task is to get onto a root prompt and never leave it again. :)

sudo -i

To get ZFS on, we need an Internet connection and an extra repository:

sudo apt-add-repository universe
apt update

Now we can finally install ZFS, a partitioning utility, and an installation tool:

apt install --yes debootstrap gdisk zfs-initramfs

First we clean the partition tables on the disks, followed by a few partition definitions (do change the IDs to match your disks):

sgdisk --zap-all /dev/disk/by-id/^^ata_disk1^^
sgdisk --zap-all /dev/disk/by-id/^^ata_disk2^^

sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/^^ata_disk1^^
sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/^^ata_disk2^^

sgdisk     -n3:1M:+512M -t3:EF00 /dev/disk/by-id/^^ata_disk1^^
sgdisk     -n3:1M:+512M -t3:EF00 /dev/disk/by-id/^^ata_disk2^^

sgdisk     -n4:0:+512M  -t4:8300 /dev/disk/by-id/^^ata_disk1^^
sgdisk     -n4:0:+512M  -t4:8300 /dev/disk/by-id/^^ata_disk2^^

sgdisk     -n1:0:0      -t1:8300 /dev/disk/by-id/^^ata_disk1^^
sgdisk     -n1:0:0      -t1:8300 /dev/disk/by-id/^^ata_disk2^^

After all this, we should end up with both disks showing 4 distinct partitions:

sgdisk --print /dev/disk/by-id/^^ata_disk1^^
 …
 Number  Start (sector)    End (sector)  Size       Code  Name
    1         2099200        67108830   31.0 GiB    8300
    2              34            2047   1007.0 KiB  EF02
    3            2048         1050623   512.0 MiB   EF00
    4         1050624         2099199   512.0 MiB   8300

With partitioning done, it’s time to encrypt our disks and open them (note that we only encrypt the first partition, not the whole disk):

cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 /dev/disk/by-id/^^ata_disk1^^-part1
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 /dev/disk/by-id/^^ata_disk2^^-part1

cryptsetup luksOpen /dev/disk/by-id/^^ata_disk1^^-part1 luks1
cryptsetup luksOpen /dev/disk/by-id/^^ata_disk2^^-part1 luks2

Finally we can create our pool (rpool is a “standard” name) consisting of both encrypted devices:

zpool create -o ashift=12 -O atime=off -O compression=lz4 \
    -O normalization=formD -O xattr=sa -O mountpoint=/ -R /mnt/rpool \
    rpool mirror /dev/mapper/luks1 /dev/mapper/luks2

There is an advantage to creating fine-grained datasets as the official guide instructs, but I personally don’t do it. Having one big free-for-all pile is OK for me - anything of any significance I keep on my network drive anyhow, where I have ZFS properly set up with rights, quotas, and all the other goodies.

Since we are using LUKS encryption, we do need to mount the 4th partition too. We’ll do it for both disks and deal with syncing them later:

mkdir /mnt/rpool/boot
mke2fs -t ext2 /dev/disk/by-id/^^ata_disk1^^-part4
mount /dev/disk/by-id/^^ata_disk1^^-part4 /mnt/rpool/boot

mkdir /mnt/rpool/boot2
mke2fs -t ext2 /dev/disk/by-id/^^ata_disk2^^-part4
mount /dev/disk/by-id/^^ata_disk2^^-part4 /mnt/rpool/boot2

Now we can finally start copying our Linux over (do check for the current release codename using lsb_release -a). This will take a while:

debootstrap ^^cosmic^^ /mnt/rpool/

Once done, turn off the devices flag on the pool and check whether data has been written or we messed up the paths:

zfs set devices=off rpool

zfs list
 NAME    USED  AVAIL  REFER  MOUNTPOINT
 rpool   218M  29.6G   217M  /mnt/rpool

Since our system is bare, we do need to prepare a few configuration files:

cp /etc/hostname /mnt/rpool/etc/hostname
cp /etc/hosts /mnt/rpool/etc/hosts
cp /etc/netplan/*.yaml /mnt/rpool/etc/netplan/
sed '/cdrom/d' /etc/apt/sources.list > /mnt/rpool/etc/apt/sources.list

Finally we get to try out our new system:

mount --rbind /dev  /mnt/rpool/dev
mount --rbind /proc /mnt/rpool/proc
mount --rbind /sys  /mnt/rpool/sys
chroot /mnt/rpool/ /bin/bash --login

Once in our new OS, a few further updates are in order:

apt update

locale-gen --purge "^^en_US.UTF-8^^"
update-locale LANG=^^en_US.UTF-8^^ LANGUAGE=^^en_US^^
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we need to install the Linux image and headers:

apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Then we configure booting from ZFS:

apt install --yes zfs-initramfs
echo UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk1^^-part4) /boot  ext2 noatime 0 2 >> /etc/fstab
echo UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk2^^-part4) /boot2 ext2 noatime 0 2 >> /etc/fstab

And disk decryption:

apt install --yes cryptsetup
echo "luks1 UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk1^^-part1) none luks,discard,initramfs" >> /etc/crypttab
echo "luks2 UUID=$(blkid -s UUID -o value /dev/disk/by-id/^^ata_disk2^^-part1) none luks,discard,initramfs" >> /etc/crypttab

And install the grub bootloader (select both disks - not partitions!):

apt install --yes grub-pc

Followed by an update of the boot environment (some errors are OK):

update-initramfs -u -k all
 update-initramfs: Generating /boot/initrd.img-4.18.0-12-generic
 cryptsetup: ERROR: Couldn't resolve device rpool
 cryptsetup: WARNING: Couldn't determine root device

Now we update grub and fix its config (the fix is only needed if you are not using sub-datasets):

update-grub
sed -i "s^root=ZFS=rpool/^root=ZFS=rpool^g" /boot/grub/grub.cfg

Now we get to copy all the boot files to the second disk:

cp -rp /boot/* /boot2/

With the grub install, we’re getting close to the end of the story:

grub-install /dev/disk/by-id/^^ata_disk1^^
 Installing for i386-pc platform.
 Installation finished. No error reported.

grub-install /dev/disk/by-id/^^ata_disk2^^
 Installing for i386-pc platform.
 Installation finished. No error reported.

Now we install the full GUI and upgrade whatever needs it (this takes a while):

apt-get install --yes ubuntu-desktop samba
apt dist-upgrade --yes

As this probably updated grub, we need to both correct its config (only if we use the bare dataset) and copy the files to the other boot partition (this has to be repeated on every grub update):

sed -i "s^root=ZFS=rpool/^root=ZFS=rpool^g" /boot/grub/grub.cfg
cp -rp /boot/* /boot2/

Having some swap is always a good idea:

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap

mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

Almost there; it’s time to set the root password:

passwd

And to create our user for the desktop environment:

adduser ^^user^^
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo ^^user^^

Finally, we can reboot (don’t forget to remove the CD) and enjoy our system:

exit
reboot

Adding a Swap File to CentOS

Memory on a desktop PC has been a solved problem for a while now. You have a certain quantity of it and you rarely run out. Even when you do, there is virtual memory to soften the blow at the cost of performance. Enter the cloud…

When you deal with mini virtual machines running on a cloud, quite often they have a modest memory specification - 1 GB or even less is the usual starting point. Fortunately, they run on Linux so they don’t need much memory - except when they do.

What to do if you need just a bit more memory on an already configured machine and you really don’t want to deal with the reboots required for proper upscaling? Well, you can always add a swap file.

First, you create a file (I’ll call mine swapfile.sys for sentimental reasons) with the additional 1 GB (or whatever value you want):

dd if=/dev/zero of=/swapfile.sys bs=1M count=^^1024^^
chmod 600 /swapfile.sys

Then you format this file as swap and tell the system to use it:

mkswap /swapfile.sys
swapon /swapfile.sys
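The same steps can be rehearsed on a small throwaway file without touching the real system (the path and 16 MB size here are just for illustration; swapon itself still requires root, so it is left out):

```shell
# create a 16 MB test file instead of the real 1 GB one
dd if=/dev/zero of=/tmp/swaptest.sys bs=1M count=16
chmod 600 /tmp/swaptest.sys

# format it as swap; prints the size and UUID on success
mkswap /tmp/swaptest.sys

# verify the size (16 * 1024 * 1024 = 16777216 bytes)
stat -c %s /tmp/swaptest.sys

rm /tmp/swaptest.sys
```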

Since this disappears upon reboot, you might also want to make it permanent by adding it to fstab. This step is a bit controversial, since you should really think about a bigger machine if you are always in need of the extra memory:

sed -i '$ a\/swapfile.sys\tnone\tswap\tsw\t0\t0' /etc/fstab

That’s it. A bit of room to breathe.

Extracting Single Ini Section Via Bash

While playing with my home network I was presented with a curious problem - parsing an .ini file within bash.

Let’s take the following file as an example:

[Alfa]
IP=1.1.1.1
DNS=alfa.example.com

[Bravo]
IP=2.2.2.2
DNS=bravo.example.com

[Charlie]
IP=3.3.3.3
DNS=charlie.example.com

From this file I want to get both the IP and DNS fields of one section - e.g. Bravo. I did find a solution that was rather close to what I wanted, but I didn’t like the fact that all entries ended up in an associative array.

So I decided to make a similar solution, adjusting the output to show only a single section and giving each field a prefix to avoid accidental conflicts with other script variables. Here is the one-liner I came up with:

awk -v TARGET=^^Bravo^^ -F ' *= *' '{ if ($0 ~ /^\[.*\]$/) { gsub(/^\[|\]$/, "", $0); SECTION=$0 } else if (($2 != "") && (SECTION==TARGET)) { print "FIELD_" $1 "=\"" $2 "\"" }}' ^^My.ini^^

Or to present it in a more human-friendly form:

awk -v TARGET=^^Bravo^^ -F ' *= *' '
  {
    if ($0 ~ /^\[.*\]$/) {
      gsub(/^\[|\]$/, "", $0)
      SECTION=$0
    } else if (($2 != "") && (SECTION==TARGET)) {
      print "FIELD_" $1 "=\"" $2 "\""
    }
  }
  ' ^^My.ini^^

The first argument (-v TARGET=Bravo) just specifies which section we’re searching for. I am keeping it outside as that way I can use another variable (e.g. $MACHINE) without dealing with escaping within awk statements.

The second argument (-F ' *= *') is actually a regex field separator that swallows any spaces around the equals sign.

The third argument is what makes it all happen. The code matches section lines and stores the name in the SECTION variable. Each line with a name/value pair is further checked and printed if the target section name matches. Upon printing, a “FIELD_” prefix is added before the name, making the whole line essentially a variable declaration.

The fourth and last argument is simply a file name.

This particular command example will output the following text:

FIELD_IP="2.2.2.2"
FIELD_DNS="bravo.example.com"

How do you use it in a script? Simply source the result of the awk command and you get to use the .ini fields as any other bash variable.

source <(awk …)
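Putting it all together, a minimal end-to-end sketch (using a trimmed copy of the example file above, written to a hypothetical /tmp path):

```shell
#!/bin/bash

# recreate (part of) the example file
cat > /tmp/My.ini << 'EOF'
[Alfa]
IP=1.1.1.1
DNS=alfa.example.com

[Bravo]
IP=2.2.2.2
DNS=bravo.example.com
EOF

# source the generated FIELD_* declarations directly
source <(awk -v TARGET=Bravo -F ' *= *' '{ if ($0 ~ /^\[.*\]$/) { gsub(/^\[|\]$/, "", $0); SECTION=$0 } else if (($2 != "") && (SECTION==TARGET)) { print "FIELD_" $1 "=\"" $2 "\"" }}' /tmp/My.ini)

echo "$FIELD_IP"    # 2.2.2.2
echo "$FIELD_DNS"   # bravo.example.com
```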

Removing Line Breaks

Sometimes in scripting you don’t get to choose your input format. For example, you might get data in multiple lines when you actually need it all in a single line. For such occasions you can go with:

cat ^^file^^ | awk '{printf "%s", $0}'

Likewise you might want lines separated by a space. Slight modification makes it happen:

cat ^^file^^ | awk '{printf "%s ", $0}'

Lastly, you might want to split a single line into multiple ones (handy for base64 printouts):

cat ^^file^^ | fold -w 72
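A quick demonstration of all three with inline sample input (the text itself is made up, of course):

```shell
printf 'one\ntwo\nthree\n' | awk '{printf "%s", $0}'
# → onetwothree

printf 'one\ntwo\nthree\n' | awk '{printf "%s ", $0}'
# → one two three (with a trailing space)

printf 'abcdef\n' | fold -w 2
# → ab
#   cd
#   ef
```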

PS: Check fmt if you need word-aware line splitting.

Extracting Public SSH Key From a Private One

A common key management method seen in Linux scripts is copying the private and public SSH keys around. While not necessarily the best way to approach things, copying your private SSH key does come in handy when easy automation is needed.

However, there is no need to copy the public key if you are already copying the private one. Since the private key contains everything, you can use ssh-keygen to extract the public key from it:

ssh-keygen -yf ^^~/.ssh/id_rsa^^ > ^^~/.ssh/id_rsa.pub^^

What is the advantage, you ask? Isn’t it easier just to copy two files instead of copying one and dealing with shell scripting for the second?

Well, yes. However, it is also more error-prone, as you must always keep the private and public keys in sync. If you replace one and by accident forget to replace the other, you will be chasing your tail in no time.
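If you do keep both files around, a small sanity check along these lines can catch the out-of-sync case (the paths are just the defaults; adjust to your setup):

```shell
#!/bin/bash

# hypothetical default paths - adjust as needed
PRIVATE=~/.ssh/id_rsa
PUBLIC=~/.ssh/id_rsa.pub

# derive the public key from the private one and compare only the
# key type and key material (fields 1 and 2), ignoring the comment
if [[ "$(ssh-keygen -yf "$PRIVATE" | awk '{print $1, $2}')" == "$(awk '{print $1, $2}' "$PUBLIC")" ]]; then
    echo "Keys are in sync"
else
    echo "Public key does NOT match the private one"
fi
```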

Allowing Root Login For Red Hat QCow2 Image

You should never depend on root login when dealing with an OpenStack cloud. Pretty much all pre-prepared cloud images have it disabled by default. Ideally all your user provisioning should be done as part of the cloud-init procedure, and there you should either create your own user or work with the default cloud-user and the key you provisioned. But what if you are troubleshooting some weird (network) issue and you need console login for your image?

Well, you can always re-enable root user by directly modifying qcow2 image.

To edit qcow2 images, we first need to install libguestfs-tools. On my Linux Mint, that requires the following:

sudo apt-get install libguestfs-tools

Of course, if you are using yum or some other package manager, adjust accordingly. :)

Once installation is done, we simply mount the image into /mnt/guestimage and modify the shadow file to assign a password (changeme in this example) to the root user:

sudo mkdir /mnt/guestimage
sudo guestmount -a rhel-server-7.5-update-3-x86_64-kvm.qcow2 -m /dev/sda1 /mnt/guestimage
sudo sed -i 's/root:!!/root:$1$QiSwNHrs$uID6S6qOifSNZKzfXsmQG1/' /mnt/guestimage/etc/shadow
sudo guestunmount /mnt/guestimage
sudo rmdir /mnt/guestimage

All nodes installed from this image will now allow you to use root login with password authentication. Just don’t forget to remove this change once you’re done troubleshooting.
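The sed line above hardcodes a hash for changeme. If you would rather use your own password, a matching MD5-crypt hash can be generated with openssl (a sketch; the salt string is arbitrary):

```shell
# generate an MD5-crypt hash for a password of your choice
# (note: if the resulting hash happens to contain a / character,
# pick a different delimiter for the sed command above)
openssl passwd -1 -salt QiSwNHrs changeme
```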

PS: While I use Red Hat image in the example, the procedure also applies to CentOS and most of other cloud distributions too.

Mounting Encrypted Volume on Mint 19

As I tried to upgrade Linux Mint from 18.3 to 19, all went kaboom and I was forced to decide whether I wanted to reinstall the OS from scratch or try to fix it. Since I was dealing with a virtual machine, reinstalling from scratch seemed like the better idea.

Once all was installed, I wanted to copy some files from the old volume. As full disk encryption was present, I knew a slightly more complicated mount would be needed. In theory, it should all work with the following commands:

sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper
sudo mkdir -p /mnt/encrypted_volume
sudo mount /dev/mapper/encrypted_mapper /mnt/encrypted_volume
sudo umount /mnt/encrypted_volume
sudo cryptsetup luksClose encrypted_mapper

In practice I got the following error:

sudo mount /dev/mapper/encrypted_mapper /mnt/encrypted_volume
 mount: /mnt/encrypted_volume: unknown filesystem type 'LVM2_member'.

The issue was the volume manager’s dislike for both my current installation and the previous one having exactly the same volume group name - mint-vg - thus refusing to even consider doing anything with my old disk.

Before doing anything else, a rename of the volume group was required. As the names are equal, we will need to know the UUID of the secondary volume group. The easiest way to distinguish the old and new volume groups is by looking at the Open LV value. If it’s 0, we have our target.

sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper

sudo vgdisplay
  --- Volume group ---
  VG Name               mint-vg
  Cur LV                2
  Open LV               ^^0^^
  VG UUID               ^^Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn^^

sudo vgrename Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn mint-old-vg
  Processing VG mint-vg because of matching UUID Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn
  Volume group "Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn" successfully renamed to "mint-old-vg"

sudo vgchange -ay
  2 logical volume(s) in volume group "mint-vg" now active
  2 logical volume(s) in volume group "mint-old-vg" now active

With the volume group finally activated, we can proceed with mounting the old disk:

sudo mkdir -p /mnt/encrypted_volume
sudo mount /dev/mint-old-vg/root /mnt/encrypted_volume
sudo umount /mnt/encrypted_volume
sudo cryptsetup luksClose encrypted_mapper