SignTool Failing with 0x80096005

After creating a new setup package I noticed my certificate signing wasn’t working. I kept getting an error while running the same signing command I had always used.

signtool sign -s "My" -sha1 $CERTIFICATE_THUMBPRINT -tr ^^http://timestamp.comodoca.com/rfc3161^^ -v App.exe
 SignTool Error: An unexpected internal error has occurred.
 Error information: "Error: SignerSign() failed." (-2146869243/0x80096005)

A bit of troubleshooting later and I narrowed my problem down to the timestamping server, as removing the -tr option made it work as usual (albeit without the timestamping portion). There were some certificate changes for the timestamp server but I don’t believe this was the issue as the new certificate was OK and I remember their server occasionally not working for days even before this.

And then I remembered what I did the last time Comodo’s timestamp server crapped out. Quite often you can use other, more reliable, timestamp server. In my case I went with timestamp.digicert.com.

signtool sign -s "My" -sha1 $CERTIFICATE_THUMBPRINT -tr ^^http://timestamp.digicert.com^^ -v App.exe
 Successfully signed: App.exe
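
To double-check that both the signature and the timestamp made it in, SignTool’s own verify command serves as a quick sanity check (-pa selects the regular Authenticode verification policy):

signtool verify -pa -v App.exe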

PS: This same error might happen due to servers refusing SHA-1.

Connecting to CyberPower OR500LCDRM1U UPS Serial Port


To keep my file server and networking equipment running a bit longer in the case of power outage, I have them connected to CyberPower OR500LCDRM1U UPS. It’s a nice enough 1U UPS but with a major issue - no USB connection.

Well, technically there is a USB connection but it doesn’t work under anything other than Windows. If you want it working under Unix, the only option is the RMCARD205, an optional network module upward of $150, essentially doubling the price of the UPS.

The expansion card slot does expose a few internal connections, and it’s those Jeff Mayes took advantage of for a simple serial interface. If the only thing you want is a serial interface, you might as well go with his board as the price is really reasonable.

However, his boards require you to either have a serial port or a USB-to-serial cable. What I wanted was a direct USB connection. Since there was nothing out there, I decided to roll my own.

Since I had a UPS locally, it was easy enough to get the physical dimensions. Unfortunately, just measuring them wasn’t sufficient as the slot narrows as you go deeper, so my first assumption of 3.1x1.7 inches was a bit off. Due to that, and a bottom connector that was a bit shallower than expected, the final board dimensions ended up being more like 71x43 mm. It took a bit of probing to find the 4 signals I needed; they were grouped together, with GND and RX on the bottom while TX and 12 V were on the top.

Connecting the appropriate serial signals to a UART-to-USB converter like the MCP2221A was the minimum required, but I felt a bit queasy about connecting it directly to my computer. Therefore, I decided to isolate the UPS interface from the computer. For this purpose I used the Si8621 digital isolator offering 2,500 V of isolation, which was probably overkill but allowed me to sleep better.
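
Once the board enumerates as a serial device, talking to the UPS is just a matter of opening the port with the right settings. The snippet below is only a rough sketch: the device path, the 2400 baud 8N1 settings, and the status query are assumptions based on commonly documented CyberPower serial behavior, not something verified in this post.

# all values here are assumptions: device path, speed, and the query string
stty -F /dev/ttyACM0 2400 cs8 -cstopb -parenb raw
cat /dev/ttyACM0 &              # print whatever the UPS sends back
printf 'P4\r' > /dev/ttyACM0    # hypothetical status poll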

The last physical piece needed was a cover for the card to avoid having a large opening in the back of my rack. While the risk of anything getting inside is reasonably low, making a 3D printed cover was easy enough. It took a few tries to get the cover design right in TinkerCAD but it avoided having a gaping hole.

If you are interested in making one for yourself, check the project page for all the files.

Testing Native ZFS Encryption Speed

[2020-11-02: There is a newer version of this post]

Writing about installing ZFS with native encryption on Ubuntu 20.04 got me thinking… Should I abandon my LUKS setup and switch? Well, I guess some performance testing was in order.

For this purpose I decided to go with Ubuntu Server (to minimize any impact a desktop environment might have) inside a 2-CPU virtual machine with 24 GB of RAM. Two CPUs should be enough to show any multithreading performance difference, while the 24 GB of RAM is there to give a home to our ZFS disks. I didn’t want to depend on disk speed and the variation it brings; for testing purposes I only care about the relative speed difference, and using RAM instead of real disks gives more repeatable results.

For the OS I used Ubuntu Server with the ZFS packages, carved out a chunk of memory for RAM disks, and limited the ZFS ARC to 1 GB.

sudo -i << EOF
    apt update
    apt dist-upgrade -y
    apt install -y zfsutils-linux
    # create a 20 GB tmpfs RAM disk entry unless one is already present
    grep "/ramdisk" /etc/fstab || echo "tmpfs  /ramdisk  tmpfs  rw,size=20G  0  0" \
        | sudo tee -a /etc/fstab
    # cap the ZFS ARC at 1 GiB unless already configured
    grep "zfs_arc_max" /etc/modprobe.d/zfs.conf || echo "options zfs zfs_arc_max=1073741824" \
        | sudo tee /etc/modprobe.d/zfs.conf
    reboot
EOF
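
After the reboot, it doesn’t hurt to confirm that the ARC limit actually took effect:

cat /sys/module/zfs/parameters/zfs_arc_max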

With the system in a pristine state, I created the data used for testing (2 GiB of random bytes).

dd if=/dev/urandom of=/ramdisk/data.bin bs=1M count=2048

The data disks are just a bunch of zeros (3 GB each) and the (RAID-Z2) ZFS pool has the usual settings, but with compression turned off and sync set to always in order to minimize their impact on the results.

for I in {1..6}; do dd if=/dev/zero of=/ramdisk/disk$I.bin bs=1MB count=3000; done
echo "12345678" | zpool create -o ashift=12 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O encryption=^^aes-256-gcm^^ -O keylocation=prompt -O keyformat=passphrase \
    -O compression=off -O sync=always -O mountpoint=/zfs TestPool raidz2 \
    /ramdisk/disk1.bin /ramdisk/disk2.bin /ramdisk/disk3.bin \
    /ramdisk/disk4.bin /ramdisk/disk5.bin /ramdisk/disk6.bin
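
Before the measurements, a quick look at the pool confirms all six file-backed disks assembled into the RAID-Z2 layout:

zpool status TestPool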

To get the write speed, I simply copied the data file multiple times and took the time reported by dd. To get a single figure, I removed the highest and the lowest value and averaged the rest.

sudo -i << EOF
    sudo dd if=/ramdisk/data.bin of=/zfs/data1.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data2.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data3.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data4.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data5.bin bs=1M
EOF
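
For completeness, the trimming itself is a one-liner; the speeds below are placeholder values rather than actual measurements:

# drop the fastest and slowest run, then average the rest (example values)
echo "612 598 605 641 587" | tr ' ' '\n' | sort -n | sed '1d;$d' \
    | awk '{ sum += $1; n++ } END { print sum/n " MB/s" }'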

For reads I took the files that were written and dumped them to /dev/null. The averaging procedure was the same as for the writes.

sudo -i << EOF
    sudo dd if=/zfs/data1.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data2.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data3.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data4.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data5.bin of=/dev/null bs=1M
EOF


With all that completed, I had my results.

I was quite surprised how close the different key sizes were in performance. If your processor supports the AES instruction set, there is no reason not to go with 256 bits. Only on an older processor without encryption support does 128-bit crypto make sense. There was a 15% difference in read speeds in favor of the GCM mode, so I would probably go with that as my cipher of choice.

However, once I added measurements without the encryption and for the LUKS-based crypto, I was shocked. I expected things to go faster without the encryption, but I didn’t expect such a huge difference. Also surprising was seeing the LUKS encryption have triple the performance of the native one.


Now, this test is not completely fair. In real life, with a more powerful machine and proper disks, you won’t see such a huge difference. The sync=always setting is a performance killer and results in more encryption calls than you would normally see. However, you will still see some difference, and good old LUKS seems like the winner here. It’s faster out of the box, it uses less CPU, and it encrypts all the data (not leaving metadata in the clear as ZFS does).

I will also admit that this comparison leans toward the apples-to-oranges kind. The reason to use ZFS’ native encryption is not its performance but the extra benefits it brings. Part of those extra cycles goes into authenticating each written block with a strong MAC. Leaving metadata unencrypted does leak a bit of (meta)data, but it also enables send/receive without either side ever being decrypted - ideal for a backup box in an untrusted environment. You can back up the data without ever needing to enter the password on the remote side. Lastly, let’s not forget that giving ZFS direct access to the physical drives allows it to shine when it comes to fault detection and handling. You will not get anything similar when interfacing over a virtual device.
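
That send/receive benefit is worth illustrating. A raw send transfers the blocks still encrypted, so the destination never needs the key loaded; the dataset and host names below are made up for the example:

zfs snapshot TestPool/data@backup1
zfs send --raw TestPool/data@backup1 | ssh backuphost zfs receive BackupPool/data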

Personally, I will continue using the LUKS-based full disk encryption for my desktop machines. It’s just much faster. And I probably won’t touch my servers for now either. But I have a feeling that really soon I might give native ZFS encryption a spin.

[2020-11-01: Newer updates of 0.8.3 (0.8.3-1ubuntu12.4) have greatly improved the GCM speed. With those optimizations, the GCM mode is now faster than LUKS. For more details check the 20.10 post.]


PS: You can take a peek at the raw data if you’re so inclined.

Installing UEFI ZFS Root on Ubuntu 20.04 (with Native Encryption)

There is a newer version of this guide for Ubuntu 21.10.


Technically, I already have a guide for an encrypted ZFS setup on Ubuntu 20.04. However, that guide used LUKS and, as one reader correctly noted in the comments (thanks Alex!), there was no reason not to use ZFS’ native encryption. So, here is an adjusted variant of my setup.

First of all, Ubuntu has had a ZFS setup option in its installer since 19.10. You should use it instead of the manual installation procedure unless you need something special. Namely, manual installation allows for encryption, in addition to a custom pool layout and naming. You should also check the great Root on ZFS installation guide that’s part of the ZFS-on-Linux project for a full picture. I find its final ZFS layout a bit too complicated for my taste, but there are a lot of interesting tidbits on that page. Here is my somewhat simplified version of the same, intended for a single disk installation.

After booting into the Ubuntu desktop installation, we want to get a root prompt. All further commands are going to need root credentials anyhow.

sudo -i

The very first step should be setting up a few variables - disk, pool, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/^^ata_disk^^
POOL=^^ubuntu^^
HOST=^^desktop^^
USER=^^user^^

The general idea of my disk setup is to maximize the amount of space available for the pool with a minimum of supporting partitions. If you are planning to have multiple kernels, increasing the boot partition size might be a good idea.

blkdiscard $DISK

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+127M -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+512M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Finally, we’re ready to create the system ZFS pool. Note that encryption has to be enabled at the moment the pool is created; it cannot be turned on for an existing dataset later.

zpool create -o ashift=12 -o autotrim=on \
    -O compression=lz4 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
    -O canmount=off -O mountpoint=none -R /mnt/install $POOL $DISK-part3
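
Before going further, one can quickly confirm that encryption is active and the key is loaded:

zfs get encryption,keystatus $POOL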

On top of this encrypted pool, we can create our root dataset.

zfs create -o canmount=noauto -o mountpoint=/ $POOL/root
zfs mount $POOL/root

Assuming UEFI boot, two additional partitions are needed: one for EFI and one for booting. Unlike what you get with the official guide, here I don’t use a ZFS pool for the boot partition but plain old ext4. I find potential fixups work better that way and boot compatibility is better. If you are thinking about mirroring, making it bigger and using ZFS might be a good idea. For a single disk, ext4 will do.

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

To start the fun we need the debootstrap package.

apt install --yes debootstrap

Bootstrapping Ubuntu on the newly created pool is next. This will take a while.

debootstrap focal /mnt/install/

zfs set devices=off $POOL

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

If you are installing via WiFi, you might as well copy your wireless credentials. Don’t worry if this returns errors - that just means you are not using wireless.

mkdir -p /mnt/install/etc/NetworkManager/system-connections/
cp /etc/NetworkManager/system-connections/* /mnt/install/etc/NetworkManager/system-connections/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install \
    /usr/bin/env DISK=$DISK POOL=$POOL USER=$USER \
    bash --login

Let’s not forget to set up the locale and time zone.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image.

apt update
apt install --yes --no-install-recommends linux-image-generic linux-headers-generic

Followed by boot environment packages.

apt install --yes zfs-initramfs grub-efi-amd64-signed shim-signed tasksel

To mount the boot and EFI partitions, we need to do some fstab setup.

echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
cat /etc/fstab

Now we look up the installed kernel version and update the initramfs for it.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

Grub update is what makes EFI tick.

update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy
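
If you want to verify that a UEFI boot entry was actually created, efibootmgr can list them (thanks to the rbind-mounted /sys, the EFI variables are visible inside the chroot):

efibootmgr -v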

Finally, we install our GUI environment. I personally like ubuntu-desktop-minimal, but you can opt for ubuntu-desktop. In any case, it’ll take a considerable amount of time.

tasksel install ubuntu-desktop-minimal

A short package upgrade will not hurt.

apt dist-upgrade --yes

We can omit creating the swap dataset, but I personally find a small one handy.

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=off -o logbias=throughput \
    -o sync=always -o primarycache=metadata -o secondarycache=none $POOL/swap
mkswap -f /dev/zvol/$POOL/swap
echo "/dev/zvol/$POOL/swap none swap defaults 0 0" >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

If one is so inclined, the /home directory can get a separate dataset too.

rmdir /home
zfs create -o mountpoint=/home $POOL/home

The only remaining tasks before restart are to create the user, assign a few extra groups to it, and make sure its home directory has the correct owner.

adduser --disabled-password --gecos '' $USER
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sudo $USER
chown -R $USER:$USER /home/$USER
passwd $USER

As the install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a

After the reboot you should be able to enjoy your installation.

reboot


PS: There are versions of this guide using the native ZFS encryption for other Ubuntu versions: 22.04 and 20.04.

PPS: For LUKS-based ZFS setup, check the following posts: 20.04, 19.10, 19.04, and 18.10.

[2020-06-27: Added blkdiscard and autotrim.]

Fixing Git Author Name and Email

After moving a Mercurial repository to Git, you might want to update the user names and emails.

The first step would be to see all the names:

{ git log --format='%an <%ae>'; git log --format='%cn <%ce>'; } | sort | uniq

With this information in hand, we can adjust names with filter-branch:

git filter-branch --force --commit-filter '
    OLD_NAME="^^unknown^^"
    NEW_NAME="^^My name^^"
    NEW_EMAIL="^^myemail@example.com^^"
    if [ "$GIT_AUTHOR_NAME" = "$OLD_NAME" ]; then
        GIT_AUTHOR_NAME="$NEW_NAME"
        GIT_AUTHOR_EMAIL="$NEW_EMAIL"
    fi
    if [ "$GIT_COMMITTER_NAME" = "$OLD_NAME" ]; then
        GIT_COMMITTER_NAME="$NEW_NAME"
        GIT_COMMITTER_EMAIL="$NEW_EMAIL"
    fi
    git commit-tree "$@";
' --tag-name-filter cat -- --branches --tags --all
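
Once you are happy with the rewritten history, the backup references that filter-branch leaves behind can be removed: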

git for-each-ref --format='delete %(refname)' refs/original | git update-ref --stdin
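
As a final check, re-running the name listing should now show only the updated identities:

{ git log --format='%an <%ae>'; git log --format='%cn <%ce>'; } | sort | uniq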