[2022-10-30: There is a newer version of this post]
With the new Ubuntu LTS release, it came time to repeat my ZFS encryption testing. Is ZFS encryption speed better, worse, or the same?
I won't go into the test procedure much since I explained it back when I did it the first time. Outside of really minor differences in the exact disk size, the procedure didn't change. What did change is that I am not running it on a virtual machine anymore.
These tests I did on a Framework laptop with an i5-1135G7 processor and 32 GB of RAM. It's a bit more consistent setup than the virtual machine I used before. Due to this change, the numbers are not really comparable to the ones from previous tests, but that should be fine - our main interest is in the relative numbers.
First of all, we can see that CCM encryption is not worth a dime if you have any AES-capable processor. The difference between CCM and any other encryption I tested is huge, with CCM being 5-6 times slower. Only once I turned off AES support in the BIOS did its inclusion make even minimal sense, as that actually improves its relative performance. And no, it doesn't suck less - it's just that all the other encryption methods suck more.
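If you want to repeat the CCM-versus-GCM comparison on your own machine, creating two datasets that differ only in the encryption mode is enough. A minimal sketch, assuming an existing pool named tank (the pool and dataset names here are placeholders, not from my test setup):
# "tank", "ccmtest" and "gcmtest" are hypothetical names; both commands prompt for a passphrase
zfs create -o encryption=aes-256-ccm -o keyformat=passphrase tank/ccmtest
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/gcmtest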
Assuming our machine has a processor made in the last 5 or so years, the native ZFS GCM encryption becomes the clear winner. Yes, the 128-bit variant is a bit faster than the 256-bit one (as expected), but the difference is small enough that it probably won't matter. What will matter is that any GCM wins over LUKS. Yes, reads are slightly faster using standard XTS LUKS, but writes clearly favor the native ZFS encryption.
Unless you really need the ultimate cryptographic opacity that LUKS encryption brings, native ZFS encryption using GCM is still the way to go. And yes, even though the GCM modes are performant, we still lose about 10-15% on writes and about 30% on reads when compared to no encryption at all. Mind you, synthetic tests like these give you the worst-case figures; the real performance loss is much lower.
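For reference, the LUKS side of such a comparison boils down to formatting each disk with the standard XTS cipher and then building the pool on top of the mapped device. A rough single-disk sketch (my actual test used more disks); the device and mapper names are placeholders:
# /dev/sdX and "crypt1" are placeholders for your own disk and mapping name
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdX
cryptsetup open /dev/sdX crypt1
zpool create tank /dev/mapper/crypt1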
Make what you want of it, but I'll keep encrypting my drives. They're plenty fast.
PS: You can take a peek at the raw data if you’re so inclined.
Since you’re running RaidZ on 6 concurrent LUKS containers it’s not really much of a surprise that it’s slower.
Could you do the same test without RaidZ? Many run ZFS on a single disk. Of course you then lose the self-repair feature, but you still get the benefits of compression, snapshots, etc. – so this is still a valid setup for desktop machines.
I might look into this for the next round of testing. However, I would be surprised if the results differed a lot when it comes to ratios; i.e., I believe a 10% speed difference will still be a 10% speed difference, since these measurements were done in memory and thus disk throughput wasn't a limiting factor.
I did test a single-disk ZFS setup on a physical disk back with 0.8.4. At that time, LUKS was actually better according to the in-memory tests, and what I discovered was that it was better on the physical disk too, with a similar ratio.
That said, since encryption makes use of specific CPU instructions, any test I make won't necessarily mean anything for you, as your CPU/OS might deal with AES operations differently, resulting in different speeds. I would always advise testing it yourself (actual commands are in the linked document) and not trusting people speaking on the Internet. :)
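As a quick sanity check before benchmarking, it's worth confirming that your CPU actually exposes AES instructions and seeing which implementation ZFS's crypto module picked. A minimal sketch, assuming an OpenZFS build that exposes the icp module parameters (as shown in a comment below):
# does the CPU advertise AES-NI at all?
grep -m1 -o aes /proc/cpuinfo
# which AES/GCM implementation did ZFS select? (selected one is shown in brackets)
cat /sys/module/icp/parameters/icp_aes_impl
cat /sys/module/icp/parameters/icp_gcm_impl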
Can you please test again? I am seeing significant performance regressions on the latest Ubuntu 22.04 kernels, 100% reproducible. This is on an E3-1240 v6 / Supermicro X11SSM-F, but I can reproduce it on several devices.
5.15.0-27: 902 MB/s write speed
5.15.0-37: 106 MB/s write speed
root@blink:~# uname -a
Linux blink 5.15.0-37-generic #39-Ubuntu SMP Wed Jun 1 19:16:45 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
root@blink:~# cat /dev/zero | pv > /tmp/mount/zero
3.61GiB 0:00:34
root@blink:~# uname -a
Linux blink 5.15.0-27-generic #28-Ubuntu SMP Thu Apr 14 04:55:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
root@blink:~# cat /dev/zero | pv > /tmp/mount/zero
3.61GiB 0:00:04
This is the test script:
# backing files and the test mountpoint live in tmpfs, so disk speed stays out of the picture
mkdir /tmp/tmpfs
mkdir /tmp/mount
mount -t tmpfs none /tmp/tmpfs
# two 4 GB image files act as the pool's "disks"
dd if=/dev/zero of=/tmp/tmpfs/disk1.img bs=1M count=4000
dd if=/dev/zero of=/tmp/tmpfs/disk2.img bs=1M count=4000
# mirrored pool on top of the image files
zpool create -f \
-O atime=off \
-O compression=off \
-O dedup=off \
-O canmount=off \
-m none \
-o ashift=12 \
test mirror \
/tmp/tmpfs/disk1.img \
/tmp/tmpfs/disk2.img
# encrypted dataset; prompts for a passphrase on creation
zfs create -u \
-o mountpoint=/tmp/mount \
-o keylocation=prompt \
-o keyformat=passphrase \
-o encryption=aes-256-gcm \
test/test
zfs mount test/test
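Cleaning up after a run is just a matter of destroying the throwaway pool and unmounting the tmpfs; roughly:
# destroying the pool unmounts its datasets as well
zpool destroy test
umount /tmp/tmpfs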
I’ve seen the exact same problem. Was hoping someone else would track it down. 5.15.0-27 is the last known good; it's still broken in 5.15.0-39.
Known problem, fix pending: https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1969482
The updated kernel was released, and it fixes the performance regression.
# uname -a
Linux arpa 5.15.0-43-generic #46-Ubuntu SMP Tue Jul 12 10:30:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
# zfs --version
zfs-2.1.4-0ubuntu0.1
zfs-kmod-2.1.4-0ubuntu0.1
# grep . /sys/module/icp/parameters/*impl*
/sys/module/icp/parameters/icp_aes_impl:cycle [fastest] generic x86_64 aesni
/sys/module/icp/parameters/icp_gcm_impl:cycle [fastest] avx generic pclmulqdq
# dd if=14GBfile.tmp of=/dev/null bs=1M
13411+1 records in
13411+1 records out
14062902185 bytes (14 GB, 13 GiB) copied, 12.6139 s, 1.1 GB/s
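On that note, if anyone suspects the "fastest" auto-selection settled on a bad implementation, it can be overridden at runtime. A hedged example, assuming those icp module parameters are writable on your build:
# force a specific AES/GCM implementation instead of the benchmarked "fastest" pick
echo aesni > /sys/module/icp/parameters/icp_aes_impl
echo avx > /sys/module/icp/parameters/icp_gcm_impl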
Hmm, I wonder how you get these results where native ZFS encryption is faster or on par.
At least with a recent 6.2 Ubuntu kernel and ZFS 2.1.12, I am getting *very* bad performance with native encryption; it falls behind ZFS+LUKS by a factor of 2-4x:
– doing zfs send/receive over a 10gig link yields 100% CPU usage on the receiver (writing to a dataset with native encryption) and 1.5 Gbit speed; switching to LUKS bumped it to 4 Gbit
– reading a big file using dd with native encryption gives me 220 MB/s, with LUKS 720 MB/s
These are the results from my NAS with a Xeon D-2123IT and a pool of 8x6TB drives (4 mirrors). Default encryption options (aes-256-gcm for ZFS and aes-256-xts for LUKS).