ZFS Pool on SSD

I am a creature of habit. A long time ago I found a ZFS setup that works for me and haven’t changed it much since. But sometimes I wonder whether those settings still hold with SSDs in the game. Most notably, are 4K blocks still the best?

Since I already “had” to update my desktop to Ubuntu 21.10, I used that opportunity to clear my disks and install from scratch. And it would be a shame not to run some tests first on my XPG SX6000 Pro, the SSD I use purely for data storage. After trimming this DRAM-less SSD, I tested the pool across multiple recordsize values and at ashift values of 12 (4K blocks) and 13 (8K blocks).

My goal was finding good default settings for both bulk storage and virtual machines. Unfortunately, those are quite opposite requirements. Bulk storage benefits greatly from good sequential access, while virtual machines love random I/O more. Fortunately, with ZFS, one can accomplish both using two datasets with different recordsize values. But the ashift value has to be the same.
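
As a rough sketch, assuming a pool named tank on a hypothetical device, that layout looks something like this (ashift is fixed at pool creation, recordsize can be set per dataset):

# pool-wide 4K sectors; ashift cannot be changed later
zpool create -o ashift=12 tank /dev/disk/by-id/example-ssd

# bulk storage dataset keeps the default 128K records
zfs create -o recordsize=128K tank/storage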

Due to erase block sizes getting larger and larger, I expected performance to be better with 8K “sectors” (ashift=13) than what I usually used (ashift=12). But I was surprised.

First of all, the results were all over the place, but it seems that ashift=12 is still a valid starting point. It might be due to my SSD having a smaller-than-expected erase page, but I doubt it. My thoughts go more toward SSDs being optimized for 4K loads. And the specific SSD I tested with is DRAM-less, which makes any such optimizations even more visible.

Optimizations are probably also the reason recordsize=128K performed so well in the random I/O scenarios. For sequential access you would expect it, but for random access it makes no sense how fast it is. Whatever is happening, it definitely keeps recordsize=128K the best general choice. Regardless, for VMs, I created a sub-dataset with much smaller 4K records (and compression off) just to lower write amplification a bit.
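
The VM sub-dataset amounts to a single command (pool and dataset names here are hypothetical):

# VM dataset: small records to reduce write amplification, no compression
zfs create -o recordsize=4K -o compression=off tank/vm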

The full test results are in Google Sheets. For testing I used fio’s fio-rand-RW.fio and fio-seq-RW.fio profiles.
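
If it helps with reproducing the numbers, those profiles ship with fio (in its examples directory); adjust the directory/filename settings inside them to land on the dataset under test. A minimal invocation, assuming the job files are in the current directory:

# random and sequential read/write profiles from fio's examples directory
fio fio-rand-RW.fio
fio fio-seq-RW.fio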

Overescaping By Default

Writing JSON has become trivial in C# and there’s no class I like better for that purpose than Utf8JsonWriter. Just look at a simple example:

var jsonUtf8 = new Utf8JsonWriter(Console.OpenStandardOutput(),
                                  new JsonWriterOptions() { Indented = true });
jsonUtf8.WriteStartObject();
jsonUtf8.WriteString("Test", "2+2");
jsonUtf8.WriteEndObject();
jsonUtf8.Flush();

This simple code will produce perfectly valid JSON:

{
  "Test": "2\u002B2"
}

While valid, you’ll notice this is slightly different from what any other programming language would produce. A single plus character became the escape sequence \u002B.

In their eternal wisdom, .NET architects decided that, by default, JSON should be over-escaped, and they “explained” their reasoning in the ticket. Essentially, they did it out of an abundance of caution, to avoid any issues if someone puts JSON where it might not be expected.

Mind you, in 99% of cases JSON is used in an HTTP body and thus doesn’t need this, but I guess one odd case justifies this non-standard but valid output in their minds. And no, other JSON encoders don’t behave this way. Only .NET, as far as I can tell.

Fortunately, some time later, they also implemented what I (alongside probably 90% of developers) consider the proper JSON encoder, one which escapes just the mandatory characters and leaves the rest of the text alone. It just requires a small extra parameter.

var jsonUtf8 = new Utf8JsonWriter(Console.OpenStandardOutput(),
                                  new JsonWriterOptions() { Indented = true,
                                    Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping });
jsonUtf8.WriteStartObject();
jsonUtf8.WriteString("Test", "2+2");
jsonUtf8.WriteEndObject();
jsonUtf8.Flush();
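
With the encoder swapped out, the same code leaves the plus sign alone:

{
  "Test": "2+2"
}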

Using UnsafeRelaxedJsonEscaping is not unsafe despite its name; darn it, it’s not even relaxed compared to the specification. It’s just a properly implemented JSON encoder without any extra nonsense thrown in.

Framework Expansion Board

One of the most exciting recent developments in the laptop world for me is definitely the Framework laptop. A major component of that concept is its expansion cards. And, of course, you can build your own.

This repository is quite encompassing if you’re using KiCAD. However, for those who love nicer tools (ehm, DipTrace), it’s annoying to find that there is no board size specification in a human-readable format (and no, KiCAD XML doesn’t count). So I decided to figure it out.

To cut a long story short, here are the board outline points (in millimeters) for the expansion card PCB:

  • (0.0, 0.0)
  • (26.0, 0.0)
  • (26.0, 26.5)
  • (25.0, 26.5)
  • (25.0, 30.0)
  • (17.7, 30.0)
  • (17.7, 28.0)
  • (16.0, 28.0)
  • (16.0, 29.0)
  • (10.0, 29.0)
  • (10.0, 28.0)
  • (8.3, 28.0)
  • (8.3, 30.0)
  • (1.0, 30.0)
  • (1.0, 26.5)
  • (0.0, 26.5)

In order to make it slightly nicer to handle, each corner is additionally rounded with a 0.3 mm radius.

And let’s not forget two holes at (1.7, 10.5) and (24.3, 10.5), both with a 2.2 mm diameter and 4.9 mm keepout region.

With that information in hand, one can create the PCB in any program they might prefer. Of course, I already did so for DipTrace and you can download the files here.

And yes, the PCB is just the first step in the development process. What I found the hardest was actually getting appropriate connectors for the enclosure, as there’s not much height to work with.


PS: No, I do not own a Framework laptop at this time. I am waiting for a 15.6" model as 13.5" is simply too small for me when not using an external monitor.

Web Server Certificate From a File

If one desires to run an HTTPS server from C#, they might get the following warning:

Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found or is out of date. To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'. For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.

And yes, one could follow the instructions and have everything running. But where’s the fun in that?

An alternative approach would be to load the certificate from a file, and .NET makes that really easy.

private static X509Certificate2? GetCertificate() {
  var certFilename = Path.Combine(AppContext.BaseDirectory, "my.pfx");
  if (File.Exists(certFilename)) {
    try {
      return new X509Certificate2(certFilename);
    } catch (CryptographicException ex) {
      // log error or whatever
    }
  }
  return null;
}

So, when bringing the server up, we can just call it using something like this:

var cert = GetCertificate();
options.Listen(IPAddress.Any, 443, listenOptions => {
  listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
  if (cert != null) {
    listenOptions.UseHttps(cert);
  } else {
    listenOptions.UseHttps();
  }
});

Zip in Git Bash

While creating a build system that works across platforms, one can find issues in the most basic things. And that’s even when the shell is the same. For example, while Bash on Linux and Windows works the same, a lot of the supporting tools differ - a lot. And there’s no better example than creating a zip archive.

Under Linux you can count on the zip command being available. Even if one doesn’t have it, it’s easy to install without messing with the desktop. On Windows, the story gets more complicated. Git Bash, for example, doesn’t ship it at all and there’s no really good way to add it. Yes, you can use any archiving application, but a different one is installed on every system. To create more “fun”, supporting multiple applications also means dealing with their command-line arguments. And yes, 7-Zip has a completely different syntax compared to WinRAR.

However, when it comes to making a zip archive, there’s actually a solution that works on both Windows (via Git Bash) and Linux. Surprisingly, the answer is Perl.

If one is careful to use Perl’s older IO::Compress::Zip library (a core module, so it’s available in Git Bash’s Perl too), creating an archive becomes a simple task:

perl -e '
  use strict;
  use warnings;
  use autodie;
  use IO::Compress::Zip qw(:all);
  zip [
    "src/mimetype",
    <"src/META-INF/*.*">,
    <"src/OEBPS/*.*">,
    <"src/OEBPS/chapters/*.*">
  ] => "bin/book.epub",
       FilterName => sub { s[^src/][] },
       Zip64 => 0,
  or die "Zip failed: $ZipError\n";
'

Yeah, it might not be ideal when it comes to beauty, but it definitely works across platforms.