To Pool or Not to Pool


For a project of mine I “had” to do a lot of string concatenations. The easy solution was just to have a StringBuilder and go wild. But I wondered: does it make sense to use an ObjectPool (found in the Microsoft.Extensions.ObjectPool package)? Thus, I decided to do a few benchmarks.

For my use case, “small” was just appending 3 items to a StringBuilder. The “medium” one does a total of 21 appends. And finally, “large” does 201 appends. And no, there is no real reason I used those exact numbers other than the loop ending up nice. :)

After all this, benchmark results (courtesy of BenchmarkDotNet):

| Test                        |       Mean |     Error |    StdDev |   Gen0 |   Gen1 | Allocated |
|---------------------------- |-----------:|----------:|----------:|-------:|-------:|----------:|
| StringBuilder (small)       |  16.295 ns | 0.1240 ns | 0.1160 ns | 0.0181 |      - |     152 B |
| StringBuilder Pool (small)  |  17.958 ns | 0.3125 ns | 0.2609 ns | 0.0057 |      - |      48 B |
| StringBuilder (medium)      |  87.052 ns | 1.5177 ns | 1.4197 ns | 0.0832 | 0.0001 |     696 B |
| StringBuilder Pool (medium) |  31.245 ns | 0.1815 ns | 0.1417 ns | 0.0181 |      - |     152 B |
| StringBuilder (large)       | 304.724 ns | 1.6736 ns | 1.3975 ns | 0.4520 | 0.0029 |    3784 B |
| StringBuilder Pool (large)  | 172.615 ns | 1.5325 ns | 1.4335 ns | 0.1471 |      - |    1232 B |

As you can see, if you are doing just a few appends, it’s probably not worth messing with ObjectPool. Not that you should use StringBuilder either. If you are adding 4 or fewer strings, you might as well concatenate them - it’s actually more performant.

However, if you are adding 5 or more strings together, the pool is no worse than instantiating a new StringBuilder. So, for pretty much any scenario where you would use a StringBuilder, it pays off to pool it.

Is there a situation where you would avoid the pool? Well, performance-wise, I would say probably not. I ran multiple tests and, on my computer, there was no situation where StringBuilder alone was better than either the pool or concatenation. Yes, StringBuilder is performant at a low number of appends, but plain string concatenation is better there. As soon as you go over a few appends, ObjectPool actually makes sense.

However, the elephant in the room is ObjectPool’s dependency on an external package. Call me old-fashioned, but there is value in not depending on extra packages.

The final decision is, of course, up to you. But, if performance is important, I see no reason not to use ObjectPool. I only wish it weren’t an extra package.


For the curious, the code was as follows:

[Benchmark]
public string Large_WithoutPool() {
    var sb = new StringBuilder();
    sb.Append("Hello");
    for (var i = 0; i < 100; i++) {
        sb.Append(' ');
        sb.Append("World");
    }
    return sb.ToString();
}

[Benchmark]
public string Large_WithPool() {
    var sb = StringBuilderPool.Get();
    try {
        sb.Append("Hello");
        for (var i = 0; i < 100; i++) {
            sb.Append(' ');
            sb.Append("World");
        }
        return sb.ToString();
    } finally {
        sb.Length = 0;  // reset so the next caller gets an empty builder
        StringBuilderPool.Return(sb);
    }
}
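The StringBuilderPool helper used in the listing above is not shown in the post. One way it could be wired up with Microsoft.Extensions.ObjectPool might look like this (a sketch; the class itself and the default capacities are my own choices, not necessarily what I used):

```csharp
using System.Text;
using Microsoft.Extensions.ObjectPool;

internal static class StringBuilderPool {
    // CreateStringBuilderPool uses StringBuilderPooledObjectPolicy under
    // the hood: it clears a builder when it is returned and discards
    // builders that grew beyond the maximum retained capacity.
    private static readonly ObjectPool<StringBuilder> Pool =
        new DefaultObjectPoolProvider().CreateStringBuilderPool();

    public static StringBuilder Get() => Pool.Get();
    public static void Return(StringBuilder sb) => Pool.Return(sb);
}
```

With this policy in place, resetting the length before returning is technically redundant (the policy clears on return), but it does no harm.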

And yes, I also tested just a simple string concatenation (which is quite optimized for a smaller number of concatenations):

| Test                   |         Mean |      Error |     StdDev |    Gen0 |   Gen1 | Allocated |
|----------------------- |-------------:|-----------:|-----------:|--------:|-------:|----------:|
| Concatenation (small)  |     9.820 ns |  0.2365 ns |  0.2429 ns |  0.0105 |      - |      88 B |
| Concatenation (medium) |   146.901 ns |  1.6561 ns |  1.2930 ns |  0.2294 |      - |    1920 B |
| Concatenation (large)  | 4,710.573 ns | 43.5370 ns | 96.4750 ns | 15.2054 | 0.0458 |  127200 B |

LocoNS

If one checks all the freeware stuff I made over the years, they might notice a theme. They are usually solving a problem that seemingly only I have. And yes, this program is one of those too.

As many people do, I have most of my internal DNS resolution handled by mDNS. I used to have it done by my router, but over time I moved to encrypted DNS and spinning that up internally seemed like overkill. So, I just rely on all elements having their mDNS running and everything getting auto-magically resolved. For devices that are not capable of resolving mDNS themselves, I used to run Avahi on my main server. Avahi uses my hosts file and thus I avoid having to distribute config to each machine. Except that Avahi doesn’t really understand my hosts file.

Part of the issue is having two different names for the same server. For example, I have a main server and its backup, each with a unique name (vilya and nenya). But I don’t use those names directly. I usually access the active one using a common name (ring) that is switched between them as I need to do some work. Usually ring is the same IP as my main server (vilya). But, if I know I am going to do some work, I will redirect it to the backup server (nenya) in order to keep (read-only) access to all the family stuff. Once done, ring just moves back.

And this simple scenario is something Avahi specifically will not do. Avahi allows only one DNS name per IP, no exceptions. And that’s probably how it should be. But that’s not how I want it. So, I built LocoNS.

LocoNS is as dumb as mDNS servers get. By default, it will bind to all available interfaces and use the hosts file as its source of truth. If there are multiple names for an IP address (as is explicitly allowed in the hosts file), it will learn all of them. In addition, it will listen to other mDNS traffic and remember where things are. If there is any query, LocoNS will respond immediately.
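For illustration, a hosts file entry along these lines (the addresses here are made up) would teach LocoNS both names for the same machine:

```
# /etc/hosts - multiple names per address are explicitly allowed
192.168.1.10    vilya ring
192.168.1.11    nenya
```

When work needs doing, moving ring over to nenya’s line is all it takes to repoint the common name.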

The whole application is set up so it works with an unmodified hosts file, and no special configuration should be necessary for it to work. Of course, you can still change the functionality. For example, you can define which interfaces you want to use, whether you even want to “learn” from other mDNS servers, or whether you want to use the hosts file to begin with. But configuration is intentionally kept simple.

And no, LocoNS is not a full mDNS solution. To start with, it only supports A and AAAA records. Its intention is to be only a supporting element that solves one issue mDNS doesn’t usually solve for me.

If this piqued your curiosity, the download is available on its page. You can download either an AppImage, a Debian package, or a Docker image. And yes, I know there is no Windows download. While LocoNS will work under Windows, I am just too lazy to make it into a service. I guess I might, if enough people scream at me. Chances are, that probably won’t happen.

If this all sounds like a problem you also need solved, do check it out.

Moving from Legacy to UEFI Boot


My home media PC is running on old hardware, which wasn’t really an issue. But, recently, it started messing with me. So, I decided to move it to a (slightly) newer computer. This should be as easy as transferring the disk. But, in my case, the old system used legacy boot and the new system only does UEFI. So, for the disk transplant to take, I first had to move to UEFI on the old computer.

Fortunately, Microsoft actually has a half-decent answer. More importantly, it also provides a tool to automate the process. Call me a chicken, but I am always worried when I touch my partitions.

First, you need to boot into the recovery environment. In theory, this should be possible by holding the <Shift> key while restarting. In practice, I rarely succeed using this method. What I found works more reliably is simply turning off the machine in the middle of a boot. It’s a bit of a brute-force solution but, after two unsuccessful boots, Windows will happily cooperate.

Once you boot into the Windows Recovery environment, go to Troubleshoot, Advanced options, Command Prompt. There you can run the validation command:

mbr2gpt.exe /validate

This command will let you know if anything about your setup is unsupported. If you have a standard Windows installation, you’ll be fine. If you have extra partitions on your boot drive, you might want to remove them before proceeding.

Once the validation passes, you can trigger the conversion from MBR to GPT which, in this case, also means changing the boot from legacy to UEFI.

mbr2gpt.exe /convert

This command will be done in less than a minute. You might get a warning about WinRE, but don’t worry about that right now. Next, power off the system using the Turn off your PC option.

When you start the system the next time, you will probably need to go into BIOS setup (F2 or Del usually does the trick). Now you can select the UEFI boot option and disable the old legacy one. A short reboot later, your Windows should boot using UEFI.
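If you want to double-check which mode Windows actually came up in, the BIOS Mode line in msinfo32 will tell you. Alternatively, a Command Prompt on Windows 8 and later exposes a dynamic variable for it:

```
echo %firmware_type%
```

It should say UEFI after the conversion (and Legacy before it).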

Now you can sort out the WinRE warning by disabling and re-enabling it:

reagentc /disable
reagentc /enable

The Story of a Persistent Companion


I am a fan of science fiction books and I rarely go toward other genres. I mean, why bother with dark present (or dark past) when you can read about dark future? But, I occasionally do read things that contain no aliens. And one of the alien-deficient authors I like is John Green.

If that name sounds familiar, it’s probably from his Crash Course World History. I watched that darn series with my kids multiple times and, even though there was some grumbling, it was an overall enjoyable experience. My first notion of him as an author was Looking for Alaska, a book that I am definitely too old for but one that I enjoyed immensely. Suffice to say that, if he writes something, it’s highly probable I will eventually read it. Maybe not immediately (again, not enough aliens in his work), but I will get around to it.

This time I actually jumped on his literary train early by preordering Everything Is Tuberculosis back in 2024 (December 31st still counts as 2024!). After reading many of his books, I felt sure enough that this one would be readable. The book did arrive on time, but then spent a few days just sitting around because I had no time for it.

But, when I got to it, I didn’t let the darn thing go. As often happens with good books and my poor writing skills, I cannot really tell you what made it such a good read. Maybe it was John’s voice playing in my head as if I were listening to one of his Crash Course series. Maybe it was the vivid stories about the impact of tuberculosis on real human beings. Maybe it was as simple as my own personal experiences. It doesn’t really matter; this book touched something that hasn’t been tickled in a while.

I won’t go directly into the book’s content. Not due to spoilers - tuberculosis is quite an old story. The reason is that you can watch John’s own The Deadliest Infectious Disease of All Time video where you’re essentially given the highlights. But, as good as the video is, the book is so much more. It really brings you along for the trip.

If you are going to read one book this year, it might as well be this one.

RayHunter and Access Denied

If you have a spare Orbic RC400L laying around, EFF’s RayHunter might give it a new lease on life. It always warms my heart to see old (and cheap) equipment get some love even as it goes gray. So, of course, I tried to get RayHunter running.

Fortunately, the instructions are reasonably clear. Just download the latest release and run install-linux.sh. However, on my computer, that resulted in an error:

thread 'main' panicked at serial/src/main.rs:151:27:
device found but failed to open: Access denied (insufficient permissions)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The error is clear - insufficient permissions. And you can get around it by running stuff as root. But that should be only a last resort. The proper way to handle this is to add a USB device rule that puts the device into the plugdev group and thus allows the current user to access it (at least on Ubuntu).

To do this, first add a file to the /etc/udev/rules.d/ directory for the 05c6:f601 device (double-check the numbers using lsusb, if needed).

sudo tee /etc/udev/rules.d/42-orbic-rc400l.rules << EOF
ACTION=="add", \
SUBSYSTEM=="usb", \
ATTRS{idVendor}=="05c6", \
ATTRS{idProduct}=="f601", \
GROUP="plugdev", \
TAG+="uaccess", \
ATTR{power/control}:="auto"
EOF

Once the file is in place, just reload the rules (or restart the computer).

sudo udevadm control --reload-rules && sudo udevadm trigger

With this, the script should now update the device without any further problems.
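If you want to confirm the rule actually took, you can look the device up again while it is plugged in (the bus and device numbers below are whatever lsusb reports on your machine):

```
lsusb -d 05c6:f601
ls -l /dev/bus/usb/<bus>/<device>    # group should now read 'plugdev'
```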


PS: It’s really hard for me to tell if the IMSI catcher detection even works since I have never had it trigger.

PPS: Rather than messing with wireless, I like to just access the server via USB (adb can be found in the platform-tools directory):

./adb forward tcp:8080 tcp:8080
firefox http://localhost:8080/