Programming in C#, Java, and god knows what not

.NET Plugins Without a Common Assembly

For a new project of mine, I wanted to use a plugin architecture. Since it had been a while since I last did plugins in .NET, I wanted to see what’s new in .NET 8. Well, no news at all - plugins for a .NET application are literally the same as they have always been.

Don’t misunderstand me, there have been some steps forward. For one, there is a working example showing you exactly how it’s done. And honestly, that is the example you should use. However, this is the same way we did it back in the .NET 2.0 days.

And yes, I am a bit unfair since there were a lot of upgrades in the backend, and .NET 8 will give you more options on how to load stuff. Let’s not even get into performance improvements. However, I still have to create a common assembly that inevitably becomes hell to maintain. And what about single-file publishing? Nope, still not supported.

While I was OK doing it the classic way, I really hated not having a single-file, self-contained deployment. Such deployments are just so freeing when it comes to actual releases and worth every additional byte they consume. And, since the whole framework is bundled in a single package, there is no real reason why it cannot be done. Is there?

Well, I decided to give it a try.

But before dealing with single-file deployments, what about removing the need for the common assembly? Well, if you can get away with a simple Get/Set interface that returns objects which can then use other standard interfaces (or, said plainly, if you can get away with forwarding standard .NET classes/interfaces), the answer, as always, lies in good old IDesignerOptionService.

While quite a lot of interfaces you could use for plugins got trimmed over time (especially during the Windows Forms exodus), this one somehow survived. And it’s almost perfect for loosely-coupled plugins. It gives you two methods: get and set. Thus, you can simply use something like this:

public class MyPlugin : IDesignerOptionService {

    public object GetOptionValue(string pageName, string valueName) {
        switch (pageName) {
            // handle known pages and return an appropriate object
            default:
                return null;  // unknown page
        }
    }

    public void SetOptionValue(string pageName, string valueName, object value) {
        switch (pageName) {
            // handle known pages and do something with the object
        }
    }

}

As long as you stick to built-in objects (or you’re willing to do a lot of reflection), you’re golden. I agree, there is a performance impact and the design is not as clean as it could be, but I would argue it’s quite often worth it since we don’t have to deal with common assembly versioning and all the fun that can cause.

Thus, that only leaves single-file deployment as our goal. Is it really not supported?

Indeed, if you try to make a single-file deployment of the plugin DLL, the tooling will tell you that you cannot do that unless OutputType is Exe. And, if you try to combine that with a common PluginBase assembly, it will not be able to load anything because PluginBase as a separate assembly is not the same as the PluginBase that got packed. However, if you are OK with this janky IDesignerOptionService setup, you can make your host a single-file application.

And remember, the whole of .NET is essentially packed in there, so this application (assuming you didn’t trim it) will have no issues loading our plugin DLLs.

So, to summarize, you can have your host application deployed as a single file (only the executable is needed) and then load any class from a plugin DLL that implements the IDesignerOptionService interface. Such a class will then use the .NET runtime bundled in the host itself, so the plugin runs without .NET being installed separately.
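
To make this concrete, here is a minimal host-side sketch. The plugin file name and the page/value names are made up for illustration; the actual example linked below does its own discovery:

using System.ComponentModel.Design;
using System.Reflection;

var assembly = Assembly.LoadFrom("MyPlugin.dll");  // hypothetical plugin file next to the host
foreach (var type in assembly.GetTypes()) {
    if (typeof(IDesignerOptionService).IsAssignableFrom(type) && !type.IsAbstract) {
        var plugin = (IDesignerOptionService)Activator.CreateInstance(type);
        var value = plugin.GetOptionValue("SomePage", "SomeValue");  // hypothetical names
    }
}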

To see it in action, download the example. Don’t forget to run Make.sh in order to copy files around.

Grayscale Avalonia Icons

For disabled icons in an Avalonia toolbar, you can go two ways. One is just using an external tool to convert your existing color icons into their colorless variants and keeping them as a completely separate set. The one I prefer is to actually convert images on demand.

As I’m currently playing with this in Avalonia, I decided to share my code. And it’s not as straightforward as I would like. To start with, here is the code:

public static Bitmap BitmapAsGreyscale(Bitmap bitmap) {
    var width = bitmap.PixelSize.Width;
    var height = bitmap.PixelSize.Height;

    var buffer = new byte[width * height * 4];
    var bufferPtr = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try {
        var stride = 4 * width;
        bitmap.CopyPixels(default, bufferPtr.AddrOfPinnedObject(), buffer.Length, stride);

        for (var i = 0; i < buffer.Length; i += 4) {
            var b = buffer[i + 0];
            var g = buffer[i + 1];
            var r = buffer[i + 2];

            var grey = byte.CreateSaturating(0.299 * r + 0.587 * g + 0.114 * b);

            buffer[i + 0] = grey;
            buffer[i + 1] = grey;
            buffer[i + 2] = grey;
        }

        var writableBitmap = new WriteableBitmap(new PixelSize(width, height), new Vector(96, 96), Avalonia.Platform.PixelFormat.Bgra8888);
        using (var stream = writableBitmap.Lock()) {
            Marshal.Copy(buffer, 0, stream.Address, buffer.Length);
        }

        return writableBitmap;
    } finally {
        bufferPtr.Free();
    }
}

Since Avalonia doesn’t really expose pixel-level operations, we first need to obtain the values of all the pixels. The easiest approach I found was just using the CopyPixels method to get all the data into our buffer. As this code in Avalonia is quite low-level and requires a pointer, we need to have our buffer pinned. Anything pinned also needs releasing, thus our finally block.

Once we have the raw bytes, it’s just a matter of figuring out which byte holds which value. Since we asked for Bgra8888 data, the bytes come in BGRA order, which is what the code above assumes and, I would say, what you will end up with 99% of the time.

To get gray, we could use a simple average, but I prefer the slightly more complicated BT.601 luma calculation. And yes, this doesn’t take gamma correction into account; nor is it the only way to get grayscale. However, I found it works well for icons without much calculation needed. You can opt for any conversion you prefer as long as the result is a nice 8-bit value. Using this value for each of the RGB components gives us the gray. Further, note that in the code above, I only modify the RGB values, leaving the alpha channel alone.
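
For comparison, the simple average mentioned above would be a one-line drop-in replacement for the luma calculation:

            var grey = (byte)((r + g + b) / 3);  // sum fits in int; result is always 0-255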

Once the bytes are in the desired state, just create a WriteableBitmap based on that same buffer and with the same overall properties (including 32-bit color).
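
Usage is then trivial; a hypothetical example (the icon path is made up) could look like this:

var normalIcon = new Bitmap("Assets/save.png");    // hypothetical icon file
var disabledIcon = BitmapAsGreyscale(normalIcon);  // use this one for the disabled state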

Dealing with X11 Primary Clipboard under Avalonia

It all started with Avalonia and a discovery that its clipboard handling under Linux doesn’t include the primary buffer. Since I was building a new GUI for my password manager, this was an issue for me. I wanted to be able to paste directly into the terminal using Shift+Insert instead of messing with the mouse. And honestly, especially for password prompts, having a different result depending on which paste operation you use is annoying at best. So, I went on to build my own X11 code to deal with it.

The first idea was to see if somebody else had written the necessary code already. The most useful source for this turned out to be the Mono repository. However, its clipboard support was intertwined with other X11 stuff. It was way too much code to deal with for a simple paste operation, but I did make use of it for figuring out X11 structures.

What helped me the most was actually seeing how X11 clipboard handling was implemented in C. From that I managed to get my initialization code running:

DisplayPtr = NativeMethods.XOpenDisplay(null);
RootWindowPtr = NativeMethods.XDefaultRootWindow(DisplayPtr);
WindowPtr = NativeMethods.XCreateSimpleWindow(DisplayPtr, RootWindowPtr, -10, -10, 1, 1, 0, 0, 0);
TargetsAtom = NativeMethods.XInternAtom(DisplayPtr, "TARGETS", only_if_exists: false);
ClipboardAtom = NativeMethods.XInternAtom(DisplayPtr, "PRIMARY", only_if_exists: false);
Utf8StringAtom = NativeMethods.XInternAtom(DisplayPtr, "UTF8_STRING", only_if_exists: false);
MetaSelectionAtom = NativeMethods.XInternAtom(DisplayPtr, "META_SELECTION", only_if_exists: false);
EventThread = new Thread(EventLoop) {  // last to initialize so we can use it as detection for successful init
  IsBackground = true,
};
EventThread.Start();

This code creates a window (XCreateSimpleWindow) for an event loop (that we’ll handle in a separate thread) and also specifies a few X11 atoms for clipboard handling.
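
For completeness, here is a rough sketch of a few of the P/Invoke declarations the snippets assume. The exact marshaling is my assumption (sized for 64-bit Linux); the remaining functions follow the same pattern:

using System.Runtime.InteropServices;

internal static class NativeMethods {
  private const string LibX11 = "libX11.so.6";

  [DllImport(LibX11)]
  public static extern IntPtr XOpenDisplay(string displayName);  // null = default display

  [DllImport(LibX11)]
  public static extern IntPtr XDefaultRootWindow(IntPtr display);

  [DllImport(LibX11)]
  public static extern IntPtr XCreateSimpleWindow(IntPtr display, IntPtr parent,
                                                  int x, int y, uint width, uint height,
                                                  uint borderWidth, ulong border, ulong background);

  [DllImport(LibX11)]
  public static extern IntPtr XInternAtom(IntPtr display, string atomName, bool only_if_exists);
}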

In order to set the clipboard text, we need to tell X11 that we’re the owner of the clipboard and that it should use our event handler to answer any queries. I also opted to prepare the UTF-8 string bytes upfront so we don’t need to deal with them in the loop.

private byte[] BytesOut = [];

public void SetText(string text) {
  BytesOut = Encoding.UTF8.GetBytes(text);
  NativeMethods.XSetSelectionOwner(DisplayPtr, ClipboardAtom, WindowPtr, 0);
}

But the real code is actually in our event loop, where we wait for the SelectionRequest event:

private void EventLoop() {
  while (true) {
    NativeMethods.XEvent @event = new();
    NativeMethods.XNextEvent(DisplayPtr, ref @event);

    switch (@event.type) {
      case NativeMethods.XEventType.SelectionRequest: {
        var requestEvent = @event.xselectionrequest;
        if (NativeMethods.XGetSelectionOwner(DisplayPtr, requestEvent.selection) != WindowPtr) { continue; }
        if (requestEvent.selection != ClipboardAtom) { continue; }
        if (requestEvent.property == IntPtr.Zero) { continue; }

        // rest of selection handling code
        break;
      }
    }
  }
}

There are two subrequests possible here. The first one is the other application asking for available text formats (i.e., a TARGETS atom query). Here we can give it the UTF8_STRING atom that seems to be universally supported. We could have given it more formats, but I honestly saw no point in messing with ANSI support. It’s 2024, for god’s sake.

if (requestEvent.target == TargetsAtom) {
  var formats = new[] { Utf8StringAtom };
  NativeMethods.XChangeProperty(requestEvent.display,
                                requestEvent.requestor,
                                requestEvent.property,
                                4,   // XA_ATOM
                                32,  // 32-bit data
                                0,   // Replace
                                formats,
                                formats.Length);

  var sendEvent = ... // create XSelectionEvent structure with type=SelectionNotify
  NativeMethods.XSendEvent(DisplayPtr,
                           requestEvent.requestor,
                           propagate: false,
                           eventMask: IntPtr.Zero,
                           ref sendEvent);
}

After we have told the terminal what formats we support, we can expect a query for that data type next, within the same SelectionRequest event type. Here we can finally use our previously prepared UTF-8 bytes. I opted to allocate a new buffer to avoid any issues. As in the previous case, all work is done by setting the property on the destination window (XChangeProperty), with XSendEvent serving to inform the window we’re done.

if (requestEvent.target == Utf8StringAtom) {
  var bufferPtr = IntPtr.Zero;
  try {
    bufferPtr = Marshal.AllocHGlobal(BytesOut.Length);
    var bufferLength = BytesOut.Length;
    Marshal.Copy(BytesOut, 0, bufferPtr, BytesOut.Length);

    NativeMethods.XChangeProperty(DisplayPtr,
                                  requestEvent.requestor,
                                  requestEvent.property,
                                  requestEvent.target,
                                  8,  // 8-bit data
                                  0,  // Replace
                                  bufferPtr,
                                  bufferLength);
  } finally {
    if (bufferPtr != IntPtr.Zero) { Marshal.FreeHGlobal(bufferPtr); }
  }

  var sendEvent = ... // create XSelectionEvent structure with type=SelectionNotify
  NativeMethods.XSendEvent(DisplayPtr,
                           requestEvent.requestor,
                           propagate: false,
                           eventMask: IntPtr.Zero,
                           ref sendEvent);
}

And that’s all you need in order to support SetText. However, it seemed like a waste not to implement a GetText method too.

The code for retrieving text is a bit more complicated since we cannot read the clipboard directly. We must ask for its content using our UTF8_STRING atom, and the answer then arrives through our event loop. So, we need to wait for an AutoResetEvent to signal that data is ready before returning.

private readonly AutoResetEvent BytesInLock = new(false);
private byte[] BytesIn = [];

public string GetText() {
  NativeMethods.XConvertSelection(DisplayPtr,
                                  ClipboardAtom,
                                  Utf8StringAtom,
                                  MetaSelectionAtom,
                                  WindowPtr,
                                  IntPtr.Zero);
  NativeMethods.XFlush(DisplayPtr);

  if (BytesInLock.WaitOne(100)) {  // 100 ms wait
    return Encoding.UTF8.GetString(BytesIn);
  } else {
    return string.Empty;
  }
}

In the event loop, we need to add an extra case for SelectionNotify event where we can handle reading the data and signaling our AutoResetEvent.

case NativeMethods.XEventType.SelectionNotify: {
  var selectionEvent = @event.xselection;
  if (selectionEvent.target != Utf8StringAtom) { continue; }  // we ignore anything not clipboard

  if (selectionEvent.property == IntPtr.Zero) {  // nothing in clipboard
    BytesIn = [];
    BytesInLock.Set();
    continue;
  }

  var data = IntPtr.Zero;
  NativeMethods.XGetWindowProperty(DisplayPtr,
                                   selectionEvent.requestor,
                                   selectionEvent.property,
                                   long_offset: 0,
                                   long_length: int.MaxValue,
                                   delete: false,
                                   0,  // AnyPropertyType
                                   out var type,
                                   out var format,
                                   out var nitems,
                                   out var bytes_after,
                                   ref data);
  BytesIn = new byte[nitems.ToInt32()];
  Marshal.Copy(data, BytesIn, 0, BytesIn.Length);
  BytesInLock.Set();
  NativeMethods.XFree(data);
}
break;

With all this code in place, you can now handle the primary (aka middle-click) clipboard just fine. And yes, the code shown is not fully complete, so you might want to check my X11Clipboard class, which provides support not only for the primary but also for the normal clipboard. E.g.:

X11Clipboard.Primary.SetText("A");
X11Clipboard.Primary.SetText("B");

Avalonia Workaround for ShowDialog Focus

As I was working on a Linux C# application, I was kinda annoyed that a newly shown dialog had no keyboard control. The window would actually display correctly on top of its owner, but it would never take keyboard focus onto itself. Thus, no keyboard control. A bit of searching also showed that this is a known issue, sitting pretty for a while now.

But thankfully, the workaround is rather simple. From within your code, just attach to your window’s Activated event (or override the corresponding method) and manually focus the button.

Activated += delegate { button.Focus(); };

With this, Avalonia will switch focus to the new window, and everything else will work as expected.
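
In context, a minimal sketch could look like this (MyDialog and okButton are made-up names; the pattern itself is the one-liner from above):

public partial class MyDialog : Window {
    public MyDialog() {
        InitializeComponent();
        Activated += delegate { okButton.Focus(); };  // manually focus once the window activates
    }
}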

Adding Tools to .NET Container

When Microsoft provides you with a container image, they provide everything you need to run a .NET application. And no more. But what if we want to add our own tools?

Well, there’s nothing preventing you from using standard Docker stuff. For example, enriching the default Alpine Linux image would just require creating a Dockerfile with the following content:

FROM mcr.microsoft.com/dotnet/runtime:7.0-alpine
RUN apk add iputils traceroute curl netcat-openbsd

Essentially, we tell Docker to use Microsoft’s image as our baseline and to install a few packages. To “execute” those commands, simply use the file to build an image:

docker build --tag dotnet-runtime-7.0-alpine-withtools .

To see if it all works as intended, we can simply test it with Docker.

docker run --rm -it dotnet-runtime-7.0-alpine-withtools sh

Once happy, just tag and push it. In this case, I’m adding it to the local repository.

docker tag dotnet-runtime-7.0-alpine-withtools:latest localhost:5000/dotnet-runtime:7.0-alpine-withtools
docker push localhost:5000/dotnet-runtime:7.0-alpine-withtools

In our .NET project, we just need to change the ContainerBaseImage value and publish it as usual:

<ContainerBaseImage>localhost:5000/dotnet-runtime:7.0-alpine-withtools</ContainerBaseImage>

PS: If you don’t have a local registry running, don’t forget to start one:

docker run -d -p 5000:5000 --name registry registry:2

Using Alpine Linux Docker Image for .NET 7.0

With .NET 7, publishing a Docker image became trivial. Really, all that’s needed is to add a few entries into the .csproj file.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

With those in place, and assuming we have Docker working, we can then “publish” the image.

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And there’s nothing wrong with this. However, what if you want an image that’s smaller than the 270 MB this method offers? Well, there’s always Alpine Linux. And yes, Microsoft offers an image for Alpine too.

So I changed my project values.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0-alpine</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

And that led me to a dreadful Error/CrashLoopBackOff state. My application simply wouldn’t run, and since the container crashed, it was really annoying to troubleshoot anything. But those familiar with .NET and Alpine Linux might see the issue. While almost any other Linux is happy with the linux-x64 moniker, our Alpine needs the special linux-musl-x64 value due to using a different libc implementation. And no, you cannot simply put that in .csproj as you’ll get an error saying The RuntimeIdentifier 'linux-musl-x64' is not supported by dotnet/runtime:7.0-alpine.

You need to add it to the publish command line as an option:

dotnet publish -c Release --no-self-contained -r linux-musl-x64 \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And now, our application should work on Alpine without any issues, with considerable size savings to boot.

Quick and Dirty ChatGPT Proofreader

While I find ChatGPT’s reliability dubious when it comes to difficult real-life questions, I found one niche where it functions almost flawlessly - proofreading.

For many non-native speakers (or me, at least), pinning down all the details of the English language (especially getting those pesky indefinite articles in the correct places) might be difficult. ChatGPT, at least to my untrained eye, seems to do a really nice job when it comes to correcting the output.

And yes, one can use its chat interface directly to do the proofreading, but ChatGPT’s API is reasonably cheap, so you might as well make use of it.

using System.Text;
using System.Text.Json;

var apiEndpoint = "https://api.openai.com/v1/chat/completions";
var apiKey = "sk-XXX";

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization
    = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", apiKey);

var inputText = File.ReadAllText("<inputfile>");
inputText = "Proofread text below. Output it as markdown.\n\n"
    + inputText.Replace("\r", "");

var requestBody = new {
    model = "gpt-3.5-turbo",
    messages = new[] {
        new {
            role = "user",
            content = inputText,
        }
    }
};

var jsonRequestBody = JsonSerializer.Serialize(requestBody);
var httpContent = new StringContent(jsonRequestBody,
                                    Encoding.UTF8, "application/json");

var httpResponse = await httpClient.PostAsync(apiEndpoint, httpContent);
var responseJson = await httpResponse.Content.ReadAsStringAsync();
var responseObject = JsonSerializer.Deserialize<JsonElement>(responseJson);

string outputText = responseObject.GetProperty("choices")[0]
    .GetProperty("message").GetProperty("content").GetString();

Console.WriteLine(outputText);

And yes, this code doesn’t really check for errors and requires a lot more “plumbing” to be a proper application, but it does actually work.

Happy proofreading!

Hashing It Out

While .NET finally includes CRC-32 and CRC-64 algorithms, it stops at the bare minimum and offers only a single standard polynomial for each. Perfectly sufficient if one wants to create something from scratch, but woefully inadequate when it comes to integrating with other software.

You see, CRC is just the method of computation, and it alone is not sufficient to fully describe the result. What you also need is the polynomial, and there’s a bunch of them. At any useful bit length you will find many “standard” polynomials. While .NET’s solution gives you probably the most common 32 and 64-bit variants, it doesn’t cover shorter bit lengths, nor does it allow for a custom polynomial.
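
For illustration, this is all the built-in System.IO.Hashing class gives you; a minimal sketch with the one fixed polynomial:

using System.IO.Hashing;
using System.Text;

var crc = new Crc32();  // fixed to the common ISO-HDLC/zip polynomial - no alternatives
crc.Append(Encoding.ASCII.GetBytes("123456789"));
var hash = crc.GetCurrentHash();  // standard check input "123456789" yields 0xCBF43926 (bytes in little-endian order)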

Well, for that purpose, I created a library following the same inheritance-from-NonCryptographicHashAlgorithm-class pattern. Not only does it allow for 8, 16, 32, and 64-bit widths, but it also offers a bunch of well-known polynomials in addition to custom polynomial support.

Below is the list of currently supported variants and, as always, code is available on GitHub.

CRC-8        CRC-16             CRC-32      CRC-64
ATM          ACORN              AAL5        ECMA-182
AUTOSAR      ARC                ADCCP       GO-ECMA
BLUETOOTH    AUG-CCITT          AIXM        GO-ISO
C2           AUTOSAR            AUTOSAR     MS
CCITT        BUYPASS            BASE91-C    REDIS
CDMA2000     CCITT              BASE91-D    WE
DARC         CCITT-FALSE        BZIP2       XZ
DVB-S2       CCITT-TRUE         CASTAGNOLI
GSM-A        CDMA2000           CD-ROM-EDC
GSM-B        CMS                CKSUM
HITAG        DARC               DECT-B
I-432-1      DDS-110            IEEE-802.3
I-CODE       DECT-R             INTERLAKEN
ITU          DECT-X             ISCSI
LTE          DNP                ISO-HDLC
MAXIM        EN-13757           JAMCRC
MAXIM-DOW    EPC                MPEG-2
MIFARE       EPC-C1G2           PKZIP
MIFARE-MAD   GENIBUS            POSIX
NRSC-5       GSM                V-42
OPENSAFETY   I-CODE             XFER
ROHC         IBM-3740           XZ
SAE-J1850    IBM-SDLC
SMBUS        IEC-61158-2
TECH-3250    IEEE 802.3
WCDMA2000    ISO-HDLD
             ISO-IEC-14443-3-A
             ISO-IEC-14443-3-B
             KERMIT
             LHA
             LJ1200
             LTE
             MAXIM
             MAXIM-DOW
             MCRF4XX
             MODBUS
             NRSC-5
             OPENSAFETY-A
             OPENSAFETY-B
             PROFIBUS
             RIELLO
             SPI-FUJITSU
             T10-DIF
             TELEDISK
             TMS37157
             UMTS
             USB
             V-41-LSB
             V-41-MSB
             VERIFONE
             X-25
             XMODEM
             ZMODEM

UUID Version 7 Implementation and Conundrums

During an otherwise uninteresting summer, without too much noise, we got new UUID version(s). While 6 and 8 are nice numbers, version 7 got me intrigued. It’s essentially just a combination of a Unix timestamp with some random bits mixed in. Exactly what a doctor might order if you want to use such a UUID as a primary key in a database whose index you don’t want to fragment to hell.

The format is easy enough, as its ASCII description suggests:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           unix_ts_ms                          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          unix_ts_ms           |  ver  |       rand_a          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|var|                        rand_b                             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                            rand_b                             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

It’s essentially 48 bits of (Unix) timestamp, followed by 74 bits of randomness, with the 6 remaining bits being version and variant information. While the timestamp ensures that IDs generated close in time get sorted close to each other, the randomness is there to ensure uniqueness for stuff that happens within the same millisecond. And, of course, I am simplifying things a bit, especially for the rand_a field, but you get the gist of it.

With all that behind us, let me guide you through my implementation of the same.

Generating the first 48 bits is straightforward, with the only real issue being endianness. Since BinaryPrimitives doesn’t really deal with 48-bit integers, a temporary array was needed.

var msBytes = new byte[8];
BinaryPrimitives.WriteInt64BigEndian(msBytes, ms);
Buffer.BlockCopy(msBytes, 2, Bytes, 0, 6);

Generating randomness has two paths. Every time a new millisecond is detected, the code will generate 10 bytes of randomness. The lowest 10 bits of the first 2 bytes will be used to initialize the random starting value (as per the Monotonic Random method in section 6.2). After accounting for the 4-bit version field that shares the same space, we have 2 bits remaining. Those are simply set to 0 (as per the Counter Rollover Guards in the same section). I decided not to implement “bit stealing” from the rand_b field, nor to implement any fancy rollover handling. If we’re still in the same millisecond, the 10-bit counter is just increased and its lower bits are written.

if (LastMillisecond != ms) {
    LastMillisecond = ms;
    RandomNumberGenerator.Fill(Bytes.AsSpan(6));
    RandomA = (ushort)(((Bytes[6] & 0x03) << 8) | Bytes[7]);
} else {
    RandomA++;
    Bytes[7] = (byte)(RandomA & 0xFF);
    RandomNumberGenerator.Fill(Bytes.AsSpan(8));
}

Despite the code looking a bit smelly when it comes to multithreading, it’s actually thread safe. Why? Well, it comes down to both the RandomA and LastMillisecond fields having the special ThreadStatic attribute.

[ThreadStatic] private static long LastMillisecond;
[ThreadStatic] private static ushort RandomA;

This actually makes each thread have a separate copy of these variables, and thus no collisions will happen. Of course, since counters are tracked for each thread separately, you don’t get sequential output between threads. Not ideal, but a conscious choice to avoid the performance hit that proper locking would introduce.

The last part is to fix up the bytes in order to add the version bits (always binary 0111) and the variant bits (always binary 10).

Bytes[6] = (byte)(0x70 | ((RandomA >> 8) & 0x0F));
Bytes[8] = (byte)(0x80 | (Bytes[8] & 0x3F));

Add a couple of overrides and that’s it. You can even convert it to Guid. However…

Microsoft’s implementation of UUID, known to all as System.Guid, is slightly broken. Or isn’t. I guess it depends on where you look at it from. If you look at how RFC 4122 specifies components, you can see them as data types. And that’s how the original developer thought of it. Not as a binary blob but as a structure containing (little-endian on x86) numbers, even though the specification clearly says all numbers are big-endian.

Had it stopped at just internal storage, it would be fine. But Microsoft went as far as to convert endianness when converting the 128-bit value to a string. And that is fine if you work only with Microsoft’s implementation, but it causes issues when you try to deal with almost any other variant.

This also causes one peculiar problem when it comes to converting my version 7 UUID to Microsoft’s Guid. While their binary representations are the same, converting them to the string format yields a different value. You can have either binary or string compatibility between those two. But never both. In my case, I decided that binary compatibility is more important since you should really be using a UUID in its binary form and not the space-wasting hexadecimal format.
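
To see the quirk for yourself, here is a small illustration (not from the original post) of how Guid shuffles the first three components when constructed from raw bytes:

var bytes = new byte[] { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
                         0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10 };
Console.WriteLine(new Guid(bytes));
// prints 04030201-0605-0807-090a-0b0c0d0e0f10 - the first three groups come out byte-swapped

For what it’s worth, .NET 8 later added a Guid constructor overload with a bigEndian flag to deal with exactly this.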

As always, the full code is available on GitHub.

[2023-01-12: Code has been adjusted a bit in order to follow the current RFC draft. The main changes are the introduction of a longer monotonic counter and alternate text conversion methods (Base35 and Base58).]

Single Instance Application for .NET 6 or 7

A while ago, I wrote C# code to handle a single-instance application. And that code has served me well on Windows. However, due to its dependency on the Windows API, you really couldn’t use it in multiplatform .NET code. It was time for an update.

My original code used a combination of a global mutex (in order to detect another instance running), followed by named pipe communication to transfer arguments to the first-running instance. Fortunately, .NET 6 also contains those primitives. Even better, I could replace my named pipe API calls with the multiplatform NamedPipeServerStream and NamedPipeClientStream classes.

Unlike in my Windows-specific code, I had to use the Global\\ prefix in order for the code to work properly on Linux. While unfortunate, it actually wasn’t too bad as my mutex name already included the user name. Combine that with the assembly location, hash it a bit, and you have a globally unique identifier. While the exact code changed slightly, the logic remained the same, and the new code worked without much effort.
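
A minimal sketch of that detection, assuming mutexId has already been derived from the user name and assembly location (variable names here are made up):

using System.Threading;

var mutex = new Mutex(initiallyOwned: true, @"Global\" + mutexId, out var isFirstInstance);
if (!isFirstInstance) {
    // another instance already holds the mutex; forward our arguments over the pipe and exit
}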

The code to transfer arguments had a few more issues. First of all, I had to swap my binary serializer for JSON. Afterward, I had to write new pipe handling code, albeit using the portable .NET implementation as a base this time. Mind you, back when I wrote it for Windows, neither was supported. Regardless, a bit of time later, both tasks were successfully done and the freshly updated code had been tested on Linux. Success!

But the success was short-lived as the same code didn’t work on Windows. Well, technically it did work, but the old instance never saw the data that was sent. It took a bit of troubleshooting to figure out that the basic named pipe constructor limits communication to a single process and that an overload setting PipeOptions.CurrentUserOnly for both client and server was needed. Thankfully, that didn’t present any issues on Linux, so the same code was good for both.
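
For reference, this is roughly what that looks like on the server side (pipeName is assumed to be the same precomputed unique name; NamedPipeClientStream has a matching overload taking PipeOptions):

using System.IO.Pipes;

using var server = new NamedPipeServerStream(pipeName,
                                             PipeDirection.In,
                                             maxNumberOfServerInstances: 1,
                                             PipeTransmissionMode.Byte,
                                             PipeOptions.CurrentUserOnly);  // the crucial part on Windows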

And that was it. Now I had working .NET 6 (or 7) code for a single-instance application, working on both Windows and Linux (probably macOS too), allowing not only for detection but also for argument forwarding. Just what I needed. :)

You can see both this class and example of its usage in my Medo repository.