Programming in C#, Java, and god knows what not

New Solution File Format

Not all heroes wear capes. I mean, a bunch of them cannot be bothered to wear pants. But all heroes should at least get a beer. And none more than those who finally took the darn .sln format behind the barn.

Yep, without much fanfare, a new solution file format was introduced. Instead of the big ugly sln file everybody was used to but nobody ever loved, we got a much simpler slnx file. In just a few lines, the new format pretty much does the only thing you need it to - list the darn projects.
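
For illustration, a minimal slnx file with a single project looks roughly like this (the project path here is a made-up example):

<Solution>
  <Project Path="MyProject/MyProject.csproj" />
</Solution>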

Gone are GUIDs, gone are Debug and Release profiles, and finally, gone is darn BOM with an empty starting line. Essentially everything is gone except for what you actually need. And yes, you can still have debug and release profiles - you just don’t need to explicitly define them in the solution file.

Migration is as easy as it gets:

dotnet sln <solution.sln> migrate
rm <solution.sln>

Looking at the whole .NET ecosystem, this feature is small. In general, I think this syntactic-sugar category often gets overlooked. If it’s good, you will probably forget all about how things were before. I hope that, in a few years' time, sln will be just a distant memory and a way to scare children into eating their broccoli.

Forwarding Makefile Targets to a Script

I love makefiles. There is something special about just running make and having everything built automatically. Even better, you can use multiple targets to chain a few operations together, e.g., make clean test debug. All this is available to you under Linux. Under Windows, all this magic is gone.

For Windows, most of the time I see either a separate script handling build tasks, or nothing at all. A separate script is not a bad solution, but it does introduce a potential difference between builds. Assuming you have Git installed (which brings bash along on Windows), the easiest way out is to simply forward Makefile entries to a bash script. Something like this:

.PHONY: clean test debug release

clean:
	@./Make.sh clean

test:
	@./Make.sh test

debug:
	@./Make.sh debug

release:
	@./Make.sh release
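
The Make.sh script on the other end can then be as simple as dispatching on its arguments. A minimal sketch; the actual build steps here are made-up placeholders:

#!/bin/bash
# dispatch each given target to its build step (the steps below are just examples)
for TARGET in "$@"; do
    case "$TARGET" in
        clean)   rm -rf bin obj ;;
        test)    dotnet test ;;
        debug)   dotnet build -c Debug ;;
        release) dotnet build -c Release ;;
        *)       echo "Unknown target: $TARGET" ; exit 1 ;;
    esac
done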

And honestly, this is probably good enough. If you are on Linux, you use make debug, and on Windows you use Make.sh debug. For years now, I have used this approach whenever I needed things to work on both Linux and Windows. But there were issues - mainly with target “chaining”.

For example, if you want to run clean as a prerequisite to release, you can do that in the Makefile.

…

clean:
	@./Make.sh clean

release: clean
	@./Make.sh release

This will, under Linux, do what you expect. But, under Windows, this is not enough. So, alternatively, you might leave the Makefile as-is and do the chaining in Make.sh. And that works on Windows but, under Linux, it will call clean twice, i.e.,

make clean release

will translate into

./Make.sh clean    # first call doing only clean
./Make.sh release  # second call internally does clean again

It’s not the worst issue out there, and god knows I lived with it for a long time. What I needed was to just forward whatever arguments the make command receives to my Make.sh script. Reading the GNU make documentation did point toward the MAKECMDGOALS special variable, which was exactly what I needed. It even pointed to the last-resort %:: syntax. So, the following Makefile looked to be all I needed.

%::
	@./Make.sh $(MAKECMDGOALS)

If only life was that easy. This last-resort rule will unfortunately call the script once for each target given to make. I.e., the final calls in our example would be:

./Make.sh clean release  # invoked for the clean target
./Make.sh clean release  # invoked again for the release target

And I found no fool-proof way to prevent the second run. You cannot set a variable, you cannot really detect which target you’re currently forwarding, you cannot exit. You could write to a file that you are already running, but that gets messy when a task is cancelled.

I spent quite a lot of time messing with this, and I never found a fully generic way. But I finally managed to find something incredibly close.

all clean run test debug release &:
	@./Make.sh $(MAKECMDGOALS)

As long as you list all targets, specifying only one or all of them will lead to the same command. And, because they are all grouped together (the &: separator, which requires GNU Make 4.3 or later), it will run only once. It’s not ideal because I do need to keep the target list in two places, but that list is not likely to change.
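
For reference, the whole forwarding Makefile then looks roughly like this:

.PHONY: all clean run test debug release

all clean run test debug release &:
	@./Make.sh $(MAKECMDGOALS)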

If you want to see my whole build script, you can check my GitHub.

Calculate This

As I moved to Linux, I slowly started moving all my apps along. But, as I played with electronics, I often had to boot up Windows just to get to a simple calculator. I made this calculator ages ago in order to calculate various values. But I made it for the Windows Store, which meant it was time to make it again, this time a bit more portable.

With the help of Avalonia and a bit of C# code, it was a long-overdue weekend project. Most of the time I just need LDO power and voltage divider calculations, but it seemed a shame not to reimplement the others.

Since I wanted the application to work on Linux (both KDE and GNOME), the choice fell between two frameworks: Avalonia and Eto.Forms. I was tempted to go the Eto.Forms route because I actually like(d) Windows Forms. They’re easy, event driven, and without 50 shades of indirection. But, after playing with both for a while, Avalonia just seemed more suitable.

As before, I created the following calculators:

  • E-Series
  • LDO Power
  • LED
  • LM117
  • LM317
  • Microchip PIC PWM
  • Microchip PIC TMR0
  • Ohm’s Law
  • Parallel and Series Resistors
  • Voltage Divider

I will implement more as I need them.
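
Just for flavor, the math behind most of these is trivial. Here is a minimal sketch of the voltage divider calculation; the class and method names are mine, not necessarily the app's:

public static class VoltageDividerCalc {
    // Vout = Vin * R2 / (R1 + R2)
    public static double GetOutputVoltage(double vin, double r1, double r2)
        => vin * r2 / (r1 + r2);
}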

While the development environment does contain unit tests, it’s currently a bit low on their count. I was too lazy to implement them all. I’ll probably write them only as I fix bugs, since I’m lazy that way.

If this app seems interesting, you can download it here. It should work pretty much on any glibc-based Linux out there. I will eventually make a Windows setup version too, but you can use the Windows Store one in the meantime.

.NET Plugins Without a Common Assembly

For a new project of mine, I wanted to use a plugin architecture. Since it had been a while since I last did plugins in .NET, I wanted to see what’s new in .NET 8. Well, no news at all - plugins for a .NET application are literally the same as they always were.

Don’t misunderstand me, there were some steps forward. For one, there is a working example showing you exactly how it’s done. And honestly, that is the example you should use. However, this is the same way we did it back in the .NET 2.0 days.

And yes, I am a bit unfair since there were a lot of upgrades in the backend, and .NET 8 will give you more options on how to load stuff. Let’s not even get into performance improvements. However, I still have to create a common assembly that inevitably becomes hell to maintain. And what about single-file publishing? Nope, still not supported.

While I was OK doing it the classical way, I really hated not having a single-file, self-contained deployment. Those are just so freeing when it comes to actual deployments and worth every additional byte they consume. And, since the whole framework is bundled in a single package, there is no real reason why it cannot be done. Is there?

Well, I decided to give it a try.

But before dealing with single-file deployments, what about removing the need for a common assembly? Well, if you can get away with a simple get/set interface that returns objects which can then use other standard interfaces - or, said plainly, if you can get away with forwarding standard .NET classes/interfaces - the answer, as always, lies in the good old IDesignerOptionService.

While quite a lot of interfaces you could use for plugins got trimmed with time (especially during the Windows Forms exodus), this one somehow survived. And it’s almost perfect for loosely-coupled plugins. It gives you two methods: get and set. Thus, you can simply use something like this:

public class MyPlugin : IDesignerOptionService {  // from System.ComponentModel.Design

    public object GetOptionValue(string pageName, string valueName) {
        switch (pageName) {
            // return an object based on pageName/valueName
            default: return null;
        }
    }

    public void SetOptionValue(string pageName, string valueName, object value) {
        switch (pageName) {
            // do something with the value object
        }
    }

}

As long as you stick to built-in objects (or you’re willing to do a lot of reflection), you’re golden. I agree, there is a performance impact and the design is not as clean as it could be, but I would argue it’s quite often worth it since we don’t have to deal with common assembly versioning and all the fun that can cause.

Thus, that only leaves single-file deployment as our goal. Is it really not supported?

Indeed, if you try to make a single-file deployment of the plugin dll, it will say that you cannot do that unless OutputType is Exe. And, if you try to combine that with a common PluginBase assembly, it will not be able to load anything because PluginBase as a separate assembly is not the same as the PluginBase that got packed. However, if you are OK with this janky IDesignerOptionService setup, you can make your host a single-file application.

And remember, the whole of .NET is essentially packed in there, so this application (assuming you didn’t trim it) will have no issues loading our plugin DLLs.

So, to summarize, you can have your host application deployed as a single file (only the executable is needed) and then load any class from a plugin dll that implements the IDesignerOptionService interface. Such a class will then use the .NET runtime from the host itself, without .NET being installed separately.
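
The host side then boils down to a few reflection calls. A minimal sketch, assuming a hypothetical MyPlugin.dll next to the executable and made-up page/value names:

using System;
using System.ComponentModel.Design;
using System.Reflection;

var assembly = Assembly.LoadFrom("MyPlugin.dll");  // hypothetical plugin location
foreach (var type in assembly.GetTypes()) {
    // instantiate anything that implements our "plugin" interface
    if (typeof(IDesignerOptionService).IsAssignableFrom(type) && !type.IsAbstract) {
        var plugin = (IDesignerOptionService)Activator.CreateInstance(type);
        var value = plugin.GetOptionValue("SomePage", "SomeValue");  // hypothetical names
        Console.WriteLine(value);
    }
}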

To see it in action, download the example. Don’t forget to run Make.sh in order to copy files around.

Grayscale Avalonia Icons

For disabled icons in an Avalonia toolbar, you can go two ways. One is just using an external tool to convert your existing color icons into their color-less variants and keep them as a completely separate set. The one I prefer is to actually convert images on demand.

As I’m currently playing with this in Avalonia, I decided to share my code. And it’s not as straightforward as I would like. To start with, here is the code:

public static Bitmap BitmapAsGreyscale(Bitmap bitmap) {
    var width = bitmap.PixelSize.Width;
    var height = bitmap.PixelSize.Height;

    var buffer = new byte[width * height * 4];  // 4 bytes per pixel (BGRA)
    var bufferPtr = GCHandle.Alloc(buffer, GCHandleType.Pinned);  // CopyPixels needs a pinned pointer
    try {
        var stride = 4 * width;
        bitmap.CopyPixels(default, bufferPtr.AddrOfPinnedObject(), buffer.Length, stride);

        for (var i = 0; i < buffer.Length; i += 4) {
            var b = buffer[i + 0];
            var g = buffer[i + 1];
            var r = buffer[i + 2];

            // BT.601 luma; alpha (i + 3) is left untouched
            var grey = byte.CreateSaturating(0.299 * r + 0.587 * g + 0.114 * b);

            buffer[i + 0] = grey;
            buffer[i + 1] = grey;
            buffer[i + 2] = grey;
        }

        var writableBitmap = new WriteableBitmap(new PixelSize(width, height), new Vector(96, 96), Avalonia.Platform.PixelFormat.Bgra8888);
        using (var stream = writableBitmap.Lock()) {
            Marshal.Copy(buffer, 0, stream.Address, buffer.Length);
        }

        return writableBitmap;
    } finally {
        bufferPtr.Free();
    }
}

Since Avalonia doesn’t really expose pixel-level operations, we first need to obtain the values of all the pixels. The easiest approach I found was just using the CopyPixels method to get all the data into our buffer. As this code in Avalonia is quite low-level and requires a pointer, we need to have our buffer pinned. Anything pinned also needs releasing, thus our finally block.

Once we have the raw bytes, it is just a matter of figuring out which byte holds which value. Here the code assumes BGRA byte ordering, matching the Bgra8888 format used below. It’s by far the most common ordering, and I would say it will be what you end up with 99% of the time.

To get gray, we could use a simple average, but I prefer the slightly more complicated BT.601 luma calculation. And yes, this doesn’t take gamma correction into account; nor is it the only way to get a grayscale. However, I found it works well for icons without much calculation needed. You can opt for any conversion you prefer as long as the result is a nice 8-bit value. Using this value for each of the RGB components gives us the gray pixel. Further, note that the code above only modifies the RGB values, leaving the alpha channel alone.

Once the bytes are in the desired state, just create a WriteableBitmap based on that same buffer and with the same overall properties (including 32-bit color).
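
Usage is then a one-liner. A hypothetical example, with the asset path made up:

var colorIcon = new Bitmap("Assets/icon.png");     // the normal, color icon
var disabledIcon = BitmapAsGreyscale(colorIcon);   // its on-demand grayscale variant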

Dealing with X11 Primary Clipboard under Avalonia

It all started with Avalonia and a discovery that its clipboard handling under Linux doesn’t include the primary buffer. Since I was building a new GUI for my password manager, this was an issue for me. I wanted to be able to paste directly into the terminal using Shift+Insert instead of messing with the mouse. And honestly, especially for password prompts, having a different result depending on which paste operation you use is annoying at best. So, I went on to build my own X11 code to deal with it.

The first idea was to see if somebody else had written the necessary code already. The most useful source for this turned out to be the Mono repository. However, its clipboard support was intertwined with other X11 stuff. It was way too much code to deal with for a simple paste operation, but I did make use of it for figuring out the X11 structures.

What helped me the most was actually seeing how X11 clipboard handling was implemented in C. From that I managed to get my initialization code running:

DisplayPtr = NativeMethods.XOpenDisplay(null);
RootWindowPtr = NativeMethods.XDefaultRootWindow(DisplayPtr);
WindowPtr = NativeMethods.XCreateSimpleWindow(DisplayPtr, RootWindowPtr, -10, -10, 1, 1, 0, 0, 0);
TargetsAtom = NativeMethods.XInternAtom(DisplayPtr, "TARGETS", only_if_exists: false);
ClipboardAtom = NativeMethods.XInternAtom(DisplayPtr, "PRIMARY", only_if_exists: false);
Utf8StringAtom = NativeMethods.XInternAtom(DisplayPtr, "UTF8_STRING", only_if_exists: false);
MetaSelectionAtom = NativeMethods.XInternAtom(DisplayPtr, "META_SELECTION", only_if_exists: false);
EventThread = new Thread(EventLoop) {  // last to initialize so we can use it as detection for successful init
  IsBackground = true,
};
EventThread.Start();

This code creates a window (XCreateSimpleWindow) for an event loop (that we’ll handle in a separate thread) and also specifies a few X11 atoms for clipboard handling.

In order to set clipboard text, we need to tell X11 that we’re the owner of the clipboard and that it should use our event handler to answer any queries. I also opted to prepare UTF-8 string bytes so we don’t need to deal with them in the loop.

private byte[] BytesOut = [];

public void SetText(string text) {
  BytesOut = Encoding.UTF8.GetBytes(text);
  NativeMethods.XSetSelectionOwner(DisplayPtr, ClipboardAtom, WindowPtr, 0);
}

But the real code is actually in our event loop, where we wait for the SelectionRequest event:

private void EventLoop() {
  while (true) {
    NativeMethods.XEvent @event = new();
    NativeMethods.XNextEvent(DisplayPtr, ref @event);

    switch (@event.type) {
      case NativeMethods.XEventType.SelectionRequest: {
        var requestEvent = @event.xselectionrequest;
        if (NativeMethods.XGetSelectionOwner(DisplayPtr, requestEvent.selection) != WindowPtr) { continue; }
        if (requestEvent.selection != ClipboardAtom) { continue; }
        if (requestEvent.property == IntPtr.Zero) { continue; }

        // rest of selection handling code
        break;
      }
    }
  }
}

There are two subrequests possible here. The first one is the other application asking for available text formats (i.e., the TARGETS atom query). Here we can give it the UTF8_STRING atom that seems to be universally supported. We could have given it more formats, but I honestly saw no point in messing with ANSI support. It’s 2024, for god’s sake.

if (requestEvent.target == TargetsAtom) {
  var formats = new[] { Utf8StringAtom };
  NativeMethods.XChangeProperty(requestEvent.display,
                                requestEvent.requestor,
                                requestEvent.property,
                                4,   // XA_ATOM
                                32,  // 32-bit data
                                0,   // Replace
                                formats,
                                formats.Length);

  var sendEvent = ... // create XSelectionEvent structure with type=SelectionNotify
  NativeMethods.XSendEvent(DisplayPtr,
                           requestEvent.requestor,
                           propagate: false,
                           eventMask: IntPtr.Zero,
                           ref sendEvent);
}

After we have told the terminal which formats we support, we can expect a query for that data type next, within the same SelectionRequest event type. Here we can finally use our previously prepared UTF-8 bytes. I opted to allocate a new unmanaged buffer to avoid any issues. As in the previous case, all work is done by setting the property on the destination window (XChangeProperty), with XSendEvent serving to inform the window we’re done.

if (requestEvent.target == Utf8StringAtom) {
  var bufferPtr = IntPtr.Zero;
  try {
    bufferPtr = Marshal.AllocHGlobal(BytesOut.Length);
    var bufferLength = BytesOut.Length;
    Marshal.Copy(BytesOut, 0, bufferPtr, bufferLength);

    NativeMethods.XChangeProperty(DisplayPtr,
                                  requestEvent.requestor,
                                  requestEvent.property,
                                  requestEvent.target,
                                  8,  // 8-bit data
                                  0,  // Replace
                                  bufferPtr,
                                  bufferLength);
  } finally {
    if (bufferPtr != IntPtr.Zero) { Marshal.FreeHGlobal(bufferPtr); }
  }

  var sendEvent = ... // create XSelectionEvent structure with type=SelectionNotify
  NativeMethods.XSendEvent(DisplayPtr,
                           requestEvent.requestor,
                           propagate: false,
                           eventMask: IntPtr.Zero,
                           ref sendEvent);
}

And that’s all you need in order to support SetText. However, it seemed like a waste not to implement a GetText method too.

The code for retrieving text is a bit more complicated since we cannot read the clipboard directly. We must ask for its content using our UTF8_STRING atom, and the data then arrives via our event loop. So, we need to wait for our AutoResetEvent to signal that the data is ready before returning.

private readonly AutoResetEvent BytesInLock = new(false);
private byte[] BytesIn = [];

public string GetText() {
  NativeMethods.XConvertSelection(DisplayPtr,
                                  ClipboardAtom,
                                  Utf8StringAtom,
                                  MetaSelectionAtom,
                                  WindowPtr,
                                  IntPtr.Zero);
  NativeMethods.XFlush(DisplayPtr);

  if (BytesInLock.WaitOne(100)) {  // 100 ms wait
    return Encoding.UTF8.GetString(BytesIn);
  } else {
    return string.Empty;
  }
}

In the event loop, we need to add an extra case for the SelectionNotify event, where we handle reading the data and signaling our AutoResetEvent.

case NativeMethods.XEventType.SelectionNotify: {
  var selectionEvent = @event.xselection;
  if (selectionEvent.target != Utf8StringAtom) { continue; }  // we ignore anything not clipboard

  if (selectionEvent.property == IntPtr.Zero) {  // nothing in clipboard
    BytesIn = [];
    BytesInLock.Set();
    continue;
  }

  var data = IntPtr.Zero;
  NativeMethods.XGetWindowProperty(DisplayPtr,
                                   selectionEvent.requestor,
                                   selectionEvent.property,
                                   long_offset: 0,
                                   long_length: int.MaxValue,
                                   delete: false,
                                   0,  // AnyPropertyType
                                   out var type,
                                   out var format,
                                   out var nitems,
                                   out var bytes_after,
                                   ref data);
  BytesIn = new byte[nitems.ToInt32()];
  Marshal.Copy(data, BytesIn, 0, BytesIn.Length);
  BytesInLock.Set();
  NativeMethods.XFree(data);
}
break;

With all this code in, you can now handle the primary (aka middle-click) clipboard just fine. And yes, the code is not fully complete, so you might want to check my X11Clipboard class, which provides support not only for the primary but also for the normal clipboard. E.g.:

X11Clipboard.Primary.SetText("A");
X11Clipboard.Primary.SetText("B");

Avalonia Workaround for ShowDialog Focus

As I was working on a Linux C# application, I was kinda annoyed with a newly opened dialog not having keyboard control. The window would actually display correctly on top of its owner, but it would never take the keyboard focus onto itself. Thus, no keyboard control. A bit of searching also showed that this was already a known issue, sitting pretty for a while now.

But thankfully, the workaround is rather simple. From within your code, just attach a handler to your window’s Activated event and manually focus the button.

Activated += delegate { button.Focus(); };

With this, Avalonia will switch focus to the new window, and everything else will work as expected.
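
In context, this is a minimal sketch, assuming a dialog window with a button named okButton defined in its XAML:

public partial class MyDialog : Window {  // hypothetical dialog class
    public MyDialog() {
        InitializeComponent();
        Activated += delegate { okButton.Focus(); };  // manually take focus once the window activates
    }
}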

Adding Tools to .NET Container

When Microsoft provides you with a container image, it comes with everything you need to run a .NET application. And no more. But what if we want to add our own tools?

Well, there’s nothing preventing you from using just standard Docker stuff. For example, enriching the default Alpine Linux image would just require creating a Dockerfile with the following content:

FROM mcr.microsoft.com/dotnet/runtime:7.0-alpine
RUN apk add iputils traceroute curl netcat-openbsd

Essentially, we tell Docker to use Microsoft’s image as our baseline and to install a few packages. To “execute” those commands, simply use the file to build an image:

docker build --tag dotnet-runtime-7.0-alpine-withtools .

To see if all works as intended, we can simply test it with Docker.

docker run --rm -it dotnet-runtime-7.0-alpine-withtools sh

Once happy, just tag and push it. In this case, I’m adding it to the local repository.

docker tag dotnet-runtime-7.0-alpine-withtools:latest localhost:5000/dotnet-runtime:7.0-alpine-withtools
docker push localhost:5000/dotnet-runtime:7.0-alpine-withtools

In our .NET project, we just need to change the ContainerBaseImage value and publish it as usual:

<ContainerBaseImage>localhost:5000/dotnet-runtime:7.0-alpine-withtools</ContainerBaseImage>

PS: If you don’t have a local registry running, don’t forget to start it:

docker run -d -p 5000:5000 --name registry registry:2

Using Alpine Linux Docker Image for .NET 7.0

With .NET 7, publishing a Docker image became trivial. Really, all that’s needed is to add a few entries into the .csproj file.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

With those in place, and assuming we have Docker working, we can then “publish” the image.

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And there’s nothing wrong with this. However, what if you want an image that’s smaller than the 270 MB this method offers? Well, there’s always Alpine Linux. And yes, Microsoft offers an image for Alpine too.

So I changed my project values.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0-alpine</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

And that led me to a dreadful Error/CrashLoopBackOff state. My application simply wouldn’t run, and since the container crashed, it was really annoying to troubleshoot anything. But those familiar with .NET and Alpine Linux might see the issue. While almost any other Linux is happy with the linux-x64 moniker, our Alpine needs the special linux-musl-x64 value due to using a different libc implementation. And no, you cannot simply put that in .csproj as you’ll get an error saying The RuntimeIdentifier 'linux-musl-x64' is not supported by dotnet/runtime:7.0-alpine.

You need to add it to the publish command line as an option:

dotnet publish -c Release --no-self-contained -r linux-musl-x64 \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And now, our application should work on Alpine without any issues, with considerable size savings.

Quick and Dirty ChatGPT Proofreader

While I find ChatGPT’s reliability dubious when it comes to difficult real-life questions, I found one niche where it functions almost flawlessly - proofreading.

For many non-native speakers (or me, at least), pinning down all the details of the English language (especially getting those pesky indefinite articles in the correct places) might be difficult. ChatGPT, at least to my untrained eye, seems to do a really nice job when it comes to correcting the output.

And yes, one can use its chat interface directly to do the proofreading, but ChatGPT’s API is reasonably cheap, so you might as well make use of it.

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.Json;

var apiEndpoint = "https://api.openai.com/v1/chat/completions";
var apiKey = "sk-XXX";  // your OpenAI API key

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization
    = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", apiKey);

var inputText = File.ReadAllText("<inputfile>");
inputText = "Proofread text below. Output it as markdown.\n\n"
    + inputText.Replace("\r", "");

var requestBody = new {
    model = "gpt-3.5-turbo",
    messages = new[] {
        new {
            role = "user",
            content = inputText,
        }
    }
};

var jsonRequestBody = JsonSerializer.Serialize(requestBody);
var httpContent = new StringContent(jsonRequestBody,
                                    Encoding.UTF8, "application/json");

var httpResponse = await httpClient.PostAsync(apiEndpoint, httpContent);
var responseJson = await httpResponse.Content.ReadAsStringAsync();
dynamic responseObject = JsonSerializer.Deserialize<dynamic>(responseJson);

string outputText = responseObject.GetProperty("choices")[0]
    .GetProperty("message").GetProperty("content").GetString();

Console.WriteLine(outputText);

And yes, this code doesn’t really check for errors and requires a lot more “plumbing” to be a proper application, but it does actually work.

Happy proofreading!