
LocalPaper


Ever since I got my first BeBook reader, I have been a fan of e-paper displays. So, when I saw a decent-looking Trmnl dashboard, I was immediately drawn to it. It's a nice-looking device with actually great dashboard software. It's not perfect (e.g., the darn display updates take ages), but there is nothing major I mind. It even has an option to host your own server and half-decent instructions on how to get it running.

However, even with all this goodness, I decided to roll my own software regardless. The reason? Well, my main beef was that I just wanted a simple list of events for today and tomorrow. On the left, I would have general family events, while on the right, each of my kids would get their own column. I did try to use a calendar for this, but the main issue was that it was set up as a calendar and not really as the event list I wanted. Also, it was not really possible to filter just based on date and not on time. For example, I didn't want past events to disappear until the day was over. And I really wanted a separate box for tomorrow. Essentially, I wanted exactly what is depicted above.

To make things more complicated, I also didn't want to use calendars for this. The main reason was that it was rather annoying to have all events displayed. For example, my kids' school schedule is not really something they will enter into a calendar. That would make the calendar overcrowded for them. As for me, my problem was the opposite one, as I quite often have stuff in my calendar that nobody else cares about and that would just make a visual mess (e.g., dates for passport renewal are entries in my calendar). And yes, most of these issues could be sorted out by a separate calendar just for the dashboard. But I didn't really like that workflow and, most importantly, it wouldn't show what happens the next day. And that is something my wife really wanted.

With all this in mind, I figured I would spend less time rolling my own solution than creating plugins.

Thankfully, Trmnl was kind enough to anticipate the need for a custom server in their device setup. Just select your custom destination and you're good. Mind you, as it is an open-source project, it would be simple enough to change servers on your own, but actually having the option available in their code does simplify future upgrades.

On the server side, you just need to provide three URLs.

The initial one is /api/setup. This one gets called only when the device is pointed toward a new server for the first time, and its purpose is to set up authentication keys. Because this was limited to my home network and I really didn't care, I simply respond with 200 and a basic JSON.

{
  "status": 200,
  "api_key": "12:34:56:78:90:AB",
  "friendly_id": "1234567890AB",
  "image_url": "http://10.20.30.40:8084/hello.bmp",
  "filename": "empty_state"
}

Once the device is set up, the next API call it makes will probably be /api/log. This one I ignore because I can. :) While the device does send the request, it doesn't care about the answer. At the time I wrote this, even Trmnl's own API documentation didn't cover what it does. While they later did update the documentation, I didn't bother updating the code since 404 works here just fine, and the data provided in this call is also available in the next one.

In /api/display, the device actually asks you, at predefined intervals, what to do. The response gives two important pieces of information to the device: where the image to draw is, and when it should ask for the next one. The next image is easy - I decided upon 5 minutes. You really cannot do it more often, as every refresh flashes the screen. That is probably the only thing I really hate, and actually the one that will be solved eventually, since there is no reason to do a full e-paper reset on every draw. But, even once that is solved, you don't want to refresh more often because you will drain your battery. With a 5-minute interval, it will last you a month. I could have used longer intervals, but then any update I make wouldn't be visible for a while. A month is good enough for me. To keep things simple, for the file name I just gave a specially formatted URL that my software will process later.

{
  "status": 0,
  "image_url": "http://10.20.30.40:8084/A085E37A1984_2025-06-01T04-55-00.bmp",
  "filename": "A085E37A1984_2025-06-01T04-55-00.bmp",
  "refresh_rate": 300,
  "reset_firmware": false,
  "update_firmware": false,
  "firmware_url": null,
  "special_function": "identify"
}

And finally, the last part of the API is actually providing the bitmap. Based on the file name and the time requested, I simply generate one on the fly. But you cannot just give it any old bitmap - it has to be a 1-bit bitmap (aka 1bpp). And none of the C# libraries supports that out of the box. Even SkiaSharp, which is capable of any manipulation you can imagine, simply refuses to deal with something that simple. After trying all reasonably popular graphics libraries only to end up with the same issue, I decided to simply go over the bits in a for loop and output my own raw bitmap bytes. Ironically, I spent less time on that than on testing different libraries. In essence, the bitmap the Trmnl device wants has a 62-byte header followed by a simple bit-by-bit dump of the image data. You can check the Get1BPPImageBytes function if you are curious.
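That 62-byte header is just the standard BMP file header (14 bytes) followed by a BITMAPINFOHEADER (40 bytes) and a two-color palette (8 bytes). Here is a minimal sketch of the idea, assuming an 800x480 panel and a little-endian machine (this is not the exact LocalPaper code):

using System;

internal static class OneBppBitmap {

    // pixels[x, y] == true means a white pixel; BMP stores rows bottom-up
    public static byte[] Get1BppImageBytes(bool[,] pixels) {
        int width = pixels.GetLength(0);            // e.g. 800
        int height = pixels.GetLength(1);           // e.g. 480
        int rowBytes = ((width + 31) / 32) * 4;     // each row padded to a 4-byte boundary
        var bytes = new byte[62 + rowBytes * height];

        bytes[0] = (byte)'B'; bytes[1] = (byte)'M';               // BMP signature
        BitConverter.GetBytes(bytes.Length).CopyTo(bytes, 2);     // total file size
        BitConverter.GetBytes(62).CopyTo(bytes, 10);              // offset to pixel data

        BitConverter.GetBytes(40).CopyTo(bytes, 14);              // info header size
        BitConverter.GetBytes(width).CopyTo(bytes, 18);
        BitConverter.GetBytes(height).CopyTo(bytes, 22);
        BitConverter.GetBytes((short)1).CopyTo(bytes, 26);        // color planes
        BitConverter.GetBytes((short)1).CopyTo(bytes, 28);        // 1 bit per pixel
        BitConverter.GetBytes(rowBytes * height).CopyTo(bytes, 34);  // pixel data size

        // two-entry palette: index 0 stays black (all zeros), index 1 is white
        bytes[58] = 0xFF; bytes[59] = 0xFF; bytes[60] = 0xFF;

        for (int y = 0; y < height; y++) {
            int rowStart = 62 + (height - 1 - y) * rowBytes;      // rows go bottom-up
            for (int x = 0; x < width; x++) {
                if (pixels[x, y]) {
                    bytes[rowStart + x / 8] |= (byte)(0x80 >> (x % 8));  // MSB first
                }
            }
        }
        return bytes;
    }

}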

And that is all there is to the API. Is it perfect? No. But it is easy to implement. The only pet peeve I have is not really with the API but with the device behavior in case of a missing server. Instead of just keeping the old data, it goes to an error screen. While I can see the logic, in my case, where 95% of the time nothing changes on the display, it seems counter-productive. But again, I can see why some people would prefer a fresh error screen to stale data. To each their own, I guess. The second issue I have is that there is no way to tell the device NOT to update. For example, if my image is exactly the same as the previous one, why flash the screen? But, again, those are minor things.

After all this talk, one might ask - what about the data? Well, all my data comes in the form of pseudo-ini files. The main one is the configuration that sets up what goes where. The full example is on GitHub; I will just show the interesting parts here.

[Events.All]
Directory=All
Top=0
Bottom=214
Left=0
Right=265

[Events.Thing1]
Directory=Thing1
Top=0
Bottom=214
Left=267
Right=532

[Events.Thing2]
Directory=Thing2
Top=0
Bottom=214
Left=534
Right=799

[Events.All+1d]
Directory=All
Offset=24
Top=265
Bottom=479
Left=0
Right=265

[Events.Thing1+1d]
Directory=Thing1
Offset=24
Top=265
Bottom=479
Left=267
Right=532

[Events.Thing2+1d]
Directory=Thing2
Offset=24
Top=265
Bottom=479
Left=534
Right=799

Then in each of those directories, you would find something like this.

[2025-05-26]
Lunch=Šnicle
Lunch=Krumpir salata
Lunch=Riža

[2025-05-27]
Lunch=Piletina na lovački
Lunch=Pire krumpir
Lunch=Zelena salata

Each date gets its own section, and all entries underneath it are displayed in order. Even better, if entries share the same key, that key is used as a common header. So, the “Lunch” entries above are all combined together.
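For illustration, the grouping logic can be as simple as this sketch (my names here, not the actual LocalPaper code):

using System.Collections.Generic;

internal static class EventReader {

    // Groups consecutive "Key=Value" lines of one section by key, preserving order.
    public static IEnumerable<(string Key, List<string> Values)> ReadSection(IEnumerable<string> lines) {
        string? lastKey = null;
        var values = new List<string>();
        foreach (var line in lines) {
            var parts = line.Split('=', 2);
            if (parts.Length != 2) { continue; }  // not a Key=Value line
            if (parts[0] != lastKey) {
                if (lastKey != null) { yield return (lastKey, values); }  // flush previous group
                lastKey = parts[0];
                values = new List<string>();
            }
            values.Add(parts[1]);
        }
        if (lastKey != null) { yield return (lastKey, values); }  // flush the final group
    }

}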

Since the files are only read when updating, I exposed them on a file share so everybody can put anything on “the wall” by simply editing a text file. This setup is definitely not going to fit many people. I would almost bet it will fit only me. However, that is the beauty of being a developer. Often you get to scratch an itch only you have.


You can find my code on GitHub. If you want to test it yourself, the Docker image is probably the easiest way to do so.

Lock Object

The lock statement has existed in C# from the very beginning. I still remember the first example.

lock (typeof(ClassName)) {
    // do something
}

Those who use C# will immediately yell how perilous locking on typeof is. But hey, I am just posting Microsoft's official advice here.

Of course, Microsoft did correct their example (albeit it took them a while) to the now common (and correct) pattern.

private object SyncRoot = new object();

lock (SyncRoot) {
    // do something
}

One curiosity of C# as a language is that you get to lock on any object. Here we just, as a convention, use the simplest object there is.

And yes, you can improve on this a bit if you use later .NET versions.

private readonly object SyncRoot = new();

lock (SyncRoot) {
    // do something
}

However, if you are using .NET 9 or later, you can do one better.

private readonly Lock SyncRoot = new();

lock (SyncRoot) {
    // do something
}

What's better there? Well, for starters, we now have a dedicated object type. Combine that with code analysis, and the compiler can now give you a warning if you make a typo and lock onto something else by accident. And also, …, wait …, wait …, yep, that's it. Performance in all of these cases (yes, I am excluding the typeof one) is literally the same.
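As a quick illustration, here is a sketch of the kind of accident the analysis now flags (the exact diagnostic may vary; current compilers report CS9216):

using System.Threading;

internal sealed class Worker {

    private readonly Lock SyncRoot = new();

    public void Mistake() {
        object asObject = SyncRoot;  // the compiler warns here: the Lock fast path is lost
        lock (asObject) {
            // this silently falls back to monitor-based locking
        }
    }

}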

As features go, this one is small and easily overlooked. It's essentially just syntactic sugar. And I can never refuse something that sweet.

Modulo or Bitwise

I had an interesting thing said to me: “Did you know that modulo is much less efficient than bitwise comparison?” As someone who once painstakingly went through all E-series resistor values to find ones that would make my voltage divider a power of 2, I have definitely seen that in action. But that got me thinking. While an 8-bit PIC microcontroller doesn't have a hardware divider, and thus any modulo is torture, what about modern computers? How much slower do they get?

A quick search brought me a few hits and one conclusive StackOverflow answer. Searching a bit more brought me to another answer where they even did measurements. And the difference was six-fold. But I was left with a bit of nagging, as both of these were 10+ years old. What difference might you expect on a modern CPU? And, more importantly for me, what are the differences in C#?

Well, I quickly ran some benchmarks, and the results are below.

Test             Parallel    Mean      StDev
(i % 4) == 0     No          202.3 us  0.24 us
(i & 0b11) == 0  No          201.9 us  0.12 us
(i % 4) == 0     CpuCount    206.4 us  7.78 us
(i & 0b11) == 0  CpuCount    196.5 us  5.63 us
(i % 4) == 0     CpuCount*2  563.9 us  7.90 us
(i & 0b11) == 0  CpuCount*2  573.9 us  6.52 us

My expectations were not only wrong, the results were slightly confusing too.

As you can see from the table above, I did three tests: single-threaded, a default parallel for, and a parallel for loop with CPU overcommitment. The single-threaded test is where I saw what I expected, but not in the amount expected. Bitwise was quite consistently winning, but by ridiculously small margins. Unless I was doing something VERY specific, there is no chance I would care about the difference.

If we run the test in Parallel.For, the difference becomes slightly more obvious. And had I stopped with just those two, I would have said the assumption holds for modern CPUs too.

However, once I overcommitted CPU resources, suddenly modulo was actually better. And that is hard to explain if we assume that modulo just uses divide.

So, I decided to sneak a bigger peek - into what .NET actually executes. And I discovered that the bitwise operation was fully omitted while the modulo operation was still there. However, the runtime then smartly decided to remove both. Thus, I was testing nothing vs. almost nothing.
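The usual trick to prevent that is to actually consume the result. Something along these lines is enough (a sketch, not my exact benchmark code):

using BenchmarkDotNet.Attributes;

public class ModuloVsBitwise {

    [Benchmark]
    public int Modulo() {
        var count = 0;
        for (var i = 0; i < 1_000_000; i++) {
            if ((i % 4) == 0) { count++; }  // the result feeds into count...
        }
        return count;  // ...and count is returned, so the JIT cannot drop the loop
    }

    [Benchmark]
    public int Bitwise() {
        var count = 0;
        for (var i = 0; i < 1_000_000; i++) {
            if ((i & 0b11) == 0) { count++; }
        }
        return count;
    }

}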

Ok, after I placed some strategic extra instructions like that to prevent the optimization, I got the results below.

Test             Parallel    Mean        StDev
(i % 4) == 0     No          203.1 us    0.16 us
(i & 0b11) == 0  No          202.9 us    0.06 us
(i % 4) == 0     CpuCount    1,848.6 us  13.13 us
(i & 0b11) == 0  CpuCount    1,843.9 us  6.76 us
(i % 4) == 0     CpuCount*2  1,202.7 us  7.32 us
(i & 0b11) == 0  CpuCount*2  1,201.6 us  6.75 us

And yes, bitwise is indeed faster than modulo, but by a really low margin. The only thing the new test “fixed” was that discrepancy in speed when you have too many threads.

Just to make extra sure that the compiler wasn't doing “funny stuff”, I also decompiled both to IL.

// (i % 4) == 0
ldloc.1
ldc.i4.4
rem
ldc.i4.0
ceq

// (i & 0b11) == 0
ldloc.1
ldc.i4.3
and
ldc.i4.0
ceq

Pretty much exactly the same, the only difference being the use of and for the bitwise check while rem was used for modulo. On modern CPUs, these two instructions seem pretty much equivalent. And when I say modern, I use that loosely, since I saw the same going back a few generations.

Interestingly, just in case the runtime changed both to the same code, I also checked modulo 10 to confirm. That one was actually faster than modulo 4. That leads me to believe there are some nice optimizations happening here. But I still didn't know if this was the .NET runtime or really something the CPU does.

As a last resort, I went down to C and compiled it with -O0 -S. Unfortunately, even with -O0, a % 4 gets converted to bitwise. Thus, I checked it against % 5.

The bitwise check compiled down to just 3 instructions (or just one if we exclude the load and the check).

movl	-28(%rbp), %eax
andl	$3, %eax
testl	%eax, %eax

But modulo went the crazy route.

movl	-28(%rbp), %ecx
movslq	%ecx, %rax
imulq	$1717986919, %rax, %rax
shrq	$32, %rax
movl	%eax, %edx
sarl	%edx
movl	%ecx, %eax
sarl	$31, %eax
subl	%eax, %edx
movl	%edx, %eax
sall	$2, %eax
addl	%edx, %eax
subl	%eax, %ecx
movl	%ecx, %edx
testl	%edx, %edx

It converted the division into a multiplication by a reciprocal (that magic constant 1717986919 is roughly 2^33/5) and gets to the remainder that way. All in all, quite an impressive optimization. And yes, this occupies more memory, so there are other consequences to performance (e.g., it uses more instruction cache).

So, if you are really persistent with testing, a difference does exist. It's not six-fold, but it can be noticeable.

In the end, do I care? Not really. Unless I am working on microcontrollers, I won't stop using modulo where it makes sense. It makes the intent much clearer and that, to me, is worth it. Even better, compilers will just take care of this for you.

So, while modulo is less efficient, stories of its slowness have been exaggerated a bit.

PS: If you want to run my tests on your system, the files are available.

Never Gonna BOM You Up

.NET has supported Unicode from its very beginning. Pretty much anything you might need for Unicode manipulation is there. Yes, as early adopters, they made a bet on UTF-16 that didn't pay off, since the rest of the world has moved toward UTF-8 as an (almost) exclusive encoding. However, if we ignore a slightly higher memory footprint, C# strings made Unicode as easy as it gets.

And, while UTF-8 is not the native encoding for its strings, C# is no slouch and has a convenient Encoding.UTF8 static property allowing for easy conversion. However, if you use that encoding to write a file (say, through a StreamWriter), you will get a bit extra.

That something extra is the byte order mark (BOM). Its intention is noble - to help detect endianness. However, its usage for UTF-8 is of dubious help, since a byte-oriented encoding doesn't really have endianness issues to start with. The Unicode specification itself does allow for one but doesn't recommend it. It merely acknowledges it might happen as a side-effect of data conversion from other Unicode encodings that do have endianness.

So, in theory, UTF-8 with a BOM should be perfectly acceptable. In practice, only Microsoft really embraced the UTF-8 BOM. Pretty much everybody else decided to use UTF-8 without a BOM, as that allows for full compatibility with 7-bit ASCII.

With time, .NET/C# stopped being Windows-only and has by now become a really good multiplatform solution. And now, the helper that ought to simplify things actually produces output that will annoy many command-line tools that don't expect it. If you read the documentation, a solution exists - just create your own UTF-8 encoding instance.

private static readonly Encoding Utf8 = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false);

Now you can use Utf8.GetBytes() (or pass Utf8 to your writer) instead, and you will get the expected result on all platforms, including Windows - no BOM, no problems.
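If you want to see the difference yourself, here is a tiny demo (the file names are hypothetical):

using System.IO;
using System.Text;

internal static class BomDemo {

    public static void Main() {
        // only the encoding differs; check the files in a hex viewer
        File.WriteAllText("with-bom.txt", "hi", Encoding.UTF8);               // EF BB BF 68 69
        File.WriteAllText("without-bom.txt", "hi", new UTF8Encoding(false));  // 68 69
    }

}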

So, one could argue that the Encoding.UTF8 default should be changed to a more appropriate value. I mean, .NET is multiplatform, and the current default doesn't work well everywhere. One could argue, but this default is not changing, ever.

When any project starts, decisions must be made. And you won’t know for a while if those decisions were good. On the other hand, people will start depending on whatever behavior you selected.

In the case of the BOM, it might be that a developer got so used to having those three extra bytes that, instead of checking the file content, they simply use <=3 bytes as a signal that the file is empty. Or they have a script that takes the output of some C# application and blindly strips the first three bytes before moving it to a non-BOM-friendly input. Or any other decision somebody made in a project years ago. It doesn't really matter how bad someone's code is. What matters is that the code is currently working, and a new .NET release shouldn't silently break it.

So, I am reasonably sure that Microsoft won’t ever change this default. And, begrudgingly, I agree with that. Some bad choices are simply meant to stay around.


PS: And don't get me started on GUIDs and their binary format…

CoreCompile into the Ages

For one project of mine, I started having a curious issue. After adding a few, admittedly a bit complicated, classes, my compile times under Linux shot to eternity. But that was only when running with the dotnet command-line tools. In Visual Studio under Windows, all worked just fine.

Under dotnet, I would just see the CoreCompile step counting seconds, and then minutes. I tried increasing the log level - nothing. I tried not cleaning stuff, i.e., using cached files - nothing. So, I tried cleaning up my .csproj file - hm… things improved, albeit just a bit.

A bit of triage later, and I was reasonably sure that the .NET code analyzers were the culprit. The reason why changes to .csproj reduced the time was that I had AnalysisMode set quite high. The default AnalysisMode simply checks less.

While disabling the .NET analyzers altogether was out of the question, I was quite OK with not running them all the time. So, until .NET under Linux gets a bit more performant, I simply included EnableNETAnalyzers=false in my build scripts.

  -p:EnableNETAnalyzers=false
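In a full command, that looks like this:

dotnet build -p:EnableNETAnalyzers=false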

Another problem solved.

Custom StringBuilder Pool

In my last post, I grumbled about ObjectPool being a separate package. That was essentially the single downside to using it. So, how hard is it to implement our own StringBuilder pool?

Well, not that hard. The whole thing can be something like this:

using System.Collections.Concurrent;
using System.Runtime.CompilerServices;
using System.Text;

internal static class StringBuilderPool {

    private static readonly ConcurrentQueue<StringBuilder> Pool = new();

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static StringBuilder Get() {
        return Pool.TryDequeue(out StringBuilder? sb) ? sb : new StringBuilder(4096);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static bool Return(StringBuilder sb) {
        sb.Length = 0;
        Pool.Enqueue(sb);
        return true;
    }

}

In the Get method, we check whether we have a stored StringBuilder. If we do, we just return it; if not, we create a new instance.

In the Return method, we reset the instance's length and add it back to the queue.
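Usage would then look something like this (a hypothetical example; the try/finally ensures the instance is returned even if something throws):

public static string BuildGreeting(string name) {
    var sb = StringBuilderPool.Get();
    try {
        sb.Append("Hello, ");
        sb.Append(name);
        sb.Append('!');
        return sb.ToString();
    } finally {
        StringBuilderPool.Return(sb);  // back into the pool for the next caller
    }
}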

Now, this is not exactly an ObjectPool equivalent. For example, it doesn't limit the pool size. And it will keep large instances around forever. However, for my case, it was good enough and unlikely to cause any problems.

And performance… Well, performance is promising, to say the least:

Test                    Mean          Error       StdDev      Gen0    Gen1    Allocated
StringBuilder (small)   15.762 ns     0.3650 ns   0.4057 ns   0.0181  -       152 B
ObjectPool (small)      17.257 ns     0.0616 ns   0.0576 ns   0.0057  -       48 B
Custom pool (small)     16.864 ns     0.0192 ns   0.0150 ns   0.0057  -       48 B
Concatenation (small)   9.716 ns      0.1634 ns   0.1528 ns   0.0105  -       88 B
StringBuilder (medium)  58.125 ns     0.6429 ns   0.6013 ns   0.0526  -       440 B
ObjectPool (medium)     23.226 ns     0.0517 ns   0.0484 ns   0.0115  -       96 B
Custom pool (medium)    23.660 ns     0.2515 ns   0.1963 ns   0.0115  -       96 B
Concatenation (medium)  66.353 ns     1.3307 ns   1.2447 ns   0.0793  -       664 B
StringBuilder (large)   190.293 ns    0.7781 ns   0.6498 ns   0.2496  0.0010  2088 B
ObjectPool (large)      92.556 ns     0.9281 ns   0.8228 ns   0.0755  -       632 B
Custom pool (large)     91.470 ns     0.5478 ns   0.5124 ns   0.0755  -       632 B
Concatenation (large)   1,430.599 ns  11.5971 ns  10.8479 ns  4.0169  0.0057  33600 B

It's pretty much on par with the ObjectPool implementation. Honestly, the results are close enough to be equivalent for all practical purposes.

So, if you don't want to pull in the whole Microsoft.Extensions.ObjectPool just for caching a few StringBuilder instances, consider rolling your own.

To Pool or Not to Pool


For a project of mine, I “had” to do a lot of string concatenations. The easy solution was just to grab a StringBuilder and go wild. But I wondered, does it make sense to use an ObjectPool (found in the Microsoft.Extensions.ObjectPool package)? Thus, I decided to do a few benchmarks.

For my use case, “small” was just appending 3 items to a StringBuilder. The “medium” one does a total of 21 appends. And finally, “large” does 201 appends. And no, there is no real reason why I used those exact numbers other than the loop ended up being nice. :)

After all this, the benchmark results (courtesy of BenchmarkDotNet):

Test                         Mean        Error      StdDev     Gen0    Gen1    Allocated
StringBuilder (small)        16.295 ns   0.1240 ns  0.1160 ns  0.0181  -       152 B
StringBuilder Pool (small)   17.958 ns   0.3125 ns  0.2609 ns  0.0057  -       48 B
StringBuilder (medium)       87.052 ns   1.5177 ns  1.4197 ns  0.0832  0.0001  696 B
StringBuilder Pool (medium)  31.245 ns   0.1815 ns  0.1417 ns  0.0181  -       152 B
StringBuilder (large)        304.724 ns  1.6736 ns  1.3975 ns  0.4520  0.0029  3784 B
StringBuilder Pool (large)   172.615 ns  1.5325 ns  1.4335 ns  0.1471  -       1232 B

As you can see, if you are doing just a few appends, it's probably not worth messing with ObjectPool. Not that you should use a plain StringBuilder either. If you are adding 4 or fewer strings, you might as well concatenate them - it's actually more performant.

However, if you are adding 5 or more strings together, the pool is no worse than instantiating a new StringBuilder. So, for pretty much any scenario where you would use a StringBuilder, it pays off to pool it.

Is there a situation where you would avoid the pool? Well, performance-wise, I would say probably not. I ran multiple tests and, on my computer, there was no situation where StringBuilder alone was better than either the pool or concatenation. Yes, StringBuilder is performant at a low number of appends, but string concatenation is better there. As soon as you go over a few appends, ObjectPool actually makes sense.

However, the elephant in the room is ObjectPool's dependency on an external package. Call me old-fashioned, but there is value in not depending on extra packages.

The final decision is, of course, up to you. But, if performance is important, I see no reason not to use ObjectPool. I only wish it weren't an extra package.


For the curious, the code was as follows:

[Benchmark]
public string Large_WithoutPool() {
    var sb = new StringBuilder();
    sb.Append("Hello");
    for (var i = 0; i < 100; i++) {
        sb.Append(' ');
        sb.Append("World");
    }
    return sb.ToString();
}

[Benchmark]
public string Large_WithPool() {
    var sb = StringBuilderPool.Get();
    try {
        sb.Append("Hello");
        for (var i = 0; i < 100; i++) {
            sb.Append(' ');
            sb.Append("World");
        }
        return sb.ToString();
    } finally {
        sb.Length = 0;
        StringBuilderPool.Return(sb);
    }
}
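The StringBuilderPool referenced above is just a thin helper; assuming the Microsoft.Extensions.ObjectPool package, it could be as simple as this sketch:

using System.Text;
using Microsoft.Extensions.ObjectPool;

internal static class StringBuilderPool {

    // CreateStringBuilderPool uses a policy that resets the builder on return
    private static readonly ObjectPool<StringBuilder> Pool =
        new DefaultObjectPoolProvider().CreateStringBuilderPool();

    public static StringBuilder Get() => Pool.Get();

    public static void Return(StringBuilder sb) => Pool.Return(sb);

}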

And yes, I also tested simple string concatenation (which is quite optimized for a smaller number of concatenations):

Test                    Mean          Error       StdDev      Gen0     Gen1    Allocated
Concatenation (small)   9.820 ns      0.2365 ns   0.2429 ns   0.0105   -       88 B
Concatenation (medium)  146.901 ns    1.6561 ns   1.2930 ns   0.2294   -       1920 B
Concatenation (large)   4,710.573 ns  43.5370 ns  96.4750 ns  15.2054  0.0458  127200 B

New Solution File Format


Not all heroes wear capes. I mean, a bunch of them cannot be bothered to wear pants. But all heroes should at least get a beer. And none more so than those who finally took the darn .sln format behind the barn.

Yep, without much fanfare, a new solution file format was introduced. Instead of the big ugly .sln file everybody was used to but nobody ever loved, we got a much simpler .slnx file. In just a few lines, the new format pretty much does the only thing you need it to - list the darn projects.

Gone are the GUIDs, gone are the Debug and Release profiles, and finally, gone is the darn BOM with an empty starting line. Essentially, everything is gone except what you actually need. And yes, you can still have debug and release profiles - you just don't need to explicitly define them in the solution file.
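For illustration, a whole solution can now be as short as this (the project paths here are made up):

<Solution>
  <Project Path="src/MyApp/MyApp.csproj" />
  <Project Path="tests/MyApp.Tests/MyApp.Tests.csproj" />
</Solution>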

Migration is as easy as it gets:

dotnet sln <solution.sln> migrate
rm <solution.sln>

Looking at the whole .NET ecosystem, this feature is small. In general, I think this syntactic-sugar category often gets overlooked. If it's good, you will probably forget all about how things were before. I hope that, in a few years, .sln will be just a distant memory and a way to scare children into eating their broccoli.

Forwarding Makefile Targets to a Script

I love Makefiles. There is something special about just running make and having everything built automatically. Even better, you can use multiple targets to chain a few operations together, e.g., make clean test debug. All this is available to you under Linux. Under Windows, all this magic is gone.

For Windows, most of the time, I see either a separate script handling build tasks, or nothing at all. A separate script is not a bad solution, but it does introduce a potential difference between builds. Assuming you have Git installed, the easiest way out is to simply forward Makefile entries to a bash script. Something like this:

.PHONY: clean test debug release

clean:
	@./Make.sh clean

test:
	@./Make.sh test

debug:
	@./Make.sh debug

release:
	@./Make.sh release

And honestly, this is probably good enough. If you are on Linux, you use make debug, and on Windows, you use ./Make.sh debug. For years now, I have used this approach whenever I needed things to work on both Linux and Windows. But there were issues - mainly with target “chaining”.

For example, if you want to run clean as a prerequisite to release, you can do that in the Makefile.

…

clean:
	@./Make.sh clean

release: clean
	@./Make.sh release

This will, under Linux, do what you expect. But, under Windows, it is not enough. So, alternatively, you might leave the Makefile as-is and do the chaining in Make.sh. That works on Windows but, under Linux, it will result in a double call to clean, i.e.,

make clean release

will translate into

./Make.sh clean    # first call doing only clean
./Make.sh release  # second call internally does clean again

It's not the worst issue out there, and god knows I lived with it for a long time. What I needed was to just forward whatever arguments the make command receives to my Make.sh script. Reading the GNU make documentation pointed me toward the MAKECMDGOALS special variable, which was exactly what I needed. It even pointed to the last-resort %:: syntax. So, the following Makefile looked to be all I needed.

%::
	@./Make.sh $(MAKECMDGOALS)

If only life were that easy. This last-resort rule will unfortunately call the script once for each target given to make. I.e., the final calls in our example would be:

./Make.sh clean release
./Make.sh clean release

And there is no fool-proof way I found to prevent the second run. You cannot set a variable, you cannot really detect which argument you're forwarding, you cannot exit. You could write to a file that you are already running, but that gets messy when a task is cancelled.

I spent quite a lot of time messing with this, but I never found a fully generic way. However, I finally managed to find something incredibly close.

all clean run test debug release &:
	@./Make.sh $(MAKECMDGOALS)

As long as you list all targets, specifying just one or all of them will lead to the same command. And, because they are all grouped together (the &: syntax, available since GNU make 4.3), the command will run only once. It's not ideal, because I do need to keep the target list in two places, but that list is not likely to change.

If you want to check my whole build script, you can check my GitHub.

Calculate This


As I moved to Linux, I slowly started moving all my apps along. But, as I played with electronics, I often had to boot up Windows just to get to a simple calculator. I made this calculator ages ago in order to compute various values. But I made it for the Windows Store, which meant it was time to make it again, this time a bit more portable.

With the help of Avalonia and a bit of C# code, it was a long overdue weekend project. Most of the time, I just need the LDO power and voltage divider calculations, but it seemed a shame not to reimplement the others.

Since I wanted the application to work on Linux (both KDE and Gnome), the choice fell between two frameworks: Avalonia and Eto.Forms. I was tempted to go the Eto.Forms route because I actually like(d) Windows Forms. They're easy, event-driven, and without 50 shades of indirection. But, after playing with both for a while, Avalonia just seemed more suitable.

As before, I created the following calculators:

  • E-Series
  • LDO Power
  • LED
  • LM117
  • LM317
  • Microchip PIC PWM
  • Microchip PIC TMR0
  • Ohm’s Law
  • Parallel and Series Resistors
  • Voltage Divider

I will implement more as I need them.
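There is no deep magic behind any of them either. The voltage divider calculator, for example, boils down to a single formula; here is a sketch (not the app's actual code):

internal static class VoltageDivider {

    // Vout = Vin * R2 / (R1 + R2)
    public static double GetOutputVoltage(double vin, double r1, double r2) {
        return vin * r2 / (r1 + r2);
    }

}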

While the development environment does contain unit tests, the count is currently a bit low. I was too lazy to implement them all. I'll probably write them only as I fix bugs, since I'm lazy that way.

If this app seems interesting, you can download it here. It should work on pretty much any glibc-based Linux out there. I will eventually make a Windows setup version too, but you can use the Windows Store one in the meantime.