Pretty much every application I’ve created has some logging included. Mostly it’s just humble Debug and Trace statements, but I’ve used more complex solutions for bigger apps. One thing was lacking, though - a standardized format.
I’ve noticed all my apps use a slightly different log format. My bash scripts, e.g. for docker apps, usually have a completely different format than my C# applications. And the C# applications are also all different from one another, depending on which framework I use. For example, applications using Serilog look completely different than apps I made using just Debug.WriteLine. And no, it’s not really a huge problem since each log output is similar enough for me to parse and use without much brain power. But combining those apps (e.g. in Docker) is what makes it annoying. You can clearly see where the entry-point script ends and the other application begins. It just looks ugly.
So, I decided to reduce the number of different log outputs by figuring out what is important to me when dealing with console logs. I landed on the following list:
instantly recognizable - there should be no thought required to figure out what each field does
minimum effort - the same overall log format must be producible from a simple bash script or a complex application alike
grepable - any output must be easily filtered using grep
easily parsable - it must be possible to extract each field using basic Linux tools (e.g. cut and awk)
single line - one line per log entry; if more is needed, output to a different format (e.g. JSON) in parallel
using stdout/stderr - anything more serious than info should go to the stderr output
colored - different log levels should result in different colors
In the end, I settled on something like this:
DATE TIME LEVEL CATEGORY+ID+TEXT
1969-07-20 20:17:40.000 INFO lander: Sequence completed
1969-07-20 23:39:33.000 INFO person:1: Hatch open
Date and time fields were an easy choice to start the message with. In a lot of my applications I used the proper ISO format (e.g. 1969-07-20T20:17:40.000) but here I opted to “standardize” on a space. The reason is legibility. While the date field is needed, I rarely care about it when troubleshooting - for that I most often just care about the time. Separating the time with a space makes it much easier to pick out. As for the time zone, console output will always use the local time zone.
I am a huge fan of UTC and I believe one should use it - most of the time. But it is hard to justify its usage on home servers where, instead of helping, it actually hinders troubleshooting. The compromise is to just use the local time zone. If the server is set to UTC, the output will be UTC. And, as much as I love UTC, I hate its designator - Z. If the whole log is in UTC, adding Z as a suffix just makes things less readable.
I also spent way too much time thinking about whether to include milliseconds or not. Since I have found them valuable plenty of times, I decided they’re worth the extra 4 characters. Interestingly, getting ahold of them in bash is not always straightforward. While under most Linux distributions you can get the time using date +'%Y-%m-%d %H:%M:%S.%3N', this doesn’t work on Alpine Linux. Its busybox date doesn’t offer %N as an option. I found that date +"%F $(nmeter -d0 '%3t' | head -n1)" is a simple workaround.
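If a script has to run on both, a small wrapper along these lines (my own sketch; it assumes nmeter is available on the busybox side, as it was in my case) can pick whichever works:

ts() {  # millisecond timestamp: use GNU date if %N works, fall back to nmeter otherwise
    case "$(date +'%3N' 2>/dev/null)" in
        [0-9][0-9][0-9]) date +'%Y-%m-%d %H:%M:%S.%3N' ;;              # GNU date
        *)               date +"%F $(nmeter -d0 '%3t' | head -n1)" ;;  # busybox (e.g. Alpine)
    esac
}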
The next space-separated field is the log level. Up to now I often used a single-letter log level, e.g. E: for error. But I found that this is not always user friendly. Thus, I decided to expand the name a bit:
Text   Color          .NET         Serilog      Syslog           Stream
TRACE  Dark blue      Trace        Verbose      -                1>
DEBUG  Bright blue    Debug        Debug        Debug (7)        1>
INFO   Bright cyan    Information  Information  Information (6)  1>
WARN   Bright yellow  Warning      Warning      Warning (4)      2>
ERROR  Bright red     Error        Error        Error (3)        2>
FATAL  Bright red     Critical     Fatal        Critical (2)     2>
Each log level is now 5 characters long. This makes parsing easier while still maintaining readability. I was really tempted to enclose levels in square brackets, e.g. [INFO], since I find that format really readable. However, it would require escaping in grep and that is something I would hate to do.
Making the log level field always 5 characters long also helps align the text that follows. Just cut the first 30 characters and you get rid of the date, time, and log level.
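In practice, that keeps filtering and trimming as one-liners (app.log here is just a stand-in name for wherever the output ends up):

grep ' ERROR ' app.log                       # filter by level - no bracket escaping needed
cut -c31- app.log                            # drop the date, time, and log level
awk '{print $3}' app.log | sort | uniq -c    # count entries per level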
The next field contains the category name, which usually matches the application. This field is not of a fixed size, but it should be easily parsable regardless since it ends in a colon (:) character followed by a space. If a log entry has an ID, it is embedded between two colon characters. If the ID is 0, it can be omitted. For practical reasons, I try sticking to ID numbers 1000-9999, but the field has no official width so anything that fits in a u64 should be expected.
I don’t really use IDs for my events in every application, but they are so valuable when it comes to a large code base that I simply couldn’t omit them. However, they are just annoying in a small application, so I didn’t want to make this a separate field. In the end, I decided to keep them between two colons as that impacted my parsing the least.
And, finally, the last component is the actual log text. This is a free-form field with only one rule - no control characters. Any control character (e.g. LF) should be escaped or stripped.
Of course, sometimes you will need additional details, exceptions, or execution output. In those cases, I will just drop all that text verbatim with two spaces at the front. This makes it not only visually distinct but also really easy to remove using grep.
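To check that all of this stays within “minimum effort” territory, here is a minimal sketch of the whole thing as a shell function (the myscript category, the DETAILS variable, and the app.log file name are made up for illustration, and GNU date is assumed for the milliseconds):

log() {  # usage: log LEVEL text...
    LEVEL="$1"; shift
    case "$LEVEL" in
        TRACE)       COLOR='\033[34m' ;;  # dark blue
        DEBUG)       COLOR='\033[94m' ;;  # bright blue
        INFO)        COLOR='\033[96m' ;;  # bright cyan
        WARN)        COLOR='\033[93m' ;;  # bright yellow
        ERROR|FATAL) COLOR='\033[91m' ;;  # bright red
    esac
    LINE=$(printf '%s %b%-5s\033[0m myscript: %s' \
        "$(date +'%Y-%m-%d %H:%M:%S.%3N')" "$COLOR" "$LEVEL" "$*")
    case "$LEVEL" in
        WARN|ERROR|FATAL) printf '%s\n' "$LINE" >&2 ;;  # more serious than info goes to stderr
        *)                printf '%s\n' "$LINE" ;;
    esac
}

log INFO "Sequence completed"
log ERROR "Hatch stuck"
printf '%s\n' "$DETAILS" | sed 's/^/  /'    # extra detail, indented by two spaces

Getting rid of the detail lines later is then just a grep -v '^  ' app.log away.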
With this in mind, I will now update my apps.
Will I change all of them? Not really. Most of my old applications will never get a log format update. Not only is it a lot of work to update them, but it might also mess with their tools and scripts.
This is just something I’ll try to keep in my mind going forward.
Sometimes something that ought to be simple might lead you on a wild goose chase. This time, I was searching for humble 2.5" SSD screws.
And yes, this was the first time in my life I had to search for them. Back in Croatia I have a bin full of leftover PC screws. It would have been easy to just grab them.
However, I moved to the US years ago and never bothered to bring assorted screws with me. Not that I missed them - pretty much all cases accepting 2.5" drives came with screws. It wasn’t until I made a 3D printed cage for 2.5" drives that I realized I had none lying around.
So, simple enough, I needed some 10 screws to fully screw 2 disks in and attach them to the chassis. I mean, I could have gotten away with using just 3. But, since I was buying screws anyhow, I might as well fill all the holes.
It was easy to find that the screw is a flat top M3 with fine threads (0.5mm). But for length I saw multiple values. Everything from 2 to 5 millimeters.
So I went on to measure the screws I had in my computers, only to find three different dimensions: 3, 3.5, and 4 mm. And that was based on a total of 4 sets of screws (1/1/2 distribution, for the curious). I discounted M3x3.5 almost immediately since it was hard to find at a reasonable price. That left me with M3x3mm and M3x4mm as seemingly equally good candidates.
But then I struck gold - WD’s specification. There, in black and white, it clearly states that a 2.5" drive will accommodate up to 3mm of screw length for the side mounting holes and up to 2.5mm for the holes on the bottom. The minimum thread requirements are 1.5mm for the side holes and 1mm for the bottom holes. If I wanted a universal screw for any set of holes, I had to aim for something with a thread length between 1.5 and 2.5 mm.
If we account for the sheet metal holding the drive, that makes M3x3mm the clear universal winner. At least in theory.
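To put rough numbers on it: assuming about 1 mm of sheet metal (my estimate, not something from the specification), an M3x3mm leaves roughly 2 mm of thread inside the drive - comfortably between the 1.5 mm minimum and the 2.5 mm bottom-hole maximum - while an M3x4mm would leave roughly 3 mm and, on paper, overshoot the bottom holes.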
But how come 2 of my screw sets were 4mm? Wouldn’t that present a problem? Well, all 2.5" drives I had (2 spinning rust, 4 SATA SSD) accepted the full 4mm for the side holes without any issue. All SSD drives with bottom holes were happy to accept the same. And, based on my (limited) sample, using M3x4mm will work just fine - even on WD’s own drives.
After watching stuttering 1080p@60 video once too many times, I decided to retire my old NUC5i3RYH and replace it with a Gen 11 Framework board I had lying around. It was supposed to be a quick swap: just take the SSD from the old computer, move it to the new one, place the new one into a Cooler Master case, and connect back all the cables. What could go wrong?
Well, everything. First, there was an issue with mounting. The NUC uses “sorta” VESA 100mm, the Framework case uses VESA 100mm, while the TV uses VESA 200mm. Thus I assumed I could reuse the NUC’s mounting. However, the 200-to-100mm adapter used for the NUC was just a fraction too thick for the Framework screws. So I spent an hour with a dremel making the slots slightly thinner. Funny how shaving metal looks awfully like shaving a yak.
Well, at least after mounting the case onto the TV there would be no more issues. Full of hope, I turned on the computer and … nothing. Gen 11 motherboards have an issue where they literally destroy their CMOS battery, and then refuse to start until the battery charges enough. Some time back I fixed that with a soldering mod to use the laptop battery instead. However, guess what my newly mounted laptop didn’t have? Yep, the Cooler Master case contains no battery. So, coaxing the board to power on took another 30 minutes and a future order for an ML1220 battery.
With the system powered on, there was an additional issue lurking. My NUC used a mini-HDMI output while the Framework provides HDMI via its expansion card. So, that required a trip to the garage to go over all the cables. I am ashamed to say there was not a single 4K@60Hz-capable cable to be found. So, I took the best-looking one and tried it out. It actually worked, with just a bit of “shimmering”. Downgrading my settings to 4K@30 solved that issue.
And now, finally, I was able to relax and watch some Starcraft. Notice the use of the word “watch”, since I definitely noticed there was no sound. After all that, my “new” computer wouldn’t give me a peep. I mean, the output was there. And everything was connected. But the TV didn’t understand that.
And on this I spent ages. First I tried different HDMI expansion cards - since I did a soldering mod on mine, I thought that might be the issue. Then I tried connecting things using analog audio - it took a while to find an analog 3.5mm cable and much longer banging my head into the wall once I noticed the TV has no analog input. Then I tried Bluetooth on my soundbar - that one kinda worked until you switched input on the TV, at which point HDMI ARC would take over and Bluetooth would turn off. I was half-tempted to leave it like this.
But, in a moment of desperation, I thought of connecting via Bluetooth to my TV and then using the existing ARC connection to my soundbar. Unfortunately, that’s when I found out my TV only has Bluetooth output and no Bluetooth input. Fortunately, the search for the non-existent Bluetooth input settings led me to the audio “Expert Settings”. There I saw the “HDMI Input Audio Format” setting. With the NUC, my TV was happy to work in “Bitstream” mode. However, switching it to “PCM” actually made my Framework work properly.
Now, why did my TV have Bitstream set? I have no idea. Why was the NUC happy to output compressed audio over HDMI while the Framework wasn’t? I have no idea. Will I remember this next time I change computers? Definitely not.
After a long day, I did get to watch some Starcraft. So, I guess this can be considered a success.
A while ago I got myself a Trmnl device. However, I didn’t really want to use it for its Trmnl capabilities (which are admittedly great). What I wanted was to use the Trmnl firmware with my own server. And support for Trmnl firmware was the reason I got myself the XIAO 7.5" ePaper Panel. I mean, it even comes with Trmnl firmware flashing instructions.
So, with the Xiao display in my hand, I tried following the instructions only to mostly fail. Why do I say “mostly”? Well, the display didn’t work and it still showed the picture that came from the factory. But, since I was using my own server, I could easily see that it actually did communicate with the server. It did everything I expected it to, except show the image.
The next step was, of course, contacting support. After following their steps, the display did update using the Arduino examples. But I didn’t want those - I wanted the Trmnl firmware. So, I decided to dig into the Trmnl firmware itself. And one of the first things I saw was that ESP32-C3 support was only just added in 1.5.6. So much for the device supporting Trmnl firmware out of the box. :)
Looking into the source, it was easy to see that it contained defines for both BOARD_TRMNL and BOARD_SEEED_XIAO_ESP32C3, each with its own GPIO configuration. And yep, the e-paper control constants for BOARD_TRMNL were completely different, fitting what I saw happening. The firmware was running because the microcontroller was the same, but the Trmnl firmware literally couldn’t communicate with my display.
The fix was easy. I simply ignored the instructions provided as method 1. Since I didn’t want to use old firmware, I also ignored method 2. This nicely led me to method 3: building directly from source.
I already had PlatformIO installed from before, so I only needed to figure out how to flash the firmware. What worked for me was holding the “Boot” button and pressing “Reset”, then waiting 2-3 seconds and pressing “Reset” twice in a row. Sometimes just double-clicking “Reset” would be enough - but not every time. In PlatformIO one can then select seeed_xiao_esp32c3 as the device and upload a working firmware.
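For reference, the command-line equivalent is roughly the following (the environment name is my assumption based on the board; the actual one comes from the project’s platformio.ini):

pio run -e seeed_xiao_esp32c3 -t upload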
With the proper firmware uploaded, my device finally worked. Curiously, the only thing missing was reading the battery voltage. Worst case scenario, this will need a bit of hardware modification to work. But that is a story for another time.
After I upgraded my family PC from Windows 11 to Bazzite, I found nothing lacking. At least for a few weeks. It took me a while, but I finally noticed that my Epson V600 scanner, connected to that PC, was no longer working.
Well, onto the Epson site I went and, lo and behold, they had Linux drivers. While Bazzite is an atomic distribution and not supported by the drivers directly, you can still install RPMs using rpm-ostree. So, with the drivers unpacked, I tried just that:
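The command itself is nothing special - something along these lines, assuming the unpacked RPMs are in the current directory (install them in one go or one at a time, the outcome is the same):

rpm-ostree install ./*.rpm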
While the first two packages installed just fine, the third package was attempting to change stuff installed by the first two. And, due to the atomic nature of Bazzite, it ran into a mkdir: cannot create directory ‘/var/lib/iscan’: Read-only file system error. And no, it doesn’t matter if you install all three RPMs one by one or all at once - the last one always fails.
Well, if we cannot get the packages installed on Bazzite, how about we give them a separate system? Enter Distrobox. Without going into too many details, it’s essentially a container for your Linux distribution. To create it, just run the command below and you will be asked which distribution you want to create. I went with Fedora.
distrobox enter
After it pulls all the packages, you essentially have a running Fedora system inside your Bazzite. And, since Fedora is supported by the Epson drivers, you can simply use the provided ./install.sh script to install them. Once installed, the software can be started manually:
iscan
Since everybody in the family needed this application, I really wanted it in the start menu. However, Distrobox for some reason doesn’t provide this functionality. So, you need to do a bit of manual magic.
cp /usr/share/applications/iscan.desktop ~/.local/share/applications/
sed -i 's|^Exec=.*|Exec=distrobox enter -- iscan|' ~/.local/share/applications/iscan.desktop
With that, you can finally find Image Scan! for Linux in your start menu.
After all this effort to get it running, I expected something like Epson’s Windows application. Instead, I was faced with a barely functional application. Definitely not satisfactory.
But, before I went on to creating a Windows dual boot, I decided to check if Flatpak had something to offer. And, wouldn’t you know it, somebody had already packaged Epson Scan 2. While still not really equivalent to its Windows counterpart, this one was actually good enough for my use case. And it could be installed without any trickery.
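Since I never remember exact Flatpak application IDs, treat this as a sketch - search first and install whatever ID the search returns:

flatpak search "epson scan"
flatpak install flathub <application-id-from-the-search>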