Overthinking the LED Blinking

The very first thing I add to every piece of hardware I make is an LED. It not only signals that the device is on, but you can also use one to show processing or even do some simple debugging. What I often go for is a simple resistor+LED pair. And here is where the first decision comes - do I go for high-side or low-side drive?

Illustration

High-side drive is when your output pin goes into the anode of the LED. The exact resistor location doesn’t really matter; placing it either before or after the LED results in the same behavior. When your pin goes high, the LED lights up. When the pin goes low, the LED turns off.

Illustration

Low-side drive is pretty much the same idea but with the logic reversed: the output pin goes into the cathode of the LED. When the pin goes low, the LED turns on, while the pin going high turns it off.

With both being essentially the same, why would you select one over the other? Honestly, there is no difference worth mentioning. It all comes down to personal preference and/or what is easier to route on the PCB.

Illustration

For open-drain LED control, we make use of the tri-state output available on most microcontrollers (in the illustration, the high-Z state is simulated using a switch). In addition to the standard low and high states, we have an additional one sometimes called high-Z (or high impedance). On a Microchip PIC, this is done by manipulating the pin’s TRIS bit. If TRIS is 1 (high-Z), the LED will be off. If TRIS is 0 and the output is low (the only valid open-drain state), the LED has a path to ground and lights up.

This is quite similar to the previously mentioned low-side drive and has pretty much the same effect, just with a slightly different way of controlling the pin. I would argue the two are essentially the same if your circuit uses just a single voltage. Where things change is when you have different voltages around, as open-drain will work for driving an LED on any rail (ok, some limits apply) while low-side drive will not turn the LED off fully if, e.g., you are controlling a 12V LED from a 5V microcontroller.
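For illustration, here is a minimal sketch of that control in C. Treat it as an assumption-laden example: XC8-style register names for an enhanced midrange PIC, with the LED’s cathode hanging off RA0; adjust for your part.

#include <xc.h>

// Open-drain emulation: the output latch stays at 0 and only TRIS changes.
void led_init(void) {
    LATAbits.LATA0 = 0;    // latch low; the pin is never driven high
    TRISAbits.TRISA0 = 1;  // start in high-Z: LED off
}

void led_on(void)  { TRISAbits.TRISA0 = 0; }  // pin sinks current: LED lights
void led_off(void) { TRISAbits.TRISA0 = 1; }  // high-Z: no current path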

But this open-drain setup got me thinking: could I use something similar to have the LED on by default? Why, you wonder? Well, my first troubleshooting step is to see if input voltage is present. My second troubleshooting step is to see if the microcontroller is booting. Quite often, those tasks fall onto two different LEDs. If I could have the LED on by default, it would tell me there’s voltage to start with, and the blinking boot sequence would then tell me the microcontroller is alive and programmed.

Illustration

Well, there is one way to do it, and I call it default-on open-drain. An unprogrammed Microchip PIC has its pins in the tri-state (high-Z) configuration, so current will flow through the LED as soon as voltage is applied. However, if the pin goes low, current will flow into the pin instead and the LED will go off. Exactly what we wanted.

Alas, nothing comes without a cost, and here the “cost” is twofold. The first issue is current consumption. While our “on” state is comparable to what we had with the other setups, our “off” state uses even more current. For blinking it might not matter, but it should be accounted for if the LED stays off for longer durations.

The second potential issue comes if our pin ever goes high, since that will have the LED pull as much current as the pin can provide. While Microchip’s current limiting is usually quite capable, you are exceeding the current limit, and that’s never good for long-term PIC health. And chips that have no current limiting at all will fry outright. It is critical for the functioning of this LED setup that only two states are ever present: either low or high-Z.

Illustration

To avoid this danger, we can do the same setup using a two-resistor split. Assuming the two resistors together add up to about the same overall value as before, the “on” state of this circuit matches what we had earlier. In the “off” state, we unfortunately increased the current even further but, if we ever switch the pin high, we are fully protected. Even better, this setup allows for three LED states: “on”, “off”, and “bright”. This feature alone might be worth the increased current consumption.
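To put some rough numbers on it (assuming a 5V rail, a 2V LED, and a 330Ω+330Ω split): “on” draws about (5-2)/660 ≈ 4.5mA through both resistors, “bright” about (5-2)/330 ≈ 9mA through the lower resistor alone, while “off” wastes about 5/330 ≈ 15mA straight into the pin.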

In order to deal with the higher current consumption, we can reorganize the resistors a bit. This circuit behaves the same as the previous one if the microcontroller’s pin is high-Z (or not present at all). If the pin goes high, the “bright” state gets even brighter since you’re essentially putting the two resistors in parallel.

Illustration

When the pin goes low, we have those resistors in series, and thus the current consumption, while still present, is not much higher than what the “on” state requires. And yes, having made what’s essentially a voltage divider does mean the LED is never fully off. However, it goes dark enough that nobody will be able to tell the difference once blinking starts.
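Control-wise, both split-resistor circuits use the same three pin states. A minimal sketch, again assuming XC8-style names with the LED network on RA0:

// Three LED states from a single pin.
void led_on(void)     { TRISAbits.TRISA0 = 1; }                      // high-Z: LED runs off the rail
void led_dark(void)   { LATAbits.LATA0 = 0; TRISAbits.TRISA0 = 0; }  // drive low: LED (mostly) off
void led_bright(void) { LATAbits.LATA0 = 1; TRISAbits.TRISA0 = 0; }  // drive high: LED bright

Setting the latch before flipping TRIS avoids briefly passing through the wrong drive state.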

Which one is best?

Well, I still use a bog-standard resistor+LED drive the most (either low-side, high-side, or open-drain). It’s simple, it’s fool-proof, and it definitely has the lowest current consumption among the examples given. However, for boards where remote debugging is at a premium but space doesn’t allow for a bigger display, I find that having three brightness settings at the cost of a single extra resistor is quite a useful bargain.

O Brother, Where Art Thou? (Printer Edition)

While most of my server machines run Linux and I daily drive Ubuntu on my laptop, I still use Windows for some things. For example, scanning and printing have thus far been in the exclusive Windows domain.

Well, not anymore. Time has come to connect Ubuntu 23.04 desktop to my trusty Brother MFC-J475DW printer/scanner.

I first went to the Brother website and, surprisingly, they provide Linux drivers for what is now an ancient device. And that’s where the pleasant surprise stopped, as the instructions didn’t really work. But, with a bit of adjustment, it did eventually work out.

First of all, I connect to this printer via the network, and thus you’ll see an IP address in the instructions. I’ll use 192.168.0.10 as an example - if your printer uses a different address, adjust accordingly.

Secondly, you need the following packages from Brother’s download page (and yes, I’m not downloading the fax drivers):
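mfcj475dwlpr-3.0.0-1.i386.deb
mfcj475dwcupswrapper-3.0.0-1.i386.deb
brscan4-0.4.11-1.amd64.deb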

Thirdly, we need a few extra packages as a prerequisite:

sudo apt-get install lib32z1 lprng cups

For the printer, we need both the LPR and CUPS drivers:

sudo dpkg -i --force-all mfcj475dwlpr-3.0.0-1.i386.deb
sudo sed -i 's|\:lp.*|\:rm=192.168.0.10\\\n\t:rp=lp\\|g' /etc/printcap
sudo systemctl restart lprng

sudo dpkg -i --force-all mfcj475dwcupswrapper-3.0.0-1.i386.deb
lpadmin -p MFC-J475DW -E -v lpd://192.168.0.10/MFC-J475DW -P /usr/share/cups/model/Brother/brother_mfcj475dw_printer_en.ppd
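
Assuming all went through, a quick sanity check with the standard CUPS tools should show the queue and push a test page out:

lpstat -p MFC-J475DW
lp -d MFC-J475DW /etc/hosts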

For the scanner, the steps are even simpler:

sudo dpkg -i --force-all brscan4-0.4.11-1.amd64.deb
sudo brsaneconfig4 -a name=MFC-J475DW model=MFC-J475DW ip=192.168.0.10
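
If the scanner registered correctly, it should show up when SANE enumerates devices (scanimage is part of the sane-utils package):

scanimage -L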

If all went well, you should be able to print and scan now.


PS: Huge props to Brother for providing Linux drivers for pretty much all their devices.

Xeoma in Docker (but Not in Kubernetes)

Illustration

After running my own homebrew solution involving ffmpeg and curl scriptery for a while, I decided it’s time to give up on that and go with a proper multi-camera viewing solution.

I narrowed the candidates down to a few usual suspects: ZoneMinder, Shinobi, BlueIris, and one that I hadn’t heard about before: Xeoma.

I wanted something that would record (and archive) 2 cameras on my local network, could run in Kubernetes, and would allow for a mobile application some time down the road.

BlueIris was an immediate no-go for me as it’s Windows-only software. There’s a docker version of it but it messes with Wine. And one does not simply mess with Wine.

I did consider ZoneMinder and Shinobi, but both had setups that were way too complex for my mini Kubernetes (Alpine K3s). Mind you, there were guides out there, but none of them got me going without a lot of troubleshooting. And even once I had them running, a bunch of issues still lingered. I will probably revisit ZoneMinder at some point in the future, but I didn’t have enough time to properly mess with it.

That left Xeoma. While not a free application, I found it cheap enough for my use case. Most importantly, while updates were not necessarily free, all licenses were perpetual. There’s no monthly fee unless you want to use their cloud.

Xeoma’s instructions were okay, but not specific to Kubernetes. However, if one figures out how to install stuff in docker, it’s trivial to get it running in Kubernetes. Here is my .yaml file:

---
apiVersion: apps/v1
kind: Deployment

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: xeoma
  template:
    metadata:
      labels:
        app: xeoma
    spec:
      containers:
      - name: xeoma
        image: coppit/xeoma:latest
        env:
        - name: VERSION
          value: "latest"
        - name: PASSWORD
          value: "changeme"
        volumeMounts:
          - name: config-volume
            mountPath: /config
          - name: archive-volume
            mountPath: /archive
      volumes:
        - name: config-volume
          hostPath:
            path: /srv/xeoma/config/
            type: Directory
        - name: archive-volume
          hostPath:
            path: /srv/xeoma/archive
            type: Directory

---
apiVersion: v1
kind: Service

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  type: LoadBalancer
  selector:
    app: xeoma
  ports:
  - name: server
    protocol: TCP
    port: 8090
    targetPort: 8090
  - name: web
    protocol: TCP
    port: 10090
    targetPort: 10090

And yes, this might not be the best setup - especially using directory volume mounts - but I find it very useful in my home lab. For example, the backup becomes just a trivial rsync affair.
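
Assuming the manifest above is saved as xeoma.yaml and the host directories already exist, deployment is just the usual kubectl dance:

kubectl apply -f xeoma.yaml
kubectl get pods -l app=xeoma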

With Kubernetes running, my next task was to select a license level. While there is a great licensing overview page, it still took me probably more than half an hour to finally select a version.

Free and Starter were immediately discounted due to their archive retention going up to only 5 days, and I wanted much more. The limit of 3 modules is not hugely problematic for my case, but I later found that it might be a bit too low (due to how chaining works) even for me. I likewise removed Pro from consideration as it was way more expensive and didn’t actually offer anything that I needed for my monitoring setup.

So my decision was between Lite and Standard. As I only needed 2 cameras at this time, Lite made a bit more sense. Of the things I was interested in, you do lose user profiles (i.e., everybody logs in as the same user), and the module for issue detection (e.g., camera offline) is strangely missing. But those weren’t deal breakers.

Do note that Lite also doesn’t offer upgrades to the software. The version you have is the version you’re stuck with. For a professional setup, I would definitely go with Standard, but again, for my home use case, I don’t need updates all the time.

So, I got the key, plugged it into the software, played a bit, decided to restart the server, and… my license was gone. One thing I couldn’t have noticed in trial mode was that the license is tied to the MAC address of the pod the software runs on. And the pod gets a new MAC address each time you restart it.

I tried quite a few tricks to make the MAC address stick: manually setting the address in /etc/network/interfaces, messing with ifconfig, docker-in-docker sidecar… No matter what I did, I couldn’t get the licensing to work in combination with Kubernetes.

And therein lies the danger of any licensing. If you are too strict, especially in a virtual world, real users get impacted while crackers are probably just fine…

In their defense, you can also get demo licenses. After I figured out what the issue was, having 10 demo licenses to mess with allowed me to play with the system a bit more than I would have been able to with my perpetual license alone. Regardless, I was defeated - Kubernetes was not to be. I strongly recommend obtaining demo licenses if you have any unusual setup.

Regardless of the failed Kubernetes setup, I also had good old docker on the same machine. With a few extra arguments, that one worked wonderfully. My final docker setup was the following command:

docker run --name=xeoma \
  -d \
  -p 8090:8090 \
  -p 10090:10090 \
  -v /srv/xeoma/config:/config \
  -v /srv/xeoma/archive:/archive \
  -e VERSION=https://felenasoft.com/xeoma/downloads/2023-08-10/linux/xeoma_linux64.tgz \
  -e PASSWORD=admin \
  --hostname=xeoma \
  --mac-address 08:ae:ef:44:26:57 \
  --restart on-failure \
  coppit/xeoma

Here, -d ensures that the container is “daemonized” and, as such, goes into the background.

Exposing port 8090 is mandatory for the setup, while the web interface running on 10090 can be omitted if one doesn’t plan to use it. I decided to allow both.

The directory setup is equivalent to what I planned to use with Kubernetes. I simply expose both the /config and /archive directories.

Passing the URL as the VERSION environment variable is due to Lite not supporting upgrades. Doing it this way ensures we always get the current version. Of course, readers from the future will need to find their URL on the History of changes webpage. Changing the PASSWORD environment variable is encouraged.

In order for licensing to play nicely, we need the --mac-address parameter set to a fixed value. The easiest way forward is to just generate one randomly. And no, this is not the MAC address connected to my license. :)
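
If you need one, a bash one-liner can conjure a random address; the 02 prefix marks it as locally administered, so it won’t collide with real hardware:

printf '02:%02x:%02x:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))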

For restarts, I found the on-failure setting works best for me as it will start the container when the system comes up, in addition to restarting it in case of errors.

And lastly, coppit/xeoma is the Docker image the whole setup is based on.
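
Once it’s running, the usual docker tooling is enough to confirm the container is alive and to peek at what it’s doing:

docker ps --filter name=xeoma
docker logs -f xeoma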

And this Docker solution works nicely for me. Your mileage (and needs) may vary.

Disabling Caps Lock Under Linux

Different keyboard layouts on different laptops bring different annoyances. But there is one key that annoys me on any keyboard: CAPS LOCK. There is literally no reason for that key to exist. And yes, I am using literally appropriately here. The only appropriate action is to get rid of it.

If you’re running any systemd-enabled Linux distribution, that is easy enough. My approach is as follows:

echo -e "evdev:atkbd:*\n KEYBOARD_KEY_3a=f15" \
  | sudo tee /etc/udev/hwdb.d/42-nocapslock.hwdb

To apply, either reboot the system or reload with udevadm:

sudo udevadm -d hwdb --update
sudo udevadm -d control --reload
sudo udevadm trigger

Congrats, your keyboard now treats Caps Lock as F15 (aka the highest F key you can assign keyboard shortcuts to in Gnome settings). Of course, you can select any other key of your liking; take a look at the systemd GitHub repository for ideas. Setting it to nothing (i.e., reserved) is a valid choice as well.


PS: If you want to limit the change to just your laptop (e.g., if you’re propagating changes via Ansible and you don’t want to touch your desktop), you can check the content of /sys/class/dmi/id/modalias for your computer’s IDs. Then you can narrow the match appropriately. For example, limiting the change to my Framework 13 laptop would look something like this:

evdev:atkbd:dmi:bvn*:bvr*:bd*:br*:svnFramework:*
 KEYBOARD_KEY_3a=f20
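
The values to match come straight from the modalias string, and dumping yours is a one-liner:

cat /sys/class/dmi/id/modalias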

PPS: In case Caps Lock is not the 3a key on your computer, you might need to adjust the files appropriately. To figure out which key it is, run evtest. When you press Caps Lock, you’ll get something like this:

-------------- SYN_REPORT ------------
type 4 (EV_MSC), code 4 (MSC_SCAN), value 3a
type 1 (EV_KEY), code 58 (KEY_CAPSLOCK), value 0

The value you want is the one after MSC_SCAN.


PPPS: Another way to debug the keyboard is by using libinput (part of the libinput-tools package):

sudo libinput debug-events --show-keycodes

PPPPS: And yes, you can remap other keys too. F1 is my second “favorite”, close after Caps Lock.

Installing Windows Onto Framework Expansion Card

Illustration

While I use Linux as the primary OS on my Framework Laptop, I still need Windows from time to time. And yes, a virtual machine is usually sufficient, but there is one scenario where Windows is much better - gaming.

First of all, this setup is not limited to the Framework Expansion Card. You can get it working on pretty much any USB drive these days. However, there is a difference between “can” and “should”. Most notably, most USB drives out there will not actually give you enough raw speed to comfortably run Windows. You need something with a bit more oomph, and both Framework SSD expansion cards fit this bill nicely.

The trick to getting it all done is using Rufus and installing Windows in To Go mode. This retains much of the normal Windows behavior, but it also improves the handling of what is essentially just a USB drive (e.g., coping with an unplug while the system is running). The default Rufus settings are actually good here; just make sure to select “Windows To Go” and everything else will be as normal.

Illustration

Lastly, while I do love encryption, TPM is a slight annoyance in many scenarios where you might end up moving your installation around. Thus, while a TPM is available on the Framework laptop, I wanted my BitLocker to make no use of it. I found that editing the Group Policy settings using these steps works for me.

  1. Open gpedit.msc.
  2. Navigate to Computer Configuration → Administrative Templates → Windows Components → BitLocker Drive Encryption → Operating System Drives.
  3. Require additional authentication at startup:
    • Enabled
    • Allow BitLocker without a compatible TPM: Checked (was already checked)
    • Configure TPM startup: Do not allow TPM
    • Configure TPM startup PIN: Require startup PIN with TPM
    • Configure TPM startup key: Do not allow startup key with TPM
    • Configure TPM startup key and PIN: Do not allow startup key and PIN with TPM
  4. Allow enhanced PINs for startup:
    • Enabled
  5. Configure use of passwords for operating system drives:
    • Enabled
    • Configure password complexity for operating system drives: Allow password complexity (was already set)
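
Once the drive is encrypted, you can double-check from an elevated prompt that a password protector (and not TPM) is in use; manage-bde ships with Windows:

manage-bde -status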

Now on to playing some games. :)