Randomizing Serial Number During MPLAB X Build

Illustration

Quite often, especially when dealing with USB devices, a serial number comes in really handy. One example is my TmpUsb project. If you don’t update its serial number, it will still work when only one device is plugged in. But plug in two, and all kinds of shenanigans will ensue.

This is exactly why I created a script to randomize the serial number in the first place. However, that script had one major fault - it didn’t work under Linux. So, the time came to rewrite the thing and maybe adjust it a bit.

The original script ran as a post-build step in the MPLAB X project and patched the Intel HEX object file. That part I won’t change, as it integrates flawlessly with the programming steps.

The resulting serial number was 12 hexadecimal digits in length (48 bits of random data), and that was probably excessive. Even worse, it led to an overly complicated script - essentially a C# program to patch the hex file after the modification was done, since any randomization always impacted more than one line. As I wanted a solution that could work anywhere Bash can run (even Windows), I decided to make my life easier by limiting any change to a single line.

Well, first things first: to limit the serial number length, I had to figure out how much of a serial number can fit in a single line. Looking at the Intel HEX produced by MPLAB X, we can see that each line carries 16 bytes, which means any serial number intended for USB consumption can be up to 8 characters. However, what if other code pushes the serial number further down the line? Well, then you might get only a single character.

What we need is a way to fix the serial number’s location. The solution is the __at keyword. Using it, we can place our string descriptors wherever we want them. In the TmpUsb example, that would be something like this:

const struct {
    uint8_t bLength;
    uint8_t bDscType;
    uint16_t string[7];
} sd003 __at(0x1000) = {
    sizeof(sd003),
    USB_DESCRIPTOR_STRING,
    { '2','8','4','4','3','4','2' }
};

The whole USB descriptor structure has to fit into 16 bytes so as to limit any subsequent modification to a single line. The first 2 bytes are the length and type, leaving us with 14 bytes for our serial. Since USB likes 16-bit Unicode, this means we have 7 characters to play with. If we stay in the hexadecimal realm, this provides 28 bits of randomization. Not bad, but we can slightly improve on it by expanding the character selection a bit.

That’s where base32 comes in. It’s a nice enough encoding that isn’t case-sensitive and omits the most easily confused characters. And yes, it would take 40 bits to fully utilize base32, but trimming it to 7 characters still leaves you with 35 bits, which is plenty.

How do I get this serial number in an easy way? Well, getting 5 bytes using dd and then passing them through base32 will do.

NEW_SERIAL=`dd if=/dev/urandom bs=5 count=1 2>/dev/null | base32 | cut -c 1-7`
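The “mangling” mentioned below is just expanding each ASCII character into the little-endian UTF-16 byte pair that appears in the hex file. A sketch of how that might look (variable names are mine, not necessarily the script’s):

```shell
# Expand an ASCII serial into little-endian UTF-16 hex pairs as they
# appear in the Intel HEX file (e.g. '2' -> 0x0032 -> "3200").
SERIAL="2844342"
SERIAL_UNICODE=""
for ((i = 0; i < ${#SERIAL}; i++)); do
    CHAR="${SERIAL:$i:1}"
    SERIAL_UNICODE="$SERIAL_UNICODE$(printf '%02X00' "'$CHAR")"
done
echo "$SERIAL_UNICODE"
```

The same expansion applied to the freshly generated serial gives us the replacement string for sed.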

If we pass the non-random serial number in, with some mangling to expand it into its 16-bit form, it is trivial to swap it with the new one using sed:

LINE=`grep "$SERIAL_UNICODE" "$INPUT"`
NEW_LINE=`echo -n "$LINE" | sed "s/$SERIAL_UNICODE/$NEW_SERIAL_UNICODE/g"`
sed -i "s/$LINE/$NEW_LINE/g" "$INPUT"

But no, this is no good. Changing the content of a line invalidates the checksum that comes at its end. To make this work, we need to adjust that checksum. Thankfully, the checksum is just a sum of all preceding bytes, two’s-complemented as a last step. Something like this:

CHECKSUM=0
NEW_NEW_LINE=":"
for ((i = 1; i < $(( ${#NEW_LINE} - 2 )); i += 2)); do
    BYTE_HEX="${NEW_LINE:$i:2}"
    NEW_NEW_LINE="$NEW_NEW_LINE$BYTE_HEX"
    BYTE_VALUE=$(printf "%d" 0x$BYTE_HEX)
    CHECKSUM=$(( (CHECKSUM + BYTE_VALUE) % 256 ))
done
CHECKSUM_HEX=`printf "%02X" $(( (256 - CHECKSUM) % 256 ))`
NEW_NEW_LINE="$NEW_NEW_LINE$CHECKSUM_HEX"
sed -i "s/$LINE/$NEW_NEW_LINE/g" "$INPUT"
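As a sanity check, the same arithmetic can be run against a known-good record - the sample line below is the classic example from the Intel HEX format documentation, not a TmpUsb line:

```shell
# Compute the checksum of a known Intel HEX record; the result should
# match the last two characters of the line (here: 1E).
LINE=":0300300002337A1E"
CHECKSUM=0
for ((i = 1; i < ${#LINE} - 2; i += 2)); do
    CHECKSUM=$(( (CHECKSUM + 16#${LINE:$i:2}) % 256 ))
done
CHECKSUM_HEX=$(printf "%02X" $(( (256 - CHECKSUM) % 256 )))
echo "$CHECKSUM_HEX"
```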

Once all this is combined into a script, we can call it by giving it a few arguments - most notably, the project directory (${ProjectDir}), the image path (${ImagePath}), and our predefined serial number (2844342):

bash "${ProjectDir}/../package/randomize-usb-serial.sh" \
  "${ProjectDir}" \
  "${ImagePath}" \
  "2844342"

The script is available for download here, but you can also see it in action as part of TmpUsb.

MPLABX and the Symbol Lookup Error

After performing the latest Ubuntu upgrade, my MPLAB X v6.15 installation stopped working. It would briefly display the splash screen and then crash. When attempting to run it manually, I encountered a symbol lookup error in libUSBAccessLink_3_38.so.

$ /opt/microchip/mplabx/v6.15/mplab_platform/bin/mplab_ide

/opt/microchip/mplabx/v6.15/sys/java/zulu8.64.0.19-ca-fx-jre8.0.345-linux_x64/bin/java: symbol lookup error: /tmp/mplab_ide/mplabcomm4864006927221691126/mplabcomm5312997795113373971libUSBAccessLink_3_38.so: undefined symbol: libusb_handle_events_timeout_completed

Since there were a few links to the old MPLAB directory, I cleaned up /usr/lib a bit, but that didn’t help. What did help was removing the older libusb altogether.

sudo rm /usr/local/lib/libusb-1.0.so.0

With that link out of the way, everything started working once again.
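If you hit a similar error, it may help to first list every libusb copy the dynamic linker knows about before deleting anything (the nm check is optional, and the path in its comment is just an example):

```shell
# Show all registered libusb-1.0 copies; duplicates in /usr/local/lib
# alongside /usr/lib are prime suspects for symbol lookup clashes.
ldconfig -p | grep libusb-1.0 || echo "no libusb-1.0 registered"
# To check whether a given copy exports the missing symbol:
#   nm -D /usr/local/lib/libusb-1.0.so.0 | grep handle_events_timeout
```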

Overthinking the LED Blinking

The very first thing I add to every piece of hardware I make is an LED. It not only signals that the device is on, but you can also use it to show processing or even do some simple debugging. What I often go for is a simple resistor+LED pair. And here is where the first decision comes in - do I go for high-side or low-side drive?

Illustration

High side drive is when your output pin goes into the anode of the LED. The exact resistor location doesn’t really matter, and either before or after the LED results in the same behavior. When your pin goes high, the LED lights up. When the pin goes low, the LED turns off.

Illustration

Low side drive is pretty much the same idea but with logic reversed. When the pin goes low, the LED turns on, while the pin going high turns it off.

With both being essentially the same, why would you select one over the other? Honestly, there is no difference worth mentioning. It all comes down to personal preference and/or what is easier to route on the PCB.

Illustration

For open-drain LED control, we make use of the tri-state output available on most microcontrollers (high-Z state simulated using a switch). In addition to the standard low and high states, we have an additional one sometimes called high-Z (or high impedance). On a Microchip PIC, this is done by manipulating the pin’s TRIS bit. If TRIS is 1 (high-Z), the LED will be off. If TRIS is 0 and the output is low (the only valid open-drain state), the LED has a path to ground and lights up.

This is quite similar to the previously mentioned low-side drive and has pretty much the same effect, just with a slightly different way of controlling the pin. I would argue that both are essentially the same if your circuit uses just a single voltage. Where things change is when you have different voltages around, as open-drain will work for driving an LED on any rail (ok, some limits apply) while low-side drive will not turn it off fully if, e.g., you are controlling a 12V LED with a 5V microcontroller.

But this open-drain setup got me thinking: can I use something similar to have the LED on by default? Why, you wonder? Well, my first troubleshooting step is to see if input voltage is present. My second troubleshooting step is to see if the microcontroller is booting. Quite often, those tasks fall onto two different LEDs. If I could have an LED on by default, it would tell me if there’s any voltage to start with, and then a blinking boot sequence would tell me the microcontroller is alive and programmed.

Illustration

Well, there is one way to do it, and I call it default-on open-drain. An unprogrammed Microchip PIC has its pins in the tri-state (high-Z) configuration. Thus, current will flow through the LED as soon as voltage is applied. However, if the pin goes low, current will flow into the pin instead and the LED will go off. Exactly what we wanted.

Alas, nothing comes without a cost, and here the cost is twofold. The first issue is current consumption. While our “on” state is comparable to what we had with other setups, our “off” state uses even more current. For blinking it might not matter, but it should be accounted for if you keep the LED off for longer durations.

The second potential issue comes if our pin ever goes high, since that will have the LED pull as much current as the pin can provide. While Microchip’s current limiting is usually quite capable, you are exceeding the current limit, and that’s never good for long-term PIC health. And chips that have no current limiting are going to fry outright. It is critical for the functioning of this LED setup that only two states are ever present: low or high-Z.

Illustration

To avoid this danger, we can do the same setup using a two-resistor split. Assuming both resistors together are about the same overall value, in the “on” state this circuit matches what we had before. In the “off” state, we unfortunately increased current even further, but if we ever switch the pin to high, we are fully protected. Even better, this setup allows for three LED states: “on”, “off”, and “bright”. This feature alone might be worth the increased current consumption.

In order to deal with the higher current consumption, we could reorganize the resistors a bit. This circuit behaves the same as the previous one if the microcontroller’s pin is high-Z (or not present). If the pin goes high, the “bright” state gets even brighter, as you’re essentially putting the two resistors in parallel.

Illustration

When the pin goes low, we have those resistors in series, and thus the current consumption, while still present, is not excessively higher than what the “on” state requires. And yes, having made what’s essentially a voltage divider does mean that the LED is never really fully off. However, it goes dark enough that nobody will be able to tell the difference when the blinking starts.

Which one is best?

Well, I still use a bog-standard resistor-LED drive the most (either low-side, high-side, or open-drain). It’s simple, it’s fool-proof, and it definitely has the lowest current consumption among the examples given. However, for boards where remote debugging comes at a premium but space doesn’t allow for a bigger display, I find that having three brightness settings at the cost of a single resistor is quite a useful bargain.

O Brother, Where Art Thou? (Printer Edition)

While most of my server machines run Linux and I daily-drive Ubuntu on my laptop, I still use Windows for some things. For example, scanning and printing have thus far been the exclusive domain of Windows.

Well, not anymore. The time has come to connect my Ubuntu 23.04 desktop to my trusty Brother MFC-J475DW printer/scanner.

I first went to the Brother website and, surprisingly, they provide Linux drivers for what is now an ancient device. And that’s where the pleasant surprise stopped, as the instructions didn’t really work. But, with a few adjustments, it did eventually work out.

First of all, I connect to this printer via the network, and thus you’ll see an IP address in the instructions. I’ll use 192.168.0.10 as an example - if your printer uses a different address, adjust accordingly.

Secondly, you need the driver packages from the Brother website (and yes, I’m not downloading the fax drivers).

Thirdly, we need a few extra packages as prerequisites.

sudo apt-get install lib32z1 lprng cups

For the printer, we need both the LPR and CUPS drivers:

sudo dpkg -i --force-all mfcj475dwlpr-3.0.0-1.i386.deb
sudo sed -i 's|\:lp.*|\:rm=192.168.0.10\\\n\t:rp=lp\\|g' /etc/printcap
sudo systemctl restart lprng

sudo dpkg -i --force-all mfcj475dwcupswrapper-3.0.0-1.i386.deb
lpadmin -p MFC-J475DW -E -v lpd://192.168.0.10/MFC-J475DW -P /usr/share/cups/model/Brother/brother_mfcj475dw_printer_en.ppd

For the scanner, the steps are even simpler:

sudo dpkg -i --force-all brscan4-0.4.11-1.amd64.deb
sudo brsaneconfig4 -a name=MFC-J475DW model=MFC-J475DW ip=192.168.0.10

If all went well, you should be able to print and scan now.
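A quick way to confirm both halves registered is to query the standard CUPS and SANE tools (the queue name matches the lpadmin and brsaneconfig4 commands above; the checks skip quietly if a tool isn’t installed):

```shell
# Ask CUPS whether the print queue exists...
if command -v lpstat >/dev/null 2>&1; then
    lpstat -p MFC-J475DW || echo "printer queue missing"
fi
# ...and ask SANE to enumerate known scanners.
if command -v scanimage >/dev/null 2>&1; then
    scanimage -L || echo "scanner not visible"
fi
```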


PS: Huge props to Brother for providing Linux drivers for pretty much all their devices.

Xeoma in Docker (but Not in Kubernetes)

Illustration

After running my own homebrew solution involving ffmpeg and curl scriptery for a while, I decided it was time to give up on that and go with a proper multi-camera viewing solution.

I had narrowed my desired setup down to a few usual suspects: ZoneMinder, Shinobi, BlueIris, and one I hadn’t heard of before: Xeoma.

I wanted something that would record (and archive) 2 cameras on my local network, could run in Kubernetes, and would allow for a mobile application some time down the road.

BlueIris was an immediate no-go for me as it’s Windows-only software. There’s a docker version of it but it messes with Wine. And one does not simply mess with Wine.

I did consider ZoneMinder and Shinobi, but both had setups that were way too complex for my mini Kubernetes (Alpine K3s). Mind you, there were guides out there but none of them started without a lot of troubleshooting. And even when I got them running, I had a bunch of issues still lingering. I will probably revisit ZoneMinder at one point in the future, but I didn’t have enough time to properly mess with it.

That left Xeoma. While not a free application, I found it cheap enough for my use case. Most importantly, while updates were not necessarily free, all licenses were perpetual. There’s no monthly fee unless you want to use their cloud.

Xeoma’s instructions were okay, but not specific for Kubernetes. However, if one figures out how to install stuff in docker, it’s trivial to get it running in Kubernetes. Here is my .yaml file:

---
apiVersion: apps/v1
kind: Deployment

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: xeoma
  template:
    metadata:
      labels:
        app: xeoma
    spec:
      containers:
      - name: xeoma
        image: coppit/xeoma:latest
        env:
        - name: VERSION
          value: "latest"
        - name: PASSWORD
          value: "changeme"
        volumeMounts:
          - name: config-volume
            mountPath: /config
          - name: archive-volume
            mountPath: /archive
      volumes:
        - name: config-volume
          hostPath:
            path: /srv/xeoma/config/
            type: Directory
        - name: archive-volume
          hostPath:
            path: /srv/xeoma/archive
            type: Directory

---
apiVersion: v1
kind: Service

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  type: LoadBalancer
  selector:
    app: xeoma
  ports:
  - name: server
    protocol: TCP
    port: 8090
    targetPort: 8090
  - name: web
    protocol: TCP
    port: 10090
    targetPort: 10090

And yes, this might not be the best setup - especially using directory volume mounts - but I find it very useful in my home lab. For example, the backup becomes just a trivial rsync affair.

With Kubernetes running, my next task was to select a license level. While there is a great licensing overview page, it still took me probably more than half an hour to finally select a version.

Free and Starter were immediately discounted due to their archive retention only going up to 5 days, and I wanted much more. The limit of 3 modules is not hugely problematic for my case, but I later found that it might be a bit too low (due to how chaining works) even for me. I likewise removed Pro from consideration, as it was way more expensive and didn’t actually offer anything I needed for my monitoring setup.

So my decision was between Lite and Standard. As I only needed 2 cameras at this time, Lite made a bit more sense. Out of things I was interested in, you do lose user profiles (i.e., everybody logs in as the same user) and the module for issue detection (e.g., camera offline) is strangely missing. But those weren’t deal breakers.

Do note that Lite also doesn’t offer upgrades to the software. The version you have is the version you’re stuck with. For a professional setup, I would definitely go with Standard, but again, for my home use case, I don’t need updates all the time.

So, I got the key, plugged it into the software, played a bit, decided to restart the server, and… my license was gone. One thing I couldn’t have noticed in trial mode was that the license was tied to the MAC address of the pod the software was running on. And the pod gets a new MAC address each time you restart it.

I tried quite a few tricks to make the MAC address stick: manually setting the address in /etc/network/interfaces, messing with ifconfig, docker-in-docker sidecar… No matter what I did, I couldn’t get the licensing to work in combination with Kubernetes.

And therein lies the danger of any licensing. If you are too strict, especially in a virtual world, real users get impacted while crackers are probably just fine…

In their defense, you can also get demo licenses. After I figured out what the issue was, having 10 demo licenses to mess with allowed me to play with the system a bit more than I would have been able to with my perpetual license. Regardless, I was defeated - Kubernetes was not to be. I strongly recommend obtaining demo licenses if you have any unusual setup.

Regardless of the failed Kubernetes setup, I also had good old Docker on the same machine. With a few extra line items, that one worked wonderfully. My final setup for Docker was the following command:

docker run --name=xeoma \
  -d \
  -p 8090:8090 \
  -p 10090:10090 \
  -v /srv/xeoma/config:/config \
  -v /srv/xeoma/archive:/archive \
  -e VERSION=https://felenasoft.com/xeoma/downloads/2023-08-10/linux/xeoma_linux64.tgz \
  -e PASSWORD=admin \
  --hostname=xeoma \
  --mac-address 08:ae:ef:44:26:57 \
  --restart on-failure \
  coppit/xeoma

Here, -d ensures that the container is “daemonized” and, as such, goes into the background.

Exposing port 8090 is mandatory for the setup, while the web interface running on 10090 can be omitted if one doesn’t plan to use it. I decided to allow both.

The directory setup is equivalent to what I planned to use with Kubernetes. I simply expose both the /config and /archive directories.

Passing the URL as the VERSION environment variable is due to Lite not supporting upgrades. Doing it this way ensures we always get the current version. Of course, readers from the future will need to find their URL on the History of changes webpage. Changing the PASSWORD environment variable is encouraged.

In order for licensing to play nicely, we need the --mac-address parameter set to a fixed value. The easiest way forward is just generating one randomly. And no, this is not the MAC address connected to my license. :)
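One way to conjure such an address is a quick one-liner that sets the locally-administered bit (a sketch; any method producing a stable, non-clashing value works just as well):

```shell
# Generate a random locally-administered unicast MAC address:
# the 02 prefix sets the locally-administered bit and keeps
# the multicast bit clear, so it won't clash with real hardware.
MAC=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256)))
echo "$MAC"
```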

For restarts, I found the on-failure setting works best for me, as it will start the container when the system comes up, in addition to restarting it in case of errors.

And lastly, the Docker image coppit/xeoma is given as a blueprint for our setup.

And this Docker solution works nicely for me. Your mileage (and needs) may vary.