Xeoma in Docker (but Not in Kubernetes)

After running my own homebrew solution involving ffmpeg and curl scriptery for a while, I decided it was time to give up on that and go with a proper multi-camera viewing solution.

I had narrowed my desired setup to a few usual suspects: ZoneMinder, Shinobi, BlueIris, and one that I hadn’t heard about before: Xeoma.

I wanted something that would record (and archive) 2 cameras on my local network, could run in Kubernetes, and would allow for a mobile application some time down the road.

BlueIris was an immediate no-go for me as it’s Windows-only software. There’s a Docker version of it, but it messes with Wine. And one does not simply mess with Wine.

I did consider ZoneMinder and Shinobi, but both had setups that were way too complex for my mini Kubernetes (Alpine K3s). Mind you, there were guides out there, but none of them got me to a working setup without a lot of troubleshooting. And even when I got them running, a bunch of issues still lingered. I will probably revisit ZoneMinder at some point in the future, but I didn’t have enough time to properly mess with it.

That left Xeoma. While not a free application, I found it cheap enough for my use case. Most importantly, while updates were not necessarily free, all licenses were perpetual. There’s no monthly fee unless you want to use their cloud.

Xeoma’s instructions were okay, but not specific to Kubernetes. However, once you figure out how to install something in Docker, it’s trivial to get it running in Kubernetes. Here is my .yaml file:

---
apiVersion: apps/v1
kind: Deployment

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: xeoma
  template:
    metadata:
      labels:
        app: xeoma
    spec:
      containers:
      - name: xeoma
        image: coppit/xeoma:latest
        env:
        - name: VERSION
          value: "latest"
        - name: PASSWORD
          value: "changeme"
        volumeMounts:
          - name: config-volume
            mountPath: /config
          - name: archive-volume
            mountPath: /archive
      volumes:
        - name: config-volume
          hostPath:
            path: /srv/xeoma/config/
            type: Directory
        - name: archive-volume
          hostPath:
            path: /srv/xeoma/archive
            type: Directory

---
apiVersion: v1
kind: Service

metadata:
  namespace: default
  name: xeoma
  labels:
    app: xeoma

spec:
  type: LoadBalancer
  selector:
    app: xeoma
  ports:
  - name: server
    protocol: TCP
    port: 8090
    targetPort: 8090
  - name: web
    protocol: TCP
    port: 10090
    targetPort: 10090

And yes, this might not be the best setup - especially using directory volume mounts - but I find it very useful in my home lab. For example, the backup becomes just a trivial rsync affair.
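For example, a backup pass could be a sketch as simple as the one below; the destination /mnt/backup/xeoma is my placeholder here, not a path from the setup above.

```shell
# Minimal backup sketch, assuming the hostPath layout above.
# -a preserves permissions and timestamps; --delete mirrors removals,
# so the destination stays an exact copy of /srv/xeoma.
rsync -a --delete /srv/xeoma/ /mnt/backup/xeoma/
```

A cron entry pointing at a script like this is all the backup automation this setup needs.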

With Kubernetes running, my next task was to select a license level. While there is a great licensing overview page, it still took me probably more than half an hour to finally select a version.

Free and Starter were immediately discounted due to their archive retention only going up to 5 days, and I wanted much more. The limit of 3 modules is not hugely problematic for my case, but later I found that might be a bit too low (due to how chaining works) even for me. I likewise removed Pro from consideration as it was way more expensive and it actually didn’t offer anything that I needed for my monitoring setup.

So my decision was between Lite and Standard. As I only needed 2 cameras at this time, Lite made a bit more sense. Out of things I was interested in, you do lose user profiles (i.e., everybody logs in as the same user) and the module for issue detection (e.g., camera offline) is strangely missing. But those weren’t deal breakers.

Do note that Lite also doesn’t offer upgrades to the software. The version you have is the version you’re stuck with. For a professional setup, I would definitely go with Standard, but again, for my home use case, I don’t need updates all the time.

So, I got the key, plugged it into the software, played a bit, decided to restart the server, and… my license was gone. One thing I didn’t notice in trial mode was that the license was tied to the MAC address of the pod the software was running on. And the pod gets a new MAC address each time you restart it.

I tried quite a few tricks to make the MAC address stick: manually setting the address in /etc/network/interfaces, messing with ifconfig, docker-in-docker sidecar… No matter what I did, I couldn’t get the licensing to work in combination with Kubernetes.

And therein lies the danger of any licensing. If you are too strict, especially in a virtual world, real users get impacted while crackers are probably just fine…

In their defense, you can also get demo licenses. After I figured out what the issue was, having 10 demo licenses to mess with allowed me to play with the system a bit more than I would have been able to with my perpetual license. Regardless, I was defeated - Kubernetes was not to be. I strongly recommend obtaining demo licenses if you have any unusual setup.

Regardless of the failed Kubernetes setup, I also had good old Docker on the same machine. With a few extra line items, that one worked wonderfully. My final Docker setup was the following command:

docker run --name=xeoma \
  -d \
  -p 8090:8090 \
  -p 10090:10090 \
  -v /srv/xeoma/config:/config \
  -v /srv/xeoma/archive:/archive \
  -e VERSION=https://felenasoft.com/xeoma/downloads/2023-08-10/linux/xeoma_linux64.tgz \
  -e PASSWORD=admin \
  --hostname=xeoma \
  --mac-address 08:ae:ef:44:26:57 \
  --restart on-failure \
  coppit/xeoma

Here, -d ensures that the container is “daemonized” and, as such, goes into the background.

Exposing port 8090 is mandatory for setup, while the web interface running on 10090 can be omitted if one doesn’t plan to use it. I decided to allow both.

The directory setup is equivalent to what I planned to use with Kubernetes. I simply expose both the /config and /archive directories.

Passing the URL as the VERSION environment variable is due to Lite not supporting upgrades. Doing it this way ensures we always get the current version. Of course, readers from the future will need to find their URL on the History of changes webpage. Changing the PASSWORD environment variable is encouraged.

In order for licensing to play nicely, we need the --mac-address parameter set to a fixed value. The easiest way forward is just generating one randomly. And no, this is not the MAC address connected to my license. :)
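One way to roll such a random address, a small sketch assuming bash; the 02: prefix keeps it in the locally administered range, so it cannot collide with a real vendor’s MAC:

```shell
# Generate a random, locally administered MAC address; the leading
# 02 sets the locally-administered bit, avoiding clashes with real NICs.
printf '02:%02x:%02x:%02x:%02x:%02x\n' \
  $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
  $((RANDOM % 256)) $((RANDOM % 256))
```

Generate it once, note it down next to your license key, and reuse it for every future docker run.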

For restarts, I found the on-failure setting works best for me, as it will start the container when the system comes up, in addition to restarting it in case of errors.

And lastly, the Docker image coppit/xeoma is given as a blueprint for our setup.

And this Docker solution works nicely for me. Your mileage (and needs) may vary.

Disabling Caps Lock Under Linux

Different keyboard layouts on different laptops bring different annoyances. But there is one key that annoys me on any keyboard: CAPS LOCK. There is literally no reason for that key to exist. And yes, I am using literally appropriately here. The only appropriate action is to get rid of it.

If you’re running any systemd-enabled Linux distribution, that is easy enough. My approach is as follows:

echo -e "evdev:atkbd:*\n KEYBOARD_KEY_3a=f15" \
  | sudo tee /etc/udev/hwdb.d/42-nocapslock.hwdb

To apply, either reboot the system or reload with udevadm:

sudo udevadm -d hwdb --update
sudo udevadm -d control --reload
sudo udevadm trigger

Congrats, your keyboard now treats Caps Lock as F15 (aka the highest F key you can assign keyboard shortcuts to in Gnome settings). Of course, you can select any other key of your liking. For ideas, take a look at the systemd GitHub repository. Setting it to nothing (i.e., reserved) is a valid choice as well.


PS: If you want to limit the change to just your laptop (e.g., if you’re propagating changes via Ansible and you don’t want to touch your desktop), you can check the content of /sys/class/dmi/id/modalias for your computer’s IDs. Then you can narrow your match appropriately. For example, limiting the change to my Framework 13 laptop would look something like this:

evdev:atkbd:dmi:bvn*:bvr*:bd*:br*:svnFramework:*
 KEYBOARD_KEY_3a=f20
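Since the hwdb match is just a glob against your machine’s modalias (prefixed with evdev:atkbd:), you can sanity-check the dmi part of a pattern with a plain shell glob before rebooting. The modalias string below is fabricated for illustration, not real IDs:

```shell
# Check whether the dmi part of the hwdb pattern would match a given
# modalias; the string here is made up for illustration only.
modalias='dmi:bvnINSYDECorp:bvr03.05:bd01/01/2024:br3.5:svnFramework:pnLaptop13'
case "$modalias" in
  dmi:bvn*:bvr*:bd*:br*:svnFramework:*) echo "pattern matches" ;;
  *) echo "no match" ;;
esac
```

Swap in the content of your own /sys/class/dmi/id/modalias to test the exact pattern you plan to ship.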

PPS: In case Caps Lock is not the 3a key on your computer, you might need to adjust the files appropriately. To figure out which key it is, run evtest. When you press Caps Lock, you’ll get something like this:

-------------- SYN_REPORT ------------
type 4 (EV_MSC), code 4 (MSC_SCAN), value 3a
type 1 (EV_KEY), code 58 (KEY_CAPSLOCK), value 0

The value you want is the one after MSC_SCAN.
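Once you have the scan code, the hwdb line is just KEYBOARD_KEY_&lt;scan code&gt;=&lt;replacement&gt; — a tiny sketch of composing it, using f15 as in the main example:

```shell
# Compose the hwdb key line from an evtest scan code; note the
# mandatory leading space on KEYBOARD_KEY_ lines.
scancode=3a
printf ' KEYBOARD_KEY_%s=f15\n' "$scancode"
```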


PPPS: Another way to debug the keyboard is by using libinput (part of the libinput-tools package):

sudo libinput debug-events --show-keycodes

PPPPS: And yes, you can remap other keys too. F1 is my second “favorite”, close after Caps Lock.

Installing Windows Onto Framework Expansion Card

While I use Linux as the primary OS of choice on my Framework Laptop, I still need Windows from time to time. And yes, a virtual machine is usually sufficient, but there is one scenario where Windows is much better - gaming.

First of all, this setup is not limited to the Framework Expansion Card. You can get it working on pretty much any USB drive these days. However, there is a difference between “can” and “should”. Most notably, most USB drives out there will not actually give you enough raw speed to comfortably run Windows. You need something with a bit more oomph, and both Framework expansion SSD cards fit this nicely.

The trick to getting it all done is using Rufus and installing Windows in To Go mode. This retains much of the normal Windows behavior, but it also improves the handling of what is essentially just a USB drive (e.g., coping with it being unplugged while running). Default Rufus settings are actually good here; just make sure to select “Windows To Go”, and everything else will be as normal.

Lastly, while I do love encryption, TPM is a slight annoyance in many scenarios where you might end up moving your installation around. Thus, while TPM is available on the Framework laptop, I wanted my BitLocker not to make any use of it. I found editing Group Policy settings using these steps works for me.

  1. Open gpedit.msc.
  2. Navigate to Computer Configuration → Administrative Templates → Windows Components → BitLocker Drive Encryption → Operating System Drives.
  3. Require additional authentication at startup:
    • Enabled.
    • Allow BitLocker without a compatible TPM: Checked (was already)
    • Configure TPM startup: Do not allow TPM
    • Configure TPM startup PIN: Require startup PIN with TPM
    • Configure TPM startup key: Do not allow startup key with TPM
    • Configure TPM startup key and PIN: Do not allow startup key and PIN with TPM
  4. Allow enhanced PINs for startup:
    • Enabled
  5. Configure use of passwords for operating system drives:
    • Enabled
    • Configure password complexity for operating system drives: Allow password complexity (already)

Now on to playing some games. :)

Unreal Tournament 2004 Server on Kubernetes

From time to time, I play Unreal Tournament 2004 with my kids. The aging game, older than either of them, is no obstacle to having fun. However, the inability to host a networked game is. As Windows updates and firewall rules change, we end up figuring out who gets to host the game every few months. Well, no more!

To solve my issues, I decided to go with Laclede’s LAN Unreal Tournament 2004 Dedicated Freeplay Server. This neat Docker package has everything you need to host games on Linux infrastructure and, as often happens with Docker, is trivial to run. If this is what you need, stop reading and start gaming.

But I wanted to go a small step further and set up this server to run on Kubernetes. Surprisingly, I didn’t find any YAML to do the same, so I decided to prepare one myself.

---
apiVersion: apps/v1
kind: Deployment

metadata:
  namespace: default
  name: ut2004server
  labels:
    app: ut2004server

spec:
  replicas: 1
  selector:
    matchLabels:
      app: ut2004server
  template:
    metadata:
      labels:
        app: ut2004server
    spec:
      containers:
        - name: ut2004server
          image: lacledeslan/gamesvr-ut2004-freeplay:latest
          workingDir: "/app/System"
          command: ["/app/System/ucc-bin"]
          args:
            [
              "server",
              "DM-Antalus.ut2?AdminName=admin?AdminPassword=admin?AutoAdjust=true?bPlayerMustBeReady=true?Game=XGame.XDeathMatch?MinPlayers=2?WeaponStay=false",
              "nohomedir",
              "lanplay",
            ]
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"

---
apiVersion: v1
kind: Service

metadata:
  namespace: default
  name: ut2004server
  labels:
    app: ut2004server

spec:
  type: LoadBalancer
  selector:
    app: ut2004server
  ports:
    - name: game
      protocol: UDP
      port: 7777
      targetPort: 7777
    - name: web
      protocol: TCP
      port: 8888
      targetPort: 8888

Happy gaming!


PS: While the server image has been available for a while now and it seems that nobody minds, I would consider downloading an image copy, just in case. For example, with skopeo:

skopeo copy docker://lacledeslan/gamesvr-ut2004-freeplay:latest docker-archive:gamesvr-ut2004-freeplay.tar

Running Comfast CF-953AX on Ubuntu 22.04

Based on a list of wireless cards supported on Linux, I decided to buy the Comfast CF-953AX, as it should have been supported since Linux kernel 5.19. And the HWE kernel on Ubuntu 22.04 LTS brings me right there. With AliExpress being the only source for that card, it took some time for it to arrive. All that wait for nothing to happen once I plugged it in.

In order to troubleshoot the issue, I first checked my kernel, and it was at the expected version.

Linux 5.19.0-38-generic #39~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC

Then I checked my devices with lsusb, and there it was.

Bus 002 Device 003: ID 3574:6211 MediaTek Inc. Wireless_Device
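If you want just the VID:PID pair out of such a line (handy when scripting the driver steps that follow), a small sed sketch works; the input line here is the one from my output above:

```shell
# Extract the VID:PID pair from an lsusb line.
line='Bus 002 Device 003: ID 3574:6211 MediaTek Inc. Wireless_Device'
echo "$line" | sed -n 's/.*ID \([0-9a-f:]*\).*/\1/p'   # prints 3574:6211
```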

However, checking for the network card using lshw -C network showed nothing. As always, looking things up on the Internet brought a bit more clarity to the issue. Not only was the driver not loaded, but the USB VID:PID combination was also unrecognized. The solution was simple enough: load the driver and teach it the new VID:PID combination.

sudo modprobe mt7921u
echo 3574 6211 | sudo tee /sys/bus/usb/drivers/mt7921u/new_id

Running lshw again found the card.

  *-network
       description: Wireless interface
       physical id: 5
       bus info: usb@2:1
       logical name: wlxe0e1a9389d77
       serial: e0:e1:a9:38:9d:77
       capabilities: ethernet physical wireless
       configuration: broadcast=yes driver=mt7921u driverversion=6.2.0-20-generic firmware=____010000-20230302150956
       multicast=yes wireless=IEEE 802.11

Well, now, to make the changes permanent, we need to teach Linux a new rule:

sudo tee /etc/udev/rules.d/90-usb-3574:6211-mt7921u.rules << EOF
ACTION=="add", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="3574", \
    ENV{ID_MODEL_ID}=="6211", \
    RUN+="/usr/sbin/modprobe mt7921u", \
    RUN+="/bin/sh -c 'echo 3574 6211 > /sys/bus/usb/drivers/mt7921u/new_id'"
EOF

After that, we should update our initramfs.

sudo update-initramfs -k all -u

And that’s it. Our old 22.04 just learned a new trick.