Processing Data on CAPsMAN


As I was going over the changelog for Mikrotik’s RouterOS 7.21, I noticed a new functionality: “On CAPsMAN Data Processing”. Well, when I say new, it’s actually functionality that was available on 6.x and earlier. With RouterOS 7.x, however, it was gone. Since I used it, I did try to resist as long as I could, but eventually I had to cry myself through the RouterOS 7 upgrade.

But, let’s rewind a bit. What does this feature actually allow? To answer that, we need to understand how CAPsMAN on Mikrotik “normally” processes wireless traffic.

CAPsMAN is Mikrotik’s way of allowing centralized control over multiple wireless devices. In my case, that meant controlling two different access points from my (non-wireless) router. When doing so, you do essentially all configuration on the router (the CAPsMAN), and the wireless devices (the CAPs) get their configuration automatically updated to match. CAP devices essentially become just fancy wireless switches without much logic beyond (admittedly quite a few) customizations.

As with switches, that means anything on the same medium, i.e., the same wireless network, gets to communicate with everything else. Of course, you can isolate users between networks, but there is no real way to centrally allow some users to communicate with others and not the rest. I am lying a bit here - you can always set up local firewall rules for traffic between networks, but it gets annoying to keep that config on multiple devices. I mean, CAPsMAN was there to centralize config, not to leave us in the same mess we were in before.

If you were willing to sacrifice some speed, RouterOS 6.x allowed you to redirect ALL wireless traffic to your CAPsMAN. This ensured your router saw each packet, so you got to use the firewall on your router - even for devices connected to the same network on the same access point. This made the configuration truly centralized.
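In 6.x terms, this redirection was controlled by the datapath’s local-forwarding setting. A minimal sketch of that configuration might look like the following (the datapath and bridge names here are illustrative; the 7.21 incarnation lives in the new wifi configuration, so check the current docs for the exact equivalent):

```routeros
# RouterOS 6.x CAPsMAN sketch: with local-forwarding=no, CAPs tunnel every
# wireless frame back to the CAPsMAN device, so its firewall sees all traffic.
/caps-man datapath
add name=forward-to-capsman bridge=bridge1 local-forwarding=no
```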

But, those more familiar with enterprise setups might say I am doing it wrong. And yes, this is not a feature for places where you want complete isolation and control. This is more of a control-light approach.

In enterprise networks, the guest network will be completely isolated - usually using VLANs. And it should be. In my network, I isolate the guest network using the firewall. That is way less secure, as you’re always one error away from a complete mess. But for a home network where even guests are reasonably trusted, it is a small price to pay for flexibility. For example, allowing guests to access the rest of the home computers for the purpose of a local LAN game becomes trivial.

This is not a feature for professionals and strict environments. But it simplifies home network design a lot. And finally, the RouterOS 7 line brings that feature back.

Sudo Can Asterisk

With the sudo tool getting its Rust variant, it was bound to run, sooner or later, into an incompatibility that is in the eye of the beholder. That dubious honor fell onto the pwfeedback setting.

The most complete reporting was already done by Brodie Robertson, so watch that video for details. Suffice it to say, I am firmly in the camp of those who believe the new behavior is correct. Let them have asterisks!

But, if you don’t want to wait for the bright future and you want this password echo behavior now, it is really easy to achieve.

echo "Defaults pwfeedback" | sudo tee /etc/sudoers.d/pwfeedback >/dev/null
sudo chmod 440 /etc/sudoers.d/pwfeedback
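If you would rather dry-run the same two steps in a scratch location first (the path below is purely illustrative), you can stage the drop-in, inspect it, and only then copy it into /etc/sudoers.d/ - optionally validating it beforehand with `sudo visudo -cf <file>`:

```shell
# Stage the sudoers drop-in in a scratch directory (illustrative path) so its
# content and permissions can be checked before anything under /etc changes.
mkdir -p /tmp/sudoers-staging
printf 'Defaults pwfeedback\n' > /tmp/sudoers-staging/pwfeedback
chmod 440 /tmp/sudoers-staging/pwfeedback
cat /tmp/sudoers-staging/pwfeedback
```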

Now you can experience the future today.

Upgrading Inventree Postgres to 17

Due to my move, I haven’t really played with electronics lately. However, that was about to change. So I started InvenTree, my electronics inventory handling tool, only to be greeted by an error. The darn thing was not working because it had upgraded to 1.2.x, which requires Postgres 14 at minimum, and I had Postgres 13. Well, I figured I’d simply upgrade the database. Let the yak shaving begin!

Before I started with anything, I pinned my InvenTree to 1.1.12 (the last version to use Postgres 13) and made a backup of my Docker containers and data. Then, the first stop was InvenTree’s migration instructions. The instructions are clearly written, and I am almost positive they worked before. Unfortunately, since I procrastinated, they no longer worked. I tried a few variations, but each ended in a transaction_timeout issue:

pg_restore: error: could not execute query: ERROR:  unrecognized configuration parameter "transaction_timeout"

After investigating a bit, I found out that my migration from 13 to 14 might be slightly complicated by the fact that InvenTree dumped the database using binaries for Postgres 17. Again, this wouldn’t have been an issue if I hadn’t been lazy with my upgrades. However, even when I tried to go directly to 17, I ended up with a seemingly empty database. Yes, the database wasn’t really empty, but does it really matter if nothing appears in the UI?
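This kind of version mismatch is easy to spot in a plain-format dump, because pg_dump records both the server version and its own client version as header comments. The snippet below fabricates a miniature stand-in for such a header purely for illustration (the version numbers are made up); running the same grep against a real dump file shows the same two lines:

```shell
# Fabricate the first few lines of a plain-format pg_dump file; a real dump
# begins with these "Dumped ..." comments recording both version numbers.
cat > /tmp/demo_dump.sql <<'EOF'
--
-- PostgreSQL database dump
--
-- Dumped from database version 13.16
-- Dumped by pg_dump version 17.0
EOF
grep "^-- Dumped" /tmp/demo_dump.sql
```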

So I decided to go a different route - export followed by import. It just makes sense that it would work. But no, the restoration always resulted in a key duplication issue:

psycopg.errors.UniqueViolation: duplicate key value violates unique constraint "common_inventreesetting_key_key"

After messing with it for a while to no avail, I decided to do the most straightforward thing possible. Why not just dump and restore the database manually, without involving InvenTree? The procedure that ended up working for me was as follows:

  1. At first, I just dumped the database while still running InvenTree 1.1.12 on top of Postgres 13.
docker exec -it inventree-db pg_dump -U pguser inventree > inventree_backup.sql
  2. With that out of the way, I stopped the containers.
docker compose down
  3. Delete the old database copy so we start clean. Remember to make the backup beforehand.
rm -rf data/pgdb/
  4. Edit compose.yaml and bump Postgres to 17.
  5. Bring up the containers and let them create the new database (once it is done, press d to "detach").
docker compose up
  6. Stop the InvenTree services, in my case inventree-server and inventree-worker.
docker compose stop inventree-server inventree-worker
  7. Drop the database and recreate it empty.
docker exec -it inventree-db psql -U pguser -d postgres -c "DROP DATABASE inventree;"
docker exec -it inventree-db psql -U pguser -d postgres -c "CREATE DATABASE inventree;"
  8. Restore from the backup.
docker exec -i inventree-db psql -U pguser inventree < ./inventree_backup.sql
  9. Start the InvenTree services back up.
docker compose start inventree-server inventree-worker
  10. Now upgrade to the 1.2.x version, in my case by setting INVENTREE_TAG=stable in the .env file.

Since we kept the same version of InvenTree for steps 1-9, we avoided the need for a schema update. InvenTree was not aware of any change, and Postgres could simply upgrade its data to version 17. Once we were at version 17, InvenTree’s automatic upgrade scripts knew how to handle the version bump from 1.1.12 to 1.2.3, as Postgres 17 is supported by both.

Now, onto getting some soldering done.

GoAccess for Caddy

My current web server setup kinda grew out of my WordPress setup from years back. Thanks to Ansible, I can redeploy it quickly but, in reality, it’s just a bunch of stuff thrown together because I needed it at one time or another. Add to that me running multiple websites on the same server, and you have a bit of a mess. The first step in cutting through all that mess was placing it all behind a load balancer running in Docker.

I won’t go too deep into why exactly I opted for Caddy. I suspect nginx would do nicely, and even Apache might have worked. But, after a bit of testing and playing with CertBot integration, I decided Caddy was the best match for me.

The next step was to ensure I had some visibility into what was going on, and there I found GoAccess. But the guides I found didn’t really work. Some had issues with the log format, some had issues with WebSockets, and some even had issues with syntax.

After a bit of twiddling, I ended up with the following compose.yaml.

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
    volumes:
      - ./config/:/etc/caddy/
      - ./logs/:/var/log/caddy/
      - ./stats/:/var/www/goaccess/:ro

  stats:
    image: allinurl/goaccess:latest
    container_name: stats
    volumes:
      - ./logs/:/var/log/caddy/:ro
      - ./stats/:/var/www/goaccess/
    ports:
      - 7890:7890
    command: "/var/log/caddy/access.log --log-format=CADDY -o /var/www/goaccess/index.html --real-time-html --ws-url=wss://stats.example.com:443/ws --port=7890"

The Caddy setup is quite straightforward. It uses three volumes: one for the config, one for the output logs, and the last one for viewing stats (read-only).

The stats themselves are set up similarly, this time with only two volumes needed. Logs are read from logs (read-only), and the web pages are written into stats. The command being run is one allowing for real-time monitoring, and that requires WebSocket support.

Caddy configuration would look something like this:

example.com {
    log {
        output file /var/log/caddy/access.log
        format json
    }
    respond "Hello World!"
}

stats.example.com {
    root * /var/www/goaccess
    file_server
    reverse_proxy /ws stats:7890
}

And this setup will allow you to see stats as they come in.
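One optional tweak worth considering: by default, the real-time data lives only in memory, so the statistics reset whenever the stats container restarts. GoAccess supports on-disk persistence via its --persist and --restore flags, with --db-path selecting the storage location. A variant of the command line from the compose file above might look like this (the db/ subdirectory is my assumption; any writable path inside the container works):

```yaml
    command: "/var/log/caddy/access.log --log-format=CADDY -o /var/www/goaccess/index.html --real-time-html --ws-url=wss://stats.example.com:443/ws --port=7890 --persist --restore --db-path=/var/www/goaccess/db/"
```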

In-Wall Doorbell Transformer


One thing really puzzling to me is how undefined the location and mounting of the doorbell transformer are in US dwellings. Not only does it have no specified location, but there is also no real mounting specification. Yes, the NEC specifies that it must be separated from high-voltage wires, but other than that it’s a free-for-all. I’ve seen 1/2" hole, wall box edge screw, and free-style two-screw mounting as the most common examples, but you really cannot know for sure until you open it up.

For example, my transformer has a nice plate for a square edge mount, but the transformer itself was not mounted to it. Assuming it was ever mounted properly, by virtue of sticking out, it eventually got dislodged. And don’t even get me started on the low-voltage wires just entering the drywall willy-nilly.

Since I wanted to upgrade the transformer anyhow, I decided to clean this up. But surprisingly, there are still no real off-the-shelf solutions for this. Almost everything I found assumes the transformer is either hanging off the wall or hanging in the wall. It took me a while, but I think I found a reasonable solution for my use case.

First of all, I wanted it all enclosed in a box. Since the NEC forbids high-voltage (110 V in this case) wires next to low-voltage ones, I needed one box with multiple compartments. The Southwire MSBMMT3G 3-gang box is a rare one that fits my needs. The AC side was not actually the problem, but most other dual-voltage boxes have one gang for the high-voltage side and one gang for the low-voltage side. This wouldn’t do in my case, since my transformer is wider than a single “gang” and I really wanted it fully enclosed to avoid “fell into the wall” accidents. It’s not ideal, mind you, since the wire clamps get in the way, but it was the best I found.

The next task was selecting the transformer. My existing one was 16 V, so I opted to go with the same. Due to the doorbell, I needed a rating of at least 30 VA, and that finally drove me toward the Maxdot 16 V 30 VA. It has 1/2" hole mounting, and it fits into my selected box with a bit of space to spare.
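As a rough sanity check on that sizing (the chime’s actual draw is my assumption here, not a measured value), the VA rating simply bounds the current the transformer can continuously supply at its rated voltage:

```latex
% Current headroom at the rated voltage:
I_{\max} = \frac{S}{V} = \frac{30\,\mathrm{VA}}{16\,\mathrm{V}} \approx 1.9\,\mathrm{A}
```

That leaves comfortable headroom over what a typical mechanical chime draws.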

Since I have a 3D printer and a voltage monitor display, my thoughts immediately went toward making a custom cover. One of the more annoying steps in troubleshooting a failing transformer is measuring its voltage, and with a voltage monitor, that would be trivial. And it would look cool.

I went as far as designing the cover and printing it out before remembering the NEC rules. Any cover must also be UL listed and certified for the purpose. PLA, being both easily malleable at elevated temperatures and fairly flammable, is definitely not fit for that purpose. Thus, I ended up with shattered dreams and a plain 3-gang cover.