Linux, Unix, and whatever they call that world these days

Inappropriate Ioctl for Device

After disconnecting a serial USB cable from my Ubuntu Server 20.04, I would often receive an “Inappropriate ioctl for device” error when trying to redirect output to the serial port.

stty -F /dev/ttyACM0 -echo -onlcr
 stty: /dev/ttyACM0: Inappropriate ioctl for device

A quick search yielded multiple results but nothing that actually worked for me. The most promising were restarting udev and a manual driver unbind, but neither really solved anything end-to-end. The only solution seemed to be a reboot.

However, after a bit of playing with unloading drivers, I did find a solution that worked: unload the driver, manually delete the device node, and finally load the driver again.

modprobe -r cdc_acm
rm -f /dev/ttyACM0
modprobe cdc_acm

I am not sure why unloading the driver didn’t remove the device node itself but, regardless, I could finally get it to work without waiting for a reboot.

Killing a Connection on Ubuntu Server 20.04

If you really want to kill a connection on a newer-kernel Ubuntu, there is the ss command. For example, to kill a connection toward 192.168.1.1 with dynamic remote port 40000 you can use the following:

ss -K dst 192.168.1.1 dport = 40000

Nice, quick, and it definitely beats messing with routes and waiting for a timeout. This does assume your kernel was compiled with CONFIG_INET_DIAG_DESTROY (true on Ubuntu).
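
If you’re not sure whether your kernel has that option, it can be checked against the running kernel’s config; a quick sketch:

grep CONFIG_INET_DIAG_DESTROY /boot/config-$(uname -r)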


To get a quick list of established connections for a given port, one can use netstat with a quick’n’dirty grep:

$ netstat -nap | grep ESTABLISHED | grep <port>
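
Since ss is already on newer systems, the same list can be had without netstat; a rough equivalent (the port number below is just an example) would be:

ss -tn state established '( sport = :40000 or dport = :40000 )'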

Cleaning Disk

Some time ago I explained my procedure for initializing disks I plan to use in ZFS pool. And the first step was to fill them with random data from /dev/urandom.

However, FreeBSD’s /dev/urandom is not really a speed monster. If you need something faster but still really secure, you can go with a random AES stream.

openssl enc -aes-128-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | hexdump)" \
    -pbkdf2 -nosalt </dev/zero | dd of=/dev/diskid/DISK-ID-123 bs=1M

Since the key is derived from random data, in theory it should be equally secure but (depending on the CPU) multiple times faster than urandom.
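
If you want to verify the speed difference on your own machine, a rough benchmark (writing to /dev/null, block count is arbitrary) could look like this:

dd if=/dev/urandom of=/dev/null bs=1M count=1024
openssl enc -aes-128-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | hexdump)" \
    -pbkdf2 -nosalt </dev/zero | dd of=/dev/null bs=1M count=1024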

Basic XigmaNAS Stats for InfluxDB

My home monitoring included pretty much anything I wanted to see, with one exception - my backup NAS. You see, I use embedded XigmaNAS for my backup server and getting the Telegraf client onto it is problematic at best. However, who needs the Telegraf client anyhow?

Collecting the stats themselves is easy enough. The basic CPU stats you get from the Telegraf client can usually be read via command line tools. As long as you keep the same tags and fields as what Telegraf usually sends, you can nicely mingle your manually collected stats with what the proper client sends.
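
For reference, each stat ends up as a single line of InfluxDB’s line protocol; a hand-built CPU entry mimicking the tags and fields of Telegraf’s cpu plugin (host name and values here are made up) looks roughly like this:

cpu,cpu=cpu-total,host=nas usage_user=1.5,usage_system=0.8,usage_idle=97.7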

And how do we send them? The protocol Telegraf speaks is essentially just a set of lines pushed using an HTTP POST. Yes, if you have a bit more secure system, it’s probably HTTPS and it might even be authenticated. But it’s still a POST in essence.

And therein lies XigmaNAS’ problem. There is no curl or wget tooling available, and thus sending an HTTP POST on embedded XigmaNAS is not possible. Or is it?

Well, here is the beauty of HTTP - it’s just freaking text over a TCP connection. And the ancient (but still beloved) nc tool is good at exactly that - sending stuff over the network. As long as you can “echo” stuff, you can redirect it to nc and pretend you have a proper HTTP client. Just don’t forget to set the headers.
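
As a minimal sketch of that idea (the InfluxDB host, database name, and metric line are placeholders to adjust for your setup):

BODY='cpu,cpu=cpu-total,host=nas usage_user=1.5,usage_system=0.8,usage_idle=97.7'
printf 'POST /write?db=telegraf HTTP/1.1\r\nHost: influxdb.example.com\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' \
    "${#BODY}" "$BODY" | nc influxdb.example.com 8086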

To cut the story short - here is my script using nc to push statistics from XigmaNAS to my Grafana setup. It’ll send basic CPU, memory, temperature, disk, and ZFS stats. Enjoy.

Mikrotik SNMP via Telegraf

As I moved most of my home to Grafana/InfluxDB monitoring, I got two challenges to deal with. One was monitoring my XigmaNAS servers and the other was properly handling Mikrotik routers. I’ll come back to XigmaNAS in a later post but today let’s see what can be done for Mikrotik.

Well, Mikrotik is a router and essentially all routers are meant to be monitored over SNMP. So, the first step is going to be turning it on from within System/SNMP. You want it read-only and you want to customize the community string. You might also want SHA1/AES authentication/encryption but that has to be configured on both sides and I generally skip it for my home network.

Once you’re done, you can turn on the SNMP input plugin and data will flow. But the data that flows will not include Mikrotik-specific stuff. Most notably, I wanted simple queues. And, once you know the process, it’s actually reasonably easy.

At the heart of SNMP we have OIDs. Mikrotik is really shitty at documenting them, but they do provide a MIB so one can take a look. However, there is an easier approach. Just run print oid for any section, e.g.:

/queue simple print oid
 0
  name=.1.3.6.1.4.1.14988.1.1.2.1.1.2.1
  bytes-in=.1.3.6.1.4.1.14988.1.1.2.1.1.8.1
  bytes-out=.1.3.6.1.4.1.14988.1.1.2.1.1.9.1
  packets-in=.1.3.6.1.4.1.14988.1.1.2.1.1.10.1
  packets-out=.1.3.6.1.4.1.14988.1.1.2.1.1.11.1
  queues-in=.1.3.6.1.4.1.14988.1.1.2.1.1.12.1
  queues-out=.1.3.6.1.4.1.14988.1.1.2.1.1.13.1

This can then be converted into telegraf format, looking something like this:

[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleName"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.2"
  is_tag = true
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleBytesIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.8"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleBytesOut"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.9"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePacketsIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.10"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePacketsOut"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.11"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePCQQueuesIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.12"
[[inputs.snmp.table.field]]
  name= "mtxrQueueSimplePCQQueuesOut"
  oid= ".1.3.6.1.4.1.14988.1.1.2.1.1.13"

Where did I get the names from? Technically, you can use whatever you want, but I usually look them up on oid-info.com. Once you restart the telegraf daemon, data will flow into Grafana and you can chart it to your heart’s desire.
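
Note that the field entries above need to live inside an SNMP input with a table definition; a minimal sketch of the surrounding config (the router address, community string, and measurement name are placeholders) would be something like:

[[inputs.snmp]]
  agents = [ "192.168.1.1" ]
  version = 2
  community = "public"

[[inputs.snmp.table]]
  name = "mikrotikQueues"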

You can see my full SNMP input config for Mikrotik at GitHub.

Monitoring Home Network

While monitoring a home network is not something that’s really needed, I find it always comes in handy. If nothing else, you get to see a lot of nice colors and numbers flying around. For people like me, that’s all the encouragement needed.

Over time I tried many different systems but lately I fell in love with Grafana combined with InfluxDB. Grafana gives a really nice and simple GUI while InfluxDB serves as the database for all the metrics.

I find Grafana hits just the right balance of being simple enough to learn the basics but powerful enough that you can get into advanced stuff if you need it. Even better, it fits right into a small network without any adjustments needed to the installation. Yes, you can make it more complex later, but the starting point is spot on.

InfluxDB makes it really easy to push custom metrics from the command line or literally anything that can speak HTTP, and I find that really useful in a heterogeneous network filled with various IoT devices. While version 2.0 is available, I actually prefer using 1.8 as it’s simpler to set up, lighter on resources (important if you run it in a virtual machine), and it comes without a GUI. Since I only use it as a backend, that actually means I have fewer things to secure.

Installing Grafana on top of Ubuntu Server 20.04 is easy enough.

sudo apt-get install -y apt-transport-https
wget -4qO - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" \
    | sudo tee -a /etc/apt/sources.list.d/grafana.list

sudo apt update
sudo apt --yes install grafana

sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo systemctl status grafana-server

That’s it. Grafana is now listening on port 3000. If you want it on port 80, some NAT magic is required.

sudo apt install --yes netfilter-persistent
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
sudo netfilter-persistent save
sudo iptables -L -t nat

With Grafana installed, it’s time to get InfluxDB onboard too. Setup is again simple enough.

wget -4qO - https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/influxdb.list

sudo apt update
sudo apt --yes install influxdb

sudo systemctl start influxdb
sudo systemctl enable influxdb
sudo systemctl status influxdb

Once the installation is done, the only remaining task is creating the database. In this example I named it “telegraf”, but you can select whatever name you want.

curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE telegraf"
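
To confirm writes work end-to-end, one can also push a test point by hand (the measurement name and value below are made up):

curl -i -XPOST 'http://localhost:8086/write?db=telegraf' \
    --data-binary 'test_metric,host=manual value=42'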

With both installed, we might as well install Telegraf so we can push some stats. Installation is again really similar:

wget -4qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/influxdb.list

sudo apt-get update
sudo apt-get install telegraf

sudo sed -i 's*# database = "telegraf"$*database = "telegraf"*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (urls = \["http://127.0.0.1:8086"\])$*\1*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (\[\[inputs.syslog\]\])$*\1*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (  server = "tcp://:6514")*\1*' /etc/telegraf/telegraf.conf

sudo systemctl restart telegraf
sudo systemctl status telegraf

And a minor update is needed for the rsyslog daemon in order to forward syslog messages.

echo '*.notice action(type="omfwd" target="localhost" port="6514"' \
    'protocol="tcp" tcp_framing="octet-counted" template="RSYSLOG_SyslogProtocol23Format")' \
    | sudo tee /etc/rsyslog.d/99-forward.conf
sudo systemctl restart rsyslog

If you want to accept remote syslog messages, that’s also just a command away:

echo 'module(load="imudp")'$'\n''input(type="imudp" port="514")' \
    | sudo tee /etc/rsyslog.d/98-accept.conf
sudo systemctl restart rsyslog

That’s it. You have your metric server fully installed and its own metrics are flowing.

And yes, this is not secure and you should look into having TLS enabled at minimum, ideally with proper authentication for all your clients. However, this setup does allow you to dip your toes and see whether you like it or not.
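
If you later decide to add authentication, the InfluxDB 1.8 line does support it; a rough sketch (user name and password are placeholders) would be creating an admin user and flipping auth-enabled in the config:

curl -XPOST http://localhost:8086/query --data-urlencode \
    "q=CREATE USER admin WITH PASSWORD 'changeme' WITH ALL PRIVILEGES"
sudo sed -i 's/# auth-enabled = false/auth-enabled = true/' /etc/influxdb/influxdb.conf
sudo systemctl restart influxdb

Keep in mind that Telegraf would then need the same credentials in its InfluxDB output section.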


PS: While creating graphs is easy enough, dealing with logs is a bit more complicated. The NWMichl Blog has a link to a really nice dashboard for this purpose.

Ubuntu 20.10 on Surface Go

Surface Go is almost a perfect Ubuntu machine, the only reason for “almost” being the lack of camera support. Everything else works out of the box or with minor updates. While you can use the standard installation setup, I like to do a bit more involved setup.

Mind you, you will need to have a network adapter plugged in during the install as debootstrap requires it, and enabling wireless is one of the things not working out of the box. If that’s a problem, stick with the default install instead.

First, of course, you need to boot from the install USB. After booting into the Ubuntu desktop installation, one needs a root prompt. All further commands are going to need root credentials anyhow.

sudo -i

Now we can set a few variables - disk, host name, and user name. This way we can use them going forward and avoid accidental mistakes. Just make sure to replace these values with ones appropriate for your system.

DISK=/dev/disk/by-id/ata_disk
HOST=desktop
USER=user

The disk setup is really minimal. If there was a chance of dual-boot, the EFI partition would be too small. For multiple kernels, one would need to increase the boot partition. However, considering Surface Go has 64 GB of disk space, keeping those partitions small is probably a better choice. And no, you cannot make the EFI partition smaller than 32 MB despite not needing more than a few megs.

blkdiscard $DISK

sgdisk --zap-all                        $DISK

sgdisk -n1:1M:+63M  -t1:EF00 -c1:EFI    $DISK
sgdisk -n2:0:+448M  -t2:8300 -c2:Boot   $DISK
sgdisk -n3:0:0      -t3:8309 -c3:Ubuntu $DISK

sgdisk --print                          $DISK

Having the boot and EFI partitions unencrypted does offer advantages, and having standard kernels exposed is not much of a security issue. However, one must encrypt the root partition.

cryptsetup luksFormat -q --cipher aes-xts-plain64 --key-size 256 \
    --pbkdf pbkdf2 --hash sha256 $DISK-part3

Since the crypt device name is displayed on every startup, for Surface Go I like to use the host name here.

cryptsetup luksOpen $DISK-part3 ${HOST^}

Now we can prepare all needed partitions.

yes | mkfs.ext4 /dev/mapper/${HOST^}
mkdir /mnt/install
mount /dev/mapper/${HOST^} /mnt/install/

yes | mkfs.ext4 $DISK-part2
mkdir /mnt/install/boot
mount $DISK-part2 /mnt/install/boot/

mkfs.msdos -F 32 -n EFI $DISK-part1
mkdir /mnt/install/boot/efi
mount $DISK-part1 /mnt/install/boot/efi

To start the fun we need the debootstrap package.

apt update ; apt install --yes debootstrap

And then we can get a basic OS on the disk. This will take a while.

debootstrap focal /mnt/install/

Our newly copied system is lacking a few files and we should make sure they exist before proceeding.

echo $HOST > /mnt/install/etc/hostname
sed "s/ubuntu/$HOST/" /etc/hosts > /mnt/install/etc/hosts
sed '/cdrom/d' /etc/apt/sources.list > /mnt/install/etc/apt/sources.list
cp /etc/netplan/*.yaml /mnt/install/etc/netplan/

If you are installing via WiFi, you might as well copy your wireless credentials:

mkdir -p /mnt/install/etc/NetworkManager/system-connections/
cp /etc/NetworkManager/system-connections/* /mnt/install/etc/NetworkManager/system-connections/

Finally we’re ready to “chroot” into our new system.

mount --rbind /dev  /mnt/install/dev
mount --rbind /proc /mnt/install/proc
mount --rbind /sys  /mnt/install/sys
chroot /mnt/install \
    /usr/bin/env DISK=$DISK HOST=$HOST USER=$USER \
    bash --login

Let’s not forget to set up the locale and time zone.

locale-gen --purge "en_US.UTF-8"
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
dpkg-reconfigure --frontend noninteractive locales

dpkg-reconfigure tzdata

Now we’re ready to onboard the latest Linux image and the boot environment packages.

apt install --yes --no-install-recommends linux-image-generic linux-headers-generic \
    initramfs-tools cryptsetup keyutils grub-efi-amd64-signed shim-signed tasksel

Since we’re dealing with encrypted data, we should auto mount it via crypttab. If there are multiple encrypted drives or partitions, keyscript really comes in handy to open them all with the same password. As it doesn’t have negative consequences, I just add it even for a single disk setup.

echo "${HOST^}  UUID=$(blkid -s UUID -o value $DISK-part3)  none \
    luks,discard,initramfs,keyscript=decrypt_keyctl" >> /etc/crypttab
cat /etc/crypttab

To mount boot and EFI partition, we need to do some fstab setup too:

echo "UUID=$(blkid -s UUID -o value /dev/mapper/${HOST^}) \
    / ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value $DISK-part1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" >> /etc/fstab
cat /etc/fstab

Now we update our boot environment.

KERNEL=`ls /usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
update-initramfs -u -k $KERNEL

Grub update is what makes EFI tick.

sed -i "s/^GRUB_CMDLINE_LINUX_DEFAULT.*/GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash mem_sleep_default=deep\"/" /etc/default/grub
update-grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Ubuntu \
    --recheck --no-floppy

Finally we install our GUI environment. I personally like ubuntu-desktop-minimal but you can opt for ubuntu-desktop. In any case, it’ll take a considerable amount of time.

tasksel install ubuntu-desktop-minimal

A short package upgrade will not hurt.

apt update ; apt dist-upgrade --yes

The only remaining task before restart is to create the user, assign a few extra groups to it, and make sure its home has the correct owner.

sudo adduser --disabled-password --gecos '' $USER
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sudo $USER
echo "$USER ALL=NOPASSWD:ALL" | sudo tee /etc/sudoers.d/$USER >/dev/null
passwd $USER

Before finishing it up, I like to install the Surface Go WiFi and backlight tracer packages. This will allow for usage of wireless once we boot into the installed system and for remembering the light level between plugged/unplugged states.

wget -O /tmp/surface-go-wifi_amd64.deb \
    https://www.medo64.com/download/surface-go-wifi_0.0.5_amd64.deb
apt install --yes /tmp/surface-go-wifi_amd64.deb

wget -O /tmp/backlight-tracer_amd64.deb \
    https://www.medo64.com/download/backlight-tracer_0.1.1_all.deb
apt install --yes /tmp/backlight-tracer_amd64.deb

As the install is ready, we can exit our chroot environment.

exit

And clean up our mount points.

umount /mnt/install/boot/efi
umount /mnt/install/boot
mount | grep '/mnt/install/' | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
umount /mnt/install

After the reboot you should be able to enjoy your installation. Seeing errors during the reboot is fine - just reboot manually if it gets stuck.

reboot

Once booted, I like to set up suspend to react to the power button and to disable automatic brightness changes.

gsettings set org.gnome.settings-daemon.plugins.power button-power 'suspend'
gsettings set org.gnome.settings-daemon.plugins.power power-button-action 'suspend'
gsettings set org.gnome.settings-daemon.plugins.power ambient-enabled 'false'
gsettings set org.gnome.mutter experimental-features "['x11-randr-fractional-scaling']"

My preferred scale factor is 150% (instead of default 200%) but you’ll need to change that in settings manually.

Case-insensitive ZFS

Don’t.

Well, this was a short one. :)

From the very start of its existence, ZFS has supported case-insensitive datasets. In theory, if you share a disk with a Windows machine, this is what you should use. But reality is a bit more complicated. It’s not that the setting doesn’t work. It’s more a case of it working too well.
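
For reference, case sensitivity is decided when the dataset is created; such a dataset would have been made with something along these lines (pool and dataset names are placeholders):

zfs create -o casesensitivity=insensitive Pool/share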

Realistically, you are going to be running ZFS on some *nix machine and, if you access it from Windows, it’ll be via Samba. As *nix APIs generally expect case sensitivity, Samba dynamically converts what it shares from the case-sensitive world into the case-insensitive one. If the underlying file system is itself case-insensitive, Samba gets confused and you will suddenly have issues renaming files that differ only in case.

For example, you won’t be able to rename test.txt into Test.txt. Before doing the rename, Samba checks whether the new name already exists (a step needed because it assumes the underlying file system is case-sensitive) in order to avoid overwriting an unrelated file. That check trips up on a case-insensitive dataset, as ZFS reports that Test.txt already exists. Because of this check (one that would be necessary on a case-sensitive file system) Samba incorrectly thinks that the destination already exists and refuses the rename. Yep, any rename differing only in case will fail.

Now, this could be fixable. If Samba recognized the file system as case-insensitive, it could skip that check. But what if you have a case-sensitive file system mounted within a case-insensitive dataset? Or vice versa? Should Samba check on every access or cache the results? For something that doesn’t happen often in the *nix world, this would be either a flaky implementation or a big performance hit.

Therefore, Samba assumes that the file system is case-sensitive. In 99% of cases, this is true. Unless you want to chase ghosts, just give it what it wants.

Changing ZFS Key Location

Back when I was creating my original pool, I decided to use a password prompt as my encryption key unlocking method. And it was good. But then I wanted to automate this a bit. I wanted my key to be read off a USB drive.

To do that one can simply prepare a new key and point the pool toward it.

dd if=/dev/urandom of=/usb/key.dat bs=32 count=1
zfs change-key -o keylocation=file:///usb/key.dat -o keyformat=raw Pool

Of course, it’s easy to return it back to password prompt too:

zfs change-key -o keylocation=prompt -o keyformat=passphrase Pool

Simple enough.

SDR on Ubuntu x86

[This is post 3 in a two-part series :), for the hardware setup go here]

While running the SDR radio on a Raspberry Pi was fine, I kinda wanted to move this to one of my x86 servers. Due to this, I had to revisit my old guide.

When the device is plugged in, a trace should be seen in dmesg. If everything is fine, you should see some activity.

dmesg | tail
 [4306437.661393] usbcore: registered new interface driver dvb_usb_rtl28xxu

To get the SDR running, there is some work involved with compiling it. Note the DETACH_KERNEL_DRIVER=ON flag enabling SDR applications to access the device without disabling its driver. The rest is really similar to the official instructions.

sudo apt-get install -y git build-essential cmake libusb-1.0-0-dev libglib2.0-dev
cd ~
git clone git://git.osmocom.org/rtl-sdr.git
cd rtl-sdr/
mkdir build
cd build
cmake ../ -DINSTALL_UDEV_RULES=ON -DDETACH_KERNEL_DRIVER=ON
make
sudo make install
sudo ldconfig

This is an ideal time to test it. As I have iptables active, I manually enable the port on the external interface. Other than that, I will not restrict the application to a single IP but allow it to listen on all interfaces.

iptables -A INPUT -i eth0 -p tcp --dport 1234 -j ACCEPT
/usr/local/bin/rtl_tcp -a 0.0.0.0

The last step is to enable running it as a service. We need to create a separate user, enable the service, and finalize it all with a reboot.

sudo adduser --disabled-password --gecos "" sdr
sudo usermod -a -G plugdev sdr

sudo tee /lib/systemd/system/rtl_tcp.service > /dev/null <<- EOF
[Unit]
After=network.target

[Service]
Type=exec
ExecStart=/usr/local/bin/rtl_tcp -a 0.0.0.0
KillMode=process
Restart=on-failure
RestartSec=10
User=sdr

[Install]
WantedBy=multi-user.target
Alias=rtl_tcp.service
EOF

sudo systemctl enable /lib/systemd/system/rtl_tcp.service
sudo reboot

And that’s it. Now you can run SDR TCP server on your Ubuntu server.
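
Once it’s back up, a quick check that the service came up and the port is listening doesn’t hurt; something like:

systemctl status rtl_tcp
ss -tlnp | grep 1234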