Linux, Unix, and whatever they call that world these days

Recording Both Microphone and Speaker Under Ubuntu 21.04

When recording audio under Linux, an issue that quite a few applications have is capturing both microphone input and headphone output at the same time. And that’s true for SimpleScreenRecorder, an otherwise really good recording application.

However, Linux always has a way around such restrictions, and this one can actually be found on the SimpleScreenRecorder pages if you look deep enough.

pactl load-module module-null-sink \
  sink_name=duplex_out sink_properties=device.description="\"Duplex\ Output\""
pactl load-module module-null-sink \
  sink_name=app_out sink_properties=device.description="\"Application\ Output\""
pactl load-module module-loopback source=app_out.monitor
pactl load-module module-loopback source=app_out.monitor sink=duplex_out
pactl load-module module-loopback sink=duplex_out
pactl set-default-sink app_out

The trick is essentially to create two new output devices (i.e. sinks). One of them (app_out) is just a target toward which applications should direct their output. The magic happens with the second one (duplex_out), which combines the application output with whatever comes from the microphone.

Now when you record audio, you can just point the application to Duplex Output and record both sides.
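To check that everything is wired up (assuming PulseAudio is running), the new sinks can be listed:

```shell
# Both null sinks created above should show up in this list.
if command -v pactl >/dev/null; then
    pactl list short sinks
fi
```

And if you want to undo everything without a reboot, the modules can be unloaded by name: pactl unload-module module-loopback followed by pactl unload-module module-null-sink removes all instances of each.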


PS: To make these changes permanent, they can be entered into /etc/pulse/default.pa. Of course, the quoting rules are a bit different there, so adjust accordingly if you have a space in your description.

…
# Recording setup
load-module module-null-sink sink_name=duplex_out sink_properties=device.description="Duplex\ Output"
load-module module-null-sink sink_name=app_out sink_properties=device.description="Application\ Output"
load-module module-loopback source=app_out.monitor
load-module module-loopback source=app_out.monitor sink=duplex_out
load-module module-loopback sink=duplex_out
set-default-sink app_out

DevStack on VirtualBox Ubuntu 20.04

Illustration

The first step for DevStack inside VirtualBox is creating the virtual machine. There are two obvious changes to make: increase the processor count and the assigned memory as much as you can afford. The other two are a bit more “sneaky”.

We really have to enable Nested VT-x/AMD-V under System, Processor, and if we want to access our system, we should set network forwarding rules for port 80 (HTTP for the Dashboard) and port 22 (SSH; optional but really helpful). I usually forward them to 56080 and 56022 respectively on my localhost, but the actual numbers are yours to choose. And yes, there are other ways to set up networking, but NAT with forwarding is mine.
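The same forwarding rules can also be set from the command line while the VM is powered off; the VM name “DevStack” and the host ports below are just examples, so adjust them to your setup:

```shell
# Hypothetical VM name; the rule format is name,protocol,hostip,hostport,guestip,guestport.
VM="DevStack"
if command -v VBoxManage >/dev/null; then
    VBoxManage modifyvm "$VM" --natpf1 "http,tcp,127.0.0.1,56080,,80"
    VBoxManage modifyvm "$VM" --natpf1 "ssh,tcp,127.0.0.1,56022,,22"
fi
```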

With the virtual machine set, the next step toward DevStack is installing the OS. While the official guidelines prefer Ubuntu 18.04, I like to go with the slightly newer Ubuntu 20.04 Server. The whole installation is essentially one big click-next event, with the only non-default part being the installation of OpenSSH.

Once the OS is installed, I also like to add my user to password-less sudoers and apply any pending updates:

echo "$USER ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/$USER

sudo apt update
sudo apt dist-upgrade --yes

And now we can finally follow the DevStack instructions with a customized host IP (otherwise you’ll get a “Could not determine host ip address” error) and admin password.

sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo su - stack

STACK_BRANCH=stable/^^wallaby^^
git clone https://github.com/openstack-dev/devstack.git -b $STACK_BRANCH devstack/

cd devstack
cp samples/local.conf .
sed -i 's/#HOST_IP=w.x.y.z/HOST_IP=^^10.0.2.15^^/' local.conf
sed -i 's/ADMIN_PASSWORD=nomoresecret/ADMIN_PASSWORD=^^devstack^^/' local.conf
echo "#Enable heat services" >> local.conf
echo "enable_service h-eng h-api h-api-cfn h-api-cw" >> local.conf
echo "#Enable heat plugin" >> local.conf
echo "enable_plugin heat https://git.openstack.org/openstack/heat $STACK_BRANCH" >> local.conf

./stack.sh

Once done, the GUI is available at http://localhost:56080/dashboard/.
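Inside the VM you can also poke at the cloud from the command line; DevStack leaves an openrc file to source for credentials (the admin password is whatever you set in local.conf):

```shell
# Runs only inside the DevStack VM where the openrc file exists.
if [ -f ~/devstack/openrc ]; then
    # Load admin credentials, then ask Keystone what services are registered.
    source ~/devstack/openrc admin admin
    openstack service list
fi
```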

Wait For Mountpoint

I have quite a few scripts running on my home server, and they love writing to disk. They love it so much that, after a reboot, they don’t necessarily wait for mount points to appear - they just start writing. Unfortunately, such eagerness also means that my other scripts mounting ZFS might find the directory already in use and give up.

What I needed was a way to check whether the mount point is already there before starting to write. The easiest approach for me was using the mountpoint command.

TEST_PATH=/test/directory
while true; do  # wait for mount point
    if ! mountpoint "$TEST_PATH" >/dev/null; then
        sleep 1
        continue
    fi
    break
done

The script fragment above checks whether the given directory has something mounted and, if not, waits for one more second. Once the test succeeds, it breaks out of the infinite loop and proceeds with whatever follows.
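For a quieter variant, mountpoint also has a -q switch that suppresses output entirely, so its exit status can be used directly in a condition; for example:

```shell
# -q makes mountpoint silent; the exit status alone says whether the
# path has something mounted, so it slots straight into an if.
if mountpoint -q /; then
    echo "/ is mounted"
fi
```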

Easy enough.

One Taken Every Second

In order to keep an eye on my garage, I placed a Wyze camera in it. So when I noticed one day that somebody had swiped my tool bag, I thought I’d find the culprit quickly. Well, it was not meant to be.

I had recording turned on but only a 32 GB card in the camera. And I noticed the tool bag missing only after two weeks or so, by which time any recording was already gone. Since the only other people with access to the garage were Amazon delivery guys, I could narrow it down, but not sufficiently to do anything about it.

So I went looking for a cheap way to record images long term, and RTSP immediately came to mind. Even better, Wyze cameras already support it (albeit via a special firmware).

My idea was to simply record an image every second and save it to my NAS using ffmpeg. While this was a simple task in theory, it proved a bit annoying to find parameters that would be stable enough. Most notably, sometimes it would just stop ingesting new frames and thus require a restart.

After testing a lot of different parameters, I came up with the following code:

while (true); do
    ffmpeg \
        -probesize 1000000 \
        -analyzeduration 1000000 \
        -flags low_delay \
        -fflags discardcorrupt \
        -re \
        -rtsp_transport tcp \
        -stimeout 10000000 \
        -allowed_media_types video \
        -i rtsp://${CAMERA_USER}:${CAMERA_PASS}@${CAMERA_IP}/live \
        -f image2 \
        -strftime 1 \
        -vf fps=fps=1 \
        -async 1 \
        -vsync 1 \
        -threads 1 \
        -use_wallclock_as_timestamps 1 \
        "${BASE_DIRECTORY}/${CAMERA_IP}~%Y-%m-%d-%H-%M-%S.jpg"
    sleep 1
done

With this setup, ffmpeg will keep taking an image every second. If it gets stuck, it will exit, and it’s then on the while loop to restart the capture. One can then use a separate process to convert these files into an MJPEG file, but that’s a story for another day.

Inappropriate Ioctl for Device

After disconnecting a serial USB cable from my Ubuntu Server 20.04, I would often receive an “Inappropriate ioctl for device” error when trying to redirect output to the serial port.

stty -F /dev/ttyACM0 -echo -onlcr
 stty: /dev/ttyACM0: Inappropriate ioctl for device

A quick search yielded multiple results but nothing that actually worked for me. The most promising were restarting udev and a manual driver unbind, but they didn’t really solve anything end-to-end. The only solution was a reboot.

However, after a bit of playing with unloading drivers, I did find a solution that worked: unload the driver, manually delete the device, and finally load the driver again.

modprobe -r cdc_acm
rm -f /dev/ttyACM0
modprobe cdc_acm

I am not sure why unloading the driver didn’t remove the device link itself but, regardless, I could finally get it to work without waiting for a reboot.

Killing a Connection on Ubuntu Server 20.04

If you really want to kill a connection on Ubuntu with a newer kernel, there is the ss command. For example, to kill a connection toward 192.168.1.1 with dynamic remote port 40000, you can use the following:

ss -K dst 192.168.1.1 dport = 40000

Nice, quick, and it definitely beats messing with routes and waiting for a timeout. This assumes your kernel was compiled with CONFIG_INET_DIAG_DESTROY (true on Ubuntu).


To get a quick list of established connections for a given port, one can use netstat with a quick’n’dirty grep:

$ netstat -nap | grep ESTABLISHED | grep ^^<port>^^
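Since ss is already doing the killing, it can provide the listing too; its filter syntax avoids the grep entirely (using the same example port 40000 as above):

```shell
# Established TCP connections touching port 40000, straight from ss.
ss -tn state established '( dport = :40000 or sport = :40000 )'
```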

Cleaning Disk

Some time ago I explained my procedure for initializing disks I plan to use in a ZFS pool. And the first step was filling them with random data from /dev/urandom.

However, FreeBSD’s /dev/urandom is not really a speed monster. If you need something faster but still really secure, you can go with a random AES stream.

openssl enc -aes-128-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | hexdump)" \
    -pbkdf2 -nosalt </dev/zero | dd of=/dev/diskid/^^DISK-ID-123^^ bs=1M

Since the key is derived from random data, in theory it should be equally secure but (depending on the CPU) multiple times faster than urandom.
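Before pointing it at a real disk, it’s easy to sanity-check the stream on a bounded input; with CTR mode and -nosalt, the output is exactly the size of the input. The file path and sizes below are just for illustration:

```shell
# Encrypt 8 MiB of zeros with a throwaway random key; AES-CTR is a stream
# cipher, so the output file ends up exactly 8 MiB of pseudorandom data.
OUT=/tmp/aes-stream-test.bin
dd if=/dev/zero bs=1M count=8 2>/dev/null \
    | openssl enc -aes-128-ctr \
        -pass pass:"$(head -c 64 /dev/urandom | od -An -tx1 | tr -d ' \n')" \
        -pbkdf2 -nosalt >"$OUT"
```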

Basic XigmaNAS Stats for InfluxDB

My home monitoring included pretty much anything I wanted to see, with one exception - my backup NAS. You see, I use embedded XigmaNAS for my backup server, and getting the telegraf client onto it is problematic at best. However, who needs the Telegraf client anyhow?

Collecting the stats themselves is easy enough. The basic CPU stats you get from the Telegraf client can usually be read via command-line tools. As long as you keep the same tags and fields as what Telegraf usually sends, you can nicely mingle manually collected stats with what a proper client sends.

And how do we send it? The InfluxDB line protocol that Telegraf speaks is essentially just a set of lines pushed using HTTP POST. Yes, if you have a slightly more secure system, it’s probably HTTPS, and it might even be authenticated. But it’s still POST in essence.

And therein lies XigmaNAS’ problem: there is no curl or wget tooling available. And thus sending an HTTP POST on embedded XigmaNAS is not possible. Or is it?

Well, here is the beauty of HTTP - it’s just freaking text over a TCP connection. And the ancient (but still beloved) nc tool is good at exactly that - sending stuff over the network. As long as you can “echo” stuff, you can redirect it to nc and pretend you have a proper HTTP client. Just don’t forget to set the headers.
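As a taste of that approach, here is a minimal sketch; the host, port, database name, and the measurement line are all stand-ins for your own values:

```shell
# All values below are assumptions - replace them with your own setup.
INFLUX_HOST="localhost"
INFLUX_PORT=8086
DB="telegraf"
BODY="cpu,host=backupnas usage_idle=97.5"

# Hand-craft a minimal HTTP/1.1 POST; Content-Length must match the body
# exactly or the server will reject the request or wait for more data.
REQUEST=$(printf 'POST /write?db=%s HTTP/1.1\r\nHost: %s\r\nContent-Type: text/plain\r\nContent-Length: %d\r\nConnection: close\r\n\r\n%s' \
    "$DB" "$INFLUX_HOST" "${#BODY}" "$BODY")

# Ship it with nc; -w 2 gives up after two seconds and "|| true" keeps
# the script going if nobody is listening.
printf '%s' "$REQUEST" | nc -w 2 "$INFLUX_HOST" "$INFLUX_PORT" || true
```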

To cut the story short - here is my script using nc to push statistics from XigmaNAS to my Grafana setup. It’ll send basic CPU, memory, temperature, disk, and ZFS stats. Enjoy.

Mikrotik SNMP via Telegraf

As I moved most of my home to Grafana/InfluxDB monitoring, I got two challenges to deal with. One was monitoring my XigmaNAS servers and the other was properly handling Mikrotik routers. I’ll come back to XigmaNAS in one of the later posts, but today let’s see what can be done for Mikrotik.

Well, Mikrotik is a router, and essentially all routers are meant to be monitored over SNMP. So the first step is going to be turning it on from within System/SNMP. You want it read-only and you want to customize the community string. You might also want SHA1/AES authentication/encryption, but that has to be configured on both sides and I generally skip it for my home network.

Once you’re done, you can turn on the SNMP input plugin and data will flow. But the data that flows will not include Mikrotik-specific stuff. Most notably, I wanted simple queues. And, once you know the process, it’s actually reasonably easy.

At the heart of SNMP are OIDs. Mikrotik is really shitty at documenting them, but they do provide a MIB so one can take a look. However, there is an easier approach: just run print oid in any section, e.g.:

/queue simple print oid
 0
  name=.1.3.6.1.4.1.14988.1.1.2.1.1.2.1
  bytes-in=.1.3.6.1.4.1.14988.1.1.2.1.1.8.1
  bytes-out=.1.3.6.1.4.1.14988.1.1.2.1.1.9.1
  packets-in=.1.3.6.1.4.1.14988.1.1.2.1.1.10.1
  packets-out=.1.3.6.1.4.1.14988.1.1.2.1.1.11.1
  queues-in=.1.3.6.1.4.1.14988.1.1.2.1.1.12.1
  queues-out=.1.3.6.1.4.1.14988.1.1.2.1.1.13.1

This can then be converted into the telegraf format, looking something like this:

[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleName"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.2"
  is_tag = true
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleBytesIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.8"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimpleBytesOut"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.9"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePacketsIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.10"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePacketsOut"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.11"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePCQQueuesIn"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.12"
[[inputs.snmp.table.field]]
  name = "mtxrQueueSimplePCQQueuesOut"
  oid = ".1.3.6.1.4.1.14988.1.1.2.1.1.13"

Where did I get the names from? Technically, you can use whatever you want, but I usually look them up on oid-info.com. Once you restart the telegraf daemon, data will flow into Grafana and you can chart it to your heart’s desire.
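For context, those field entries have to live inside a table of an SNMP input; a minimal surrounding config might look like this (the agent address, community string, and table name are placeholders):

```toml
[[inputs.snmp]]
  agents = ["udp://192.168.88.1:161"]
  version = 2
  community = "public"

  [[inputs.snmp.table]]
    name = "mikrotik_queues"
    # ... the [[inputs.snmp.table.field]] entries from above go here ...
```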

You can see my full SNMP input config for Mikrotik at GitHub.

Monitoring Home Network

Illustration

While monitoring a home network is not something that’s really needed, I find it always comes in handy. If nothing else, you get to see a lot of nice colors and numbers flying around. For people like me, no further encouragement is needed.

Over time I have tried many different systems, but lately I fell in love with Grafana combined with InfluxDB. Grafana gives a really nice and simple GUI, while InfluxDB serves as the database for all the metrics.

I find Grafana hits just the right balance: simple enough to learn the basics but powerful enough that you can get into advanced stuff if you need it. Even better, it fits right into a small network without any adjustments to the installation. Yes, you can make it more complex later, but the starting point is spot on.

InfluxDB makes it really easy to push custom metrics from the command line or literally anything that can speak HTTP, and I find that really useful in a heterogeneous network filled with various IoT devices. While version 2.0 is available, I actually prefer using 1.8 as it’s simpler to set up, lighter on resources (important if you run it in a virtual machine), and it comes without a GUI. Since I only use it as a backend, that actually means I have fewer things to secure.

Installing Grafana on top of Ubuntu Server 20.04 is easy enough.

sudo apt-get install -y apt-transport-https
wget -4qO - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" \
    | sudo tee -a /etc/apt/sources.list.d/grafana.list

sudo apt update
sudo apt --yes install grafana

sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo systemctl status grafana-server

That’s it. Grafana is now listening on port 3000. If you want it on port 80, some NAT magic is required.

sudo apt install --yes netfilter-persistent
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
sudo netfilter-persistent save
sudo iptables -L -t nat

With Grafana installed, it’s time to get InfluxDB onboard too. Setup is again simple enough.

wget -4qO - https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/influxdb.list

sudo apt update
sudo apt --yes install influxdb

sudo systemctl start influxdb
sudo systemctl enable influxdb
sudo systemctl status influxdb

Once installation is done, the only remaining task is creating the database. In the example I named it “telegraf”, but you can select whatever name you want.

curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE ^^telegraf^^"
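To verify the database actually exists, the same HTTP API can answer a SHOW DATABASES query; the sketch below skips itself when no server is listening on the default port:

```shell
# The /ping endpoint answers when InfluxDB is up; /query returns JSON
# in which the newly created database should be listed.
if curl -s http://localhost:8086/ping >/dev/null 2>&1; then
    curl -s -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"
fi
```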

With both installed, we might as well install Telegraf so we can push some stats. The installation is again really similar:

wget -4qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/influxdb.list

sudo apt-get update
sudo apt-get install telegraf

sudo sed -i 's*# database = "telegraf"$*database = "^^telegraf^^"*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (urls = \["http://127.0.0.1:8086"\])$*\1*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (\[\[inputs.syslog\]\])$*\1*' /etc/telegraf/telegraf.conf
sudo sed -ri 's*# (  server = "tcp://:6514")*\1*' /etc/telegraf/telegraf.conf

sudo systemctl restart telegraf
sudo systemctl status telegraf

And a minor update is needed for rsyslog daemon in order to forward syslog messages.

echo '*.notice action(type="omfwd" target="localhost" port="6514"' \
    'protocol="tcp" tcp_framing="octet-counted" template="RSYSLOG_SyslogProtocol23Format")' \
    | sudo tee /etc/rsyslog.d/99-forward.conf
sudo systemctl restart rsyslog

If you want to accept remote syslog messages, that’s also just a command away:

echo 'module(load="imudp")'$'\n''input(type="imudp" port="514")' \
    | sudo tee /etc/rsyslog.d/98-accept.conf
sudo systemctl restart rsyslog

That’s it. You have your metric server fully installed, and its own metrics are flowing.

And yes, this is not secure, and you should look into having TLS enabled at a minimum, ideally with proper authentication for all your clients. However, this setup does allow you to dip your toes in and see whether you like it or not.


PS: While creating graphs is easy enough, dealing with logs is a bit more complicated. The NWMichl Blog has a link to a really nice dashboard for this purpose.