Linux, Unix, and whatever they call that world these days

Custom Samba Sizing

After reorganizing my ZFS datasets a bit, I suddenly noticed I couldn’t copy any file larger than a few MB. A bit of investigation later, I figured out why.

My ZFS data sets were as follows:

zfs list
 NAME                            USED  AVAIL  REFER  MOUNTPOINT
 Data                           2.06T   965G    96K  none
 Data/Users                      181G   965G    96K  none
 Data/Users/User1               44.3G  19.7G  2.23G  /Data/Users/User1
 Data/Users/User2               14.7G  49.3G   264K  /Data/Users/User2
 Data/Users/User3                224K  64.0G    96K  /Data/Users/User3

And my Samba share was pointing to /Data/Users/.

Guess what? Path /Data/Users was not pointing to any dataset since the parent dataset for Data/Users was not mounted. Instead it pointed to memory disk md0, which had just a few MB free. Samba doesn’t check the full path when determining disk size - only the share’s root.

The easiest way to work around this would be to simply mount the parent dataset. But why go for easy?

A slightly more complicated solution is to have Samba use a custom script to determine free space. We can then use this script to return the available disk space for our parent dataset instead of relying on Samba’s built-in calculation.

To do this, we first create the script /myScripts/sambaDiskFree:

#!/bin/sh
DATASET=`pwd | cut -c2-`
zfs list -H -p -o available,used $DATASET | awk '{print $1+$2 " " $1}'

This script checks the current directory, maps its name to a dataset (in my case as easy as stripping the first slash character), and returns two numbers. The first is the total disk space, followed by the available disk space - both in bytes.
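
To sanity-check the mapping before wiring it into Samba, you can run the script manually from a directory under the share. The byte values below are only illustrative (roughly matching the User1 dataset above):

cd /Data/Users/User1
/myScripts/sambaDiskFree
68568679448 21152462438

The first number is the total space, the second the available space - exactly what Samba expects from a dfree command.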

Once the script is saved and marked as executable (chmod +x), we just need to reference it in Services > CIFS/SMB > Settings under Additional parameters:

dfree command = /myScripts/sambaDiskFree

This tells Samba to use our script for disk space determination.

My Backup ZFS Machine (Correction)


Before going on vacation I finished setting up my new ZFS backup machine, initialized the first replication, and happily went off to see the big hole.

When I remotely connected to my main machine a few days later, I found my sync command had failed before finishing. I also couldn’t connect to my backup server. Well, that was unfortunate, but I had enough foresight to connect it via a smart plug, so I did the power-off/power-on dance. The system booted and I restarted replication. I checked on it a few days later, only to find it stuck again. Rinse, repeat. And the next day too. And the one after that…

Why? I have no idea, as I was connected only remotely and I literally came home on the last day I could still return it to Amazon. Since I had already raised a case with Supermicro regarding a video card error (5 beeps) that seemed hardware related, my suspicions were pointing squarely at a motherboard issue. I know the memory was fine as I tested it thoroughly in another machine, and the power supply is happily working even now.

For my peace of mind I needed something that would allow me not only to reboot the machine but also to access its screen and keyboard directly without any OS involvement. Such features are known under different names and with slightly different execution - whether it is KVM, iLO, AMT, or IPMI.

So I decided to upgrade to the more manageable Supermicro A1SRi-2558F. With its C2558 processor (4 cores) and quad LAN it was definitely overkill for my purpose, but it was the cheapest IPMI-capable board I could find at $225 (compared to $150 for the X10SBA-L). Unfortunately for my budget, its ECC requirement meant adding another $35 for ECC RAM. And of course, the different layout made my 6" right-angle SATA cables useless, so now they decorate my drawer.

The board itself is really full of stuff, with a total of six USB ports (four of them USB 3.0), one of which is even soldered onto the motherboard for internal USB needs. Having four gigabit ports is probably useless as the Atom is unlikely to drive them all at full speed, but I guess it does allow for a more relaxed network configuration. Moreover, two SATA3 and four SATA2 ports just scream NAS. And the rear bracket on my 1U case fits the rear I/O perfectly. Frankly, the only thing missing is HDMI, although IPMI greatly reduces the chance of ever needing it.

The total difference in system cost was $100 and it gave me a rock-solid experience (it hasn’t crashed a single time in more than a month). Here is the updated shopping list:

Supermicro SuperChassis 504-203B           $100
Supermicro A1SRI-2558F                     $225
Kingston ValueRAM 4GB 1600MHz DDR3L ECC    2x $45
SATA cable, 8", round (2x)                 $7
TOTAL                                      $422

Setting Up Private Internet Access on Mint, 2018 Edition

I have already written about getting Private Internet Access running on Linux Mint back in 2016. The main reason for revisiting this is that, with Linux Mint 18, not all DNS changes are properly propagated.

As the OpenVPN client is installed by default these days, we only need to download PIA’s OpenVPN configuration files. More careful readers will notice these files are slightly different from the recommended default: they contain the VPN server’s IP address instead of its DNS name. While this might cause long-term issues if that IP ever changes, it helps a lot with the firewall setup as we won’t need to poke a hole for DNS over our eth0 adapter.
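
For reference, the line that matters later is the remote entry in the .ovpn file; it looks something like this (the IP address here is made up - use whatever your chosen file actually contains):

remote 123.45.67.89 1198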

From the downloaded archive select the .ovpn file for the desired destination (going with the one closest to you usually gives the best results) and also grab both the .crt and .pem files. Copy them all to your desktop; we’ll use them later for setup. Yes, you can use any other directory too - this is just the one I prefer.

With this done, we can configure the VPN from a Terminal window (replacing username and password with actual values):

sudo mv ~/Desktop/*.crt /etc/openvpn/
sudo mv ~/Desktop/*.pem /etc/openvpn/
sudo mv ~/Desktop/*.ovpn /etc/openvpn/client.conf

sudo sed -i "s*ca *ca /etc/openvpn/*" /etc/openvpn/client.conf
sudo sed -i "s*crl-verify *crl-verify /etc/openvpn/*" /etc/openvpn/client.conf

sudo echo "auth-user-pass /etc/openvpn/client.login" >> /etc/openvpn/client.conf
sudo echo "mssfix 1400" >> /etc/openvpn/client.conf
sudo echo "dhcp-option DNS 209.222.18.218" >> /etc/openvpn/client.conf
sudo echo "dhcp-option DNS 209.222.18.222" >> /etc/openvpn/client.conf
sudo echo "script-security 2" >> /etc/openvpn/client.conf
sudo echo "up /etc/openvpn/update-resolv-conf" >> /etc/openvpn/client.conf
sudo echo "down /etc/openvpn/update-resolv-conf" >> /etc/openvpn/client.conf

unset HISTFILE    # keep the credentials below out of bash history
echo 'username' | sudo tee -a /etc/openvpn/client.login
echo 'password' | sudo tee -a /etc/openvpn/client.login
sudo chmod 500 /etc/openvpn/client.login

Now we can test our VPN connection:

sudo openvpn --config /etc/openvpn/client.conf

Assuming this last step ended with Initialization Sequence Completed, we just need to verify the connection is actually being used; I’ve found whatismyipaddress.com quite helpful here. Just check whether the IP detected there is different than the IP you usually get without the VPN.
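
If you prefer the terminal, any external IP echo service works as well (curl is assumed to be installed; icanhazip.com is just one of many such services):

curl https://icanhazip.com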

Stop the test connection using Ctrl+C so we can configure automatic startup and test it.

echo "AUTOSTART=all" | sudo tee -a /etc/default/openvpn
sudo reboot

Once the computer has booted and you are satisfied with the VPN configuration, you can think about the firewall and locking down the default interface when the VPN is not active. This means allowing traffic only on the tun0 interface (VPN) and, on the physical interface, allowing only UDP port 1198 toward the VPN server.

sudo ufw reset
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out on tun0
sudo ufw allow out on `route | grep '^default' | grep -v "tun0$" | grep -o '[^ ]*$'` proto udp to `cat /etc/openvpn/client.conf | grep "^remote " | grep -o ' [^ ]* '` port 1198
sudo ufw enable
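
The long allow rule above just combines two lookups into one line; broken apart (the interface name and IP in the comments are only examples), it is equivalent to:

IFACE=`route | grep '^default' | grep -v "tun0$" | grep -o '[^ ]*$'`      # default-route interface, e.g. eth0
REMOTE=`grep "^remote " /etc/openvpn/client.conf | grep -o ' [^ ]* '`     # VPN server IP, e.g. 123.45.67.89 (surrounding spaces are stripped by the shell)
sudo ufw allow out on $IFACE proto udp to $REMOTE port 1198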

Assuming all went well, VPN should be happily running.

Adding Mirrored Disk to Existing ZFS Pool

The great thing about ZFS is that even with a single disk you get some benefits - data integrity being the most important. And all ZFS commands work perfectly well; for example, status:

zpool status
   pool: Data.Tertiary
  state: ONLINE
 config:
         NAME                   STATE     READ WRITE CKSUM
         Data.Tertiary          ONLINE       0     0     0
           diskid/DISK-XXX.eli  ONLINE       0     0     0

However, what if one disk is not sufficient anymore? It is clear zpool add can be used to create a striped pool for higher speeds. And it is clear another device can be added to an existing mirror to make it a three-way mirror. But what if we want to convert a solo disk to a mirror configuration?

Well, in that case we can get creative with the attach command, giving it both disks as arguments:

zpool attach Data.Tertiary diskid/DISK-XXX.eli diskid/DISK-YYY.eli

After a few seconds, our mirror is created with all our data intact:

zpool status
   pool: Data.Tertiary
  state: ONLINE
 status: One or more devices is currently being resilvered.  The pool will
         continue to function, possibly in a degraded state.
 action: Wait for the resilver to complete.
 config:
         NAME                     STATE     READ WRITE CKSUM
         Data.Tertiary            ONLINE       0     0     0
           mirror-0               ONLINE       0     0     0
             diskid/DISK-XXX.eli  ONLINE       0     0     0
             diskid/DISK-YYY.eli  ONLINE       0     0     0  (resilvering)

PS: Yes, I use encrypted disks from /dev/diskid/ as in my previous ZFS examples. If you want plain devices, just use ada0 and its companions instead.

Stuck at System Initializing


After a routine SATA cable change, my Supermicro A1SRi-2558F motherboard simply wouldn’t boot. Through its (fortunately present) IPMI interface I saw it was hanging at “System initializing…” with code 19 prominently shown in the bottom right corner.

As the only thing I did was replace the SATA cable, I first put the old one back, only to be greeted with the same issue. It took retracing my steps to notice I had returned the memory module to the wrong slot (I had to remove it to more easily reach the SATA connector latch). Since this motherboard does require DIMMs to be fitted in a certain order, the error was clearly mine, but two things confuse me.

The first is why I didn’t get a beep notification that something was wrong with the memory. This board does beep at you if there is no memory at all (5 short, 1 long), but there is not a single beep if memory is incorrectly installed. Why?

Secondly, why the heck doesn’t IPMI include more details about system status - dare I say a useful error log? If memory is wrongly installed, I should be able to see an error message in the log. With this I am scared of how the ECC experience is going to look - will it simply fail without a message?

In any case, reinstalling the memory module in the correct slot did the trick and the board has happily worked ever since. :)

Why I Keep My Home Servers in UTC

Except for desktop computers and mobile phones, all my networked devices live in the UTC time zone (sometimes incorrectly referred to as GMT).

First, the most obvious reason is that my servers and devices live in two very different locations. Most of them are in the USA but a few still remain in Croatia (yep, I have a transcontinental offsite backup). For anything that needs time sync, I would need to manually calculate the time difference. And not only once - thanks to the different daylight saving schedules there are four different time offsets throughout the year. With multiple devices around, mistakes are practically assured.

However, I would use UTC even with all devices in the same location. And the reason is the aforementioned daylight saving time. Before I switched to UTC, every year after daylight saving started or ended I would have a one-hour difference on something. Bigger devices (e.g. NAS) would usually switch time on their own but smaller IoT devices would not.

Since my network has centralized logging, I can be sure that some devices will be one hour off at any given time. And I am sure to notice this only when I actually need the logs, which adds mental offset calculations to an already annoying troubleshooting task. And, even if I remember to reconfigure a device, I can be sure the damn daylight saving change will screw it up again later.

And yes, in the grand scheme of things it might not be strictly necessary for all my servers and devices to share the same time. But UTC makes it easy enough, and adjusting to it is reasonably painless.
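
On most Linux boxes the switch itself is a single command (this assumes a systemd-based distribution; other systems and appliances have their own setting):

sudo timedatectl set-timezone UTC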

If you have the same issues, jump in - you’ll not be sorry.

PS: The only downside is that my server sends me its e-mail health report at a different time depending on whether it is winter or summer.

PPS: Why the heck do we still use daylight saving time?

Boot Linux ISO From USB


Let’s face it - nobody uses DVD drives for installations anymore. Even if your computer has one, chances are it can also boot from USB. And a USB drive is MUCH faster than a DVD.

There are many different ways to get a Linux ISO onto a USB drive for the purpose of Penguinification. My favorite desktop distribution - Linux Mint - has instructions for quite a few of them. However, with great selection comes great confusion.

Assuming you have a Windows computer lying around, I will describe what I’ve found to be the least intrusive method - one that leaves no permanent traces on Windows and requires no application installation.

Once you have downloaded the Linux ISO file, you will also need to download the PORTABLE version of Rufus. Yes, you could install it instead, but we are looking for the least intrusive way and portable reflects that philosophy better.

What you will see is a trivial interface with all defaults set properly for any modern Linux distribution, whether you need a UEFI or BIOS installation. The only thing left is selecting the appropriate ISO image, hidden behind the button next to the combo box saying ISO Image. If you forget this, you will find yourself booting into FreeDOS - good for BIOS firmware updates and not much more.

If you are installing a slightly newer version of Linux, you will probably get a warning that different ldlinux.sys and ldlinux.bss files are needed. Answering Yes will let Rufus download them from the Internet.

The next question (depending on the options selected) might be about the method of writing the image; the recommended ISO Image mode worked for me every time.

After answering Yes to the final warning about the imminent data destruction on the destination drive, your USB drive will get the ISO applied to it and you are ready to use it for installing a Linux of your choice.

PS: I personally tested this with Linux Mint and Fedora, but I don’t believe there is any distribution that will not work.

My SSH Crypto Settings

With the ever-expanding number of scripts on my NAS I noticed that pretty much every one used similar, but not quite the same, SSH parameters. For example, my automatic replication would use one set of encryption parameters, my Mikrotik router backup script would use another, and my website backup script would use a third variant.

So I decided to see if I could keep reasonable security but consolidate all of these into a single set.

For key exchange, I had a choice of diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1. Unfortunately there is no curve25519-sha256@libssh.org or any of the other algorithms that are considered more secure.

For a while I considered using diffie-hellman-group14-sha1, as it uses a 2048-bit prime, but its abandonment by modern SSH versions made me go with diffie-hellman-group-exchange-sha256. As this method allows for custom groups, it should theoretically be better, but it also allows the server to set up a connection with known weak parameters. As the servers are under my control, that should not pose a huge issue here.

For the cipher my hands were extremely tied - Mikrotik, my router of choice, supports only aes256-ctr and aes192-ctr. Both are of acceptable security, so I went with the faster one: aes192-ctr.

For message authentication Mikrotik was again extremely limited - only hmac-sha2-256 and hmac-sha1 were supported. While I was tempted to go with hmac-sha1, which is still secure enough despite SHA-1 being broken (the HMAC construction really does make a difference), I went with hmac-sha2-256 as the former might get obsoleted soon.

My final set of “standard” parameters is as follows:

-2 -o KexAlgorithms=diffie-hellman-group-exchange-sha256 -c aes192-ctr -o MACs=hmac-sha2-256

The additional -2 parameter is not strictly encryption related, but I find it very reasonable to enforce SSH protocol version 2.
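
As an illustration of how these get used, here is a hypothetical backup pull from a Mikrotik router with the consolidated parameters (the host name, user, and file names are made up):

scp -2 -o KexAlgorithms=diffie-hellman-group-exchange-sha256 -c aes192-ctr -o MACs=hmac-sha2-256 admin@router.example.com:backup.rsc /backups/router/backup.rsc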

Running Script Without Forking

The default way of running scripts on Linux is that the shell forks a new process based on the hashbang (#!) found in the first line and hands the rest of the content to that process. And this works beautifully most of the time.

But what if we really need something found only in our current shell?

Fortunately, as long as you are using bash, it is easy to run a script without creating a separate shell. Just prefix it with a dot (.):

. ./myScript

Some restrictions apply, of course - the biggest gotcha being that the script should either be a bash script or contain only simple commands, as its content will be executed directly in the current shell regardless of the hashbang (#!) specified.
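
A quick way to see the difference is with a script that only sets an environment variable (the file name here is made up):

cat > setEnv << 'EOF'
#!/bin/bash
export MY_SETTING=42
EOF
chmod +x setEnv

./setEnv                       # runs in a child process; the variable is lost on exit
echo "${MY_SETTING:-unset}"    # prints: unset

. ./setEnv                     # runs in the current shell; the variable persists
echo "${MY_SETTING:-unset}"    # prints: 42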

PS: Yes, this works with other shells too; I use bash here as it is by far the most common shell.

Micro CA

If you decide to run your own certificate authority for internal certificates, you will be annoyed by all the housekeeping tasks involved. This rings especially true if you need a new certificate just a few times a year and a separate, always-ready machine is way too much overhead to handle.

As pretty much all of the above applies to me, I decided to create a helper script that ensures I set things up the same way every time, and I kept it really close to how I would do it manually.

The first action is to create the root CA certificate (it will be saved as ca.cer/ca.key):

./microca.sh -r

Then we can give out, for example, TLS client and server certificates or just something for testing:

./microca.sh -u Client myclient
./microca.sh -u Server myserver
./microca.sh mytest
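
To peek at what was generated, the standard openssl tooling works; this assumes the issued certificates follow the same .cer naming as the root CA above:

openssl x509 -in myserver.cer -noout -subject -issuer -dates
openssl verify -CAfile ca.cer myserver.cer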

It is even possible to create an intermediate CA and use it to create other certificates:

./microca.sh -a intermediate
./microca.sh -c intermediate -u Client myclient
./microca.sh -c intermediate -u Server myserver
./microca.sh -c intermediate mytest

You can download the script from GitHub along with brief documentation; it works on both Linux and Windows (via Git Bash).

[2017-03-17: Setting subjectAltName is also supported.] [2018-12-16: MicroCA has its own page now.]