Linux, Unix, and whatever they call that world these days

Tailing Two Files

As I got my web server running, I wanted to keep an eye on the Apache logs for potential issues. My idea was a basic script that would, on a single screen, show both access and error logs colored green/yellow/red depending on HTTP status and error severity. And I didn’t want to see the whole log - I wanted to keep information at a minimum - just enough to determine whether things are going well or badly. If I see something suspicious, I can always check the full logs.

The error log is easy enough, but parsing the access log in the common log format (aka NCSA) is annoyingly difficult due to its “interesting” choice of delimiters.

Just look at this example line:

108.162.245.230 - - [26/Dec/2017:01:16:45 +0000] "GET /download/bimil231.exe HTTP/1.1" 200 1024176 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"

The first three entries are space-separated - easy enough. Then comes the date, in probably the craziest format one could find, enclosed in square brackets. Then we have the request line in quotes, followed by a few more space-separated values. And we finish with a few quoted values again. Command-line parsing was definitely not on the mind of whoever “designed” this.

With Apache you can of course customize the log format - but guess what? While you can make something that works better with command-line tools, you will lose the plethora of tools that already understand the NCSA format - most notably Webalizer. It might be a bad choice for the command line, but it’s the standard regardless.

Fortunately, the extreme flexibility of Linux tools means you can do some trickery to parse fields even in something as mangled as NCSA.
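
For instance, gawk’s FPAT variable describes what a field looks like instead of what separates fields. A quick sketch on the example line above (user agent shortened for brevity; field numbers match the script below):

echo '108.162.245.230 - - [26/Dec/2017:01:16:45 +0000] "GET /download/bimil231.exe HTTP/1.1" 200 1024176 "-" "Mozilla/5.0"' | gawk '
  BEGIN { FPAT="([^ ]+)|(\"[^\"]+\")|(\\[[^\\]]+\\])" }
  { print "date:    " $4
    print "request: " $5
    print "status:  " $6 }
'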

After a bit of trial and error, my final product was a script looking a bit like this:

#!/bin/bash

LOG_DIRECTORY="/var/www"

trap 'kill $(jobs -p)' EXIT

tail -Fn0 $LOG_DIRECTORY/apache_access.log | gawk '
  # FPAT describes fields: bare words, "quoted strings", and [bracketed] dates
  BEGIN { FPAT="([^ ]+)|(\"[^\"]+\")|(\\[[^\\]]+\\])" }
  {
    code=$6
    request=$5

    ansi="0"
    if (code==200 || code==206 || code==303 || code==304) {
      ansi="32;1"
    } else if (code==301 || code==302 || code==307) {
      ansi="33;1"
    } else if (code==400 || code==401 || code==403 || code==404 || code==500) {
      ansi="31;1"
    }
    printf "%c[%sm%s%c[0m\n", 27, ansi, code " " request, 27
  }
' &

tail -Fn0 $LOG_DIRECTORY/apache_error.log | gawk '
  BEGIN { FPAT="([^ ]+)|(\"[^\"]+\")|(\\[[^\\]]+\\])" }
  {
    level=$2
    text=$5 " " $6 " " $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13 " " $14 " " $15 " " $16

    ansi="0"
    if (level~/info/) {
      ansi="32"
    } else if (level~/warn/ || level~/notice/) {
      ansi="33"
    } else if (level~/emerg/ || level~/alert/ || level~/crit/ || level~/error/) {
      ansi="31"
    }
    printf "%c[%sm%s%c[0m\n", 27, ansi, level " " text, 27
  }
' &

wait

The script tails both the error and access logs, waiting for Ctrl+C. Upon exit, the trap kills the spawned background jobs.

For the access log, the gawk script checks the status code and colors entries accordingly. Green is for 200 OK, 206 Partial Content, 303 See Other, and 304 Not Modified; yellow for 301 Moved Permanently, 302 Found, and 307 Temporary Redirect; red for 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, and 500 Internal Server Error. All other codes remain default/gray. Only the status code and the request line are printed.

For the error log, the gawk script checks only the error level. Green is used for info; yellow for warn and notice; red for emerg, alert, crit, and error. All others (essentially debug and trace) remain default/gray. The printout consists of just the error level and the first dozen words.

This script not only shortens quite long error and access log lines to their most essential parts, but the coloring makes the most important issues visible at a glance - even when lines are flying by. Additionally, having both logs interleaved lends itself nicely to a single-screen monitoring station.

[2018-02-09: If you are running this via SSH on a remote server, don’t forget to use -t for proper cleanup after the SSH connection fails.]
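
For example, to watch from a workstation (hypothetical host and script path):

ssh -t user@myserver /usr/local/bin/tail-two-logs.sh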

Creating ISO From the Command Line

Creating read-only archives is often beneficial. This is especially so when dealing with something standard across many systems. And you will rarely find anything more standard than CD/DVD .iso files. You can mount them on both Windows 10 and Linux without any issues.

There are quite a few programs that will allow you to create .iso files, but they are often overflowing with ads. Fortunately, every Linux distribution comes with a small tool capable of the same without any extra annoyances. That tool is called [mkisofs](https://linux.die.net/man/8/mkisofs).

Basic syntax is easy:

mkisofs -input-charset utf-8 -udf -V "My Label" -o MyDVD.iso ~/MyData/

Setting the input charset is essentially only needed to suppress a warning. UTF-8 is the default anyhow and in 99% of cases exactly what you want.

Using UDF as the output format enables a bit more flexible file and directory naming rules. The standard ISO 9660 format (even at level 3) is so full of restrictions that it is annoying at best - the most notable being support for only uppercase file names. UDF allows Unicode file names up to 255 characters in length and has no limit on directory depth.

Lastly, a DVD label is always a nice thing to have.
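
To sanity-check the result, the image can be loop-mounted (using /mnt/iso as an example mount point):

sudo mkdir -p /mnt/iso
sudo mount -o loop MyDVD.iso /mnt/iso
ls /mnt/iso
sudo umount /mnt/iso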

Solving “Failed to Mount Windows Share”

Most of the time I access my home NAS via Samba shares. For increased security and performance I force it to use the SMB v3 protocol. And therein lies the issue.

Whenever I tried to access my NAS from a Linux Mint machine using the Caja file browser, I would get the same error: “Failed to mount Windows share: Connection timed out.” It wasn’t a connectivity issue, as everything would work if I dropped my NAS to SMB v2. And it wasn’t an unsupported feature either, as Linux has supported SMB3 for a while now.

It was just a case of a slightly unfortunate default configuration. Although the man pages say the client max protocol is SMB3, something simply doesn’t click. However, if one manually specifies that only SMB3 is to be used, everything magically starts working.

Configuring it is easy; in /etc/samba/smb.conf, within the [global] section, one needs to add

client min protocol = SMB3
client max protocol = SMB3

Alternatively, this can also be done with the following one-liner:

sudo sed -i "/\\[global\\]/a client min protocol = SMB3\nclient max protocol = SMB3" /etc/samba/smb.conf
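
To verify the settings landed, testparm can dump the effective configuration:

testparm -s | grep "client .* protocol"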

Once these settings are in, the share is accessible.
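
The protocol version can also be tested manually with a cifs mount (hypothetical server and share names; requires cifs-utils):

sudo mount -t cifs -o vers=3.0,username=user //nas/share /mnt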

Private Internet Access Client On Encrypted Linux Mint

Upon getting Linux Mint installed, I went ahead with installing the Private Internet Access VPN client. All the same motions as usual, albeit now with a slightly different result - it wouldn’t connect.

Looking at the logs ($HOME/.pia_manager/log/openvpn.log) just gave cryptic “operation not permitted” and “no such device” errors:

 SIOCSIFADDR: Operation not permitted
 : ERROR while getting interface flags: No such device
 SIOCSIFDSTADDR: Operation not permitted

A quick internet search brought me to a Linux Mint forum post where exactly the same problem was described. And the familiarity didn’t stop there; the author had one other thing in common with me - an encrypted home folder - the root cause of the whole problem. It sounded like a perfect fit, so I killed the PIA client and went with his procedure:

sudo mkdir /home/pia
sudo chown -R $USER:$USER /home/pia
mv ~/.pia_manager /home/pia/.pia_manager
ln -s /home/pia/.pia_manager ~/.pia_manager

However, this didn’t help. Still the same issue in my log files.

So I decided to go with the nuclear option. First I killed the PIA client (again) and removed PIA completely, together with all my modifications:

rm ~/.pia_manager
rm -R /home/pia
sudo rm ~/.local/share/applications/pia_manager.desktop

With everything perfectly clean, I decided to start with a fresh directory structure, essentially the same as in the original solution:

sudo mkdir -p /home/pia/.pia_manager
sudo chown -R $USER:$USER /home/pia
ln -s /home/pia/.pia_manager ~/.pia_manager

Then I repeated the installation of the PIA client:

cd ~/Downloads
tar -xzf pia-v72-installer-linux.tar.gz
./pia-v72-installer-linux.sh

And it worked! :)

Installing WordPress on Linode CentOS

For the purpose of testing new stuff, it is always handy to have a WordPress installation ready. And probably one of the cheapest ways to get one is to use a virtual server provider - in my case Linode.

I won’t be going into the specifics of creating a server on Linode as it is trivial. Instead, this guide starts at the moment your CentOS is installed and you are logged in.

First of all, Linode’s CentOS installation has the firewall disabled. As this server will be open to the public, enabling the firewall is not the worst idea ever:

systemctl start firewalld

systemctl enable firewalld

firewall-cmd --state
 running

Next you need to install the database:

yum install -y mariadb-server

To have the database run as a separate user instead of root, you need to add user=mysql to /etc/my.cnf. You can do that either manually or use the following command to the same effect:

sed -i "/\[mysqld\]/auser=mysql" /etc/my.cnf

Now you can start MariaDB and ensure it starts automatically upon reboot.

systemctl start mariadb

systemctl enable mariadb
  Created symlink from …

I always highly advise securing the database a bit. Luckily, there is a script for that. Going with the defaults will ensure quite a secure setup.

mysql_secure_installation

A good test for the MariaDB setup is creating the WordPress database:

mysql -e "CREATE DATABASE ^^wordpress^^;"

mysql -e "GRANT ALL PRIVILEGES ON ^^wordpress^^.* TO ^^'username'^^@'localhost' IDENTIFIED BY '^^password^^';"

mysql -e "FLUSH PRIVILEGES;"
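
To confirm the new account works, log in with it and run a trivial query (an empty result still proves the grants are in place):

mysql -u ^^username^^ -p^^password^^ ^^wordpress^^ -e "SHOW TABLES;"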

With the database sorted out, you can move on to installing Apache and PHP:

yum install -y httpd mod_ssl php php-mysql php-gd

We can start Apache at this time and allow it to start automatically upon reboot:

systemctl start httpd

systemctl enable httpd
 Created symlink from …

With all else installed and assuming you have firewall running, it is time to poke some holes through it:

firewall-cmd --add-service http --permanent
 success

firewall-cmd --add-service https --permanent
 success

firewall-cmd --reload
 success

If all went well, you can now see the welcome page when you point your favorite browser to the server’s IP address.
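
The same check works from the terminal too, if curl is available (the address is a placeholder):

curl -I http://^^203.0.113.10^^/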

Now finally you get to install WordPress:

yum install -y wget

wget http://wordpress.org/latest.tar.gz -O /var/tmp/wordpress.tgz

tar -xzvf /var/tmp/wordpress.tgz -C /var/www/html/ --strip 1

chown -R apache:apache /var/www/html/

Of course, you will need to create the initial configuration file - the sample is a good beginning:

cp /var/www/html/wp-config-sample.php /var/www/html/wp-config.php

sed -i "s/database_name_here/^^wordpress^^/" /var/www/html/wp-config.php

sed -i "s/username_here/^^username^^/" /var/www/html/wp-config.php

sed -i "s/password_here/^^password^^/" /var/www/html/wp-config.php

# replace each "put your unique phrase here" placeholder with a random UUID, one per pass
while grep -q "put your unique phrase here" /var/www/html/wp-config.php; do
  sed -i "0,/put your unique phrase here/s//$(uuidgen -r)/" /var/www/html/wp-config.php
done

With the wp-config.php fields fully filled, you can go to the server’s IP address and follow the remaining WordPress installation steps (e.g. site title and similar details).

PS: While this is a guide for Linode and CentOS, it should also work with other Linux flavors provided you swap httpd for its equivalent (e.g. apache2).

Interface Stats

Sometimes you just want to check how many packets and bytes are transferred via a network interface. For my Linode NTP server I solved that need using the following script:

#!/bin/bash

INTERFACE=eth0

LINE_COUNT=`tput lines`    # terminal height; header repeats once per screenful
LINE=-1

while true
do
    if (( LINE % (LINE_COUNT-1) == 0 ))
    then
        echo "INTERFACE   RX bytes packets     TX bytes packets"
    fi
    LINE=$(( LINE+1 ))

    RX1_BYTES=$RX2_BYTES
    TX1_BYTES=$TX2_BYTES
    RX1_PACKETS=$RX2_PACKETS
    TX1_PACKETS=$TX2_PACKETS
    sleep 1
    # read cumulative counters from sysfs
    RX2_BYTES=`cat /sys/class/net/$INTERFACE/statistics/rx_bytes`
    TX2_BYTES=`cat /sys/class/net/$INTERFACE/statistics/tx_bytes`
    RX2_PACKETS=`cat /sys/class/net/$INTERFACE/statistics/rx_packets`
    TX2_PACKETS=`cat /sys/class/net/$INTERFACE/statistics/tx_packets`

    # skip the very first iteration; there is no previous sample to diff against
    if [[ "$RX1_BYTES" != "" ]]
    then
        RX_BYTES=$(( RX2_BYTES - RX1_BYTES ))
        TX_BYTES=$(( TX2_BYTES - TX1_BYTES ))
        RX_PACKETS=$(( RX2_PACKETS - RX1_PACKETS ))
        TX_PACKETS=$(( TX2_PACKETS - TX1_PACKETS ))

        printf "%-7s  %'11d %'7d  %'11d %'7d\n" $INTERFACE $RX_BYTES $RX_PACKETS $TX_BYTES $TX_PACKETS
    fi
done
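
If you just need a one-off snapshot, the same counters the script diffs can be read directly from sysfs:

cat /sys/class/net/eth0/statistics/rx_bytes
cat /sys/class/net/eth0/statistics/tx_bytes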

Custom Directory for Apache Logs

On my web server I wanted to use a separate directory for my logs. All I needed was to configure the ErrorLog and CustomLog directives, and that’s it. Well, I did that only to get the following error: Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.

And no, there weren’t any details worth mentioning in either systemctl status httpd.service or journalctl -xe.

To cut a long story short, after a bit of investigation I narrowed the problem down to SELinux, which is enabled by default on CentOS. Armed with that knowledge, I simply transferred the security context from the default log directory to my desired location:

chcon -R --reference=/etc/httpd/logs/ ^^/var/www/logs/^^
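
To confirm the context was transferred, ls -Z shows it for both directories:

ls -dZ /etc/httpd/logs/ ^^/var/www/logs/^^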

With that simple adjustment, my httpd daemon started and my logs lived happily ever after.

Linode NTP

One of the features I added to Bimil was NTP client support for time-based two-factor authentication. For this I needed an NTP server, so I turned to the ntp.org pool and requested a vendor zone. Once the zone got approved, I suddenly had an infinite* number of NTP servers at my disposal.

So, when I decided to give Linode’s $5 virtual server a try, I didn’t want to just create a dummy machine. I also wanted to do something for the community. As the NTP pool service is one of the invisible pillars of Internet-connected devices, and I was really happy such a service was provided to me for free, the decision was easy: I was going to build an NTP server.

Creating an account on Linode was a breeze, as was creating the machine. It was literally a click-next, click-next process. Once I finally logged on, the first action was to update the system to the latest packages. Surprisingly, on Linode there was literally nothing to do - everything was already up to date. Awesome!

yum update -y
 …
 No packages marked for update

By default, Linode’s CentOS installation has the firewall disabled. As this server will be open to the public, enabling the firewall is not the worst idea ever:

systemctl start firewalld

systemctl enable firewalld

firewall-cmd --state
 running

And, while dealing with the firewall, you might as well allow NTP through and check that the configuration is correct:

firewall-cmd --permanent --add-service ntp
 success

firewall-cmd --reload
 success

firewall-cmd --list-all
 public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client ^^ntp^^
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

With firewall configuration completed, you can finally install NTP:

yum install -y ntp

And this brings you to the most involved part of the process. You need to go over the available stratum 1 time servers and select between four and seven of them for your devious synchronization purposes. Which servers should you select? As long as they are reasonably close (in terms of network distance) you will be fine.

Using your favorite editor, you need to adjust the /etc/ntp.conf file. Following the ntp.org recommendations always worked for me, with a slight adjustment in the form of a separate log file and forcing IPv4 resolution for servers. Quite a few IPv6-capable servers only serve clients over IPv6 and don’t like peering with other servers over it. I personally use the following configuration (don’t forget to adjust the server names):

driftfile /var/lib/ntp/drift

restrict -4 default kod limited nomodify notrap nopeer noquery
restrict -6 default kod limited nomodify notrap nopeer noquery

restrict -4 127.0.0.1
restrict -6 ::1

server -4 ^^clock.fmt.he.net^^ iburst
server -4 ^^clock.sjc.he.net^^ iburst
server -4 ^^usno.hpl.hp.com^^ iburst
server -4 ^^clepsydra.dec.com^^ iburst
server -4 ^^tick.ucla.edu^^ iburst
server -4 ^^time-a.timefreq.bldrdoc.gov^^ iburst
server -4 ^^time-c.timefreq.bldrdoc.gov^^ iburst

logfile /var/log/ntp.log

With the configuration ready, it is the moment of truth - start the NTP daemon and configure it to start automatically upon boot. Don’t forget to disable chrony too:

systemctl start ntpd
systemctl enable ntpd
systemctl disable chronyd

With everything up, wait a couple of minutes while checking the state with ntpstat or ntpq. Forgetting about it for an hour or two will save you a lot of angst :) I consider sync good enough once the polling interval goes up to 1024 s.

watch "ntpq -np ; echo ; ntpstat"
      remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 *66.220.9.122    .CDMA.           1 u   41  512  377    2.022    6.680   6.798
 +216.218.254.202 .CDMA.           1 u   77 1024  377    2.127    5.663   6.180
 +204.123.2.72    .GPS.            1 u  257  512  377    4.908    2.753   5.031
 +204.123.2.5     .GPS.            1 u   40  512  377    5.232    5.278   6.052
 +164.67.62.194   .GPS.            1 u  532  512  377    9.978   -0.637   3.795
 +132.163.4.101   .NIST.           1 u  362 1024  377   35.226    5.489   7.610
 +132.163.4.103   .NIST.           1 u  430  512  377   35.148    5.353   7.607
 synchronised to NTP server (66.220.9.122) at stratum 2
   time correct to within 19 ms
   polling server every 1024 s

It will take some time for the other servers to “discipline” yours, so do be patient. If a server keeps showing the INIT refid for a while, this might indicate a permanent issue (e.g. the server might be down) or just something temporary (e.g. the server might be overloaded). If a server is not reachable for a while, toss it out and select another one from the stratum 1 list (followed by systemctl restart ntpd).

I personally gave the server an hour or two to get into shape before proceeding with the final step - adding it to the pool. This can be done on the ntp.org management pages and is as easy as simply adding the server using either its host name or IP address.

After monitoring the server for some time, and assuming its time is stable, your score will rise and you get to be part of the collective NTP pool.

* some restrictions apply

Custom Samba Sizing

After reorganizing my ZFS datasets a bit, I suddenly noticed I couldn’t copy any file larger than a few MB. A bit of investigation later, I figured out why.

My ZFS datasets were as follows:

zfs list
 NAME                            USED  AVAIL  REFER  MOUNTPOINT
 Data                           2.06T   965G    96K  none
 Data/Users                      181G   965G    96K  none
 Data/Users/User1               44.3G  19.7G  2.23G  /Data/Users/User1
 Data/Users/User2               14.7G  49.3G   264K  /Data/Users/User2
 Data/Users/User3                224K  64.0G    96K  /Data/Users/User3

And my Samba share was pointing to /Data/Users/.

Guess what? The path /Data/Users was not pointing to any dataset, as the parent dataset Data/Users was not mounted. Instead it pointed to the memory disk md0, which had just a few MB free. And Samba doesn’t check the full path for disk size - only the share’s root.
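
A quick df makes this visible, as it reports the filesystem actually backing the share’s root (in my case, the md0 memory disk):

df -h /Data/Users/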

The easiest way to work around this would be to simply mount the parent dataset. But why go for easy?

A bit more complicated solution is getting Samba to use a custom script to determine free space. We can then use this script to return the available disk space of our parent dataset instead of relying on Samba’s built-in calculation.

To do this, we first create script /myScripts/sambaDiskFree:

#!/bin/sh
# map the current directory to a dataset name by stripping the leading slash
DATASET=`pwd | cut -c2-`
# print total and available space in bytes; the trailing 1 sets the block
# size to 1 byte (without it, Samba assumes 1024-byte blocks)
zfs list -H -p -o available,used $DATASET | awk '{print $1+$2 " " $1 " 1"}'

This script checks the current directory, maps its name to a dataset (in my case as easy as stripping the first slash character), and returns two numbers - total disk space followed by available disk space, both in bytes - plus a trailing block size of 1 so Samba doesn’t interpret the values as 1024-byte blocks.

Once the script is saved and marked as executable (chmod +x), we just need to reference it in Services > CIFS/SMB > Settings under Additional parameters:

dfree command = /myScripts/sambaDiskFree

This will tell Samba to use our script for disk space determinations.
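
A quick sanity check is to run the script by hand from a directory inside the share and compare its output against zfs list:

cd /Data/Users
/myScripts/sambaDiskFree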

My Backup ZFS Machine (Correction)

Before going on vacation I finished setting up my new ZFS backup machine, initialized the first replication, and happily went off to see the big hole.

When I remotely connected to my main machine a few days later, I found my sync command had failed before finishing. I also couldn’t connect to my backup server. Well, that was unfortunate, but I had enough foresight to connect it via a smart plug, so I did the power-off/power-on dance. The system booted and I restarted the replication. I checked on it a few days later, only to find it stuck again. Rinse, repeat. And the next day too. And the one after that…

Why? I have no idea, as I was connected only remotely, and I literally came home on the last day I could still return it to Amazon. Since I had raised a case with Supermicro regarding a video card error (5 beeps) that seemed hardware-related, my suspicions pointed squarely at a motherboard issue. I know the memory was fine, as I tested it thoroughly in another machine, and the power supply is happily working even now.

For my peace of mind I needed something that would allow me not only to reboot the machine but also to access its screen and keyboard directly, without any OS involvement. Variants are known under different names and with slightly different execution - whether it is KVM, iLO, AMT, or IPMI.

So I decided to upgrade to the more manageable Supermicro A1SRi-2558F. With its C2558 processor (4 cores) and quad LAN it was definitely overkill for my purpose, but it was the cheapest IPMI-capable board I could find at $225 (compared to $150 for the X10SBA-L). Unfortunately for my budget, its ECC requirement meant adding another $35 for ECC RAM. And of course, the different layout made my 6" right-angle SATA cables useless, so now they decorate my drawer.

The board itself is really full of stuff, with a total of six USB ports (four of them USB 3.0), one of which is even soldered onto the motherboard for internal USB needs. Having four gigabit ports is probably useless, as the Atom is unlikely to be able to drive them all at full speed, but I guess it does allow for a more relaxed network configuration. Moreover, two SATA3 and four SATA2 ports just scream NAS. And the rear bracket on my 1U case fits the rear I/O perfectly. Frankly, the only thing missing is HDMI, albeit IPMI greatly reduces the chance of ever needing it.

The total difference in system cost was $100 and it gave me a rock-solid experience (it hasn’t crashed a single time in more than a month). Here is the updated shopping list:

Supermicro SuperChassis 504-203B           $100
Supermicro A1SRI-2558F                     $225
Kingston ValueRAM 4GB 1600MHz DDR3L ECC    2x $45
SATA cable, 8", round (2x)                 $7
TOTAL                                      $422