Linux, Unix, and whatever they call that world these days

Extracting Public SSH Key From a Private One

A common key management method seen in Linux scripts is copying the private and public SSH keys around. While not necessarily the best way to approach things, having your private SSH key around does come in handy when easy automation is needed.

However, there is no need to copy the public key if you are already copying the private one. Since the private key contains everything, you can use ssh-keygen to extract the public key from it:

ssh-keygen -yf ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub

What is the advantage, you ask? Isn’t it easier just to copy two files instead of copying one and dealing with shell scripting for the second?

Well, yes. However, it is also more error-prone as you must always keep the private and public key in sync. If you replace one and by accident forget to replace the other, you will be chasing your tail in no time.
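
If you suspect the two files have already drifted apart, you can compare their fingerprints; the one derived from the private key and the one read from the public key should match. A quick sketch, assuming a reasonably recent OpenSSH (older releases may not accept - as standard input):

ssh-keygen -yf ~/.ssh/id_rsa | ssh-keygen -lf -
ssh-keygen -lf ~/.ssh/id_rsa.pub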

Allowing Root Login For Red Hat QCow2 Image

You should never depend on root login when dealing with an OpenStack cloud. Pretty much all pre-prepared cloud images have it disabled by default. Ideally all your user provisioning should be done as part of the cloud-init procedure, where you either create your own user or work with the default cloud-user and the key you provisioned. But what if you are troubleshooting some weird (network) issue and you need console login for your image?

Well, you can always re-enable the root user by directly modifying the qcow2 image.

To edit qcow2 images, we first need to install libguestfs-tools. On my Linux Mint, that requires the following:

sudo apt-get install libguestfs-tools

Of course, if you are using yum or some other package manager, adjust accordingly. :)

Once installation is done, we simply mount the image into /mnt/guestimage and modify the shadow file to assign a password (changeme in this example) to the root user:

sudo mkdir /mnt/guestimage
sudo guestmount -a rhel-server-7.5-update-3-x86_64-kvm.qcow2 -m /dev/sda1 /mnt/guestimage
sudo sed -i 's/root:!!/root:$1$QiSwNHrs$uID6S6qOifSNZKzfXsmQG1/' /mnt/guestimage/etc/shadow
sudo guestunmount /mnt/guestimage
sudo rmdir /mnt/guestimage
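
The hash in the sed command above corresponds to the password changeme. If you prefer a different password, you can generate your own MD5-crypt ($1$) hash; a quick sketch using openssl, with newpassword being a placeholder:

openssl passwd -1 'newpassword'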

All nodes installed from this image will now allow you to use root login with password authentication. Just don’t forget to remove this change once you’re done troubleshooting.

PS: While I use a Red Hat image in the example, the procedure also applies to CentOS and most other cloud distributions too.

Mounting Encrypted Volume on Mint 19

As I tried to upgrade Linux Mint from 18.3 to 19, all went kaboom and I was forced to decide whether to reinstall the OS from scratch or to try fixing it. Since I was dealing with a virtual machine, reinstalling it from scratch seemed like the better idea.

Once all was installed, I wanted to copy some files from the old volume. As full disk encryption was present, I knew a slightly more complicated mount was needed. In theory, it should all work with the following commands:

sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper
sudo mkdir -p /mnt/encrypted_volume
sudo mount /dev/mapper/encrypted_mapper /mnt/encrypted_volume
sudo cryptsetup luksClose encrypted_mapper

In practice, I got the following error:

sudo mount /dev/mapper/encrypted_mapper /mnt/encrypted_volume
 mount: /mnt/encrypted_volume: unknown filesystem type 'LVM2_member'.

The issue was the volume manager’s dislike for both my current installation and the previous one having exactly the same volume group name - mint-vg - thus refusing to even consider doing anything with my old disk.

Before doing anything else, a rename of the volume group was required. As the names are equal, we need to know the UUID of the secondary volume. The easiest way to distinguish the old volume from the new one is by looking at the Open LV value. If it’s 0, we have our target.

sudo cryptsetup luksOpen /dev/sdb5 encrypted_mapper

sudo vgdisplay
  --- Volume group ---
  VG Name               mint-vg
  Cur LV                2
  Open LV               0
  VG UUID               Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn

sudo vgrename Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn mint-old-vg
  Processing VG mint-vg because of matching UUID Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn
  Volume group "Xu0pMS-HF20-Swb5-Yef3-XsjQ-9pzf-3nW4nn" successfully renamed to "mint-old-vg"

sudo vgchange -ay
  2 logical volume(s) in volume group "mint-vg" now active
  2 logical volume(s) in volume group "mint-old-vg" now active
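
To double-check the rename took, lvs will list all logical volumes together with their (now distinctly named) volume groups:

sudo lvs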

With the volume finally activated, we can proceed with mounting the old disk:

sudo mkdir -p /mnt/encrypted_volume
sudo mount /dev/mint-old-vg/root /mnt/encrypted_volume
sudo umount /mnt/encrypted_volume
sudo cryptsetup luksClose encrypted_mapper

Failed to Load SELinux Policy

After a failed yum upgrade (darn low memory), I noticed my CentOS NTP server was not booting anymore. A look at the console showed a progress bar still loading, but pressing Escape revealed the real issue: Failed to load SELinux policy, freezing.

The first thing to try in that situation is booting without SELinux, and the easiest way I found to accomplish this was pressing e in the boot menu and then adding selinux=0 at the end of the line starting with linux16. Continuing boot with Ctrl+X will load CentOS, but with SELinux disabled.

As I don’t actually run my public-facing servers without SELinux, it was time to fix it. Since I didn’t have the package before, I installed selinux-policy-targeted, but I would equally use reinstall if the package was already present. In any case, running both doesn’t hurt:

sudo yum install -y selinux-policy-targeted
sudo yum reinstall -y selinux-policy-targeted

Finally, we need to let the system know SELinux should be reapplied. This can be done by creating a special .autorelabel file in the root directory, followed by a reboot:

sudo touch /.autorelabel
sudo reboot

During reboot, SELinux will reapply all the labeling it needs, and we can enjoy our server again.
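
Relabeling may take a few minutes. Once the machine is back up, a quick check (assuming the sestatus tool is available) confirms SELinux is enforcing again:

sestatus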

Firefox and Java Console

When you’re dealing with a lot of Linux servers, having a Linux client really comes in handy. My setup consisted of Linux Mint 18, and with it I could perform almost every task. I say almost because one task was always out of reach - viewing the HP iLO console.

Two options were offered there - ActiveX and Java. While ActiveX had obvious platform restrictions, the multi-platform promise of Java made its absence a bit of a curiosity. A quick search on the Internet resolved that curiosity: Firefox 53 and above dropped support for the NPAPI plugin system, and HP was just too lazy and Windows-centric to ever replace it. However, Firefox 52 still has Java support, and that release is even still supported (albeit not after 2018). So why not install it and use it for the Java iLO console?

First we need to download Firefox 52 ESR - the latest version still allowing for the Java plugin. You can download it from Mozilla, but do make sure you select release 52 and the appropriate build for your computer (64-bit or 32-bit).

With the release downloaded, we can install it manually into a separate directory (/opt/firefox52) so as not to disturb the latest version. In addition to Firefox, we’ll also need the IcedTea plugin installed:

tar -xjf ~/Downloads/firefox-52.8.0esr.tar.bz2

sudo mv firefox /opt/firefox52

sudo apt install -y icedtea-plugin

Of course, just installing it is worthless if we cannot start it, and for this a desktop entry is helpful. I like to use a separate profile as that makes running the newest release and this one side by side possible. After this is done, you’ll find “Firefox 52 ESR” right next to the normal Firefox entry.

mkdir -p ~/.mozilla/firefox52

sudo bash -c 'cat << 'EOF' > /usr/share/applications/firefox52.desktop 
[Desktop Entry]
Name=Firefox 52 ESR
GenericName=Web Browser
Exec=/opt/firefox52/firefox --no-remote --profile ~/.mozilla/firefox52
Icon=firefox
Type=Application
Categories=GNOME;GTK;Network;WebBrowser;
EOF'

The final step is going to “about:plugins” within Firefox 52 ESR and selecting “Always Activate” for the IcedTea plugin.
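
You can also launch it directly from a terminal for a first test (ilo.example.com being a stand-in for your actual iLO address):

/opt/firefox52/firefox --no-remote --profile ~/.mozilla/firefox52 https://ilo.example.com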

Now you can use Firefox 52 ESR whenever you need the Java Console.

Resolving Interrupted Yum Upgrade

Running a recent CentOS update on a machine with 512 MB of RAM caused yum to run out of memory. Thinking nothing of it, I stopped it to see what could be done. After stopping all services, I was greeted with “Warning: RPMDB altered outside of yum” and “Found 93 pre-existing rpmdb problem(s), ‘yum check’ output follows”.

After trying a lot of things, I found the one that works. Removing the older package without removing its dependencies and then reinstalling the newer one worked like a charm:

rpm --erase --nodeps --noscripts yum-plugin-fastestmirror-1.1.31-42.el7.noarch
yum reinstall -y yum-plugin-fastestmirror

Of course, the same can be scripted, but I leave that to more daring souls. :)
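
For those daring souls, here is a rough sketch of what such a script might look like. PKGS is a hypothetical, manually compiled list of the older duplicate packages reported by yum check; verify each entry before letting this loose on your system:

#!/bin/bash
PKGS="yum-plugin-fastestmirror-1.1.31-42.el7.noarch"
for PKG in $PKGS; do
    rpm --erase --nodeps --noscripts "$PKG"   #remove the old duplicate, keep dependencies
    yum reinstall -y "${PKG%-*-*}"            #reinstall the surviving version by bare name
done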

PS: Yes, the same procedure works on Red Hat too.

Webalizer With a Dash of Logrotate

Recently I decided to stop using Google Analytics for my website traffic analysis. Yes, it’s still probably the best analytics tool out there, but I actually don’t care about details that much. The only thing I care about is the trend - is the site getting better or worse - and nothing much else. For that purpose, a simple Webalizer setup would do.

My Webalizer setup was actually as close to plain as it gets. The only trick up my sleeve was having a separate configuration file for each of my sites. Of course, since I use logrotate to split my Apache traffic logs, I also needed to add a bit of prerotate action into /etc/logrotate.conf to ensure I don’t miss any entries.
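
For reference, such a per-site configuration can be quite minimal; a sketch with hypothetical paths and host name:

LogFile     /var/log/apache2/example.com-access.log
OutputDir   /srv/www/example.com/stats
HostName    example.com
Incremental yes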

My first try was this:

…
  prerotate
    for FILE in /srv/configs/**/webalizer.conf
    do
      /usr/bin/webalizer -c $FILE
    done
  endscript
…

And, while this did wonders from the command line, it seemed to do absolutely nothing when executed by logrotate itself. The reason for this misbehavior was that logrotate (and crontab) uses sh as its shell and not bash.

To get around this, a POSIX-compatible command is needed:

…
  prerotate
    find /srv/configs/ -name "webalizer.conf" -exec /usr/bin/webalizer -c {} \;
  endscript
…

This now executes just before my Apache log file gets rotated out, and Webalizer gets one last chance to catch up with its stats.

Chrony NTP Server on CentOS 7.3

I already wrote about setting up a pool NTP server using the good old ntpd daemon. However, CentOS comes with another daemon installed by default - Chrony. Let me guide you through the setup of a Chrony NTP server on CentOS 7.3 for the purpose of joining the pool.

While an NTP server needs a good (single-core) CPU and a fast network, it cares nothing about RAM. That makes it ideal for even the smallest cloud instances. Both the $5 Linode (1024 MB RAM) and the $2.50 Vultr (512 MB RAM) will work wonderfully. Just don’t set your pool speed over 100 Mbps to avoid any extra bandwidth charges.

Immediately after CentOS has been installed, it’s best to update the system to the latest and greatest:

yum -y upgrade

As Chrony is installed by default on CentOS, there are no new packages to install. However, the firewall still must allow external requests:

firewall-cmd --permanent --add-service ntp
firewall-cmd --reload

Just installing NTP is no good if we don’t have any servers to synchronize with. To set those up, editing /etc/chrony.conf is needed. Depending on which data center you selected for the server, you will want to have 4 to 7 servers from the stratum one list physically close to your location, while obeying any restrictions noted (especially whether a server is open access or not). For a virtual machine located in Miami, the following servers would work (do not forget to remove the servers already in the file):

server ntp-s1.cise.ufl.edu iburst
server time-a.bbnx.net iburst
server time-b.bbnx.net iburst
server time-a.timefreq.bldrdoc.gov iburst
server time-c.timefreq.bldrdoc.gov iburst
…

Of course, our server won’t be a server until we add the following lines to /etc/chrony.conf so that external connections are accepted while control is limited to the local machine only:

…
allow all
bindcmdaddress 127.0.0.1
bindcmdaddress ::1

Now finally, with all configured, we can restart the daemon:

systemctl restart chronyd

To verify the server is working, the first step is to see all the sources the server synchronizes with. As soon as one of them has an asterisk next to its name, all is good.

chronyc sources
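
For a more detailed view of the synchronization status - current stratum, offset, and the like - chronyc also offers the tracking command:

chronyc tracking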

A bit more realistic check is actually requesting the time from a remote computer:

ntpdate -q <ip>

Once the server has been working for a while, it can be added to the NTP.org pool. Since the server has both an IPv4 and an IPv6 address (I surely hope you added IPv6 ;)), it will be necessary to add it twice. At the start it will only be monitored but, as its score increases, you will soon start seeing traffic from all over the world.

PS: It would be wise to keep 1 Mbps as the defined speed until you see how much traffic you’re actually getting. Once the server has been handling requests at the basic speed for a while, you can think about increasing the number.

Reboot E-mail Via Google's SMTP

Setting up an NTP server is easy. But actually monitoring that server is a bit more difficult. The bare minimum should be getting an e-mail after a reboot. However, even that simple step requires a bit of setup.

First you need to install sendmail, its configuration compiler, and a few SASL authentication methods:

yum install -y sendmail sendmail-cf cyrus-sasl-plain cyrus-sasl-md5

The next step is preparing the authentication database (do substitute your own e-mail and password):

mkdir -p -m 700 /etc/mail/authinfo
echo 'AuthInfo: "U:root" "I:relay@gmail.com" "P:password"' > /etc/mail/authinfo/gmail
makemap hash /etc/mail/authinfo/gmail < /etc/mail/authinfo/gmail

The last configuration step is adding the following lines into /etc/mail/sendmail.mc just ABOVE the first MAILER line:

…
define(`SMART_HOST',`[smtp.gmail.com]')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
define(`confAUTH_OPTIONS', `A p')dnl
define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.trust.crt')dnl
TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
FEATURE(`authinfo',`hash -o /etc/mail/authinfo/gmail.db')dnl
…

With configuration out of the way, we can proceed with “compiling” that new configuration and starting the daemon:

make -C /etc/mail
systemctl start sendmail
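
Since the whole point is getting an e-mail after a reboot, the service should also be enabled to start on boot (assuming it is not already):

systemctl enable sendmail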

Finally, we are ready to test e-mail via the command line:

echo "Subject: Test via sendmail from `hostname`" | sendmail -v ^^youremail@example.com^^

Assuming everything works, the only remaining task is adding a cron task (crontab -e):

@reboot  echo -e "Subject: `hostname` status\n\nHost rebooted at `date -R`." | /usr/sbin/sendmail -v youremail@example.com

Now every reboot will result in an e-mail message.

Random Slacking

It all started as a joke.

As a few of us started using Slack, it seemed oddly appropriate that the #random channel should have a freshly squeezed random number every day. But there were some complaints about the quality. The first issue arose when 42 was randomly selected a few days in a row, and it all went downhill from there, culminating in a whole weekend without a random number. Unforgivable!

To replace such a flawed human being, a simple script was needed. It was clear from the get-go that the script would be written in Bash. Not only is it my favorite, but it is also supported on my personal servers and extremely easy to schedule via crontab.

Although a single-digit number had made a previous appearance, a single-person decision was made that two-digit numbers look the best and should be used going forward. Due to the previous issue with the number 42, it was also decided that this number cannot appear too often. After all, you don’t answer the question of life, the universe, and everything more than once in a blue moon.

To keep things low-key, it was necessary to avoid any Slack bot interface. No, the message should always appear to come from a user. After a while, the chat.postMessage call was discovered, enabling just that. It did require a (legacy) token and came at the cost of future extensibility, but it also allowed a lot of faking, so it all worked out.

In any case, here is the final script:

#!/bin/bash

TOKEN="xoxp-111111111111-222222222222-333333333333-abcdefabcdefabcdefabcdefabcdef"
CHANNEL="random"
USERNAME="myuser"

TAGLINE_FILE="/srv/taglines.txt"

NUMBER=$(( RANDOM % 90 + 10 ))  #random number 10-99
if (( $NUMBER == 42 )) ; then NUMBER=$(( RANDOM % 90 + 10 )) ; fi  #about 0.01% chance to get 42 the second time

TAGLINE=`shuf -n 1 $TAGLINE_FILE | cut -d'*' -f1`

TEXT="Random number of the day is ${NUMBER}.\\n${TAGLINE}"

curl -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -H 'Content-type: application/json; charset=utf-8' \
     --data "{\"channel\":\"$CHANNEL\",\"text\":\"$TEXT\",\"as_user\":\"true\",\"username\":\"$USERNAME\"}" \
     https://slack.com/api/chat.postMessage
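
Scheduling it then comes down to a single crontab entry (crontab -e); the path and the 9 AM weekday schedule below are just a hypothetical example:

0 9 * * 1-5 /srv/random-number.sh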

PS: No, the illusion is not perfect, as there will be hints this was sent via the API and not by a human being. However, the hints are small enough that not many will notice.