Ham Check 1.00


This program is just one of many programs helping with preparation for the amateur radio exams. What differentiates it from other similar programs is the support for keyboard-only operation, the ability to zoom into the text, and showing the image next to the text. While these might seem like basic options, not many other exam applications support them.

The whole idea is to have a simple program allowing for quick learning and easy presentation of the exam content to others.

Download is available on the program’s page.

Cloning My Website

One disadvantage of having a reliable hosting provider is that you tend to forget about backups. In my ten years with Plus hosting there was not a single case of data loss. Regardless, I wanted to go the “better safe than sorry” route and make automated backups. And, while I am at it, I might as well use them to create a “production replica” environment for testing.

My website environment is based on CentOS 6, a MySQL database, and Apache. Ideally I would replicate exactly the same environment, but in this case I decided to upgrade all components to the latest versions. In the case of Linux that meant going with CentOS 7.1 (minimal).

Installation of CentOS is as simple as it gets. I basically click Next until there is no button left to press. :) The only real possibility of error is forgetting to enable the Ethernet network adapter - not a catastrophic mistake, just an annoying one. Once the install was done, additional packages were in order:

yum install httpd mariadb-server php php-mysql rsync

To connect to my website I created a new SSH key pair:

ssh-keygen -b 4096

I appended the newly created .ssh/id_rsa.pub key to .ssh/authorized_keys on my web server. That meant I could log in and copy files without any passwords - great for scripting.
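
The append step itself can be done in one go; a common idiom (assuming the default key path, and reusing the ^^myuser^^/^^myhost.com^^ placeholders) is:

```shell
# Append the local public key to the remote authorized_keys file;
# ssh-copy-id would do the same thing and also fix up permissions.
cat ~/.ssh/id_rsa.pub | ssh ^^myuser^^@^^myhost.com^^ "cat >> ~/.ssh/authorized_keys"
```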

Setting up MySQL/MariaDB came next. It is just a basic setup followed by user and database creation:

mysql_install_db
chown -R mysql:mysql /var/lib/mysql/
service mariadb start
chkconfig mariadb on
mysql -e "CREATE USER '^^mydbuser_wp^^'@'localhost' IDENTIFIED BY '^^mydbpassword^^';"
mysql -e "CREATE DATABASE ^^mydatabase_wordpress^^"
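
Not shown above, but the new user also needs privileges on the new database before WordPress (or anything else) can use it - a sketch, reusing the same placeholders:

```shell
# Grant the WordPress user full rights on its own database only.
mysql -e "GRANT ALL PRIVILEGES ON ^^mydatabase_wordpress^^.* TO '^^mydbuser_wp^^'@'localhost';"
mysql -e "FLUSH PRIVILEGES;"
```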

For data replication (after making sure the /home/myuser directory exists) I created a simple /root/replicate.sh script with the following content:

#!/bin/bash

ssh ^^myuser^^@^^myhost.com^^ "mysqldump -u ^^mydbuser_wp^^ -p^^mydbpassword^^ --opt ^^mydatabase_wordpress^^" > /var/tmp/mysql.dump
mysql ^^mydatabase_wordpress^^ < /var/tmp/mysql.dump
rm /var/tmp/mysql.dump

scp -r ^^myuser^^@^^myhost.com^^:/home/^^myuser^^/* /home/^^myuser^^
#rsync -avz -e ssh ^^myuser^^@^^myhost.com^^:/home/^^myuser^^ /home/^^myuser^^

The first three lines ensure I have a fresh MySQL database import, and SCP is tasked with the file copy. A better approach would be rsync, but I kept getting Out of memory errors. As my site is not huge, I opted for the dumb copy instead of troubleshooting.

Once I ran the script to verify all was working as expected, I added it to crontab (crontab -e) so that it runs every midnight:

…
00 00 * * * /root/replicate.sh
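
One thing that is easy to forget: cron will only run the script if it is executable:

```shell
# Without the executable bit cron silently fails to run the script.
chmod +x /root/replicate.sh
```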

For Apache I edited /etc/httpd/conf/httpd.conf file to change its root:

…
DocumentRoot "^^/home/myuser/public_html^^"
<Directory "^^/home/myuser/public_html^^">
    AllowOverride None
    # Allow open access:
    Require all granted
</Directory>
…
<IfModule mime_module>
    …
    ^^AddType text/html .php .phps^^
</IfModule>
…

Opening the firewall was the next task:

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save

And all that remained was to start Apache:

chown -R apache:apache ^^/home/myuser/public_html^^
restorecon -R ^^/home/myuser/public_html^^
service httpd start
chkconfig httpd on

PS: Do notice that I didn’t describe the security setup for this machine. Unless there are some mitigating circumstances, you pretty much want your backup to be as secure as the real thing.

Git Push to Multiple Repositories

There is no use in having the whole ZFS pool with backups if you won’t use it for your repositories. Backing up the private repositories is trivial - just create a repo on the server and push to it. But how do we add such a backup repository if the project already has a remote (e.g., on GitHub)?

The first step is to check the current remotes. This information will come in handy a bit later:

git remote -v
 origin  git@github.com:medo64/QText.git (fetch)
 origin  git@github.com:medo64/QText.git (push)

The next step is to create a bare backup repository, followed by adding both the current and the new destination as push URLs:

git init --bare ^^//ring/Repositories/QText.git^^
git remote set-url --add --push origin git@github.com:medo64/QText.git
git remote set-url --add --push origin ^^//ring/Repositories/QText.git^^
git push -u origin --all

The reason behind the double add is that Git “forgets” its default push location once the first manual add is executed. Any further updates will not be affected.
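
The resulting configuration can be verified with git remote -v; on a scratch repository the same sequence looks roughly like this (the URLs from the post are reused as placeholders and are never contacted by these commands):

```shell
# Scratch demo of the double push-URL setup.
git init --quiet demo && cd demo
git remote add origin git@github.com:medo64/QText.git
git remote set-url --add --push origin git@github.com:medo64/QText.git
git remote set-url --add --push origin //ring/Repositories/QText.git

# Fetch keeps a single URL while push now lists two destinations.
git remote -v
```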

Now a single push command will update both repositories.

Creating a ZFS Backup Machine

With my main ZFS machine completed, it is now time to set up a remote backup. Unlike the main server with two disks and an additional SSD, this one will have just a lonely 2 TB disk inside. The main desire is to have a cheap backup machine that we’ll hopefully never use for recovery.

The OS of choice is NAS4Free, and I decided to install it directly on the hard drive without a swap partition. Installing on a data drive is a bit controversial but it does simplify the setup quite a bit if you move the drive from machine to machine. And the swap partition is pretty much unnecessary if you have more than 2 GB of RAM. Remember, we are just going to sync to this machine - nothing else.

After NAS4Free got installed (option 4: Install embedded OS without swap), the disk will contain a single boot partition with the rest of the space flopping in the breeze. What we want is to add a simple partition on a 4K boundary for our data:

gpart add -t freebsd -b 1655136 -a 4k ada0
 ada0s2 added

The partition start location was selected to be the first one on a 4 KB boundary after the 800 MB boot partition. We cannot rely on gpart's default, as it would select the next available location and that would destroy performance on 4K drives (pretty much any spinning drive these days). And we cannot use freebsd-zfs for the partition type since we are playing with MBR partitions and not GPT.
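
The chosen offset is easy to sanity-check: with 512-byte sectors, a 4K boundary is any sector number divisible by 8, and 1655136 passes:

```shell
# 4 KiB = 8 sectors of 512 bytes, so a 4K-aligned start sector
# must be divisible by 8; 1655136 / 8 = 206892 exactly.
echo $(( 1655136 % 8 ))   # prints 0 for a 4K-aligned offset
```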

To make the disk easier to reach, we label that partition:

glabel label -v disk0 ada0s2

And we of course encrypt it:

geli init -e AES-XTS -l 128 -s 4096 /dev/label/disk0
geli attach /dev/label/disk0

The last step is to actually create our backup pool:

zpool create -O readonly=on -O canmount=off -O compression=on -O atime=off -O utf8only=on -O normalization=formD -O casesensitivity=sensitive -O recordsize=32K -m none Backup-Data label/disk0.eli

To back up the data we can then use zfs send/receive - first the initial sync:

DATASET="Data/Install"
zfs snapshot ${DATASET}@Backup_Next
zfs send -p ${DATASET}@Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs rename ${DATASET}@Backup_Next Backup_Prev

And a similar one for incremental syncs from then on:

DATASET="Data/Install"
zfs snapshot ${DATASET}@Backup_Next
zfs send -p -i ${DATASET}@Backup_Prev ${DATASET}@Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs rename ${DATASET}@Backup_Next Backup_Prev

There are a lot more details to think about, so I will share the script I am using - adjust at will.
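
To give just one example of such a detail: the rename at the end of the incremental snippet will fail as long as a Backup_Prev snapshot still exists, so the full script has to destroy the old baseline first - a sketch, assuming the snapshot names used above:

```shell
# Rotate the snapshots once the incremental send has succeeded;
# the old baseline has to go before Backup_Next can take its name.
zfs destroy ${DATASET}@Backup_Prev
zfs rename ${DATASET}@Backup_Next Backup_Prev
```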

Other ZFS posts in this series:

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Adding Cache to ZFS Pool

To increase the performance of a ZFS pool I decided to use a read cache in the form of an SSD partition. As always with ZFS, a certain amount of micromanagement is needed for optimal benefits.

The usual recommendation is to have up to 10 GB of cache for each 1 GB of available RAM, since ZFS always keeps the headers for cached data in RAM. As my machine had a total of 8 GB, this pretty much restricted me to a cache size in the 60 GB range.

To keep things sane, I decided to use 48 GB myself. As sizes go, this is quite an unusual one and I doubt you could even get such an SSD. Not that it mattered, as I already had a leftover 120 GB SSD lying around.

Since I already had NAS4Free installed on it, I checked the partition status:

gpart status
  Name  Status  Components
  da1s1      OK  da1
 ada1s1      OK  ada1
 ada1s2      OK  ada1
 ada1s3      OK  ada1
 ada1s1a     OK  ada1s1
 ada1s2b     OK  ada1s2
 ada1s3a     OK  ada1s3

and deleted the last partition:

gpart delete -i 3 ada1
 ada1s3 deleted

Then we have to create the partition and label it (labeling is optional):

gpart add -t freebsd -s 48G ada1
 ada1s3 added

glabel label -v cache ada1s3

As I had an encrypted data pool, it only made sense to encrypt the cache too. For this it is very important to check the physical sector size:

camcontrol identify ada1 | grep "sector size"
 sector size           logical 512, ^^physical 512^^, offset 0

Whichever physical sector size you see there is the one you should give to geli, as otherwise you will get a permanent ZFS error status when you add the cache device. It won’t hurt the pool, but it will hide any real errors going on, so it is better to avoid. In my case, the physical sector size was 512 bytes:

geli init -e AES-XTS -l 128 -s ^^512^^ /dev/label/cache
geli attach /dev/label/cache

The last step is adding the encrypted cache to our pool:

zpool add Data cache label/cache.eli
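
Whether the cache actually gets used can be checked at any point; zpool iostat lists the cache device together with its allocation:

```shell
# The "cache" section of the output shows label/cache.eli
# along with how much of it is currently filled.
zpool iostat -v Data
```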

All that is left is to enjoy the speed. :)

Other ZFS posts in this series:

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]