Windows 10 and Touchpad Disabling on Asus N56VJ

Even though it is a bit old now, I still love my Asus N56VJ laptop. It has nice hardware quite capable of running Windows 10 and it does so almost flawlessly. The only thing I found misbehaving is the button for disabling the touchpad (<Fn>+<F9>). On the laptops I previously used (mostly HPs) I wouldn’t consider that a huge issue. However, through great effort, Asus has managed to manufacture such a lousy touchpad that I consider disabling it a mandatory function.

To get the button working on a fresh installation of Windows, the first step is to install the latest ATK Package, followed by a restart (important). On its own this does nothing, but it enables proper installation of the Asus Smart Gesture touchpad application (followed by another restart). Only once both of these are installed will <Fn>+<F9> work again.

The unfortunate news for Windows Insiders is that this functionality will disappear as soon as a new build is installed. And no, you cannot just repair the applications. You will need to fully remove both the ATK Package and Asus Smart Gesture, followed by a computer restart. Only then can you follow the original procedure once again and have the button working.

I guess I cannot expect wonders running the latest Windows OS on a now-aging laptop, but I find this behavior most peculiar and worthy of a frown.

Ham Check 1.00


This program is just one of many programs helping with preparation for the amateur radio exams. What sets it apart from other similar programs is the support for keyboard-only operation, the ability to zoom into the text, and showing the image next to the text. While these might seem like basic options, not many other exam applications support them.

The whole idea is to have a simple program that allows for quick learning and easy presentation of the exam content to others.

Download is available on the program’s page.

Cloning My Website

One disadvantage of having a reliable hosting provider is that you tend to forget about backups. In my ten years with Plus hosting there was not a single case of data loss. Regardless, I wanted to go the “better safe than sorry” route and make automated backups. And, while I am at it, I might as well use them to create a “production replica” environment for testing.

My website environment is based on CentOS 6, a MySQL database, and Apache. Ideally I would replicate exactly the same environment, but in this case I decided to upgrade all components to the latest versions. In the case of Linux that meant going with CentOS 7.1 (minimal).

Installation of CentOS is as simple as it gets. I basically clicked Next until there was no button left to press. :) The only possibility of error is forgetting to enable the Ethernet network adapter - not a catastrophic mistake, just an annoying one. Once the install was done, additional packages were in order:

yum install httpd mariadb-server php php-mysql rsync

To connect to my website I created new SSH keys:

ssh-keygen -b 4096

I appended the newly created .ssh/id_rsa.pub key to .ssh/authorized_keys on my web server. That meant I could log in and copy files without any passwords - great for scripting.
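
If ssh-copy-id is available it does this in one step; the manual equivalent (assuming the standard OpenSSH paths on the server) is:

cat ~/.ssh/id_rsa.pub | ssh ^^myuser^^@^^myhost.com^^ "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"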

Setting up MySQL/MariaDB came next. It is just a basic setup followed by user and database creation:

mysql_install_db
chown -R mysql:mysql /var/lib/mysql/
service mariadb start
chkconfig mariadb on
mysql -e "CREATE USER '^^mydbuser_wp^^'@'localhost' IDENTIFIED BY '^^mydbpassword^^';"
mysql -e "CREATE DATABASE ^^mydatabase_wordpress^^"

For data replication (after making sure the /home/myuser directory exists) I created a simple /root/replicate.sh script with the following content:

#!/bin/bash

ssh ^^myuser^^@^^myhost.com^^ "mysqldump -u ^^mydbuser_wp^^ -p^^mydbpassword^^ --opt ^^mydatabase_wordpress^^" > /var/tmp/mysql.dump
mysql ^^mydatabase_wordpress^^ < /var/tmp/mysql.dump
rm /var/tmp/mysql.dump

scp -r ^^myuser^^@^^myhost.com^^:/home/^^myuser^^/* /home/^^myuser^^
#rsync -avz -e ssh ^^myuser^^@^^myhost.com^^:/home/^^myuser^^ /home/^^myuser^^

The first three lines ensure a fresh MySQL database import, and scp is tasked with the file copy. A better approach would be rsync but I kept getting “Out of memory” errors. As my site is not huge, I opted for the dumb full copy instead of troubleshooting.
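
Cron will execute the script directly, so it also needs the execute bit set:

chmod +x /root/replicate.sh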

Once I ran the script to verify all was working as expected, I added it to crontab (crontab -e) so it runs every midnight:

…
00 00 * * * /root/replicate.sh

For Apache I edited the /etc/httpd/conf/httpd.conf file to change its document root:

…
DocumentRoot "^^/home/myuser/public_html^^"
<Directory "^^/home/myuser/public_html^^">
    AllowOverride None
    # Allow open access:
    Require all granted
</Directory>
…
<IfModule mime_module>
    …
    ^^AddType text/html .php .phps^^
</IfModule>
…
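
A quick syntax check catches typos in the edited configuration before Apache is ever started:

httpd -t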

Opening the firewall was the next task (firewalld is the default on CentOS 7; the iptables lines apply only if the classic iptables service is used instead):

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save
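
Whether the services are actually open can be verified with:

firewall-cmd --zone=public --list-services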

And all that remained was to start Apache:

chown -R apache:apache ^^/home/myuser/public_html^^
restorecon -R ^^/home/myuser/public_html^^
service httpd start
chkconfig httpd on
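
At this point a local request should return the cloned site (assuming curl is installed):

curl -I http://localhost/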

PS: Do notice that I didn’t describe the security setup for this machine. Unless there are some mitigating circumstances, you pretty much want your backup to be as secure as the real thing.

Git Push to Multiple Repositories

There is no use in having the whole ZFS pool with backups if you won’t use it for your repositories. Backing up the private repositories is trivial - just create a repo on the server and push to it. But how do we add a backup destination to a repository that already has a remote (e.g., on GitHub)?

The first step is to check the current remotes. This information will come in handy a bit later:

git remote -v
 origin  git@github.com:medo64/QText.git (fetch)
 origin  git@github.com:medo64/QText.git (push)

The next step is to create a bare backup repository, followed by adding both the current and the new destination as push URLs:

git init --bare ^^//ring/Repositories/QText.git^^
git remote set-url --add --push origin git@github.com:medo64/QText.git
git remote set-url --add --push origin ^^//ring/Repositories/QText.git^^
git push -u origin --all

The reason behind the double add is that Git “forgets” its default push location once the first manual add is executed. Any further updates will not be affected.
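
After these commands, the remote should show one fetch URL and two push URLs - something along these lines:

git remote -v
 origin  git@github.com:medo64/QText.git (fetch)
 origin  git@github.com:medo64/QText.git (push)
 origin  ^^//ring/Repositories/QText.git^^ (push)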

Now a single push command will update both repositories.

Creating a ZFS Backup Machine

With my main ZFS machine completed, it is time to set up a remote backup. Unlike the main server with its two disks and an additional SSD, this one will have just a lonely 2 TB disk inside. The main desire is to have a cheap backup machine that we’ll hopefully never use for recovery.

The OS of choice is NAS4Free and I decided to install it directly on the hard drive, without a swap partition. Installing on a data drive is a bit controversial but it does simplify the setup quite a bit if you move the drive from machine to machine. And the swap partition is pretty much unnecessary if you have more than 2 GB of RAM. Remember, we are just going to sync to this machine - nothing else.

After NAS4Free is installed (option 4: Install embedded OS without swap), the disk will contain a single boot partition with the rest of the space flopping in the breeze. What we want is to add a simple partition on a 4K boundary for our data:

gpart add -t freebsd -b 1655136 -a 4k ada0
 ada0s2 added

The partition start location was selected as the first sector on a 4 KB boundary after the 800 MB boot partition (1655136 is divisible by 8, so with 512-byte sectors it falls exactly on a 4 KB boundary). We cannot rely on gpart to pick it, as it would select the next available location and that would destroy performance on 4K drives (pretty much any spinning drive these days). And we cannot use freebsd-zfs for the partition type since we are playing with MBR partitions, not GPT.
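
The resulting layout can be double-checked with:

gpart show ada0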

To make the disk easier to reference, we label that partition:

glabel label -v disk0 ada0s2

And, of course, we encrypt it:

geli init -e AES-XTS -l 128 -s 4096 /dev/label/disk0
geli attach /dev/label/disk0
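
Both commands prompt for a passphrase; once attached, the decrypted device shows up as /dev/label/disk0.eli, which can be confirmed with:

geli status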

Last step is to actually create our backup pool:

zpool create -O readonly=on -O canmount=off -O compression=on -O atime=off -O utf8only=on -O normalization=formD -O casesensitivity=sensitive -O recordsize=32K -m none Backup-Data label/disk0.eli
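
A quick look verifies the pool and the properties its datasets will inherit:

zpool status Backup-Data
zfs get readonly,compression Backup-Data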

To back up the data we can then use zfs send for the initial sync:

DATASET="Data/Install"
zfs snapshot ${DATASET}@$Backup_Next
zfs send -p $DATASET@$Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs rename $DATASET@$Backup_Next Backup_Prev

And a similar one for every incremental sync from then on (the old Backup_Prev snapshot has to be destroyed locally before the new one can take over its name):

DATASET="Data/Install"
zfs snapshot ${DATASET}@$Backup_Next
zfs send -p -i $DATASET@$Backup_Prev $DATASET@$Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs rename $DATASET@$Backup_Next Backup_Prev

There are a lot more details to think about (keeping the snapshot names in sync on the remote side, for one), so I will share the script I am using - adjust at will.

Other ZFS posts in this series:

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]