Creating ISO From the Command Line

Creating read-only archives is often beneficial. This is especially so when we are dealing with something standard across many systems. And rarely will you find anything more standard than CD/DVD .iso files. You can mount them on both Windows 10 and Linux without any issues.

There are quite a few programs that will allow you to create .iso files, but they are often overflowing with ads. Fortunately, every Linux distribution comes with a small tool capable of the same without any extra annoyances. That tool is called [mkisofs](https://linux.die.net/man/8/mkisofs).

Basic syntax is easy:

mkisofs -input-charset utf-8 -udf -V "My Label" -o MyDVD.iso ~/MyData/

Setting the input charset is essentially only needed to suppress a warning. UTF-8 is the default anyhow and in 99% of cases exactly what you want.

Using UDF as the output format enables a bit more flexible file and directory naming rules. The standard ISO 9660 format (even when using level 3) is so full of restrictions that it is annoying at best, the most notable being support for only uppercase file names. UDF allows Unicode file names up to 255 characters in length and has no limit on directory depth.

Lastly, a DVD label is always a nice thing to have.
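
If you want to double-check the result, the image can be loop-mounted and browsed like any other disc. This is just a quick sanity check and the mount point below is an arbitrary example:

sudo mount -o loop MyDVD.iso /mnt
ls /mnt
sudo umount /mnt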

Let's Encrypt on Linode CentOS 7

Having your web server running on Linode is just the first step. No installation is complete without HTTPS. So I turned to Let’s Encrypt and the official Certbot instructions.

Alas, it was not meant to be. The official procedure always resulted in a “No package certbot-apache available” error. So I went with a slightly different approach:

yum install -y epel-release
yum install -y certbot-apache

Assuming your httpd.conf contains something like this:

<VirtualHost *:80>
    ServerName ^^www.example.com^^
    ServerAlias ^^example.com^^
    DocumentRoot "/var/www/html/"
</VirtualHost>

All you need to do is run certbot for the first time. Of course, do try the staging environment first:

certbot --apache -d ^^example.com^^ -d ^^www.example.com^^ --staging
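
If you want to double-check what was issued (and that it is indeed a staging certificate), certbot can list everything it currently manages:

certbot certificates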

This will create a file at /etc/httpd/conf/httpd-le-ssl.conf containing your SSL configuration. If you prefer to have all your configuration visible in one place, you can go ahead and copy it back into httpd.conf with the following result:

<VirtualHost *:443>
    ServerName ^^www.example.com^^
    ServerAlias ^^example.com^^
    DocumentRoot "/var/www/html/"
    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/^^example.com^^/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/^^example.com^^/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/^^example.com^^/chain.pem
</VirtualHost>
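
After editing httpd.conf, it doesn’t hurt to verify the syntax and reload Apache before going further. These are just the standard apachectl invocations, the same graceful reload used later in the post-hook:

apachectl configtest
apachectl graceful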

Once you are happy with the configuration (remember, we are using the staging certificate at this time), you can get a proper production certificate. I personally don’t like having my httpd.conf touched, so I go with the alternative “webroot” verification. As our staging certificate is fairly new, we need to force renewal.

certbot certonly --cert-name ^^example.com^^ --webroot --webroot-path /var/www/html/ --post-hook "apachectl graceful" --force-renewal

To keep the certificate up to date, we need to add the following crontab line, which will attempt renewal twice a day (as recommended):

42 7,19 * * * certbot renew --cert-name ^^example.com^^ --webroot --webroot-path /var/www/html/ --post-hook "apachectl graceful"
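
Before relying on the cron job, the renewal setup can be tested with certbot’s --dry-run option, which runs against the staging environment and does not count toward rate limits:

certbot renew --cert-name ^^example.com^^ --dry-run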

Now you can enjoy your encrypted website in its full glory.

Showing Hidden "Uploading Files" Settings in WordPress

If you install WordPress these days, you won’t even know that the upload path is customizable. The setting that used to be under Settings/Media is simply no longer there.

However, if you configured that setting before WordPress 3.5, you will see two additional boxes. Newer versions of WordPress simply hide them if they are left blank. And that leaves us with a chicken-and-egg problem: you cannot change the setting unless you have already changed it once before.

Fortunately, the web interface is not the only way to change settings in WordPress. We can go directly to MySQL and change them there.

Of course, adjust the database name and path according to your needs. My path is a bit weird as I keep the WordPress files in a subdirectory, but the SQL commands look something like this:

mysql -e "UPDATE ^^wordpress^^.wp_options SET option_value='^^../content/media^^' WHERE option_name='upload_path';"
mysql -e "UPDATE ^^wordpress^^.wp_options SET option_value='^^/content/media^^' WHERE option_name='upload_url_path';"
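
To confirm the new values actually took, a quick query against the same table will show both options (again, adjust the database name to yours):

mysql -e "SELECT option_name, option_value FROM ^^wordpress^^.wp_options WHERE option_name IN ('upload_path', 'upload_url_path');"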

Now you can refresh the admin interface and everything will be in place.

Convert

Back in 2011, I finally got fed up with Firefox. It was slow, it crashed every few moments, and it would often hang. Suffice it to say, I was not a happy camper. I tried Chrome, fell in love, and haven’t looked back.

Fast forward to 2017. Up until a few days ago, I had a déjà vu feeling. Chrome was getting slow, it crashed daily, and it would hang often, especially on YouTube. The only difference was the difficulty of killing Chrome, as it consists of multiple processes and some of them cannot be easily killed.

I did entertain the idea of Edge for a day, just to find it is still a piece of crap, less capable than even Internet Explorer, and incapable of properly handling shortcut toolbar editing. I also thought of Safari for a moment but decided against it, purely based on my dislike of the version I installed three years ago.

In the end I gave Firefox a try and now, a month later, I am still using it.

The move itself was uneventful and definitely not a big jump. The interface is similar enough to Chrome that I mostly don’t even notice I switched; the only minor annoyance is having all downloaded files tucked under a menu. It is a bit lighter on memory, but not by much. I am not sure it is faster. Bookmark sync works flawlessly.

But the biggest benefit is that it is rock solid. It is essentially what Chrome was for me a year ago: it shows web pages and doesn’t get in the way. Knowing my history, I won’t stay with Firefox forever. But I’ll enjoy it for now, in the hope that Chrome fixes its code before Firefox spoils theirs.

Solving "Failed to Mount Windows Share"

Most of the time I access my home NAS via Samba shares. For increased security and performance, I force it to use the SMB v3 protocol. And therein lies the issue.

Whenever I tried to access my NAS from a Linux Mint machine using the Caja file browser, I would get the same error: “Failed to mount Windows share: Connection timed out.” And it wasn’t a connectivity issue, as everything would work if I dropped my NAS to SMB v2. And it wasn’t an unsupported feature either, as Linux has supported SMB3 for a while now.

It was just a case of a slightly unfortunate default configuration. Although the man pages say the client max protocol is SMB3, something simply doesn’t click. However, if one manually specifies that only SMB3 is to be used, everything magically starts working.

Configuring it is easy; in /etc/samba/smb.conf, within the [global] section, one needs to add:

client min protocol = SMB3
client max protocol = SMB3

Alternatively, this can also be done with the following one-liner:

sudo sed -i "/\[global\]/a client min protocol = SMB3\nclient max protocol = SMB3" /etc/samba/smb.conf

Once these settings are in, the share is accessible.
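
If you want to verify from the command line that the SMB3-only settings are picked up, smbclient can be pointed at the NAS; the host name and user below are placeholders for your own:

smbclient -L //^^mynas^^ -U ^^myuser^^ -m SMB3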