One addition to these pages you might have noticed is, at the very bottom, an icon allowing you to switch between dark and light mode. And it’s not just a simple switch, it’s a tri-switch! While it allows for fixed light and dark modes, it also includes an automatic mode (i.e., based on your system settings).
And yes, there are quite a few ways that smarter people have used, but none of them worked exactly how I wanted. So, let’s see yet another way to do the same thing.
The first step is, of course, setting up the CSS. All colors for the light scheme get defined in a media section guarded by prefers-color-scheme: light, while the dark colors get their own prefers-color-scheme: dark section. I personally like to use these sections to set up variables for later use, but you can define styles directly too.
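A minimal sketch of that setup (the variable names and colors here are placeholders, not my actual stylesheet):

@media (prefers-color-scheme: light) {
  :root {
    --background-color: #ffffff;
    --text-color: #333333;
  }
}

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #1a1a1a;
    --text-color: #cccccc;
  }
}

body {
  background-color: var(--background-color);
  color: var(--text-color);
}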
The next step is setting up a “button” for switching between themes. While we define three links, none of them is shown by default - we’ll sort that out later in the code.
And yes, for my pages I don’t actually use text but icons. Below are the links for the Lucide icons I currently use, but you can go with whichever set you want.
(automatic)
(dark)
(light)
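Put together, the markup might look roughly like this (the id values are the ones the script below expects; the href targets and inner text are placeholders):

<a id="color-scheme-auto" href="#" style="display: none;">(automatic)</a>
<a id="color-scheme-dark" href="#" style="display: none;">(dark)</a>
<a id="color-scheme-light" href="#" style="display: none;">(light)</a>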
Lastly, we come to the code and I’m just gonna drop the whole thing here. Explanations as to what each section does will be below.
<script>
  // determine the scheme the system prefers
  let systemScheme = 'light';
  if (window.matchMedia('(prefers-color-scheme: light)').matches) { systemScheme = 'light'; }
  if (window.matchMedia('(prefers-color-scheme: dark)').matches) { systemScheme = 'dark'; }

  // load whatever the user (maybe) saved the last time
  let savedScheme = localStorage.getItem("color-scheme");
  let currentScheme = systemScheme;
  switch (savedScheme) {
    case "light": currentScheme = "light"; break;
    case "dark": currentScheme = "dark"; break;
    default: savedScheme = "auto"; break;
  }

  if (currentScheme !== systemScheme) {  // swap at start so there's no flash
    for (var s = 0; s < document.styleSheets.length; s++) {
      try {
        for (var i = 0; i < document.styleSheets[s].cssRules.length; i++) {
          const rule = document.styleSheets[s].cssRules[i];
          if (rule && rule.media && rule.media.mediaText.includes("prefers-color-scheme")) {
            const ruleMedia = rule.media.mediaText;
            let newRuleMedia = null;
            if (ruleMedia.includes("light")) {
              newRuleMedia = ruleMedia.replace("light", "dark");
            } else if (ruleMedia.includes("dark")) {
              newRuleMedia = ruleMedia.replace("dark", "light");
            }
            if (newRuleMedia !== null) {
              rule.media.deleteMedium(ruleMedia);
              rule.media.appendMedium(newRuleMedia);
            }
          }
        }
      } catch (e) { }  // cross-origin stylesheets throw when cssRules is accessed
    }
  }

  function nextColorScheme() {
    switch (savedScheme) {
      case "light": localStorage.removeItem("color-scheme"); break;       // light -> auto
      case "dark": localStorage.setItem("color-scheme", "light"); break;  // dark -> light
      default: localStorage.setItem("color-scheme", "dark"); break;       // auto -> dark
    }
    window.location.reload();  // to force button update
  }

  function updateButtons() {
    switch (savedScheme) {
      case "light": document.getElementById("color-scheme-light").style.display = 'inline'; break;
      case "dark": document.getElementById("color-scheme-dark").style.display = 'inline'; break;
      default: document.getElementById("color-scheme-auto").style.display = 'inline'; break;
    }
  }

  document.addEventListener('DOMContentLoaded', function () {
    document.getElementById('color-scheme-auto').addEventListener('click', nextColorScheme);
    document.getElementById('color-scheme-dark').addEventListener('click', nextColorScheme);
    document.getElementById('color-scheme-light').addEventListener('click', nextColorScheme);
    updateButtons();
  });
</script>
The first portion just determines the system scheme and stores it in the systemScheme variable. This variable will contain whatever the system says the preferred scheme should be - either light or dark.
The next portion is all about loading what the user (maybe) saved the last time. For this purpose we’re using localStorage, and the result gets stored in the savedScheme variable. Its state will set the currentScheme variable to match either what was stored or the system scheme if we have no better idea (i.e., automatic mode).
The end result of this variable game is a decision on whether currentScheme differs from systemScheme. If they differ, we simply swap the dark and light media queries around. This swap is actually what does all the heavy lifting.
The nextColorScheme function checks the current state (the savedScheme variable) and moves on to the next one. The light and dark states are written out as-is; for automatic handling, the code simply deletes the stored value altogether. Once that is done, it won’t attempt to sort out any swaps needed to get the colors in line. Nope, it will simply reload the page and let the loading code sort it out.
The updateButtons function displays whichever scheme is currently selected, for a bit of user feedback.
The last portion of the code adds an event listener to the click event of each scheme “button” (identified by its id) so that each click calls the nextColorScheme function. This is also where we call the updateButtons function to show the current state.
With all this in place, our theme switching should just work.
As someone maintaining my own web server, I often use various tools to determine if things are good. As web servers are not my daily job, I found that to be the only way to save both sanity and time.
One of the most helpful tools comes courtesy of SSL Labs. Their SSL/TLS test suite is both simple to use and full of really good data. While getting a good score doesn’t guarantee everything is secure, it shows you are doing at least some things right.
As of January 31st 2020, SSL Labs decided to cap the grade at B for servers still offering the older TLS protocols (1.0 and 1.1). That means even if your server was an A-class star until then, starting February it got relegated to the B league. Totally unacceptable!
Fortunately, if you are using Apache, the change is easy:
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite HIGH:-MEDIUM:!LOW:!aNULL:!MD5
SSLHonorCipherOrder on
With this adjustment your server can enjoy an A+ again.
PS: The cost? Say goodbye to Android 4.3, Windows Phone 8, Java 7, Safari 6, and Internet Explorer on Windows 7. For me personally, those are all things I can live without.
I got into the habit of installing the latest WordPress as soon as it comes out. Usually I don’t have any problems, but this time it was not to be.
With 5.0, the first issue I noticed was that I couldn’t schedule my posts. Yes, WordPress would tell me the post was scheduled, only for me to find it still sitting in drafts. That alone wouldn’t have driven me away if I could still write new posts. Yep, after the upgrade my blogging software wouldn’t blog any more.
It was clear that a downgrade was in order. But how?
While restoring from a backup was possible, there is something even better - the WP Downgrade plugin. Once it’s installed, you can specify the WordPress version you want (4.9.8 in my case) and just pretend you’re doing another upgrade. Once completed, you are back on the old, working version.
If you want to go forward again, just deactivate the plugin until it’s needed next. Nice!
You finally got HTTPS running on your web server. Is there anything else left to do? Well, let me tell you about a few (free) things worth the effort.
Test HTTPS
Probably the most important work you can do when setting up HTTPS is testing all the changes. While you can use curl and the “sweat of your brow”, I prefer using SSL Labs. It covers a bunch of stuff and gets regularly updated with the latest recommendations. If the test finds anything needing improvement, you will get enough information to fix it.
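If you do want to poke at things manually, even a one-liner helps; for example (the host name is a placeholder), this shows the certificate chain and the negotiated protocol:

openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null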
To be sure your setup is not unnecessarily slow, a speed test does come in handy. If you run the same test against both your HTTP and HTTPS setup, you should expect the numbers to be very close. While it will become impossible to test HTTP-only speed once you fully activate HTTPS, you can still benefit from “run A” vs “run B” testing.
There are a lot of small, fiddly details with HTTPS, and testing will keep you from going at it blind.
Monitor Certificate Expiration
If you are using Let’s Encrypt, it is a pure necessity to monitor the expiration of your certificates. Three-month validity might seem long but, once everything starts working, you will forget to check and suddenly you have an inaccessible web site on your hands. The half an hour needed to set up and test monitoring is well worth it.
Of course, if you are using a commercial certificate provider, you can ignore this as they’ll bug you enough.
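A minimal sketch of such a check (the host name and threshold are placeholders), easy enough to drop into cron:

# exit code is non-zero if the certificate expires within the next 14 days
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null \
    | openssl x509 -noout -checkend 1209600 \
    || echo "Certificate expires soon!"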
Monitor Issued Certificates
As you are already monitoring your certificate expiry, you might also want to monitor who is generating your certificates. If you use Cert Spotter, you’ll get an email every time one of your domains gets a new certificate. For 99% of sites, including this one, it is pure overkill. But that doesn’t mean you shouldn’t sign up. :)
Setup Expect-CT
If you use any decent certificate provider, you can expect them to report all issued certificates to the Certificate Transparency project. Armed with this assumption, you can start sending the Expect-CT HTTP header. In practice this protects you from man-in-the-middle attacks by certificate authorities already trusted by your computer. Great examples include your company’s CA or an in-flight entertainment CA. If they try to fudge your TLS connection, this way you’ll know.
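The header itself is a single line; something like this (the max-age value is just an example, and enforce is optional):

Expect-CT: max-age=86400, enforce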
Setup CAA
If you have access to your DNS settings, you should think about setting up CAA. How far you can go depends on your DNS provider. Some of them, like CloudFlare, support only a subset of the needed functionality. Realistically, even that is sufficient, but for full compliance with the rules, raw DNS access is best. In theory this will protect you against issuance of a certificate by a non-trusted CA. Since this is based on a gentlemen’s agreement, the actual enforcement is yet to be proven.
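In zone-file syntax, a typical record set might look like this (assuming Let’s Encrypt as the only allowed issuer; adjust the CA and e-mail to your own):

example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issuewild ";"
example.com.  IN  CAA  0 iodef "mailto:admin@example.com"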
Setup HSTS
Lastly, once you sort everything else out, do look into HSTS. It is a bit of work to apply and get accepted into the preload list, and it comes with multiple consequences. The most obvious one is that your domain will always be loaded in its HTTPS glory instead of via an HTTP redirect. However, that pales in comparison to the most important benefit - the bragging rights, since your website is explicitly compiled into every major browser. That, and the sense of impending doom, as any HTTPS mistake will render your website completely inaccessible. I guess this is not for those of weak heart.
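For Apache, the header would look roughly like this (a sketch; the preload list has its own exact requirements for max-age and subdomain coverage):

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"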
While I have been using HTTPS for a while now and even went through the trouble of getting my domains onto the HSTS preload list, one security improvement I never opted for was the HTTP Public Key Pinning header (HPKP for friends).
While not impossible to do, with short-lived certificates (e.g., Let’s Encrypt) it was simply too much trouble to bother. And I wasn’t the only one to think so - fewer than 400 sites (out of the top 1 million) decided to bother. The stakes were simply too high: a small mistake on the web configuration side might kill your website’s connectivity.
A replacement for HPKP is offered in the form of the new Expect-CT header. The major benefits are both the ease of configuration (just include the header) and the reliance on already existing certificate transparency reports to detect issues. While not offering the low-level control HPKP does, it still increases certificate security significantly.
For my site, turning it on was as easy as adding a single directive to Apache’s httpd.conf:
Header always set Expect-CT "max-age=86400"
While this does require some support on the side of the certificate authority, it’s nothing major. And you should probably run away if your authority has issues with it. When even the free Let’s Encrypt supports certificate transparency, there is no excuse for others.
Whether this header will stick around for a while or also die in obscurity is hard to tell. However, its simplicity does make a lasting implementation probable.
After my web server had been working for a few days, I wanted to add an additional domain. Easy enough, I’d done it before - or so I thought. However, I got hit by an error as I tried to get a Let’s Encrypt certificate in place: The server experienced a TLS error during domain verification :: remote error: tls: handshake failure.
The most interesting thing was that I hadn’t changed anything on the web server. So I tried a few old commands (shell history is handy here) against Let’s Encrypt’s staging server. Although I was sure they had worked when I was originally setting up the server, I was suddenly presented with the same error for all of them. And nothing had changed!
After looking at the error message a bit more, I suddenly remembered that one thing did change - I had moved my domain records to CloudFlare. Certbot was trying to authenticate my web server but ended up contacting CloudFlare and fetching their TLS certificate for my web site instead. The solution was simple - just temporarily disable caching on CloudFlare and certbot will successfully issue the certificate.
While that solution was simple, it wasn’t ideal, as it also meant I would need to repeat the same disabling dance every 90 days in order to renew the certificate. I wanted a solution that would allow automated renewal without the need for any manual action. This came by slightly altering how certbot performs verification.
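My exact command isn’t reproduced here, but a webroot-based verification looks roughly like this (the webroot path and domain are placeholders):

certbot certonly --webroot -w /var/www/example -d www.example.com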
The webroot approach above stores the verification challenge in the .well-known directory under the web site’s root path and thus works nicely with CloudFlare caching.
While my site is publicly accessible (as proven by the fact you’re reading this), I also have a few more private domains. Whether for stats or just testing, some areas are simply not meant for the general public. My usual approach has been to simply turn on basic password authentication and call it a day.
But, as I serve as my own certificate authority, I started to wonder whether I could skip the password altogether and rely on a client-side TLS certificate.
The easiest approach is simply modifying the virtual host section in httpd.conf to verify the client certificate:
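A sketch of what that might look like (the host name and CA certificate path are placeholders for your own setup):

<VirtualHost *:443>
    ServerName private.example.com
    ...
    SSLCACertificateFile /etc/ssl/myca/ca.crt
    SSLVerifyClient require
    SSLVerifyDepth 1
</VirtualHost>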
However, this approach has the downside of blocking every single file. If you are using Let’s Encrypt, congratulations, renewals are successfully blocked too. Since that is not really desired, a slightly more complicated setup is needed.
The virtual host section remains almost the same, albeit with a slight difference in where SSLVerifyClient is handled:
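One way to keep the ACME challenge path reachable while requiring a certificate everywhere else (again, just a sketch under the same assumptions):

<VirtualHost *:443>
    ServerName private.example.com
    ...
    SSLCACertificateFile /etc/ssl/myca/ca.crt
    <Location "/">
        SSLVerifyClient require
        SSLVerifyDepth 1
    </Location>
    <Location "/.well-known/acme-challenge/">
        SSLVerifyClient none
    </Location>
</VirtualHost>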
If you install WordPress these days, you won’t even know that the upload path is customizable. The setting that used to live under Settings/Media is simply no longer there.
However, if you configured that setting before WordPress 3.5, you will see two additional boxes. Newer versions of WordPress simply hide them if they are left blank. And that leaves us with a chicken-and-egg problem: you cannot change the setting unless you already changed it once before.
Fortunately, the web interface is not the only way to change settings in WordPress. We can go directly to MySQL and change them there.
Of course, adjust the database name and paths according to your needs. My path is a bit weird as I keep the WordPress files in a subdirectory, but the SQL commands look something like this:
mysql -e"UPDATE ^^wordpress^^.wp_options SET option_value='^^../content/media^^' WHERE option_name='upload_path';"
mysql -e"UPDATE ^^wordpress^^.wp_options SET option_value='^^/content/media^^' WHERE option_name='upload_url_path';"
Now you can refresh the admin interface and everything will be in place.
Those downloading files over an unreliable Internet connection are familiar with the curse of the partially or badly downloaded file. For detecting such transmission errors, hash or CRC codes come in really handy. While none of them will fix your file, they will let you check whether the bytes you received are the same bytes the server was sending.
I wanted to have SHA-256 hashes available on my site too, but I hated the idea of manually calculating them every time I upload something new. I wanted something that would work without any change to my usual workflow.
The solution ended up being two separate parts. The first part was generating the SHA-256 hash. For this I simply created a bash script to go over every file in the download and download/runtime directories:
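My actual script isn’t reproduced here, so here is a minimal sketch (the directory paths are placeholders for my real download locations):

#!/bin/bash
# write a .sha256 file next to every download
for FILE in ~/www/download/* ~/www/download/runtime/*; do
    [ -f "$FILE" ] || continue                   # skip anything that isn't a regular file
    case "$FILE" in *.sha256) continue ;; esac   # don't hash the hash files themselves
    sha256sum "$FILE" | cut -d ' ' -f 1 > "$FILE.sha256"
done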
I added this script as a cron job that simply runs every day. A new file with the .sha256 extension gets magically created after each execution.
The second part was creating a WordPress plugin. Here I wanted to keep things simple and just make it work as a shortcode. Its full duty would be, whenever it finds the downhash shortcode, to create a link and, if a .sha256 file exists, to set the SHA-256 hash as the link’s title. In practice this means the hash appears as a tooltip when the mouse hovers over the link. Visible for those who want it, but unobtrusive for normal people. :)
And yes, the code does include a bit of hard-coded styling. In my defense, I don’t plan to publish this as an official plugin and it does simplify the code quite a bit:
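My actual plugin isn’t shown here, so treat the following as a sketch: the downhash shortcode name comes from above, but the attribute name, the file-path mapping, and the styling are my assumptions.

<?php
/*
Plugin Name: Downhash
Description: [downhash] shortcode that links a file and shows its SHA-256 hash as a tooltip.
*/

function downhash_shortcode($atts, $content = null) {
    $atts = shortcode_atts(array('href' => ''), $atts);  // 'href' attribute name is an assumption
    $href = esc_url($atts['href']);

    // if a matching .sha256 file exists next to the download, use its content as the title
    $title = '';
    $local = ABSPATH . ltrim((string)parse_url($href, PHP_URL_PATH), '/') . '.sha256';
    if (is_readable($local)) {
        $hash = trim(file_get_contents($local));
        $title = ' title="SHA-256: ' . esc_attr($hash) . '"';
    }

    // the confessed bit of hard-coded styling
    return '<a style="white-space: nowrap;" href="' . $href . '"' . $title . '>'
         . do_shortcode($content) . '</a>';
}
add_shortcode('downhash', 'downhash_shortcode');

Usage in a post would then be something like [downhash href="/content/download/example.zip"]Download[/downhash].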
I already wrote about optimizing images for your website. What I didn’t mention at the time is that, since I ran the tools under Windows, this meant a lot of download/upload shenanigans. Yes, it was scriptable, but annoying nonetheless. What I needed was a way to run both OptiPNG and jpegoptim automatically on the web host itself.
These pages are hosted by DreamHost, which currently runs Debian (Wheezy), and its Linux environment is rather rich. Despite that, neither of my preferred tools was installed and, since this was not my machine, just installing a package was not really an option either.
However, one thing I could do was build them from source. With jpegoptim this is easy as it uses GitHub for its source. With OptiPNG it gets a bit more involved, but nothing too far from a basic download and compile:
mkdir -p ~/bin
cd ~/bin
git clone https://github.com/tjko/jpegoptim
cd jpegoptim/
./configure
make clean
make

cd ~/bin
wget http://prdownloads.sourceforge.net/optipng/optipng-0.7.6.tar.gz -O /tmp/optipng.tgz
mkdir optipng
cd optipng
tar -xzvf /tmp/optipng.tgz --strip-components 1
rm /tmp/optipng.tgz
./configure
make clean
make
With both tools compiled, we can finally go over all the images and get them into shape:
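The exact commands I run aren’t shown, but something along these lines does the trick (the site path is a placeholder; the binary locations follow from the builds above):

find ~/example.com -name '*.png' -exec ~/bin/optipng/src/optipng/optipng -o7 -quiet {} \;
find ~/example.com -name '*.jpg' -exec ~/bin/jpegoptim/jpegoptim --strip-all --quiet {} \;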