Git Client on NAS4Free

Using NAS4Free as my centralized storage also means I use it for Git repositories. As such it would be awesome if I could do some basic stuff with Git directly from my server’s bash command line.

It’s possible to install packages on NAS4Free - just like on any FreeBSD - you can run pkg install -y git. However, due to the small file system of the embedded installation, that simple solution doesn’t work - it’s surprising how fast those memory disks fill up. Yes, one could go with the full NAS4Free install instead, but there’s another way.

When it comes to file systems, any BSD offers a lot of possibilities - one of which is unionfs. It allows you to expand a file system with additional space while preserving the content already present in the directory.

To accommodate the fleeting nature of the embedded installation, and due to the sometimes finicky nature of the united file system, I find it’s best to create a fresh directory on every boot, followed by mounting a fresh overlay file system:

rm -rf /mnt/.unionfs 2> /dev/null
mkdir -p /mnt/.unionfs/usr/local
mount_unionfs /mnt/.unionfs/usr/local /usr/local

Now installing Git actually works:

pkg install -y git

One could optionally also set Git configuration, especially user name and e-mail:

cat <<EOF > /root/.gitconfig
[core]
autocrlf = false
pager = less
whitespace = cr-at-eol
[user]
name = ^^Charlie Root^^
email = ^^root@my.home^^
EOF
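
To check that the resulting file is valid, one can query it back with git config. A quick sketch - it writes to a temporary file instead of /root/.gitconfig so it can run as an unprivileged user, and "Charlie Root" / "root@my.home" are made-up example values standing in for the placeholders above:

```shell
# Write the same configuration to a temporary file and query it back.
cfg=$(mktemp)
cat <<EOF > "$cfg"
[core]
autocrlf = false
pager = less
whitespace = cr-at-eol
[user]
name = Charlie Root
email = root@my.home
EOF

# Reading values back confirms the file parses correctly.
git config --file "$cfg" user.name     # prints: Charlie Root
git config --file "$cfg" core.pager    # prints: less
```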

If all this is placed in System, Advanced, Command scripts, it’ll be preserved across reboots.
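
For reference, the pieces above combined into a single script - something like this is what I’d paste into that field (it’s FreeBSD/NAS4Free-specific, so take it as a sketch rather than something to run elsewhere):

```
rm -rf /mnt/.unionfs 2> /dev/null
mkdir -p /mnt/.unionfs/usr/local
mount_unionfs /mnt/.unionfs/usr/local /usr/local
pkg install -y git
```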

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

DNS Conundrum

DNS, known as the phone book of the Internet, is often something you get from your ISP along with your IP address. And quite often people continue using those automatically given values without ever looking back, as such DNS will serve its basic function just fine. But what if you want a bit more privacy, performance, or stability? Enter the world of public DNS resolvers.

One of the first projects to bring DNS to people was OpenNIC. It’s a community-driven project whose members provide DNS service on their own hardware, similar to how the NTP pool works. You select any server close to you and off you go. There are anycast addresses too, but their usage is not encouraged, albeit I find them much more practical - especially as individual servers can go down at any time. For this service you definitely want to configure a secondary DNS server. There are no special privacy features, and whether your request is logged or not depends on the server’s operator. Support for DNSCrypt is spotty and depends on the exact server you use.

Then came OpenDNS. It’s a more centralized service, and I found it often has the fastest response of all I tested. DNSCrypt is fully supported, but privacy is questionable - enterprise users even have the option of getting the logs themselves. If you are only interested in speed and stability, you won’t go wrong selecting it, but don’t expect to know where your data goes.

The first DNS server that started the trend of memorable IP addresses was Google’s. I can almost bet most of its early adoption came from the easily memorable 8.8.8.8 IP. Strangely, DNSCrypt is not supported yet, but privacy is decent. While temporary logs are kept, they are usually deleted within 48 hours. The limited information kept in the permanent log is anonymized.

IBM didn’t want to lag behind Google much, so they offered the Quad9 DNS resolver to the world, continuing the trend of easy-to-remember IP addresses. Privacy is a touch better than Google’s, as the IP address is not stored even in temporary logs. They do not support DNSCrypt, and they do filter content for known phishing sites, but supposedly there is no censorship involved. Those willing to deal with slightly harder-to-remember IP addresses can get unfiltered access, and that’s really nice.

The latest to the party are Cloudflare and APNIC with their 1.1.1.1 DNS. Unfortunately, DNSCrypt is not supported, but the alternative DNS over HTTPS is. While I have a small preference toward the more usual DNSCrypt, DoH seems a reasonable alternative. Most of the logged information is cleared after 24 hours, and the IP address is not logged in the first place, so privacy should not be an issue. Information saved in the permanent log does not contain personally identifiable data and is further anonymized. In return for APNIC letting them use the awesome 1.1.1.1 IP address, Cloudflare does share some anonymous log data with them, but only for research purposes.

For my personal network I decided to go with Cloudflare and non-filtered Quad9. While their DNS uptime has been impeccable so far, having them both configured does allow a bit of resilience if one network ever goes down. As my network setup unfortunately doesn’t allow me to make encrypted DNS queries, I didn’t really take that into account when deciding. However, their claim that IP addresses are not logged did have a measurable impact on my choice. I know, there is a great deal of trust involved here but, as both companies do have decent reputations, I do believe their statements.
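
On a plain Linux box, that decision translates into an /etc/resolv.conf along these lines (a sketch - on systems with a managed resolver, the same servers would instead go into the DHCP or NetworkManager settings):

```
nameserver 1.1.1.1
nameserver 9.9.9.10
```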

In the end, everybody values different things and all these choices are valid for one purpose or another. Here are IP addresses for public DNS services in my order of preference:

Provider            Primary IP              Secondary IP
Cloudflare          1.1.1.1                 1.0.0.1
                    2606:4700:4700::1111    2606:4700:4700::1001
Quad9 (unfiltered)  9.9.9.10                149.112.112.10
                    2620:fe::10
Google              8.8.8.8                 8.8.4.4
                    2001:4860:4860::8888    2001:4860:4860::8844
Quad9 (filtered)    9.9.9.9                 149.112.112.112
                    2620:fe::fe             2620:fe::9
OpenNIC             185.121.177.177         169.239.202.202
                    2a05:dfc7:5::53         2a05:dfc7:5::5353
OpenDNS             208.67.222.222          208.67.220.220
                    2620:0:ccc::2           2620:0:ccd::2

Expect-CT

While I have been using HTTPS for a while now and even went through the trouble of including my domains in HSTS preloading, one security improvement I never opted for was the HTTP Public Key Pinning header (HPKP for friends).

While not impossible to do, with short-lived certificates (e.g. Let’s Encrypt) it was simply too much trouble to bother. And I wasn’t the only one to think so - fewer than 400 sites (out of the top 1 million) decided to bother. The stakes were simply too high that a small mistake in web server configuration might kill your website’s connectivity.

And so, with Chrome 67, Google is abandoning it.

The replacement for HPKP is offered in the form of the new Expect-CT header. The major benefits are ease of configuration (just include the header) and reliance on the already existing certificate transparency logs to detect issues. While not offering the low-level control HPKP does, it still increases certificate security significantly.

For my site, turning it on was as easy as adding a single directive in Apache httpd.conf:

Header always set Expect-CT "max-age=86400"

While this does require some support on the side of the certificate authority, it’s nothing major. And you should probably run away if your authority has issues with it - when even the free Let’s Encrypt supports certificate transparency, there is no excuse for others.
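
The header also accepts optional enforce and report-uri directives. Once comfortable that nothing breaks, one could go a step further along these lines (the reporting endpoint here is a made-up example - you’d need your own):

```
Header always set Expect-CT "max-age=86400, enforce, report-uri=\"https://example.com/ct-report\""
```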

Whether this header will stick around for a while or also die in obscurity is hard to tell. However, its simplicity does make a lasting implementation probable.

Omitting Quotes From DebuggerDisplay

Using DebuggerDisplay is both simple and really helpful with troubleshooting. However, its automatic quoting of strings can sometimes result in a less than optimal tooltip display.

For example, if you have Key and Value fields with “Foo” and “Bar” as their respective content, you might end up with the following attribute:

[DebuggerDisplay("{Key}: {Value}")]

This will result in "Foo": "Bar" tooltip text. While not a big deal, these excessive quotes can be a bit of an annoyance. Fortunately, you can tell DebuggerDisplay to stop its auto-quoting:

[DebuggerDisplay("{Key,nq}: {Value,nq}")]

This will result in much nicer-looking Foo: Bar output.

Firefox and Java Console


When you’re dealing with a lot of Linux servers, having a Linux client really comes in handy. My setup consisted of Linux Mint 18, and I could perform almost every task with it. I say almost because one task was always out of reach - viewing the HP iLO console.

Two options were offered there - ActiveX and Java. While ActiveX had obvious platform restrictions, the multi-platform promise of Java made its absence a bit of a curiosity. A quick search on the Internet resolved that curiosity - Firefox 53 and above dropped support for the NPAPI plugin system, and HP was just too lazy and Windows-centric to ever replace it. However, Firefox 52 still has Java support, and that release is even still supported (albeit not after 2018). So why not install it and use it for the Java iLO console?

First we need to download Firefox 52 ESR - the latest version still allowing the Java plugin. You can download it from Mozilla, but do make sure you select release 52 and the appropriate build for your computer (64-bit or 32-bit).

With the release downloaded, we can install it manually into a separate directory (/opt/firefox52) so as not to disturb the latest version. In addition to Firefox, we’ll also need the IcedTea plugin installed:

tar -xjf ~/Downloads/firefox-52.8.0esr.tar.bz2
sudo mv firefox /opt/firefox52
sudo apt install -y icedtea-plugin

Of course, just installing it is worthless if we cannot start it, and for that a desktop entry is helpful. I like to use a separate profile as that makes running this release side-by-side with the newest one possible. After this is done, you’ll find “Firefox 52 ESR” right next to the normal Firefox entry.

mkdir -p ~/.mozilla/firefox52

sudo bash -c 'cat > /usr/share/applications/firefox52.desktop' << EOF
[Desktop Entry]
Name=Firefox 52 ESR
GenericName=Web Browser
Exec=sh -c "/opt/firefox52/firefox --no-remote --profile ~/.mozilla/firefox52"
Icon=firefox
Type=Application
Categories=GNOME;GTK;Network;WebBrowser;
EOF

The final step is opening “about:addons” within Firefox 52 ESR, going to the Plugins section, and selecting “Always Activate” for the IcedTea plugin.

Now you can use Firefox 52 ESR whenever you need the Java Console.