Initial encryption of a ZFS pool does require a bit of work - especially when it comes to the initial disk randomization. Yes, you could skip it, but then the encrypted bits are going to stick out. It's best to randomize it all before doing anything ZFS related.
The first problem I had with the old setup was the need to start randomizing each disk separately. Since the operation takes a while (days!), this usually resulted in me starting all the dd commands concurrently, thus starving the machine of resources (mostly CPU for random number generation).
As my CPU can generate enough random data to saturate two disks, it made sense to parallelize using xargs with the serial number (diskid) of each disk as an input. While using /dev/sd* would work, I tend to explicitly specify each disk's serial number as it's not destructive if run on the wrong machine. I consider it a protection against myself. :)
The final command still takes ages but it requires only one window and it will take care to keep only two disks busy at a time:
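(Something along these lines should do; the DISK-SERIAL values below are placeholders for your actual diskid entries, and -P 2 is what keeps only two disks busy at a time.)

printf '%s\n' DISK-SERIAL1 DISK-SERIAL2 DISK-SERIAL3 DISK-SERIAL4 \
    | xargs -P 2 -I {} dd if=/dev/urandom of=/dev/diskid/{} bs=1M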
There was a time when I encrypted each disk with a separate password, and that's still a bit more secure than having a single one. However, with multiple passwords comes great annoyance. These days I have a single password for all the disks in the same pool. It makes my life MUCH easier.
In theory, somebody cracking one disk will immediately get access to all my data, but in practice it makes little difference. If somebody decrypted one disk, they either found a gaping hole in Geli and/or the underlying encryption, in which case the other disks will suffer the same fate and there's nothing I can do, or they intercepted one of my keys. As I always use all the keys together, chances are that intercepting one takes the same effort as intercepting them all. So I trade a bit of security for a major simplification.
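For completeness, a rough sketch of the per-disk initialization with a shared passphrase might look as follows; FreeBSD geli is assumed, and the device names and the 4K sector size are placeholders rather than the original setup:

for disk in DISK-SERIAL1 DISK-SERIAL2 DISK-SERIAL3 DISK-SERIAL4; do
    geli init -s 4096 /dev/diskid/$disk    # prompts for the (shared) passphrase
    geli attach /dev/diskid/$disk          # prompts again to attach the encrypted provider
done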
After a failed yum upgrade (darn low memory), I noticed my CentOS NTP server was not booting anymore. A look at the console showed the progress bar still loading, but pressing Escape revealed the real issue: Failed to load SELinux policy, freezing.
The first thing to do in that situation is to try booting without SELinux, and the easiest way I found to accomplish this was pressing e in the boot menu and then adding selinux=0 at the end of the line starting with linux16. Continuing the boot with Ctrl+X will load CentOS, but with SELinux disabled.
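Purely as an illustration (the kernel version and other arguments are placeholders, not the actual line), the edited entry ends up looking something like:

linux16 /vmlinuz-<version> root=<root-device> ro ... selinux=0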
As I don't run my public-facing servers without SELinux, it was time to fix it. Since I didn't have the package before, I installed selinux-policy-targeted, but reinstall would work equally well if the package was already present. In any case, running both doesn't hurt:
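sudo yum install selinux-policy-targeted
sudo yum reinstall selinux-policy-targeted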
Finally we need to let system know SELinux should be reapplied. This can be done by creating a special .autorelabel file in the root directory followed by a reboot:
sudo touch /.autorelabel
sudo reboot
During reboot SELinux will reapply all labeling it needs and we can enjoy our server again.
One of my XigmaNAS machines had a curious issue. Upon boot, it wouldn't display the console menu. After boot it would go directly to the bash prompt - no password asked. While practical, this was completely insecure as anyone capable of connecting a monitor and keyboard would get full access.
Upon checking the configuration, the culprit was obvious - I had changed the default shell. As the console menu execution is part of the .cshrc configuration file and bash ignores that file, the console menu was forever lost.
I definitely didn’t like that.
Since I really wanted the bash prompt but also preferred to have the console menu (which can be disabled), I decided upon a slightly different shell selection approach without system-wide consequences. Simply adding an exec bash command at the end of .cshrc works nicely without the nasty side-effects. The following PostInit script will do the trick:
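(A minimal sketch; it assumes root's .cshrc lives at /root/.cshrc, and the grep guard is my addition just to avoid appending the line more than once.)

grep -q 'exec bash' /root/.cshrc || echo 'if ($?prompt) exec bash' >> /root/.cshrc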
And yes, the check whether we're working with an interactive shell (the $?prompt part) is necessary because commands executed without a terminal (e.g. directly over SSH or from cron jobs) would fail otherwise.
When it came to setting up my remote backup machine, only three things were important: use of 4K disks, two-disk redundancy (raidz2), and reasonably efficient storage of variously sized files. Reading around the Internet led me to believe volblocksize tweaking was what I needed.
However, unless you create a zvol, that knob is actually not available. The only available property impacting file storage capacity is recordsize. Therefore I decided to try out a couple of record sizes and see how the storage capacity compares.
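For reference, recordsize is a per-dataset property; something along these lines (the pool and dataset names here are placeholders) is all the tuning amounts to:

zfs create -o recordsize=64K tank/backup
zfs set recordsize=128K tank/backup     # can be changed later; affects only newly written files
zfs get recordsize tank/backup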
For the purpose of the test I decided to create a virtual machine with six extra 20 GB disks. Yes, using a virtual machine was not ideal, but I was interested in relative results and not absolute numbers so this would do. And mind you, I wasn't interested in speed but just in data usage, so again a virtual machine seemed like a perfect environment.
Instead of properly testing with real files, I created 100000 files of about 0.5K, 33000 files of about 5K, 11000 files of about 50K, 3700 files of about 500K, 1200 files of about 5M, and finally about 400 files of around 50M. Essentially, there were six file sizes, each set being one decade bigger but only a third as numerous. The exact size of each file was randomly chosen to ensure some variety.
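The actual script is linked in the PS below; purely as an illustration of the approach, a generator along these lines would produce a similar data set (the seed, file naming, and helper function are my own placeholders):

#!/usr/bin/env bash
# Illustrative sketch only; sizes are pseudo-random yet repeatable thanks to the fixed seed.
RANDOM=42
makeset() {   # $1 = file count, $2 = approximate size in bytes
    for (( i = 0; i < $1; i++ )); do
        size=$(( $2 / 2 + RANDOM * $2 / 32768 ))   # roughly 0.5x to 1.5x the target size
        head -c "$size" /dev/urandom > "test-$2-$i" # files land in the current directory
    done
}
makeset 100000      512    # ~0.5K files
makeset  33000     5120    # ~5K files
makeset  11000    51200    # ~50K files
makeset   3700   512000    # ~500K files
makeset   1200  5242880    # ~5M files
makeset    400 52428800    # ~50M files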
After repeating the test three times for each record size and for both the 4- and 6-disk setups, I got the following results:
Record size      4 disks                  6 disks
                 Used       Available     Used       Available
4K                    -             0     61,557     17,064
8K                    -             0     61,171     17,450
16K              34,008         4,191     31,223     47,398
32K              34,025         4,173     31,213     47,408
64K              31,300         6,899     31,268     47,353
128K             31,276         6,923     31,180     47,441
256K             30,719         7,481     31,432     47,189
512K             31,069         7,130     31,814     46,807
1024K            30,920         7,279     31,714     46,907
(all values are in MB)
Two things of interest to note here. The first is that a small record size doesn't really help at all. The quantity of metadata needed pushes usage well over the available disk space in the 4-disk case and causes extremely inefficient storage with 6 disks. Although the test data set is 30.2 GB, with the overhead the occupancy goes into 60+ GB territory. Quite inefficient.
The second is that the default 128K value is actually quite well selected. While my (artificial) data set has shown slightly better results with larger record sizes, essentially everything at 64K and over doesn't fare too badly.
PS: An Excel file with the raw data and an example script is available for download.
PPS: Yes, the script generates the same random numbers every time - this was done intentionally so that the same amount of logical space is used with every test. Do note that this doesn't translate to the same physical space usage as (mostly due to transaction group timing) a slightly different amount of metadata will be written.
My only console - an Xbox 360 - is a bit aged by any standard. I don't find that too bothersome except in one aspect - the network connection. Being aged means it has only wired Ethernet. Considering I “bought it” for the actual cost of $0, paying $50 for a wireless adapter would be a bit of a premium.
Fortunately, I had a Mikrotik mAP Lite lying around. It's a small device with a 2.4 GHz radio and a single 100 Mbps RJ-45 Ethernet port. While not obviously designed to be a wireless client, its powerful software does allow for it.
The very first step is not only resetting the Mikrotik mAP lite's configuration but actually deleting it completely. Either using System, Reset Configuration, and selecting No Default Configuration, or going via the terminal is equally good:
/system
reset-configuration no-defaults=yes
Starting with a blank slate would be problematic for many devices, but not for a Mikrotik as one can always use WinBox and its neighbor search option to connect using the MAC address.
On the empty device, the first step is creating the security profile and connecting to the wireless network via a bridge. In my case I used WPA2 with n-only wireless. While the default of b/g/n (2.4ghz-b/g/n) does offer a bit more flexibility when it comes to compatibility with other devices, using n-only does help with the network's speed (e.g. beacons are always transmitted at the slowest speed the standard allows). Of course, you will also need to know the wireless SSID.
In Mikrotik's language these steps can be expressed with the following commands:
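(The profile name, SSID, and pre-shared key below are placeholders; station-pseudobridge is one common way to let RouterOS act as a wireless client on behalf of a wired device.)

/interface wireless security-profiles
add name=wifi-client mode=dynamic-keys authentication-types=wpa2-psk wpa2-pre-shared-key="MyWirelessPassword"

/interface wireless
set wlan1 band=2ghz-onlyn ssid="MyWirelessNetwork" security-profile=wifi-client mode=station-pseudobridge disabled=no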