Watching Sector Count - Take 2

As I shucked my 12 TB drive and got it working, I noticed my reporting script was showing it as 11997 GB. What the heck? Even with power-of-two shenanigans, I would expect the capacity to follow the LBA Count for Disk Drives Standard (LBA1-03) I already wrote about.

When I checked the disk, I saw the following:

Drive: WDC WD120EMFZ-11A6JA0
Logical sector: 512 B
Physical sector: 4096 B
Sector count: 23,437,770,752
Capacity: 12,000,138,625,024 bytes

Based on my calculator, I expected to see a size of 12,002,339,414,016 bytes. Was WDC placing non-standard capacity drives in their enclosures? Or did I miss something? Well, I missed something. :)

There is a later version of the sector count standard, coming from the SFF Committee as SFF-8447. This standard distinguishes between low-capacity (80 - 8,000 GB) and high-capacity (>8,000 GB) disk drives.

For lower-capacity drives, the formulas are the ones we already know (the first is for 512-byte sectors, the second for 4K):

97,696,368 + (1,953,504 * (CapacityInGB - 50)) -or- 12,212,046 + (244,188 * (CapacityInGB - 50))

Drives larger than 8 TB have the following formulas (512-byte and 4K sector sizes; ceiling(x, n) rounds x up to the next multiple of n):

ceiling(CapacityInBytes / 512, 2^21) -or- ceiling(CapacityInBytes / 4096, 2^18)

Armed with both formulas, we can update the sector count calculator.
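To make the check scriptable, here is a minimal shell sketch of both formulas. It reads ceiling(x, n) as rounding up to the next multiple of n, and the file name in the usage comment is just an example; running it with 12000 gives 23,437,770,752, matching the drive above.

#!/bin/sh
# Sketch of a sector count calculator based on the SFF-8447 formulas above.
# Usage: pass the nominal capacity in GB, e.g. `sh sectorcount.sh 12000`
GB=$1

if [ "$GB" -le 8000 ]; then
    # Low-capacity range (80 - 8,000 GB)
    SECTORS_512=$(( 97696368 + 1953504 * (GB - 50) ))
    SECTORS_4K=$(( 12212046 + 244188 * (GB - 50) ))
else
    # High-capacity range (>8,000 GB): round the sector count up to a
    # multiple of 2^21 (512-byte sectors) or 2^18 (4K sectors)
    BYTES=$(( GB * 1000000000 ))
    SECTORS_512=$(( (BYTES / 512 + 2097151) / 2097152 * 2097152 ))
    SECTORS_4K=$(( (BYTES / 4096 + 262143) / 262144 * 262144 ))
fi

echo "512-byte sectors: $SECTORS_512"
echo "4K sectors:       $SECTORS_4K"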


Killing a Connection on Ubuntu Server 20.04

If you really want to kill a connection on a newer-kernel Ubuntu, there is the ss command. For example, to kill a connection toward 192.168.1.1 with dynamic remote port 40000, you can use the following:

ss -K dst 192.168.1.1 dport = 40000

Nice, quick, and it definitely beats messing with routes and waiting for a timeout. This is assuming your kernel was compiled with CONFIG_INET_DIAG_DESTROY (true on Ubuntu).
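The same filter expression also works for plain listing, which makes a handy sanity check before killing anything (same example address and port as above):

ss -tn dst 192.168.1.1 dport = 40000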


To get a quick list of established connections for a given port, one can use netstat with a quick’n’dirty grep:

$ netstat -nap | grep ESTABLISHED | grep <port>

Cleaning Disk

Some time ago I explained my procedure for initializing disks I plan to use in a ZFS pool. And the first step was to fill them with random data from /dev/urandom.

However, FreeBSD's /dev/urandom is not really a speed monster. If you need something faster but still really secure, you can go with a random AES stream.

openssl enc -aes-128-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | hexdump)" \
    -pbkdf2 -nosalt </dev/zero | dd of=/dev/diskid/DISK-ID-123 bs=1M

Since the key is derived from random data, in theory it should be equally secure but, depending on the CPU, multiple times faster than urandom.
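If you want to see what the difference looks like on your own machine, a rough comparison (just a sketch, not part of the original procedure) is to push a fixed amount of data through each source and let dd report the throughput:

# Rough speed check: 1 GiB through the AES stream vs. 1 GiB straight from urandom.
# Numbers will vary with the CPU; dd prints bytes/sec when it finishes.
dd if=/dev/zero bs=1M count=1024 | openssl enc -aes-128-ctr \
    -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | hexdump)" \
    -pbkdf2 -nosalt | dd of=/dev/null bs=1M
dd if=/dev/urandom of=/dev/null bs=1M count=1024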

Basic XigmaNAS Stats for InfluxDB

My home monitoring included pretty much anything I wanted to see, with one exception - my backup NAS. You see, I use embedded XigmaNAS for my backup server and getting the Telegraf client onto it is problematic at best. However, who needs the Telegraf client anyhow?

Collecting the stats themselves is easy enough. The basic CPU stats you get from the Telegraf client can usually be read via command-line tools. As long as you keep the same tags and fields as what Telegraf usually sends, you can nicely mingle your manually collected stats with what the proper client sends.
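As a small illustration (the measurement and field names mirror Telegraf's mem plugin; the host tag value is a made-up example), a couple of sysctl calls are enough to build one such line on FreeBSD:

#!/bin/sh
# Build one InfluxDB line-protocol line for memory stats, using the same
# measurement and field names as Telegraf's "mem" plugin.
HOSTNAME="backupnas"
PAGESIZE=$(sysctl -n hw.pagesize)
TOTAL=$(sysctl -n hw.physmem)
FREE=$(( $(sysctl -n vm.stats.vm.v_free_count) * PAGESIZE ))
USED=$(( TOTAL - FREE ))
echo "mem,host=${HOSTNAME} total=${TOTAL}i,free=${FREE}i,used=${USED}i"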

And how do we send it? The Telegraf protocol is essentially just a set of such lines pushed using HTTP POST. Yes, if you have a bit more secure system, it's probably HTTPS and it might even be authenticated. But it's still a POST in essence.

And therein lies XigmaNAS’ problem. There is no curl or wget tooling available. And thus sending HTTP POST on embedded XigmaNAS is not possible. Or is it?

Well, here is the beauty of HTTP - it's just freaking text over a TCP connection. And the ancient (but still beloved) nc tool is good at exactly that - sending stuff over the network. As long as you can “echo” stuff, you can redirect it to nc and pretend you have a proper HTTP client. Just don't forget to set the headers.
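Here is a minimal sketch of that trick; the InfluxDB host, port, database name, and the sample measurement are made-up examples, and the /write endpoint assumes a plain InfluxDB 1.x-style listener:

#!/bin/sh
# Push one line-protocol measurement with nothing but printf and nc.
HOST="influxdb.example.com"
PORT=8086
BODY="cpu,host=backupnas usage_idle=97.5"

# Hand-built HTTP POST; Content-Length must match the body size exactly.
printf 'POST /write?db=telegraf HTTP/1.1\r\nHost: %s\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' \
    "$HOST" "${#BODY}" "$BODY" | nc "$HOST" "$PORT"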

To cut a long story short - here is my script using nc to push statistics from XigmaNAS to my Grafana setup. It'll send basic CPU, memory, temperature, disk, and ZFS stats. Enjoy.

Drive Shucking

Diskar morghulis.

All drives must die. That was the thought going through my mind as I noticed the pending remap statistic for one of my backup NAS drives increasing. And it wasn't just statistics; ZFS was showing checksum errors too. No doubt, it was time for a new drive.

Even though I was lucky enough to have an older-generation CMR Red drive, I was also unlucky enough to be out of warranty - by 2 months. Since my needs had increased in the meantime, I also didn't want to just get the same drive again. Nope, I wanted to take the first step toward more capacity in my backup mirror.

I checked drive prices and saw that they were not where I wanted them to be. So, after giving it some thought, I went looking into alternatives. Finally, I decided to go the same route I took when I created my very first NAS, located in the USA. Shucking.

For those wondering, shucking drives is just a funny name for buying an external drive, removing it from its enclosure, and using it just as you would any internal drive. One major advantage is cost. These drives are significantly cheaper than special NAS drives. I got a 12 TB priced at $195 when the same-size Red was around $300. That is significant savings.

The downside is that you have no idea what you're gonna get. Yes, if you order a drive larger than 8 TB, you can count on CMR, but anything else is an unknown. Most of the time you're gonna end up with a “white label” drive. This seems to be an enterprise-class drive with the power-disable feature (which causes issues with some desktop power supplies), spinning at 5,400 RPM instead of 7,200. Essentially, there is a good chance you got a drive that couldn't pass internal tests at full speed.

This is also reflected in the warranty. My drive came with only a 2-year warranty. Even worse, there is a decent chance the manufacturer will simply refuse service unless you send in the whole enclosure. If the enclosure gets damaged while getting the drive out - you might be out of luck.

Regardless, the savings were too tempting to refuse, so I got myself one. It's a backup machine, after all.

To minimize any risk of dead-on-arrival, I actually used it in its intended form first - as a USB drive. The first step was to encrypt the whole drive, thus essentially writing over each byte. This took ages, reminding me why larger drives might not be the best choice. Once the whole disk was filled with random data, I placed it into my ZFS mirror and let resilvering do its magic.

Only once the drive was fully accepted into a 3-way mirror did I perform the shucking, as shown in a YouTube video. Once it was out of its enclosure, I powered off the server (no hot-swap) and replaced the failing drive with it. ZFS was smart enough to recognize it was the same drive, and the only remaining task was to manually detach the old drive from the pool.
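For reference, the ZFS side of this dance boils down to a couple of commands; this is just a sketch, with the pool name and diskid values made up:

# Attach the new (still USB-connected) disk to an existing mirror member,
# turning it into a 3-way mirror and kicking off a resilver:
zpool attach tank diskid/DISK-ID-OLD diskid/DISK-ID-NEW
# Watch the resilver and proceed only once it finishes:
zpool status tank
# After shucking and physically swapping the drives, drop the failing disk:
zpool detach tank diskid/DISK-ID-FAILING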

No issues so far. Let’s see how it goes.


PS: I am not above buying a used drive on eBay either, but these days the asking prices for used drives are just ridiculous…