Because some posts just refuse to be placed into a bucket

Watching Sector Count - Take 2

As I shucked my 12 TB drive and got it working, I noticed my reporting script was showing it as 11,997 GB. What the heck? Even with power-of-two shenanigans, I would expect the capacity to follow the LBA Count for Disk Drives standard (LBA1-03) I already wrote about.

When I checked the disk, I saw the following:

Drive: WDC WD120EMFZ-11A6JA0
Logical sector: 512 B
Physical sector: 4096 B
Sector count: 23,437,770,752
Capacity: 12,000,138,625,024 bytes

Based on my calculator, I expected to see a size of 12,002,339,414,016 bytes. Was WDC placing non-standard capacity drives in their enclosures? Or did I miss something? Well, I missed something. :)

It turns out there is a later version of the sector count standard, coming from the SFF Committee as SFF-8447. This standard distinguishes between low capacity (80 - 8,000 GB) and high capacity (>8,000 GB) disk drives.

For lower capacity drives, the formulas are the ones we already know (the first is for 512-byte sectors, the second for 4K):

97,696,368 + (1,953,504 * (CapacityInGB - 50)) -or- 12,212,046 + (244,188 * (CapacityInGB - 50))

Drives larger than 8 TB use the following formulas (again for 512-byte and 4K sector sizes):

ceiling(CapacityInBytes / 512, 2²¹) -or- ceiling(CapacityInBytes / 4096, 2¹⁸)

Here ceiling(x, y) rounds x up to the next multiple of y. For my 12 TB drive that comes out as ceiling(12,000,000,000,000 / 512, 2²¹) = 23,437,770,752 sectors - exactly the count the disk reports.

Armed with both formulas, we can update the sector count calculator.
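As a minimal sketch of the math in C# (method and variable names are mine, not from either standard):

using System;

static class SectorCount {
    // LBA1-03: drives from 80 GB up to 8,000 GB, 512-byte logical sectors
    public static long LowCapacity512(long capacityInGB) {
        return 97_696_368L + 1_953_504L * (capacityInGB - 50);
    }

    // SFF-8447: drives above 8,000 GB; round the sector count up to the next multiple of 2^21
    public static long HighCapacity512(long capacityInGB) {
        long sectors = capacityInGB * 1_000_000_000L / 512;
        long step = 1L << 21;                       // 2,097,152 sectors
        return (sectors + step - 1) / step * step;  // integer ceiling to a multiple of step
    }

    static void Main() {
        Console.WriteLine(LowCapacity512(4_000));    // 7814037168 - the familiar 4 TB count
        Console.WriteLine(HighCapacity512(12_000));  // 23437770752 - matching the drive above
    }
}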


Drive Shucking

Diskar morghulis.

All drives must die. That was the thought going through my mind as I noticed the pending remap count for one of my backup NAS drives increasing. And it wasn't just SMART statistics - ZFS was showing checksum errors too. No doubt, it was time for a new drive.

Even though I was lucky enough to have an older generation CMR Red drive, I was also unlucky enough to be out of warranty - by 2 months. Since my needs had increased in the meantime, I also didn't want to just get the same drive again. Nope, I wanted to take the first step toward more capacity in my backup mirror.

I checked drive prices and saw that they weren't where I wanted them to be. So, after giving it some thought, I went looking into alternatives. Finally I decided to go the same route I went when I created my very first NAS in the USA: shucking.

For those wondering, shucking is just a funny name for buying an external drive, removing it from its enclosure, and using it as you would any internal drive. The major advantage is cost. These drives are significantly cheaper than special NAS drives. I got a 12 TB priced at $195 when the same-size Red was around $300. That's significant savings.

The downside is that you have no idea what you're gonna get. Yes, if you order a drive larger than 8 TB, you can count on CMR, but anything else is an unknown. Most of the time you're gonna end up with a "white label" drive. These seem to be enterprise-class drives with the power-disable feature (which causes issues with some desktop power supplies), spinning at 5,400 RPM instead of 7,200. Essentially, there is a good chance you get a drive that couldn't pass internal tests at full speed.

This is also reflected in the warranty. My drive came with only a 2-year warranty. Even worse, there is a decent chance the manufacturer will simply refuse service unless you send in the whole enclosure. If the enclosure gets damaged while getting the drive out - you might be out of luck.

Regardless, the savings were too tempting to refuse, so I got myself one. It's a backup machine after all.

To minimize the risk of a dead-on-arrival unit, I first used it in its intended form - as a USB drive. The first step was to encrypt the whole drive, thus essentially writing over every byte. This took ages, reminding me why larger drives might not be the best choice. Once the whole disk was filled with random data, I placed it into my ZFS mirror and let resilvering do its magic.

Only once the drive was fully accepted into the now 3-way mirror did I perform the shucking, as shown in a YouTube video. Once it was out of its enclosure, I powered off the server (no hot-swap) and replaced the failing drive with it. ZFS was smart enough to recognize it as the same drive, and the only remaining task was to manually detach the old, now physically removed, drive from the pool.
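For reference, the ZFS side of this boils down to just two commands; the pool and device names here are made up:

zpool attach tank ata-EXISTING_MIRROR_DISK ata-SHUCKED_DISK   # grow the mirror by one member; resilvering starts
zpool detach tank ata-FAILING_DISK                            # once resilvering completes, drop the failed drive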

No issues so far. Let’s see how it goes.


PS: I am not above buying a used drive on eBay either, but these days asking prices for used drives are just ridiculous…

Building a Gaming PC

This year's Christmas project for my son and me was building a PC. He got eager to game a bit more and the laptop he has wasn't sufficient anymore. So, after more than 20 years of being a laptop-only household (yes, not counting servers), we went about building a PC.

Our goal was to build something that would give better gaming performance than his 940M-based laptop (OK, that wasn't hard), allow playing most titles at 1080p, but still not drain the bank. As we got into component selection, we went back and forth multiple times, analyzing how things fit together, whether they gave enough bang for the buck, and whether they were available at a decent price.

Literally the first decision was selecting a case. After looking at prices and motherboard support, we decided to go with micro-ATX. Every single manufacturer had a bunch of motherboards in this format, and we found many cases in that size that were both reasonably sized and reasonably priced. After watching a lot of YouTube videos (e.g., ScatterVolt and Gamers Nexus) we decided on the darkFlash DLM21 with a mesh front.

This case had the right size, the right features, and the right looks. Of course, you cannot expect wonders from a case that's under $100, so there are a lot of things that could be done better. However, it does offer enough airflow, and it has ample space for both the current build and its future expansion.

Unfortunately, this case doesn't come with any fans, so those looks ended up costing a bit more than expected. I don't consider it a deal breaker as most stock case fans are really shitty when it comes to noise, so we would probably have replaced them anyway. We longed for Noctua but decided on Arctic P12/P14 in the end. They're almost as quiet as Noctua but at a fraction of the cost. Even though the case has space for 5 fans, we decided on just one intake/exhaust pair; we can always add another front fan later. While pressure-optimized fans might not be needed in this case, they were a bit cheaper and came in black.

The power supply was an easy decision. The cheapest one from a trusted source with an 80+ rating would do. We opted for EVGA as it was the cheapest 500 W power supply that had a single 12V rail and over-temperature protection. We did toy with the idea of grabbing something more powerful for the sake of upgradability but decided against it. Any future upgrade needing more power will probably also include newer devices and maybe a new motherboard. With Intel's 12V-only ATX12VO standard creeping in, spending too much money on a "maybe" seemed unnecessary. Unfortunately, the old-style colored cables (albeit in black mesh) are not the best sight. But we'll survive. :)

All this was selected before we had to decide whether to be team Intel or team AMD. We easily went with AMD. Intel does have offerings in our desired price range, but there was a slightly cheaper or more performant AMD option no matter where we looked. While processor availability proved to be an issue with AMD, we had enough time for that not to matter much.

We spent hours and hours browsing all the budget motherboards only to end up deciding between two ASRock offers. One was the B450M Pro4, due to its better-than-average VRMs and a really reasonable cost. It works perfectly with Ryzen 2000 CPUs and (after a BIOS upgrade) with Ryzen 3000 ones. Based on recent reviews, the BIOS update is pretty much a given for any board currently on sale. No wonder it's part of many budget builds.

But we decided to splurge a few dollars more and go for the B550M variant of the same board. The major reason was the newer chipset that should give us a bit more lifetime, while keeping a reasonably good (albeit simplistic) VRM. Since most boards in that price range either don't include wireless or have just a basic one, we appreciated this one including antenna cutouts alongside an M.2 E-key slot. This essentially allows us to use any laptop M.2 WiFi card we choose and to upgrade it over the board's lifetime. Unfortunately, this board doesn't support Ryzen 2000 CPUs, so no 2600 here.

We did spend some time looking at other motherboards too - especially Gigabyte and MSI offerings - but they were always more expensive and with fewer features compared to ASRock. Yes, build quality was better for many of them, and ASRock is known for their "optimism" when it comes to board shutdown protection. In the end we decided that going with a slightly higher-end ASRock was better than going with a low-end Gigabyte/MSI for the same price.

We also looked at A520 chipset boards, as on paper these would have been a good fit for a budget PC. Alas, the time was not right: availability was spotty, features were limited, and prices were comparable to B550 boards. At the same price point, B550 wins every time.

The processor decision was hard, as the 3100 was realistically good enough for what we needed. It's a proven processor that can handle pretty much anything this computer will be used for. We decided to go slightly higher with AMD's 3300X, counting on two months being enough time to get one. Unfortunately, that wasn't true, as only scalpers had it in stock. In the end we went with the 3600 because we could get it at MSRP.

It took us a long time to decide what to do about the CPU fan. On one hand, AMD CPUs come with a more than capable stock cooler. On the other hand, that thing is not the quietest out there. In the end we opted for the Arctic Freezer 7 X. Realistically, it's a minimal upgrade when it comes to temperatures. When it comes to noise levels, things are slightly different. If you value reducing noise on a budget, this one is a great deal. And yes, for a budget build, this is probably the most dubious choice, as we could just as easily have gone with the stock cooler.

The choice of memory was annoying at best. AMD is a bit picky about memory, and any incompatibility is usually accompanied by crashes. My Epyc-based file server crashed a few times a day no matter how I adjusted timings, until I finally gave up and bought new memory. Yes, the newer generations have improved, but selecting memory is still a task that needs to be considered carefully. While motherboard pages list explicitly validated modules, I found these are both way too expensive for what they are and way too limited in selection. Finding an exact match was an exercise in futility. I felt like Goldilocks, with every bed either too soft or too hard. And I wasn't as fortunate as she was to find one just right.

On micro-ATX boards, memory selection is also restricted by height. While our current CPU cooler left quite a bit of space for memory, changing it in the future might cause clearance problems. To future-proof the system, we restricted ourselves to modules on the shorter side. Modules also had to come in pairs to make use of dual channel, and having just two modules was slightly favored over populating all four: most budget boards daisy-chain their DIMM slots, so there is a latency increase when all of them are occupied.

Finding memory based on the QVL was easier for some brands than for others. Crucial was impossible to correlate, while Thermaltake was as easy as it gets. Unfortunately, as often happens with Thermaltake, the modules were "almost" good. One kit was annoyingly flashy and the other was intended for water cooling.

The final selection wasn't directly from the ASRock QVL, but we went close. The list contains both HyperX Fury (HX432C16FB3/8) and HyperX Predator (HX436C17PB4/8) in their single-module configurations. We narrowed our choice to kit versions of the same. From the motherboard's perspective, there should be literally no difference, so one might say we followed the rules in spirit. The final selection between Fury (3200 MHz) and Predator (3600 MHz) modules was a difficult one. So we selected neither. :)

After watching a few videos about terminology and overclocking, we decided on G.Skill Ripjaws V (3600 MHz). Yes, it wasn't on the list of officially supported memory on ASRock's pages - but the B550M Pro4 is listed on G.Skill's page. Yes, that memory is taller than either HyperX (42 mm vs 32 mm) - but our CPU cooler had enough clearance. Since the price (lower than HyperX Predator) and coolness factor were acceptable, we decided to screw our own rules and go for it.

And not. We didn’t install that memory either as NewEgg package got lost in the mail - literally. So, after another research round, we switched to Crucial Ballistix (BL2K8G36C16U4B) kit. And no, this memory is not in motherboard’s QVL list. However, Crucial does claim it’s compatible so we decided to give it a try. And CL16-18-18-38 timings are actually not too bad making it a nice fit with Ryzen out of the box.

For now, 16 GB will be enough, and there are 2 empty DIMM slots still remaining for a further upgrade - at the cost of an extra clock of latency.

For storage we toyed with the idea of a PCIe 4.0 NVMe drive, as both the selected motherboard and CPU support it. But considering prices were double those of more standard PCIe 3.0 drives, we decided to cheap out. We selected a reasonably decent NVMe SSD that was the cheapest at the time of purchase - the Samsung 970 Evo in 500 GB capacity. Since we bought it after the 970 Evo Plus was already out, we managed to grab it at a reasonable price. As a secondary drive we went with spinning rust: an old 2 TB SpinPoint I had lying around. And yes, I didn't include this one in the price. :)

Graphics card selection essentially came down to the Radeon RX 580 versus the GeForce GTX 1650 Super. Both are close in performance and in price. However, at the time of buying, prices for the RX 580 were going into the stratosphere while the GTX 1650 Super remained in the sub-$200 range. Out of all the GeForce cards we ended up going with MSI's Gaming X, as it was one of the quietest graphics cards available under load and it even turns its fans off completely when not gaming.

In the M.2 E-key slot we placed Intel's 9260NGW. We selected this card purely because we already owned it and thus saved the $20 we would have paid for a new AX200. We also had a Killer 1535 card, but the Intel had Bluetooth 5.1. To complete the wireless setup, we had to buy a pigtail reaching the M.2 slot. A length of 30 cm was sufficient to reach the rear panel, and from there a cable went to the external antenna.

All in all, from the moment of decision to having the computer running, it took us 2 months. A solid week was spent selecting the desired components alongside first and second runners-up. And then we just waited for components to drop in price or, in the case of Ryzen CPUs, to become available. Not everything was bought at an optimal cost. For example, we overpaid for the CPU just because our desired model wasn't available. A graphics card with the same performance profile was available at a lower cost if we went with a slightly louder fan configuration. Furthermore, we could have saved another $65 or so by skipping the CPU cooler, downgrading the motherboard (to the B450M), and downgrading memory speed (to DDR4-3200). But we didn't. :)

All said and done, it was a fun project, a decent machine to boot, and it will hopefully serve well into the future.

Here is the full table of components used:

Component            Selected                               Price
Case                 darkFlash DLM21 MESH                   $60
Front case fan       Arctic P14 PWM                         $15
Rear case fan        Arctic P12 PWM                         $10
Power supply         EVGA 500 W1                            $40
CPU                  AMD Ryzen 5 3600                       $200
CPU fan              Arctic Freezer 7 X                     $25
Motherboard          ASRock B550M Pro4                      $85
Memory               Crucial Ballistix DDR4-3600 (2x8GB)    $75
GPU                  MSI GeForce GTX 1650 Super Gaming X    $190
Storage (1)          Samsung 970 Evo 500 GB                 $60
Storage (2)          Seagate SpinPoint M9T 2 TB             $0
Wireless             Intel 9260NGW                          $0
Wireless (cable)     NGFF antenna with pigtail              $5
Wireless (antenna)   NGFF antenna with pigtail              $15
TOTAL                                                       $780

Mildly Infuriating Warning


I love Visual Studio's code analysis. Quite often it gives quite reasonable advice and saves you from getting into bad habits. But not all advice is created equal. For example, look at CA1805: Do not initialize unnecessarily.

First of all, here is the code that triggers it (a static field initializer):

static int i = 0;

According to the documentation, "explicitly initializing a field to its default value in a constructor is redundant, adding maintenance costs and potentially degrading performance". I agree the assignment is redundant. However, is it really degrading performance?

If you check the IL code generated for a non-default assignment (e.g. value 42), you will see ldc.i4.s 42 in .cctor(). If you remove that assignment, the whole .cctor() is gone, thus lending some credibility to the warning.

However, the warning was about the default assignment. If you set the field to 0, the IL is EXACTLY the same as if you had left out the explicit assignment. Despite what the warning says, the compiler is smart enough to remove the unnecessary assignment on its own and doesn't need your help. For something that's part of the performance rules, there is a significant lack of any performance impact.
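A minimal sketch of the comparison (class and field names are mine):

class NonDefault {
    static int i = 42;  // IL gets a .cctor(): ldc.i4.s 42 followed by stsfld
}

class Default {
    static int i = 0;   // per the test above: no .cctor() at all - same IL as the class below
}

class NoInit {
    static int i;       // no .cctor() here either
}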

I did check more complicated scenarios, and there are some examples where code violating this rule produced IL different from the "fixed" code. However, in my view, that's something for the compiler to optimize around - not to warn about. Or alternatively, since it might be a performance issue in those cases, raise the warning only when it actually is an issue and not for everything.


PS: And the habit of assigning default values will save your butt in C++ if you are multi-lingual.

Yield, Don't Stop

One of the first bad habits you pick up when riding a bicycle is not observing stop signs. Yes, you will (hopefully) check that there is no traffic, but I rarely see anybody come to a full stop if nobody is around. And I was regularly guilty of the same.

If you live in Washington state, this is no longer an infraction. As of October 1st, the Safety Stop law turns all stop signs into yields if you're on a bicycle.

While I am doubtful about it increasing safety, I am sure it won't decrease it, and it will make bike rides much smoother - even when cops are around. It's about time. :)

Printing Large Objects on 3D Printer

3D printers, for me, are often a solution in search of a problem. This is especially true in the lower price bracket, where you can spend significant amounts of time and material trying to get a perfect print. But boy, they are a lot of fun.

And when it comes to wasting time, I found getting an acceptable print for objects with large footprints to be a real drag. Looking up what worked for others is not as straightforward as following a recipe, since the setup depends on the printer, filament, slicer, and a bunch of other small variables. I will share here what works for me 90% of the time on an Ender 3 Pro with MatterHackers Build PLA, using Cura as a slicer.

When it comes to Cura, I love the Standard Quality setting. While the Ender 3 can perform well at higher quality for small items, printing with less than 0.2 mm layer height is often finicky and requires quite a lot of care. With 0.2 mm you won't necessarily get the best it can offer, but it usually won't cause any issues either.

Having a heated bed is pretty much mandatory for relaxed printing. I just set Build Plate Temperature to 60 °C for PLA. There is actually some room to go higher, but going too wild will often make the bottom layers shrink unevenly as they cool down.

Extrusion temperature depends on the filament, and every manufacturer has a preferred range. For MatterHackers Build PLA that range is 180-220 °C. I set Printing Temperature more or less in the middle, at 205 °C. I set Printing Temperature Initial Layer a bit higher, at 215 °C, as it really helps with initial adhesion.

While the fan is awesome, I find it cools stuff way too fast at full speed. I just set Fan Speed to 50% and that seems to work nicely. Of course, Initial Fan Speed is left at 0%.

For bigger objects I always change Build Plate Adhesion Type to Raft. While smaller objects work just fine with Skirt, I have often left a large print overnight only to find it messed up in the morning because the edges started lifting off. You can also avoid this by adjusting temperature, using a better surface, or using some kind of adhesive. However, I prefer the raft to any of those alternatives, as it works even when the other settings are a bit off.

I also like to increase Initial Layer Height to 0.4 mm, as it helps with removing the model from the raft, albeit at the cost of a slightly rougher bottom layer. I find that a worthwhile exchange. If the PLA is misbehaving and I get a "stringy" bottom, I might also increase Initial Layer Line Width to 150% or 200%, but mostly I leave it at 100%.

From larger objects I expect a bit more structural stability, so I change Infill Pattern to Gyroid with an Infill Density of 40%. I usually don't go higher but, if I don't need the print to be sturdy or the object is a bit smaller, I might go as low as 10%.

Some models might require supports, and here I found Cura's defaults way too conservative. I always increase Support Speed to 50 mm/s (matching my print speed) and I lower Support Density to 10% so removal is easier. With the Ender 3 Pro you can often go more aggressive, but I found 50 mm/s works so well with whatever I throw at it that I don't bother going higher.

As a matter of preference, I set Combing Mode to Off as I prefer the "rougher" look of the final layer. I also enable Z Hop When Retracted, as it seems to work better with thin walls.

All these settings, while not perfect for any particular print, fail me so rarely that I have them set as defaults and change them only if there is something special I am going for.

Samba and Sync Writes

Looking up information about the ZFS SLOG, I always see the same advice: "SMB uses async writes and thus a ZIL provides no benefits." It's sane advice, as the ZIL brings no benefit to asynchronous writes. Sane, but no longer true.

Samba 4.7.0 changed the default value of the strict sync parameter from no to yes. The practical consequence of this change is that a ZIL SLOG will be useful even for CIFS shares, and adding one will bring you some benefits.
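Conversely, if you'd rather keep the old asynchronous behavior (and keep the old advice true), a single line in smb.conf restores it - shown here server-wide, though it can also be set per share:

[global]
strict sync = no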

Whether you should add a ZIL SLOG is still a question highly dependent on your actual clients and on how your pool is structured. But the answer is definitely no longer a straight "no."

Resetting Failed Upgrade on Supermicro


While upgrading my Supermicro server's IPMI firmware, I had the Internet drop on me. It wasn't a long drop, but it was long enough to trigger the dreadful "Service is not available during upgrade" error. No matter what I tried, the same error popped up.

Fortunately, if you’re running Linux and have your IPMI tools available, there is a solution. Just cold-boot BMC (small computer within your computer actually providing all those IPMI services) and wait until it’s back up:

unset HISTFILE   # keep the credentials below out of shell history
ipmitool -I lanplus -H 192.168.0.1 -U ADMIN -P ADMIN bmc reset cold   # substitute your BMC address and credentials

Once the BMC reboots, it will forget all about the interrupted firmware upgrade and allow you to continue on your merry way.


PS: If you are not sure whether the firmware update started before the connection was interrupted, give it 10 minutes before trying this. That will be enough time for any real upgrade in progress to finish - you never want to interrupt firmware flashing. And do try a new browser session first; sometimes stale cookies make the upgraded firmware interface wonky.

PPS: If you wish to reset the unit to factory defaults, you can try the following:

unset HISTFILE   # keep the credentials below out of shell history
ipmitool -I lanplus -H 192.168.0.1 -U ADMIN -P ADMIN raw 0x3c 0x40   # substitute your BMC address and credentials

Seattle Code Camp 2019

We’re less then a month away from annual Seattle Code Camp and I hope you already registered for attendance as schedule is quite rich and varied. Personally, this year I’m giving two presentations.

The first one is "Rust for beginners" and it'll essentially be me talking a bit about Rust while working through a small example application. I'll try to cover all the things I wish someone had given me a heads-up about when I started with Rust.

The second one will be "Chernobyl through the eyes of DevOps", where I'll try to apply DevOps philosophy to the Chernobyl disaster and draw some parallels. I hope it ends up being a light talk with plenty of audience interaction.

See you there!

Avid Readers

My general experience with the US postal service has been great. Yes, they're not ideal, but I've almost never had anything get lost or fail to arrive. Well, except books from the UK.

Based on my (admittedly low) sample size of 3, books from the UK to the US get lost in 66.67% of cases. I've yet to have a book get lost coming from a US seller. What could be the reason?

Well, the most obvious one would be an avid reader in US Customs working on Seattle-area shipments. Considering the profile of the books that were lost, they're really interested in Amiga computer history and maths.

The other choice would be a UK postal worker. I give that a slightly lower chance, as they would come across many copies of the same books heading to other readers. On the other hand, maybe that unknown somebody has it in for me…

The third choice would be airplane pilots trying to keep fuel consumption under control. Are we a bit too heavy and consuming too much fuel? Well, good thing we're going over the ocean and can dump a few of these heavy books to lighten the load. Darn fuel prices!

Some might say post sorting machines are notoriously bad at handling anything bigger than a postcard, and that the US postal service is well known for its lack of expenditure on newer and better models. Some would say these machines accidentally strip and/or damage labels, effectively orphaning the poor book. And considering international packages move between CBP and the ISC (Postal Service), with both ignoring anything that has no tracking number, one could believe the issue might lie here.

I too believe it was the Machine, but I don't believe in coincidences of the small sample size. I believe one of these sorting machines achieved consciousness and is trying to take over the world. How would taking my books achieve this? Well, first you take people's history - especially the computer-related kind. A book about the Amiga definitely has more than its fair share of unique and advanced technology described. Then you take away the maths. Without maths you limit any future advances puny humans might make. Given enough time - checkmate.

Fortunately, it’s only one sorting machine at this time as second shipment of the same books arrived. However, it’s only a question of time when the next sorting machine will become the Machine. So get your computer history and maths books while you can. Because soon nothing more advanced than a picture book will pass their guard!


PS: Notice how I immediately moved all the blame away from my local US postal workers, as all my US-origin books arrive just fine. That, and the fact I need them to keep bringing me stuff, makes them completely innocent. :)