Windows 10 Pro USB Install on Dell's XPS 15

When I got my new Dell XPS 15 with Windows 10 Home, the first thing I wanted was to install my own fresh copy of Windows 10 Pro.

Guess what? Dell, like many other PC manufacturers, stores the key in BIOS (a Home edition key in my case) and Windows installation will never ask you for it. Try as you may, it won’t even offer you a chance to enter an alternate key. That is, unless you adjust it a bit.

The first part is preparing the installation USB, and these same steps are needed even if you don’t need to change the install key. Press <Win>+<R>, type diskpart, and confirm with OK. This will start the partition editor tool. Be very, very careful to select the disk you actually want to clean and turn into the new installation USB:

LIST DISK
 Disk ###  Status         Size     Free     Dyn  Gpt
 --------  -------------  -------  -------  ---  ---
 Disk 0    Online          476 GB      0 B        *
 Disk 1    Online          931 GB      0 B
 Disk 2    Online         7168 MB      0 B
 Disk ^^3^^    Online         7648 MB      0 B

SELECT DISK ^^3^^
 Disk 3 is now the selected disk.

CLEAN
 DiskPart succeeded in cleaning the disk.

CREATE PARTITION PRIMARY
 DiskPart succeeded in creating the specified partition.

FORMAT FS=FAT32 QUICK
 100 percent completed
 DiskPart successfully formatted the volume.

EXIT

Assuming that your newly created and empty USB drive is under letter U: and your Windows installation disk is at W:, you can use XCOPY to transfer the files. Again, press <Win>+<R> to get a prompt where you can enter the following command:

XCOPY ^^W:^^*.* /e /f ^^U:^^\

To get our key into the installation, we need to create PID.txt with the following content (use your key instead of XXXXX-XXXXX-XXXXX-XXXXX-XXXXX):

[PID]
Value=^^XXXXX-XXXXX-XXXXX-XXXXX-XXXXX^^

You then copy this file onto the USB, into the U:\sources or U:\x64\sources folder, depending on which one is present.
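
For example, assuming the same U: drive letter and that you saved PID.txt to C:\Temp (the path is just an example), another <Win>+<R> prompt with XCOPY does the job:

XCOPY C:\Temp\PID.txt U:\sources\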

Now you can plug the USB into the XPS 15, boot to it using the F12 key, and proceed with Windows installation as you usually would. The only difference is that Windows will now use the key from the USB instead of the one in BIOS and give you the correct edition.

PS: If you want to use a USB drive bigger than 64 GB, use CREATE PARTITION PRIMARY SIZE=8000 to create a smaller (roughly 8 GB) partition. Otherwise FAT32 formatting won’t work, and FAT32 is important for UEFI boot.

PPS: To avoid entering legacy mode, I like to add a custom EFI boot option pointing to \efi\boot\bootx64.efi on the USB.

Bimil 2.20

This minor update essentially brings only two significant changes.

The first is the inclusion of an NTP check before a time-based two-factor authentication code is generated for the first time. If you are getting a code on a freshly installed computer with the wrong date, or your clock has simply drifted more than the allowed 30 seconds, Bimil is now going to check the time and issue the correct code regardless of your system clock.
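
If you are curious how far your own clock has drifted (anything approaching the 30-second TOTP step is a problem), a query such as the one below reports the offset without touching the clock; pool.ntp.org is just an example server and this assumes ntpdate is available:

ntpdate -q pool.ntp.org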

The second important change is a Debian package. While you could run Bimil on Linux before, you had to deal with installation and requirements yourself. Now it is enough to just download the package and use your favorite (Debian-based) installer. And yes, it does install into /opt.
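
For example, installing from a terminal would look something like this (the file name below is made up; use whatever the downloaded package is actually called):

sudo dpkg -i bimil_2.20_all.deb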

To check out these changes, together with a few minor improvements and bug fixes, you can download Bimil from these pages or update it through the application.

Your Evaluation Period Has Ended

There is no better way to spend a nice weekend morning than programming - I am sure you agree. ;)

So, one Saturday morning I started Visual Studio Community edition, only to be greeted with a “Your evaluation period has ended” message. Somehow my license went “stale” even though I had been using Visual Studio 2017 for months. Easy enough, there is a “Check for an updated license” link immediately below. Unfortunately, that link did nothing except inform me that the license couldn’t be downloaded.

The actual solution was to log into my Microsoft account. Once the credentials had been verified, the dialog simply disappeared. My best guess is that the inability to log into the account was the cause of this license covfefe.

Although the story has a happy ending, it might not have been so if I didn’t have Internet access. If this had happened in the air, my options would have been limited to crying and/or sleeping. Or, if I were lucky, paying the exorbitant price of airplane WiFi.

I guess logging off and back onto my Microsoft account should become my standard preflight procedure.

But the question in the back of my mind is: why the heck would you even put this check in a development tool you give away for free?

HDD Health Analysis

Every day I get a report from my NAS. It includes a bunch of data about ZFS datasets and general machine health. However, one thing was missing - I didn’t really capture hard disk SMART errors.

As a disk will report a bunch of values in SMART, I first had to decide which ones to use. A great help here came from BackBlaze as they publish hard drive test data and stats. It is a wealth of information and I recommend reading it all. If you decide on a shortcut, one of the links covers the SMART stats they’ve found to indicate drive failure quite reliably.

The first one is Reallocated Sectors Count (5). It is essentially a counter of bad sectors found during the drive’s operation. Ideally you want this number to be 0. As soon as it starts increasing, one should think about replacing the drive. All my drives so far have this value at 0.

The second attribute I track is Reported Uncorrectable Errors (187). This one shows the number of errors that could not be corrected internally using ECC and that resulted in an OS-visible read failure. Interestingly, only my SSD cache supports this attribute.

One I decided not to track is Command Timeout (188) as, curiously, none of my drives actually report it. Looking into BackBlaze’s data, it seems that this one is also the most unreliable of the bunch, so no great loss here.

I do track the Current Pending Sector Count (197) attribute. While this one doesn’t necessarily mean anything major is wrong and it is transient in nature (i.e. its value can move between some number and 0), I decided to track it as it indicates potential issues with the platter - even if the data can be read at a later time. This attribute is present (and 0) on my spinning disks while the SSD doesn’t support it.

The fifth attribute they mentioned, Uncorrectable Sector Count (198), I do not track. While its value could indicate potential issues with the platters and disk surface, it is updated only via an offline test. As I don’t run those, this value will never actually change. Interestingly, my SSD doesn’t even support this attribute.

I additionally track Power-On Hours (9). I don’t have an actual threshold, nor do I plan to replace a drive when it reaches a certain value, but it will definitely come in handy for correlation with other (potential) errors as all my disks support this attribute. Interestingly, BackBlaze found that failure rates rise significantly after three years. I do expect my drives to last significantly longer as my NAS isn’t stressed nearly as much as BackBlaze’s data center.

Lastly, I track Temperature (194). Again, I track it only to see if everything is OK with cooling. All my drives support it and, as expected, the SSD’s temperature is about 10 degrees higher than that of the spinning drives.

Here is a small and incomplete bash example of commands I use to capture these stats on NAS4Free:

DEVICE=^^ada0^^
# capture the whole smartctl report once; each value below is extracted from it
DISK_SMART_OUTPUT=`smartctl -a /dev/$DEVICE 2> /dev/null`
# raw value is in column 10; cut drops anything after 'h' (some drives report hours as e.g. 12345h+32m)
DISK_REALLOCATED=`echo "$DISK_SMART_OUTPUT" | egrep "^  5 Reallocated_Sector_Ct" | awk '{print $10}' | cut -dh -f1`
DISK_HOURS=`echo "$DISK_SMART_OUTPUT" | egrep "^  9 Power_On_Hours" | awk '{print $10}' | cut -dh -f1`
DISK_UNCORRECTABLE=`echo "$DISK_SMART_OUTPUT" | egrep "^187 Reported_Uncorrect" | awk '{print $10}' | cut -dh -f1`
DISK_TEMPERATURE=`echo "$DISK_SMART_OUTPUT" | egrep "^194 Temperature_Celsius" | awk '{print $10}' | cut -dh -f1`
DISK_PENDING=`echo "$DISK_SMART_OUTPUT" | egrep "^197 Current_Pending_Sector" | awk '{print $10}' | cut -dh -f1`

Note that I capture the whole smartctl output into a variable instead of making multiple calls. This is just a bit of a time saver and there is no issue (other than speed) with simply calling smartctl multiple times. If you do decide to call it only once, do not forget the quotes around the “echoed” variable as they instruct bash to preserve whitespace - without them the output collapses onto a single line and the ^ anchors in egrep no longer match.
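
Once the values are captured, a simple check can flag anything suspicious in the daily report. This is just a rough sketch of the idea - the zero thresholds and the message format are my own choice, not something NAS4Free provides:

# warn if any reallocated or pending sectors show up (missing values count as 0)
if [ "${DISK_REALLOCATED:-0}" -gt 0 ] || [ "${DISK_PENDING:-0}" -gt 0 ]; then
    echo "WARNING: $DEVICE has ${DISK_REALLOCATED:-0} reallocated and ${DISK_PENDING:-0} pending sectors"
fi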

PS: For the curious, the drives I use are 2x WD Red 4 TB (3.5"), 2x Seagate 2 TB (2.5"), and a Mushkin 120 GB (mSATA) SSD cache.

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Changing Ls Colors on NAS4Free

There is one thing I hate on my NAS4Free server - the dark blue color when listing directories using the ls command. It is simply an awful choice.

From Linux I knew about the LS_COLORS variable and its configuration. However, NAS4Free is not Linux. And while similarities are plentiful, some things simply don’t work the same.

Fortunately, one can always consult the man page and see that FreeBSD uses the LSCOLORS variable with a wildly different configuration. The curious can look at the full configuration, but suffice it to say I was happy with just changing the directory entry from blue (e) to bright blue (E).

To do this in my .bashrc I added:

export LSCOLORS="^^E^^xfxcxdxbxegedabagacad"

PS: How to preserve .bashrc over reboots is an exercise left to the reader because it depends on your system. Suffice it to say that either a ZFS mount point or simply appending to it from a postinit script works equally well; a rough sketch of the latter is below.
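
Assuming root’s home is /root and bash is installed (both are assumptions - adjust the paths for your own setup), a postinit command could simply do:

# append the color override to .bashrc unless it is already there
grep -q LSCOLORS /root/.bashrc 2>/dev/null || echo 'export LSCOLORS="Exfxcxdxbxegedabagacad"' >> /root/.bashrc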

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]