Moving from Legacy to UEFI Boot

My home media PC was running on old hardware, which wasn’t really an issue. But recently it started messing with me, so I decided to move it to a (slightly) newer computer. And that should be as easy as transferring the disk. Except, in my case, the old system used legacy boot while the new one only does UEFI. So, for the disk transplant to take, I had to first move the old computer to UEFI.

Fortunately, Microsoft actually has a half-decent answer. But, more importantly, it also provides a tool to automate the process. Call me a chicken, but I always worry when I touch my partitions.

First, you need to boot into the recovery environment. In theory, this should be possible by holding the Shift key while clicking Restart. In practice, I rarely succeed with this method. What I found works more reliably is simply turning the machine off in the middle of a boot. It’s a bit of a brute-force solution but, after two unsuccessful boots, Windows will happily cooperate.

Once you boot into the Windows Recovery environment, you need to go to Troubleshoot, Advanced options, Command Prompt. There you can run the validation command:

mbr2gpt.exe /validate

This command will let you know if anything about your setup is unsupported. If you have a standard Windows installation, you’ll be fine. If you have extra partitions on your boot drive, you might want to remove them before proceeding.

Once validation passes, you can trigger the conversion from MBR to GPT which, in this case, also means changing the boot method from legacy to UEFI.

mbr2gpt.exe /convert

This command will be done in less than a minute. You might get a warning about WinRE, but don’t worry about that right now. Next, power off the system using the Turn off your PC option.

When you start the system the next time, you will probably need to go into the BIOS (F2 or Del usually does the trick). There you can select the UEFI boot option and disable the old legacy one. A short reboot later, your Windows should boot using UEFI.

Now you can sort out the WinRE warning by disabling and then re-enabling it:

reagentc /disable
reagentc /enable

The Story of a Persistent Companion

I am a fan of science fiction books and I rarely stray toward other genres. I mean, why bother with a dark present (or a dark past) when you can read about a dark future? But I do occasionally read things that contain no aliens. And one of the alien-deficient authors I like is John Green.

If that name sounds familiar, it’s probably from his Crash Course World History. I watched that darn series with my kids multiple times and, even though there was some grumbling, it was an overall enjoyable experience. My first encounter with him as an author was Looking for Alaska, a book that I am definitely too old for but one that I enjoyed immensely. Suffice it to say that, if he writes something, it’s highly probable I will eventually read it. Maybe not immediately (again, not enough aliens in his work), but I will get around to it.

This time I actually jumped on his literary train early by preordering Everything Is Tuberculosis back in 2024 (31st December still counts as 2024!). After reading many of his books, I felt confident enough that this one would be worth it. The book did arrive on time, but then spent a few days just sitting around because I had no time for it.

But, once I got to it, I didn’t let the darn thing go. As often happens with good books and my poor writing skills, I cannot really tell you what made it such a good read. Maybe it was John’s voice playing in my head as if I were listening to one of his Crash Course series. Maybe it was the vivid stories about the impact of tuberculosis on real human beings. Maybe it was as simple as me and my personal experiences. It doesn’t really matter; this book touched something that hasn’t been tickled in a while.

I won’t go directly into the book’s content. Not due to spoilers - tuberculosis is quite an old story. The reason is that you can watch John’s own The Deadliest Infectious Disease of All Time video, where you’re essentially given the highlights. But, as good as the video is, the book is so much more. It really brings you along for the trip.

If you are going to read one book this year, it might as well be this one.

RayHunter and Access Denied

If you have a spare Orbic RC400L lying around, EFF’s RayHunter might give it a new lease on life. It always warms my heart to see old (and cheap) equipment get some use even as it goes gray. So, of course, I tried to get RayHunter running.

Fortunately, the instructions are reasonably clear. Just download the latest release and run install-linux.sh. However, on my computer, that resulted in an error:

thread 'main' panicked at serial/src/main.rs:151:27:
device found but failed to open: Access denied (insufficient permissions)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The error is clear - insufficient permissions. And you can get around it by running stuff as root, but that should be only a last resort. The proper way to handle this is to add a USB device rule that will put the device into the plugdev group and thus allow the current user to access it (at least on Ubuntu).

To do this, first add a file to the /etc/udev/rules.d/ directory for the 05c6:f601 device (double-check the numbers using lsusb, if needed).

sudo tee /etc/udev/rules.d/42-orbic-rc400l.rules << EOF
ACTION=="add", \
SUBSYSTEM=="usb", \
ATTRS{idVendor}=="05c6", \
ATTRS{idProduct}=="f601", \
GROUP="plugdev", \
TAG+="uaccess", \
ATTR{power/control}:="auto"
EOF
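If you want to double-check those IDs, the vendor:product pair is right there in the lsusb output. Here is a quick sketch of pulling it out with sed - the sample line below is made up, so run lsusb yourself for the real thing:

```shell
# Hypothetical lsusb output line for the Orbic RC400L; your bus/device
# numbers and the description text may differ
line='Bus 001 Device 004: ID 05c6:f601 Qualcomm, Inc. Orbic RC400L'

# Extract the idVendor:idProduct pair that goes into the udev rule
ids=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p')
echo "$ids"
```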

Once the file is in place, just reload the rules (or restart the computer).

sudo udevadm control --reload-rules && sudo udevadm trigger

With this, the script should now update the device without any further problems.


PS: It’s really hard for me to tell if the IMSI catcher detection even works since I never had it trigger.

PPS: Rather than messing with wireless, I like to just access the server via USB (adb can be found in the platform-tools directory):

./adb forward tcp:8080 tcp:8080
firefox http://localhost:8080/

New Solution File Format

Not all heroes wear capes. I mean, a bunch of them cannot be bothered to wear pants. But all heroes should at least get a beer. And none more so than those who finally took the darn .sln format behind the barn.

Yep, without much fanfare, a new solution file format was introduced. Instead of the big, ugly sln file everybody was used to but nobody ever loved, we got a much simpler slnx file. In just a few lines, the new format pretty much does the only thing you need it to - list the darn projects.

Gone are the GUIDs, gone are the Debug and Release profiles, and, finally, gone is the darn BOM with its empty starting line. Essentially everything is gone except for what you actually need. And yes, you can still have debug and release profiles - you just don’t need to explicitly define them in the solution file.

Migration is as easy as it gets:

dotnet sln <solution.sln> migrate
rm <solution.sln>
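For illustration, a converted file is roughly just a solution element listing projects - the project paths below are made up, yours will match your actual layout:

```xml
<Solution>
  <Project Path="MyApp/MyApp.csproj" />
  <Project Path="MyApp.Tests/MyApp.Tests.csproj" />
</Solution>
```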

Looking at the whole .NET ecosystem, this feature is small. In general, I think this syntactic-sugar category often gets overlooked. If it’s good, you will probably forget all about how things were before. I hope that, in a few years’ time, sln will be just a distant memory and a way to scare children into eating their broccoli.

Slashing the Slash

Switching the website to 11ty brought minimal changes when it comes to the URL setup. I actually haven’t touched the URLs for years now. And there is a benefit to stability. But then again, not all change is bad. Since 11ty gives me way more flexibility, I decided to switch things up a bit.

The first change will probably not be noticed by any human: I finally removed the www. prefix. And yes, both www.medo64.com and medo64.com were always supported, with the non-www version being redirected. However, for ages now, all browsers would simply hide the www prefix when displaying the address. So, one could argue that this blog’s address has been medo64.com as far as people are concerned. In .htaccess, this was just a simple change to the redirect:

RewriteCond %{HTTP_HOST} ^www\.medo64\.com$ [NC]
RewriteRule ^(.*)$ https://medo64.com%{REQUEST_URI} [R=301,L]

The second change was getting rid of the /YYYY/MM/ structure. This was actually trivial as it’s the default 11ty behavior when it comes to slugs. Migrating from WordPress, I actually had to add special slug code to retain it. Therefore, one could say I’m simplifying things now for both me and the reader.

Why did I have dates in the URL in the first place? Well, I started blogging on the Blogger platform, which had this as the default. As I moved my blog across various platforms, I simply kept the URL format in order to make migration easier. Suffice it to say, I still support the old format with a simple redirect in .htaccess:

RewriteRule ^[0-9][0-9][0-9][0-9]/[0-9][0-9]/(.*) /posts/$1 [R=301,L]

The last change is probably the one I will be dealing with the longest and the one that will cause the most mistakes. I decided to remove the final URL slash (/). Fortunately, 11ty supports removing the slash with a simple bit of config code:

eleventyConfig.addUrlTransform((page) => {
  if (page.url !== "/" && page.url.endsWith("/")) {
    return page.url.slice(0, -1);
  }
});
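The logic is simple enough to sanity-check outside of 11ty. Here is the same slash-stripping rule as a standalone function (my own sketch, not part of the 11ty API):

```javascript
// Same rule as the transform above: strip the trailing slash from every
// URL except the site root itself
function stripTrailingSlash(url) {
  if (url !== "/" && url.endsWith("/")) {
    return url.slice(0, -1);
  }
  return url;  // unchanged (root, or already slashless)
}

console.log(stripTrailingSlash("/posts/hello/"));  // "/posts/hello"
console.log(stripTrailingSlash("/"));              // "/"
```

The one difference is the final return: in an addUrlTransform callback, returning undefined leaves the URL untouched, which is why the 11ty version gets away without it.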

This will indeed change all 11ty-generated URLs, but the mission is not accomplished yet. In my case, this broke page retrieval (e.g., as used for literal pages). For example, in order to match both slash-ending and non-slash-ending pages, I now use:

return arr.filter(item => urls.includes(item.url) || urls.includes(item.url + '/'));

But that alone is not sufficient on Apache (and, I suspect, on many other web servers). You also need to tell it to serve index.html when the user asks for a directory. I personally solve this in .htaccess:

RewriteCond %{REQUEST_URI} !/$
RewriteCond %{REQUEST_URI} !\.html$
RewriteCond %{REQUEST_FILENAME} -d
RewriteCond %{REQUEST_FILENAME}/index.html -f
RewriteRule ^(.*)$ $1/index.html

And yes, we’re still not done, as there is yet another change to make. You also need a Directory directive in your Apache configuration file:

<Directory "/srv/apache/">
  AllowOverride FileInfo
  DirectorySlash Off
</Directory>

And with all this in place, I finally got rid of the pesky slash. So much work for a single character.

There is also some minor work intentionally remaining. For example, I don’t yet redirect the slash-ending URL to its non-slashed version. Not because it’s hard, but because a browser might have cached the old redirect, thus making it a potential loop. I will probably come back and clean that up after a month or two.
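For the record, when I do get around to that cleanup, it should be just one more rule in .htaccess - something along these lines (an untested sketch on my part, not something currently deployed):

```apacheconf
# Hypothetical future cleanup: permanently redirect any trailing-slash URL
# to its slashless twin (beware of 301s cached by browsers)
RewriteRule ^(.+)/$ /$1 [R=301,L]
```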

Moreover, as with all changes, I am sure I missed something. But, with 11ty in the backend, I am sure fixing stuff won’t be too difficult once an issue is discovered.