When I set out to rewrite QText, I did so in a completely new repository. It just made more sense that way. In time, this new code became what the next QText version will be. And now there’s a question - should I still keep it in a separate repository?
After some thinking, I decided to bring the new repository (QTextEx) in as a branch of the old one (QText). That way I have a common history while still being able to load the old C# version if needed.
All operations below are to be executed in the destination repository (QText).
The first step is to create a fresh new branch without any common history. This will ensure Git doesn’t try to do some “smart” stuff when we already know these repositories are unrelated.
```
git switch --discard-changes --orphan new
```
This will leave quite a few files behind. You probably want to clean those before proceeding.
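Something along these lines should take care of the leftovers; note that git clean with these flags deletes untracked and ignored files alike, so a dry run with -n first is a good idea:

```
git clean -d -f -x
```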
The next step is to add one repository into the other. This can be done by adding a remote in the destination repository that points toward the source. The remote can be anywhere, but I find it easiest to point it directly at the source repository on my file system. After fetching the remote, a simple merge is all it takes to get the commits in. Once that’s done, you can remove the remote, as sketched below.
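A rough sketch of that sequence; the remote name, path, and branch name are placeholders for whatever your source repository actually uses:

```
git remote add qtextex /path/to/QTextEx
git fetch qtextex
git merge qtextex/master
git remote remove qtextex
```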
Back in the days of Ubuntu 20.04, I did some ZFS native encryption testing. The results were not promising, to say the least, but they were obtained using ZFS 0.8.3 on Ubuntu 20.04. There was hope that Ubuntu 20.10, bringing ZFS 0.8.4, would come with a lot of performance improvements. So I repeated my testing.
First I tested CCM and saw that the results were 10-15% lower than on 20.04. However, this was probably not due to ZFS changes, as both the LUKS and no-encryption numbers dropped too. As my testing was done on a virtual machine, it might not be related to Ubuntu at all. For all practical purposes, you can view those results as unchanged.
However, when I tested GCM encryption speed, I had to repeat the test multiple times because I couldn’t believe the results I was seeing. ZFS native encryption using GCM was only about 25% slower than no encryption at all and handily beat the LUKS numbers. Compared to last year’s times, GCM encryption got a fivefold improvement. That’s what I call optimization.
Last year I suggested going with native ZFS encryption only if you were really interested in ZFS having direct physical access to drives or in encrypted send/receive. For performance-critical scenarios, LUKS was the way to go.
Now I can honestly recommend going with native ZFS encryption (provided you use GCM). It’s as fast as LUKS, allows ZFS to handle physical drives directly, and simplifies the setup. The only scenario where LUKS still matters is if you want to completely hide your disk content, as native encryption does leak some metadata (e.g., dataset properties). And no, you don’t need to upgrade to 20.10 for the speed; some performance improvements have been backported to 20.04 too.
I migrated my own main file server to ZFS native encryption some time ago, mostly to give ZFS direct disk access and without much care for array speed. Now there is no reason not to use it on the desktop either.
PS: You can take a peek at the raw data if you’re so inclined.
PPS: Test procedure is in the previous post so I didn’t bother repeating it here.
Coming from C#, Qt and its C++ base might not look the friendliest. One example is how easy BackgroundWorker makes background work and GUI updates. The “proper” way of creating threads in Qt is simply a bit more involved.
However, with some lambda help, one can arrive at a solution that’s not all that different.
```cpp
#include <QFutureWatcher>
#include <QtConcurrent/QtConcurrent>

// ...

auto watcher = new QFutureWatcher<bool>();
connect(watcher, &QFutureWatcher<bool>::finished, [watcher]() {
    // do something once done
    bool result = watcher->future().result();
});

QFuture<bool> future = QtConcurrent::run([]() {
    // do something in background
    return true;  // the return value becomes the future's result
});

watcher->setFuture(future);
```
QFuture does the heavy lifting, but you cannot update the UI from its thread. For that you need a QFutureWatcher, which will notify you on the main thread once processing is done; at that point you can check the result and do the updating.
Not as elegant as BackgroundWorker but not too annoying either.
I love Visual Studio’s code analysis. Quite often it gives reasonable advice and saves you from getting into bad habits. But not all advice is made equal. For example, look at CA1805: Do not initialize unnecessarily.
First of all, here is code that triggers it:
```csharp
static int i = 0;
```
According to the documentation, “explicitly initializing a field to its default value in a constructor is redundant, adding maintenance costs and potentially degrading performance”. I agree the assignment is redundant. However, is it really degrading performance?
If you check the IL code generated for a non-default assignment (e.g., value 42), you will see ldc.i4.s 42 in .cctor(). If you remove that assignment, the whole .cctor() is gone, which lends some credibility to the warning.
However, the warning is about the default assignment. If you set the variable to 0, the IL you get is EXACTLY the same as if you left out the explicit assignment. Despite what the warning says, the compiler is smart enough to remove the unnecessary assignment on its own and doesn’t need your help. For something that’s part of the performance rules, there is a significant lack of any performance impact.
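To make the comparison easy to reproduce, here is a minimal sketch (class and field names are mine) that can be compiled and then inspected with an IL disassembler:

```csharp
static class WithDefault {
    static int Value = 0;   // CA1805 fires here; the generated IL matches the class below (no .cctor)
}

static class WithoutDefault {
    static int Value;       // no explicit assignment and no .cctor either
}

static class NonDefault {
    static int Value = 42;  // only this one gets a .cctor, containing ldc.i4.s 42
}
```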
I did check more complicated scenarios, and there are some examples where code violating this rule produced IL that differs from the “fixed” code. However, in my view, that’s something for the compiler to optimize away - not something to raise a warning about. Alternatively, since it might be a performance issue in those cases, raise the warning only when it actually is an issue and not for everything.
PS: And the habit of assigning default values will save your butt in C++ if you are multilingual.
Like many who use virtual machines for testing, I often need to reinstall the same ones. One thing that annoys me when reinstalling Windows is the prompt for the product key. Yes, you can bypass it temporarily, but you still need to enter it when you go to activate Windows. My laptop has the product key embedded in its BIOS. Why can’t I do the same with a virtual machine? Well, maybe I can.
The investigation started by looking into where exactly the key is located on physical hardware. It was relatively easy to discover that it lives in the ACPI MSDM table. Microsoft even has instructions on how to create the table. If you look a bit further, you can even find a description of the MSDM fields.
| Offset (bytes) | Length (bytes) | Name | Value |
|---|---|---|---|
| 0 | 4 | Signature | Always MSDM. |
| 4 | 4 | Length | Total length is always 0x55. |
| 8 | 1 | Revision | Always 3. |
| 9 | 1 | Checksum | Checksum over the whole table. |
| 10 | 6 | OEM ID | Anything goes; pad with spaces. |
| 16 | 8 | OEM Table ID | Anything goes albeit ASCII text is customary. |
| 24 | 4 | OEM Revision | Any number will do but keep it positive (little-endian). |
| 28 | 4 | Creator ID | Anything goes; pad with spaces. |
| 32 | 4 | Creator Revision | Any number will do but keep it positive (little-endian). |
| 36 | 4 | MSDM Version | Always 1. |
| 40 | 4 | Reserved | Always 0. |
| 44 | 4 | Data type | Always 1. |
| 48 | 4 | Reserved | Always 0. |
| 52 | 4 | Key length | Always 0x1D. |
| 56 | 29 | Product key | Product key with dashes included. |
It was relatively easy to figure out what goes into which field, except for the checksum. I didn’t find which checksum method it uses, but realistically there are only two options for an 8-bit checksum: either a CRC or a simple sum of all bytes. Well, it was a sum for this one. Whatever the content, the sum of all bytes has to come out to 0. For example, if all bytes except the checksum add up to 250, the checksum needs to be 6 so the total overflows to 0 (256 mod 256). A trivial thing to calculate, as sketched below.
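Since the table is tiny, the whole thing fits in a short program. The sketch below is only an illustration: the OEM and creator strings are arbitrary placeholders, the key is a dummy, and it assumes a little-endian host (which is what BitConverter produces on x86) so the multi-byte fields come out in the byte order ACPI expects.

```csharp
using System;
using System.IO;
using System.Text;

class MsdmTable {
    static void Main() {
        var key = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX";  // dummy; replace with the real product key (29 chars with dashes)
        var table = new byte[0x55];

        Encoding.ASCII.GetBytes("MSDM").CopyTo(table, 0);       // signature
        BitConverter.GetBytes(0x55).CopyTo(table, 4);           // total length
        table[8] = 3;                                           // revision
        // offset 9 is the checksum; filled in last
        Encoding.ASCII.GetBytes("OEMID ").CopyTo(table, 10);    // OEM ID, 6 bytes, space padded (placeholder)
        Encoding.ASCII.GetBytes("OEMTABLE").CopyTo(table, 16);  // OEM table ID, 8 bytes (placeholder)
        BitConverter.GetBytes(1).CopyTo(table, 24);             // OEM revision
        Encoding.ASCII.GetBytes("CRID").CopyTo(table, 28);      // creator ID, 4 bytes (placeholder)
        BitConverter.GetBytes(1).CopyTo(table, 32);             // creator revision
        BitConverter.GetBytes(1).CopyTo(table, 36);             // MSDM version
        BitConverter.GetBytes(0).CopyTo(table, 40);             // reserved
        BitConverter.GetBytes(1).CopyTo(table, 44);             // data type
        BitConverter.GetBytes(0).CopyTo(table, 48);             // reserved
        BitConverter.GetBytes(0x1D).CopyTo(table, 52);          // key length (29)
        Encoding.ASCII.GetBytes(key).CopyTo(table, 56);         // product key with dashes

        byte sum = 0;
        foreach (var b in table) { unchecked { sum += b; } }    // 8-bit sum of every byte (checksum still 0)
        table[9] = unchecked((byte)(256 - sum));                // value that makes the total wrap around to 0

        File.WriteAllBytes("msdm.bin", table);
    }
}
```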
With the file generated, it’s easy to add it to the VirtualBox VM.
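VirtualBox can load custom ACPI tables through the VM’s extradata settings; something along these lines should attach the generated file (the VM name and path are placeholders):

```
VBoxManage setextradata "My VM" "VBoxInternal/Devices/acpi/0/Config/CustomTable" /path/to/msdm.bin
```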