Unsung Hero


I often write and speak about gadgets I get for myself and how I use them. However, there is one gadget I have never written about, even though I have used it pretty much every day since I got my first one in late 1997. Those who know me personally are aware how rarely I remember the year something happened, so this one is special to me. It is my pen.

There are many like it, but this one is mine. :)

As with many drugs, the first one I received as a gift. It was a Uni-ball Signo DX with a 0.38 tip. Compared to the rollerball pens I had used before, this one just glided without any conscious effort, and its lines were as thin as a spider web. I watched with dread as the ink level got lower and lower. Fortunately, I was able to secure a new source before it came to the worst. And, in the times before the Internet, that was a feat.

Then a miracle happened and a store in my town got quite a respectable assortment of Mitsubishi pens. Over the years I tried many of them, finally settling on the Signo 207 RT (0.5 mm tip) a few years ago. It is quite similar to the DX, with the advantage of being retractable, so I couldn't lose the cap anymore. And the switch to a slightly larger tip brought another level of gliding bliss.

I am sure there are pens others find better, and I have tried many of them. However, this one works for me and that is all that matters.

Maybe I am an old-fashioned guy, but writing and sketching really help me think and solve problems. Having a comfortable pen in hand makes a ton of difference.

PS: Those interested in Uni-ball pens can check their godawful web pages. I only wish their pages were as good as their pens.

Git and Windows Cannot Access the Specified Device

I am not really sure what happened (although I am willing to place some blame on Git's file attribute handling), but suddenly some of my batch files started reporting "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." when I tried to start them from Windows Explorer. Annoyingly, I could still start the same batch file from the Windows command line; only double-click wouldn't work.

After a short investigation, the culprit was found in the permissions. Some application (Git) had changed the file's permissions to include only read access. As soon as I changed the permissions to include execute, I could start the script again. Heck, there is even a way to get the executable attribute into a Git repository so this can be avoided in the future. However, I took this as an opportunity to update the permissions for my whole drive.

The drive in question is NTFS, but not because I need any permission-handling capabilities. Mostly it is because the way NTFS handles small files is superior to any other Windows-supported file system. So my permissions on that drive literally just allow all users access. With time and different computers this drifted a bit, so a reset was in order. I wanted to allow all users full access to the drive.

After starting Command Prompt as an administrator, the first mandatory task was to switch to that drive. Not only does this allow me to use relative paths further down the road, but it also makes it less likely that any errors (e.g. due to an accidentally forgotten parameter) would impact my system drive.

A:

Next step was to take ownership of my whole drive, forcing change when necessary:

TAKEOWN /F * /R /D Y
 SUCCESS: The file (or folder): "A:\Test\Test1.txt" now owned by user "TEST\Josip".
 SUCCESS: The file (or folder): "A:\Test\Test2.txt" now owned by user "TEST\Josip".
 SUCCESS: The file (or folder): "A:\Test\Test3.txt" now owned by user "TEST\Josip".
...

Since the previous command left a lot of output, I also used the /setowner option of ICACLS. There is no benefit to this one other than showing me stats and ensuring no file has been missed. Yes, you can even use this command instead of TAKEOWN, but it has no option for forcing an ownership change, so you might need TAKEOWN regardless.

ICACLS .\ /setowner Josip /T /C /Q
Successfully processed 119121 files; Failed processing 0 files

Next I set my root directory to allow the Users, Administrators, and SYSTEM groups in. From a previous run I had Everyone and BUILTIN set, so I decided to remove them while I was at it.

ICACLS .\ /grant:r Users:F Administrators:F SYSTEM:F /inheritance:e /remove Everyone /remove BUILTIN
 processed file: .\
 Successfully processed 1 files; Failed processing 0 files

And the last step was what I really wanted: just resetting all permissions.

ICACLS * /reset /T /C /Q
 Successfully processed 119120 files; Failed processing 0 files

And now I have my drive just as I wanted it.

PS: If you just want to sort out Git, you can also update the executable bit and avoid the whole issue.
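A minimal sketch of that executable-bit fix, assuming the script is called run.bat (a hypothetical name). This is useful on Windows, where chmod has no effect, because Git lets you set the stored file mode directly in the index:

```shell
# Stage the batch file, then mark it as executable in Git's index
# so the +x bit survives clone and checkout (file name is hypothetical).
git add run.bat
git update-index --chmod=+x run.bat

# Verify: mode 100755 means executable, 100644 means a plain file.
git ls-files --stage run.bat
```

Once committed, fresh checkouts of the file carry the executable attribute and the permission reset becomes unnecessary for this particular problem.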

Embedding Resources Without Pesky Resources Folder


Adding image resources to your C# application is close to trivial. One method is to open your Resources.resx file and simply add the bitmaps you wish to use. However, this leaves you with all images in the Resources folder. Some people like it that way, but I prefer to avoid it - I prefer the old-style system of keeping it all in your resource file.

To have all images included in the resource file instead of in a separate folder, just select the offending resources and press F4 to bring up the Properties window. Under Persistence, simply select Embedded in .resx, and your resources are magically (no real magic involved) embedded into the .resx file as Base-64 encoded strings. The only thing remaining is to delete the leftover folder.

You use the resources from your application the same as you normally would.

BOM Away, in Git Style

Some time ago I made a Mercurial hook (killbom) that would remove the BOM from UTF-8 encoded files. As I switched to Git, I didn't want to part with it, so it was time for a rewrite. Unlike Mercurial, Git has no global hook mechanism; you will need to add the hook to each repository you want it in.

The start is easy enough. Just create a pre-commit file in the .git/hooks directory. Looking from the base of the repository, the file name would thus be .git/hooks/pre-commit. The content of that file would then be as follows:

#!/bin/sh

git diff --cached --diff-filter=ACMR --name-only -z | xargs -0 -n 1 sh -c '
    for FILE; do
        file --brief "$FILE" | grep -q text
        if [ $? -eq 0 ]; then
            cp "$FILE" "$TEMP/KillBom.tmp"
            git checkout -- "$FILE"

            sed -b -i -e "1s/^\xEF\xBB\xBF//" "$FILE"
            NEEDSADD=`git diff --diff-filter=ACMR --name-only | wc -l`
            if [ $NEEDSADD -ne 0 ]; then
                sed -b -i -e "1s/^\xEF\xBB\xBF//" "$TEMP/KillBom.tmp"
                echo "Removed UTF-8 BOM from $FILE"
                git add "$FILE"
            fi

            cp "$TEMP/KillBom.tmp" "$FILE"
            rm "$TEMP/KillBom.tmp"
        else
            echo "BINARY $FILE"
        fi
    done
' sh

ANYCHANGES=`git diff --cached --name-only | wc -l`
if [ $ANYCHANGES -eq 0 ]; then
    git commit --no-verify
    exit 1
fi

What this script does first is get a list of all modified files separated by the null character, so that we can deal with spaces in file names.

git diff --cached --diff-filter=ACMR --name-only -z

For each of these files we then replace the first three bytes if they are 0xEF, 0xBB, 0xBF:

sed -b -i -e "1s/^\xEF\xBB\xBF//" "$FILE"
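To see what that sed expression does in isolation, here is a quick sketch on a throwaway file (GNU sed is assumed; the -b flag merely keeps the file in binary mode, which matters mostly on Windows):

```shell
# Create a file that starts with the UTF-8 BOM (bytes EF BB BF,
# written here as octal escapes for portability).
printf '\357\273\277hello\n' > bom-demo.txt

# Strip the BOM from the first line only, editing the file in place.
sed -b -i -e '1s/^\xEF\xBB\xBF//' bom-demo.txt

# The file now starts directly with the text.
head -c 5 bom-demo.txt    # prints "hello"
```

The `1` address restricts the substitution to the first line, so a stray BOM-like sequence elsewhere in the file is left alone.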

What follows is a bit of a mess. Since it is really hard to tell whether a file has changed without using temporary files, I am abusing git to check if the file has changed since it was first staged. If that is the case, the assumption is that it was due to the sed before it. If that assumption is not correct, your commit will have one extra file. As people don't usually have the same file changed in both the staged and un-staged areas, I believe the risk is reasonably low.

After all files are processed, a final check is made whether anything is left to commit. If there are no files in the staging area, the current commit is terminated and a new commit is started with the --no-verify option. The only reason for this is so that the standard commit message can be written in cases when removal of the UTF-8 BOM results in no actual files to commit. Replacing it with the message "No files to commit" would work equally well.

While my goal of getting the BOM removed via the hook has been reasonably successful, Git's hook model is really much worse than the one Mercurial has. Not only are global (local) hooks missing, but chaining multiple hooks one after another is not really possible. Yes, you can merge scripts together into one file, but that means you'll need to handle all exit scenarios for each hook you need. And let's not even get into how portable these hooks are between Windows and Linux.

If you are wondering what all that $TEMP business is about, it is needed for interactive commits. Committing just part of a file is useful but didn't play well with this hook. Saving a copy on the side sorted that problem.

The current version of the pre-commit hook can be downloaded from GitHub.

PS: Instead of editing the pre-commit file directly, you can also create it somewhere else and create a symbolic link at the proper location.
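A sketch of that symlink approach, assuming you keep the real hook in a version-controlled hooks/ directory at the repository root (the directory name is an assumption, not something Git requires):

```shell
# From the repository root: keep the real hook under version
# control and point Git's hooks directory at it via a symlink.
mkdir -p hooks
# (hooks/pre-commit is your actual script; it must be executable)
chmod +x hooks/pre-commit
ln -s ../../hooks/pre-commit .git/hooks/pre-commit
```

Note that the link target is resolved relative to .git/hooks, hence the ../../ prefix.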

PPS: I have developed and tested this hook under Windows. It should work under Linux too, but your mileage may vary depending on the exact distribution.

[2015-07-12: Added support for interactive commits.] [2015-11-17: Added detection for text/binary.]

Offline to Online Switch on a Minecraft Server


It all started with my kids learning about Minecraft skins and their dad not being able to get their new look working in the game. No matter what, they stayed Steve and Alex. A quick search told me skins are not supported in offline mode, and my home server was set up as such. No worries, I thought - I'll just switch the online-mode setting in [server.properties](http://minecraft.gamepedia.com/Server.properties) from false to true and that will be it.

However, after I restarted the server, my whole family got to start from scratch. We were in skinned bodies, but we were also in new locations. It was as if we had logged onto the world for the first time. To make it worse, nobody had access to commands anymore. Our ops status had been effectively revoked.

As I added myself to ops again through the Minecraft server GUI, I noticed that ops.json had two entries for my user name, each with a different UUID. And I could find both UUIDs in my world's save directory world\playerdata. That got me wondering: what would happen if I deleted the file with the new UUID and renamed the old UUID's file to it? That is, if my old UUID was 76116624-b235-36a2-a614-ed79be1855ed and my new UUID was d8b2b4e0-1807-4177-a3ca-46afbd1d7538, would renaming 76116624-b235-36a2-a614-ed79be1855ed to d8b2b4e0-1807-4177-a3ca-46afbd1d7538 get me back into my offline body?

Fortunately, yes. The transplantation of player data succeeded without any issues. So I went through all save directories and changed the player data from the old UUID to the new one. But that wasn't all. As we were all ops with different ops levels for various worlds, I had to visit every ops.json and adjust for that. A simple search/replace was all it took.
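The rename itself can be scripted. Here is a sketch using the two UUIDs above, assuming the usual `<uuid>.dat` layout under each world's playerdata directory (the loop over sibling world directories is an assumption about your server's layout):

```shell
OLD=76116624-b235-36a2-a614-ed79be1855ed
NEW=d8b2b4e0-1807-4177-a3ca-46afbd1d7538

# For every world save, move the offline-mode player data onto
# the online-mode UUID, replacing the freshly created file.
for DIR in */playerdata; do
    if [ -f "$DIR/$OLD.dat" ]; then
        mv -f "$DIR/$OLD.dat" "$DIR/$NEW.dat"
        echo "Transplanted $DIR/$OLD.dat -> $DIR/$NEW.dat"
    fi
done
```

Run it from the server's root directory, with the server stopped, once per player whose data needs transplanting.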

And guess what: if you ever decide to make your server offline again, the same annoyance is guaranteed, since Minecraft uses different UUIDs for online and offline mode. There simply seems to be no way around it. Later I found that people have even built tools to help with the rename.

As Minecraft requires you to verify your credentials at least once over the Internet when you buy it, I cannot believe there is a technical reason behind this. Even more so because this change was seemingly introduced only with version 1.7.6. My best guess is that it was added as some sort of anti-piracy measure. And as all such measures do, it ended up annoying more paying players than pirates.

In any case, my now-online server recovered from its temporary amnesia and the digging could start again.

PS: Paranoid among us might want to check for UUIDs in whitelist.json too.