File.OpenWrite Is Evil

.NET Framework is full of small helper methods and I personally find this beneficial in general. They make code easier to read and, most of the time, they also make errors less likely. They also lend themselves beautifully to quick troubleshooting one-liners as a nice, temporary, and concise solution. There is hardly a better example of that flexibility than the File helper methods.

If you need to read the content of a file, just say

using (var fileStream = File.OpenRead(filename)) {
    //do something
}

Similarly, if you need to write something, the equivalent code is

using (var fileStream = File.OpenWrite(filename)) {
    //do something
}

However, there is a trap in the last code chunk as it doesn’t necessarily do what you might expect. Yes, the file is opened for writing, but the existing content is left untouched. To illustrate the issue, first save John in the file and then Mellisa. The file content will be, as expected, Mellisa. However, if you save John again, the content will be a somewhat unexpected Johnisa.
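
A minimal sketch of that behavior, using a throwaway Save helper (the file name and the helper itself are purely illustrative):

using System.IO;
using System.Text;

static void Save(string fileName, string text) {
    using (var fileStream = File.OpenWrite(fileName)) {
        var bytes = Encoding.UTF8.GetBytes(text);
        fileStream.Write(bytes, 0, bytes.Length);
    }
}

//Save("name.txt", "John");     -> file contains "John"
//Save("name.txt", "Mellisa");  -> file contains "Mellisa"
//Save("name.txt", "John");     -> file contains "Johnisa" (leftover bytes survive)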

Once seen as an example, it is obvious the computer did exactly what we told it. It opened the file for writing and modified the content starting from the first byte. Nowhere did we tell it to discard the old content.

Proper code for this case would be slightly longer:

using (var fileStream = new FileStream(fileName, FileMode.Create, FileAccess.Write)) {
    //do something
}

This ensures the file is truncated before the new content is written and thus avoids the problem.

The annoying thing about this helper is that, under normal circumstances, it will work most of the time, biting you only when you delete or shorten something. I believe there is a case to be made that it should have been designed with FileMode.Create instead of FileMode.OpenOrCreate as the more reasonable behavior. However, as it goes with most of these things, the decision has already been made and there is no going back.
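
As a side note, if you still prefer a one-line helper, File.Create is the one that actually truncates since it opens the stream with FileMode.Create under the hood:

using (var fileStream = File.Create(fileName)) {
    //do something - old content is gone
}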

QR Authentication Key

QR Authentication Example

Two-factor authentication is a beautiful thing. You have a key, apply a bit of TOTP magic, and you get a unique code that changes with time. To use it, just run a mobile application of your choice (e.g., Google Authenticator) and scan the QR code.

If you have a bunch of pre-existing keys in textual format (e.g., recovering after a phone reinstall), wouldn’t it be really useful to generate a QR code based on them?

Fortunately, the key format is really well documented in the Google Authenticator repository. In its simplest form it is otpauth://totp/LABEL?secret=KEY. Simply swapping LABEL and KEY for the desired values should do the trick - e.g., otpauth://totp/Test?secret=HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ.
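
If you would rather build such a URI programmatically, a quick sketch along these lines should do (label and secret are just example values; the label gets URL-encoded in case it contains special characters):

using System;

var label = "Test";
var secret = "HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ";  //Base32-encoded key

var otpauthUri = string.Format("otpauth://totp/{0}?secret={1}",
                               Uri.EscapeDataString(label), secret);
Console.WriteLine(otpauthUri);  //otpauth://totp/Test?secret=HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ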

To generate a QR code scannable by a mobile phone application, any QR service that supports simple text encoding will do. I personally prefer goqr.me as they offer a lot of customization options and (supposedly) don’t store QR data. The final QR code will be read perfectly well by any authenticator application out there and the key will be imported without any issue.

For advanced scenarios, there are quite a few more settings and tweaks you can apply, but this simplest format probably covers 90% of needs.

Batch Optimizing Images

The same image can be saved in a multitude of ways. Whether it comes from a camera phone or an editing application, the goal is usually to save the image quickly without caring for each and every byte. I mean, is it really important whether an image is 2.5 MB or 2.1 MB? Under most circumstances the bigger file is written more quickly and the slightly bigger size is a perfectly acceptable compromise.

However, if you place the image on a website, this suddenly starts to matter. If your visitors are bandwidth-challenged, it makes the difference between a load time measured in seconds and one measured in tenths of a second. On the other hand, once you start optimizing, you can spend way too much time dealing with this. If you are lazy like me and don’t want to change your workflow too much, there is always the option to save unoptimized files now and optimize them later.

For optimizing images I tend to stick with two utilities: OptiPNG for PNG and jpegoptim for JPEG files. Both of them do their optimizations in a completely lossless fashion. This might not bring you the best savings, especially for JPEG images, but it has one great advantage - if you run optimization over already optimized images, there will be no harm. This means you don’t need to track which files are already optimized and which still need work. Just run the tools every once in a while and you’re golden.

I created the following script to go over each image and apply optimizations:

@ECHO OFF

SET   EXE_OPTIPNG="\Tools\optipng-0.7.5\optipng.exe"
SET EXE_JPEGOPTIM="\Tools\jpegoptim-1.4.3\jpegoptim.exe"

SET     DIRECTORY=.\pictures

ECHO = OPTIMIZE PNG =
FOR /F "delims=" %%F in ('DIR "%DIRECTORY%\*.png" /B /S /A-D') do (
    ECHO %%F
    DEL "%%F.tmp" 2> NUL
    %EXE_OPTIPNG% -o7 -silent -out "%%F.tmp" "%%F"
    MOVE /Y "%%F.tmp" "%%F" > NUL
    IF ERRORLEVEL 1 PAUSE && EXIT
)

ECHO.

ECHO = OPTIMIZE JPEG =
FOR /F "delims=" %%F in ('DIR "%DIRECTORY%\*.jpg" /B /S /A-D') do (
    ECHO %%F
    %EXE_JPEGOPTIM% --strip-all --quiet "%%F"
    IF ERRORLEVEL 1 PAUSE && EXIT
)

And yes, this will take ages. :)

Incremental Mercurial Clone

Both an advantage and a disadvantage of distributed source control is that the repository contains the whole history. The first clone, when all the data must be downloaded, can turn into an exercise in futility if you are on a lousy connection - especially when, as in my case, the download is a huge SVN-originating Mercurial repository multiple gigabytes in size. Every time the connection goes down, all the work has to be repeated.

The game got boring after a while, so I made the following script for incremental updates:

@ECHO OFF
REM delayed expansion is needed for the !variables! read inside the IF block below
SETLOCAL ENABLEDELAYEDEXPANSION

SET SOURCE=https://example.org/BigRepo/
SET REPOSITORY=MyBigRepo

IF NOT EXIST "%REPOSITORY%" (
    hg --debug clone %SOURCE% "%REPOSITORY%" --rev 1
)

SET XXX=0
FOR /F %%i IN ('hg tip --cwd "%REPOSITORY%" --template {rev}') DO SET XXX=%%i

:NEXT
SET /A XXX=XXX+1

:REPEAT
ECHO.
ECHO === %XXX% === %DATE% %TIME% ===
ECHO.

hg pull --cwd "%REPOSITORY%" --debug --rev %XXX% --update
SET EXITCODE=%ERRORLEVEL%
ECHO.
IF %EXITCODE% GTR 0 (
    SET FAILED=!EXITCODE!
    hg recover --cwd "%REPOSITORY%" --debug
    SET EXITCODE=!ERRORLEVEL!
    ECHO.
    ECHO ======= FAILED WITH CODE !FAILED! =======
    IF !EXITCODE! GTR 0 (
        ECHO ======= FAILED WITH CODE !EXITCODE! =======
    ) else (
        ECHO === SUCCESS ===
    )
    GOTO REPEAT
) else (
    ECHO.
    ECHO === SUCCESS ===
)

GOTO NEXT

The script first clones just the first revision and then incrementally pulls one revision at a time. If something goes wrong, recovery is started, followed by yet another download attempt. Simple and effective.

Forcing Compiled .NET Application to 32-Bit

A .NET application compiled with Any CPU as the target and without “Prefer 32-bit” set will run in 64 bits whenever it can. If the application is developed and tested in that manner, this is actually a good thing. Even if you don’t care about the vastly larger amount of memory you can use, you should care about the fact that Windows Server these days exists only in a 64-bit flavor. Yes, with “Prefer 32-bit” checked, your application will still be capable of running on it. However, on all development machines you will then run it in 32 bits and thus find some errors only once your application is running as 64-bit on a (headless) server. Every developer should run their code in 64-bit. No excuses.
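
To double-check which flavor you are actually getting at runtime, a one-liner at startup will tell you (Environment.Is64BitProcess is available from .NET 4.0 onward):

using System;

//handy as a log line at application startup
Console.WriteLine("Process: {0}-bit, OS: {1}-bit",
                  Environment.Is64BitProcess ? 64 : 32,
                  Environment.Is64BitOperatingSystem ? 64 : 32);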

That said, if you stumble across a signed Any CPU .NET application that used to work just fine on a 32-bit OS but stopped working with the move to 64 bits, you have a problem. Even if your environment does support 32-bit computing, the stubborn code will hit the bug again and again. If the application were unsigned, you might go the route of editing the binary directly. With signed binaries you’ll have to be a bit sneakier.

One trick is to re-configure the .NET loader:

C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\Ldr64.exe SetWow
 loading kernel32...done.
 retrieved GetComPlusPackageInstallStatus entry point
 retrieved SetComPlusPackageInstallStatus entry point
 Setting status to: 0x00000000
 SUCCESS

However, this is a machine-wide setting and requires administrator access.

Another way is to cheat the system and create a loader application with the settings you want (e.g., x86), which then loads the destination assembly and simply starts it:

//load the Any CPU assembly into our 32-bit process and run its Main()
var targetAssembly = Assembly.LoadFrom("AnyCpuApplication.exe");
targetAssembly.EntryPoint.Invoke(null, null);

As the “proxy application” is 32-bit, the .NET loader will load the assembly into its application domain with the same settings, and our problem is solved.
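
A slightly fuller sketch of such a proxy - assuming it is compiled with x86 as the platform target and that the target’s Main might take arguments or return an exit code - could look like this:

using System.Reflection;

internal static class Program {
    private static int Main(string[] args) {
        //file name is just an example - point it at the stubborn Any CPU executable
        var targetAssembly = Assembly.LoadFrom("AnyCpuApplication.exe");
        var entryPoint = targetAssembly.EntryPoint;

        //forward our arguments only if the target Main actually declares a parameter
        var parameters = (entryPoint.GetParameters().Length == 1)
                       ? new object[] { args }
                       : null;

        var result = entryPoint.Invoke(null, parameters);
        return (result is int) ? (int)result : 0;  //propagate the exit code when there is one
    }
}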

Example code is available.