Programming in C#, Java, and god knows what not

One Time Passwords in C#

Recently I was working on a project where a time-based one-time password algorithm might come in handy. You know the one - you have a token that displays a 6-digit number and you enter it after your user name and password. It used to be restricted to hardware tokens (e.g. RSA) but these days Google Authenticator is probably the best known implementation.

While rolling your own is always a possibility, following the standard is better because all the tough questions have been answered by people smarter than you. In this case everything needed is covered in RFC 6238 (Time-Based One-Time Password Algorithm) and RFC 4226 (An HMAC-Based One-Time Password Algorithm).

While the specifications do grant you some freedom in the choice of algorithm and the number of digits you wish to generate, looking at other services implementing the same algorithm, a 6-digit SHA-1 based code seems to be the unwritten rule. Equally universal is the rule to use (unpadded) Base32 encoding for the secret key. Any implementation of the one-time password algorithm has to obey these rules if it wants to use the existing infrastructure - both server-side services and end-user applications (e.g. Google Authenticator or the Pebble one).
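
For the curious, the core of the algorithm is small enough to show here. This is a minimal TOTP sketch following RFC 6238/4226 (it is not the OneTimePassword class itself), using HMACSHA1 from System.Security.Cryptography and assuming the secret has already been Base32-decoded into raw bytes:

static int GetTotpCode(byte[] secret, int digits, int timeStep) { //e.g. digits=6, timeStep=30
    //number of time steps since the Unix epoch, as a big-endian counter (RFC 6238)
    var counter = (long)((DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds) / timeStep;
    var counterBytes = BitConverter.GetBytes(counter);
    if (BitConverter.IsLittleEndian) { Array.Reverse(counterBytes); }

    using (var hmac = new HMACSHA1(secret)) {
        var hash = hmac.ComputeHash(counterBytes);
        var offset = hash[hash.Length - 1] & 0x0F; //dynamic truncation (RFC 4226)
        var binary = ((hash[offset] & 0x7F) << 24) | (hash[offset + 1] << 16) | (hash[offset + 2] << 8) | hash[offset + 3];
        return binary % (int)Math.Pow(10, digits);
    }
}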

With my OneTimePassword implementation, basic code generation looks something like this:

var otp = new OneTimePassword("jbsw y3dp ehpk 3pxp");
txtCode.Text = otp.GetCode().ToString("000000"); //to generate new code

If you are on the server side, verification looks just slightly different:

var otp = new OneTimePassword("jbsw y3dp ehpk 3pxp");
var isValid = otp.IsCodeValid(code); //to verify one that user entered

If you want to generate a new secret key for an end-user:

var otp = new OneTimePassword();
var secret = otp.GetBase32Secret();
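
To get that secret into an application such as Google Authenticator, it is typically shown to the user as a QR code encoding the de facto standard otpauth URI. This part is not specific to my class; the label is a placeholder and spaces in the secret are stripped just in case:

var uri = string.Format("otpauth://totp/{0}?secret={1}", "user@example.com", secret.Replace(" ", ""));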

Pretty much all basic scenarios are covered and then some. A sample with full code is available for download.

PS: The OneTimePassword class supports many more things than the ones mentioned here. You can use it in HOTP (counter) mode with TimeStep=0; you can generate your own keys; validate codes; use SHA-256 and SHA-512; other digit lengths… Play with it and see.

Determining IPv4 Broadcast Address in C#

When dealing with an IPv4 network, one thing that everybody needs sooner or later is the broadcast address, derived from an IP address and its netmask.

Let’s take a well-known address/netmask combo as an example - 192.168.1.1/255.255.255.0. In binary this would be:

Address .: 11000000 10101000 00000001 00000001
Mask ....: 11111111 11111111 11111111 00000000
Broadcast: 11000000 10101000 00000001 11111111

To get the broadcast address, we simply copy all address bits where the netmask bit is set. All remaining bits are set to 1 and our broadcast address 192.168.1.255 is found.

A bit more complicated example would be the address 10.33.44.22 with a netmask of 255.255.255.252:

Address .: 00001010 00100001 00101100 00010110
Mask ....: 11111111 11111111 11111111 11111100
Broadcast: 00001010 00100001 00101100 00010111

But the principle is the same: for the broadcast address we copy all address bits where the mask is 1, and whatever remains gets a value of 1. In this case the result is 10.33.44.23.

As you can see above, all we need is to take the original address and OR it with the complement of the netmask: broadcast = address | ~mask. In C# these steps are easiest if we convert everything to integers first:

var addressInt = BitConverter.ToInt32(address.GetAddressBytes(), 0);
var maskInt = BitConverter.ToInt32(mask.GetAddressBytes(), 0);
var broadcastInt = addressInt | ~maskInt; //bitwise, so byte order doesn't matter
var broadcast = new IPAddress(BitConverter.GetBytes(broadcastInt));
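
Wrapped into a helper with a quick usage check, it looks like this (a minimal sketch; the names are mine, not from the downloadable example):

static IPAddress GetBroadcastAddress(IPAddress address, IPAddress mask) {
    var addressInt = BitConverter.ToInt32(address.GetAddressBytes(), 0);
    var maskInt = BitConverter.ToInt32(mask.GetAddressBytes(), 0);
    return new IPAddress(BitConverter.GetBytes(addressInt | ~maskInt));
}

var broadcast = GetBroadcastAddress(IPAddress.Parse("10.33.44.22"), IPAddress.Parse("255.255.255.252"));
Console.WriteLine(broadcast); //10.33.44.23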

A full example is available for download.

Windows Store App Doesn't Start on Virtual Drive

As I went to update my Resistance Windows Store application, I stumbled upon an unexpected error while trying to run it. The message was quite generic - “This application could not be started. Do you want to view information about this issue?” - and the application would stay stuck on the startup screen.

The details were not much better. It was essentially the same error message: “Unable to activate Windows Store app ‘47887JosipMedved.Resistance_805v042353108!App’. The Resistance.exe process started, but the activation request failed with error ‘The app didn’t start’.” As error messages go, pretty much useless.

I thought I had broken something with my changes, so I reverted to my last known good configuration - the one that is actually currently deployed in the Store. Still the same error.

It took me a while to notice that, once the project was copied to another drive, everything worked properly. A bit of back and forth and I believe I found the issue.

I keep all my projects stored on a virtual disk. While everything else treats that disk as a real physical thing, Visual Studio sees the difference - but only when dealing with Windows Store applications. They just wouldn’t run.

As you can guess, the solution was to copy the project to a physical drive and work from there. Easy as solutions go, but it definitely leaves a bitter taste. A lot of time wasted simply because of a lousily written error message. A bit more clarity next time?

Beware of Magic in AES CBC

With encrypted text I commonly see a “magic” footer being used as the sole verification method for AES CBC; i.e. the assumption is that, if the last bytes were decrypted correctly, all previously decrypted bytes are valid too. However, that assumption can fail horribly.

One case where it fails is when a configurable IV is used. You can have nonsense for an IV and decryption will still succeed. Even worse, while the first (16-byte) block can contain invalid bytes, all blocks following it will look just fine. If you validate content only by its last few bytes, your program might happily continue to work without any issue.

But let’s assume you have a static IV and this issue doesn’t affect you, and you are worried only about stream errors anyhow. Well, I hate to inform you, but CBC mode is self-synchronizing, i.e. any recoverable error in one block will go away after a certain number of blocks. For example, if you have an error in the first byte of a stream, the rest of that 16-byte block will be corrupted and the matching byte of the following block will be flipped, but the rest of the stream (including your footer) will look just fine.

Corruption in the middle of the stream will cause an exception most of the time, but not always. If it passes unnoticed, you can have a valid header, a valid footer, and garbage in between.

As you can see from the two examples above, you cannot rely purely on the fact that some stream bytes were decrypted correctly as proof that some other part of the stream is not corrupted. The only way to be sure about stream validity is to use hash/CRC functions that were actually designed to detect corruption.

An example of both these behaviors is available for download. Below is example output with both valid and invalid decryptions having the same footer (FF-FF-FF-FF):

Decrypted (OK) ..........: 00-01-02-03-04-05-06-07-08-09-0A-0B-0C-0D-0E-0F-10-11-12-13-FF-FF-FF-FF
Decrypted (invalid IV) ..: FF-01-02-03-04-05-06-07-08-09-0A-0B-0C-0D-0E-0F-10-11-12-13-FF-FF-FF-FF
Decrypted (invalid input): 31-33-7C-D9-A9-91-47-DD-52-3A-64-08-FD-2F-D4-C8-1D-11-12-13-FF-FF-FF-FF
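
The invalid IV line above is easy to reproduce. Here is a minimal sketch (not the downloadable sample) that flips a single IV byte and still decrypts without an exception:

var plaintext = new byte[20];
for (var i = 0; i < plaintext.Length; i++) { plaintext[i] = (byte)i; }

using (var aes = Aes.Create()) { //AES in CBC mode with PKCS7 padding by default
    byte[] ciphertext;
    using (var encryptor = aes.CreateEncryptor()) {
        ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
    }

    var wrongIV = (byte[])aes.IV.Clone();
    wrongIV[0] ^= 0xFF; //flip all bits of the first IV byte

    using (var decryptor = aes.CreateDecryptor(aes.Key, wrongIV)) {
        var decrypted = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
        Console.WriteLine(BitConverter.ToString(decrypted)); //FF-01-02-… - only the first byte differs
    }
}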

Visual Studio Community 2013

A bit over a week ago a new Visual Studio edition appeared pretty much out of the blue. For all practical purposes you can look at it as a cross between the Visual Studio Professional (it has the same features) and Express editions (it’s free).

Unlike the Express editions, Community can only be used by individual developers, for open source, for learning/teaching, and in small non-enterprise settings. If you are working for an enterprise company, you’re out of luck.

Since Community is essentially the same as the Professional edition, there is not much new to be said about it. It can slice, it can dice, and it is an almost perfect development environment. Yes, there are Premium and Ultimate and they do offer some advantages (e.g. IntelliTrace is a gem) but most of the time one can live without those features just fine. Unlike with the Express editions, you won’t feel constrained with the Community.

Surprisingly, you cannot really install the Community edition side-by-side with any other paid Visual Studio. The official explanation is that Community is part of the same line as the other editions, but I still find it an unfortunate decision. Developers wearing two hats in BYOD scenarios (e.g. enterprise by day, open source by night) might get into some conflicting situations. Side-by-side with the Express editions will still be supported, so not all is bleak.

Speaking of the Express editions, it is not really clear to me what their destiny is. Currently they do stand together with Community, but they overlap quite a bit. If we learned anything from the past, their days are numbered. I would like to be wrong since I do love them. Even with all their shortcomings, I can still see them being useful in multiple scenarios (mostly due to their quite permissive licence). I will miss them.

If you currently don’t have anything better than Express on your machine and you fit within the restrictions, it is definitely worth checking out.

What Every Developer Has To Know About IPv6

Today I gave a talk at the Seattle Code Camp. Wonderful atmosphere, excellent organization, and plenty of fun. The only bad thing was that it ended too soon (a single day only).

My session was mostly geared toward beginners and it covered just the basics of IPv6 and how we can use it from C#. I can only hope I lit some IPv6 fire in the great crowd that came.
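
To give just a taste of the C# side (this is not from the slides, merely a minimal sketch with an arbitrary port), a single dual-mode socket can serve both IPv6 and IPv4 clients:

var listener = new TcpListener(IPAddress.IPv6Any, 8080);
listener.Server.SetSocketOption(SocketOptionLevel.IPv6, SocketOptionName.IPv6Only, false); //also accept IPv4 (as mapped addresses)
listener.Start();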

I went to many other sessions myself and, while they did vary in quality, there was not a single bad one. My day was definitely well spent.

Feel free to download the slides and examples.

Creating the Self-signed Key for the TLS

In my last post I described how to do client-authenticated TLS, and one of the magic ingredients there was a certificate with a private key in the form of a .pfx file.

Server and client certificates are essentially the same, but I’ll show how to create both anyhow. For this I will assume that your Windows SDK files are in C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\ and that we are storing files in the root of drive D:

cd "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\"

makecert -n "CN=MyServer" -r -sv D:\server.pvk D:\server.cer
 Succeeded

makecert -n "CN=MyClient" -pe -r -sv D:\client.pvk D:\client.cer
 Succeeded

pvk2pfx -pvk D:\server.pvk -spc D:\server.cer -pfx D:\server.pfx

pvk2pfx -pvk D:\client.pvk -spc D:\client.cer -pfx D:\client.pfx

DEL D:\client.cer D:\client.pvk D:\server.cer D:\server.pvk

This results in the server.pfx and client.pfx files (makecert’s -r switch made the certificates self-signed, -sv stored the private key in a file, and -pe marked the client’s key as exportable). We can opt to import them into the Windows Certificate Store (also possible with the makecert command) or to use them directly as in this example.
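
If you do want them in the Windows Certificate Store, importing a .pfx from C# takes just a few lines (a minimal sketch using the current user’s Personal store):

var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(new X509Certificate2(@"D:\server.pfx"));
store.Close();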

Client-authenticated TLS in C#

Thanks to the NSA, probably every developer is by now aware of HTTPS and the underlying TLS (or the older SSL). While most scenarios involve authentication of a server, authentication of a client is often overlooked.

If you wonder what you gain, just be reminded of key-based authentication in SSH. There is no need to exchange a username/password with every client. You just exchange a (safely stored) key and you know who is on the other side.

Distribution and safe storage of the client certificate is a non-trivial problem, but easily handled on a smaller scale. The Windows certificate store is not too bad, and client authentication makes it easy to block keys that aren’t trusted any more.

Here is example code for a simple TLS-encrypted TCP client/server with self-signed certificates. Of course, one would expect proper certificates to be used in any production environment, but these will do in a pinch.

First we need to set up a server using just a standard TCP listener with a twist:

var serverCertificate = new X509Certificate2(ServerCertificateFile);

var listener = new TcpListener(IPAddress.Any, ServerPort);
listener.Start();

while (true) {
    using (var client = listener.AcceptTcpClient())
    using (var sslStream = new SslStream(client.GetStream(), false, App_CertificateValidation)) {
        sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Tls12, false); //require a client certificate, skip revocation check

        //send/receive from the sslStream
    }
}

The client is equally simple:

var clientCertificate = new X509Certificate2(ClientCertificateFile);
var clientCertificateCollection = new X509CertificateCollection(new X509Certificate[] { clientCertificate });

using (var client = new TcpClient(ServerHostName, ServerPort))
using (var sslStream = new SslStream(client.GetStream(), false, App_CertificateValidation)) {
    sslStream.AuthenticateAsClient(ServerCertificateName, clientCertificateCollection, SslProtocols.Tls12, false);

    //send/receive from the sslStream
}

The only trick in validation is to allow certificate chain errors, which is needed for self-signed certificates to work:

bool App_CertificateValidation(Object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) {
    if (sslPolicyErrors == SslPolicyErrors.None) { return true; }
    if (sslPolicyErrors == SslPolicyErrors.RemoteCertificateChainErrors) { return true; } //we don't have a proper certificate tree
    return false;
}
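
Once the handshake succeeds, the server side can also identify who connected through sslStream.RemoteCertificate - for example by matching its thumbprint against a list of known clients. A hedged sketch (the downloadable example may do this differently), assuming a client certificate was presented:

var remoteCertificate = new X509Certificate2(sslStream.RemoteCertificate);
Console.WriteLine("Client: {0} ({1})", remoteCertificate.Subject, remoteCertificate.Thumbprint);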

It is really this simple to convert any TCP socket code to encrypted TLS.

A full example is available for download.

Scaling Toolstrip Images With DPI

The cheapest way to make a high-DPI application is just to declare it as such in the manifest and let font auto-sizing do the magic. The final result is more than acceptable and it definitely beats the blurred fonts you would have without it.

However, this method doesn’t scale toolstrip icons (usually 16x16 pixels). They remain at the same pixel size as before. If your monitor is 192 DPI, icons will look half their intended size. As monitors get higher and higher DPI, the situation just gets worse. The only proper solution is to check the system DPI and load higher resolution icons.

However, cheating a bit is OK too. Instead of having multiple sets of icons, one for each resolution, we can resize the ones that we already have. Yes, the result will be a bit ugly, but not worse than what Windows built-in scaling would do. All of that can be achieved with simple code in the form’s constructor:

internal partial class MyForm : Form {
    public MyForm() {
        InitializeComponent();
        this.Font = SystemFonts.MessageBoxFont;

        using (var g = this.CreateGraphics()) {
            var scale = Math.Max(g.DpiX, g.DpiY) / 96.0; //scaling factor compared to a 96 DPI monitor
            var newScale = ((int)Math.Floor(scale * 100) / 50 * 50) / 100.0; //round down to .5 increments
            if (newScale > 1) {
                var newWidth = (int)(mnu.ImageScalingSize.Width * newScale);
                var newHeight = (int)(mnu.ImageScalingSize.Height * newScale);
                mnu.ImageScalingSize = new Size(newWidth, newHeight);
                mnu.AutoSize = false; //because sometimes it is needed
            }
        }
    }
}

The first variable simply contains the scaling factor of the current monitor compared to a standard 96 DPI one. For example, a 120 DPI monitor would give a value of 1.25.

Next we determine how much we should magnify the icons. In order to avoid unnecessarily small adjustments, the new scale is rounded down to .5 increments. A scale factor of 1.25 will round down to 1; a scale of 1.6 will round down to 1.5; a scale of 2.2 will round down to 2, and so on.

Then a check is made whether there is any scaling to be done and, if needed, we simply calculate the new image width and height using the current size as a template. Assuming the icons were 16x16, a scale factor of 1.5 would make them 24x24.

The last order of business is to turn off auto-sizing. On most forms this step might be skipped without any issue. However, some stubborn forms will have their menu stay the same size as long as AutoSize is turned on (at least on .NET Framework 2.0). If you are on the latest framework and/or your forms don’t misbehave, you can skip it safely.

PS: To have .25 increments, just swap 50 for 25; to have only whole-number increments, swap 50 for 100.

Why I Don't Loop Through Dispose

Quite often graphical classes have a lot of disposing to do, e.g.:

public void Dispose() {
    foreBrush.Dispose();
    backBrush.Dispose();
    someBrush.Dispose();
}

One might be tempted to optimize that a bit:

public void Dispose() {
    foreach (IDisposable element in new IDisposable[] { foreBrush, backBrush, someBrush }) {
        element.Dispose();
    }
}

I personally find this code a bit easier to maintain and it serves the same purpose. But I never really use it, due to one serious drawback - it is not recognized by code analysis.

Code analysis that is part of Visual Studio Professional (and higher) does not recognize the operation of this loop and thus it reports a CA2213 (Disposable fields should be disposed) violation. While it is clear that the violation is a false positive, it still means that our loop goes completely unchecked.

If we add one more disposable field to the class at some future time, the first scenario would give us a notice and we would become aware of the forgotten dispose. After a quick check we add a dispose for that field and all is nice.

In the second case we have taken responsibility for disposal ourselves and Visual Studio is not capable of checking the loop. There is nobody to check that all fields are disposed and, if we forget to dispose one, it will be up to garbage collection to clean it up.

This will usually not be a major issue because the resources will be released at some point anyhow. If we hold some OS resource (e.g. a file handle), it might be a bit annoying for the rest of the system, but again, nothing critical.

However, under rare circumstances, it might become important. Call me lazy, but I would rather have a bit uglier code that is automatically validated than beautiful code that I need to check myself.