Programming Windows

One book that brought me into Windows programming was Programming Windows by Charles Petzold. While the examples were C-based, the actual theory was mostly language-agnostic. The APIs tend to work the same whether you call them from C or VB.

If you had any interest in the Windows API, I do not think there was a better source at the time. Unfortunately, this great book died after its 5th edition (Windows XP based).

Well, the book is back from retirement and this time it deals with Windows 8. It will be published in November at a price of $50. However, if you buy it before May 31st 2012, you can grab it for $10. I would call that a good deal.

I already ordered my copy.

P.S. I must warn you that this book is a very distant relative of the original series at best. Instead of low-level programming you will get XAML and panes. However, Petzold is known for good books and that alone should justify the $10.

P.P.S. If you are interested in a real C++ programming book, do check out Professional C++.

CA2000 and Using Statement

I started with code like this (give or take a few white-spaces):

using (var aes = new RijndaelManaged() { BlockSize = 128, KeySize = 256, Key = key, IV = iv, Mode = CipherMode.CBC, Padding = PaddingMode.PKCS7 }) {
    this.Transform = aes.CreateEncryptor();
    this.Stream = new CryptoStream(stream, this.Transform, CryptoStreamMode.Write);
}

Running code analysis on this returned a [CA2000](http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&k=k(%22DISPOSE+OBJECTS+BEFORE+LOSING+SCOPE%22)) (Microsoft.Reliability) warning. It simply stated: “In method ‘Aes256CbcStream.Aes256CbcStream(Stream, CryptoStreamMode, byte[])’, object ‘<>g__initLocal0’ is not disposed along all exception paths. Call System.IDisposable.Dispose on object ‘<>g__initLocal0’ before all references to it are out of scope.”

The problem here is that all object references ARE being released by the using statement. Or so I thought.

If you are using an object initializer, the compiler generates extra code to support it, and that compiler-generated code is the culprit behind the message. And yes, there is a reason why it behaves like this.
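To see why, here is roughly what the compiler generates for the object initializer inside the using statement (a simplified sketch; the hidden local is the ‘<>g__initLocal0’ from the message, renamed here to ‘temp’, and the exact shape varies between compiler versions):

var temp = new RijndaelManaged(); //compiler's hidden local (<>g__initLocal0)
temp.BlockSize = 128;             //if any of these setters throws,
temp.KeySize = 256;               //temp is never disposed
temp.Key = key;
temp.IV = iv;
temp.Mode = CipherMode.CBC;
temp.Padding = PaddingMode.PKCS7;
var aes = temp;                   //only from here on is the object protected...
try {
    this.Transform = aes.CreateEncryptor();
    this.Stream = new CryptoStream(stream, this.Transform, CryptoStreamMode.Write);
} finally {                       //...by the try/finally that using expands to
    if (aes != null) { ((IDisposable)aes).Dispose(); }
}

If one of the property setters throws, execution never reaches the try/finally block, so the already-created object is never disposed. That is the exception path CA2000 is complaining about.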

Personally, I often ignore this warning. Strictly speaking this is not a real solution and definitely not a best practice. However, it is quite often acceptable. If you are creating just a few of these objects and no failure is expected (famous last words), a little more work for the garbage collector is an acceptable scenario.

The real solution for now would be not to use object initializer syntax together with the using statement. In our example that would mean:

using (var aes = new RijndaelManaged()) {
    aes.BlockSize = 128;
    aes.KeySize = 256;
    aes.Key = key;
    aes.IV = iv;
    aes.Mode = CipherMode.CBC;
    aes.Padding = PaddingMode.PKCS7;
    this.Transform = aes.CreateEncryptor();
    this.Stream = new CryptoStream(stream, this.Transform, CryptoStreamMode.Write);
}

Smoothing

If you are making a measuring device with a display, it is always a challenge to select the proper refresh rate. Usually a measurement takes only a small amount of time and it is very hard to resist updating the display after each one. I have seen a number of devices whose displays are simply too fast to read.

Slowing the rate at which measurements are taken is almost always beneficial for both user comfort and battery life. And that is a valid solution, especially if the value is relatively stable. However, if the measurement fluctuates a bit, the result is jumps between values.

To cure that you should be doing averaging. If your measurement takes 10 ms to complete, you can do 10 of them, average the result, and still have quite a decent 10/second refresh rate. This is probably the solution that gets the most use but it is not the only one.

My favorite way of slowing the display down is a simplified weighted average. Between two measurements, the current value always carries more weight than the new one. The exact weight is a matter of trial, but I found that small numbers like 23% work best.

To clarify it a bit, let’s say that we have a current value of 10 and a new measurement of 20. Since the new measurement gets only 23% of the weight (value = value + (newValue - value) * 0.23), our new “average” becomes 12.3. If the third measurement is also 20, the value becomes 14.1, then 15.4 and so on. The value keeps getting closer and closer to the real reading, but the speed with which it does so is very limited.

If your measurements are relatively stable, this method works almost like an average. If the value jumps occasionally, this method will smooth such a temporary change. That gives a much nicer end-user feeling as far as the measurement goes. And since we are doing this at a much faster rate than we are actually showing data, if a permanent jump does occur, the user will see the change relatively quickly.

The code for this might look like:

float value = measure();
while (1) {
    showValue(value);
    for (int i=0; i<10; i++) {
        float newValue = measure();
        value = value + (newValue - value) * 0.23; //to smooth it a little
        value = (int)(value * 1000.0) / 1000.0; //rounding
        //do other stuff
    }
}

In this particular code, we show the value to the user as soon as we can (to enhance perceived speed). After that we average the next 10 measurements (each new one is given 23% of consideration) and then display the new average. Rinse and repeat. The optional rounding step additionally suppresses small changes.

This code is not that good if a measurement takes a long time. E.g. if you have one measurement per second, you will find it takes an eternity for the value to change. For such cases you are probably better off just displaying the current measurement. Or, if some smoothing is required, using a higher weight (e.g. 0.79 or similar).

P.S. This method might not work as expected for your measurements. Do test first.

P.P.S. This is intended for human display only. If you are logging values, it is probably best to write measurements without any adjustment. If you average them before writing, you are losing details.

P.P.P.S. If you are doing averaging in a full-blown desktop application, ignore this code. You can use a proper moving average (linear, exponential, weighted…) that allows for much greater control. This method is just a workaround to get similar results when working on a memory-limited PIC.

Case for TryGetValue

When I am reviewing code, I always check how the program retrieves items from a dictionary. Most often I find the following pattern:

if (dict.ContainsKey(key)) {
    object obj = dict[key];
    //do something with obj
}

Whenever I see it, I wonder why somebody would use it over TryGetValue:

object obj;
if (dict.TryGetValue(key, out obj)) {
    //do something with obj
}

If you deal with a dictionary of objects, the latter code is around 25% faster (it depends on the size of the dictionary and a bunch of other things). If you deal with a dictionary of structures, the difference is much smaller (7-8%) since in the latter case you need to deal with memory allocations (remember that there is no null for structures).
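For illustration, here is a minimal sketch of how such a comparison could be set up (this is not the benchmark behind the numbers above; the exact difference depends on dictionary size, value type and hardware):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class DictionaryLookupSketch {
    static void Main() {
        var dict = new Dictionary<int, object>();
        for (int i = 0; i < 1000000; i++) { dict.Add(i, new object()); }

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++) {
            if (dict.ContainsKey(i)) { //first hash look-up
                object obj = dict[i];  //second hash look-up
            }
        }
        Console.WriteLine("ContainsKey + indexer: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < 1000000; i++) {
            object obj;
            if (dict.TryGetValue(i, out obj)) { //single hash look-up
            }
        }
        Console.WriteLine("TryGetValue: {0} ms", sw.ElapsedMilliseconds);
    }
}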

Most of the time, dictionaries are used in time-critical code so changing to the latter is almost a no-brainer.

I have only ever seen one single scenario where a key check separated from retrieval is desirable. In case your dictionary holds structures and you expect a lot of look-up failures, you will be better off using the first example. The second code will use much more memory, and the need to create a structure before each key check will offset any time savings you might get when an item is found.
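To sketch that exception (the ‘Measurement’ struct, ‘readings’ and ‘sensorId’ are hypothetical names, not from the original code), the check-then-retrieve pattern never produces a value when the key is missing:

using System.Collections.Generic;

struct Measurement { //hypothetical value type, for illustration only
    public double Value;
    public long Timestamp;
}

class ReadingStore { //hypothetical container for the example
    public static void ProcessReading(Dictionary<string, Measurement> readings, string sensorId) {
        if (readings.ContainsKey(sensorId)) {   //check only; no Measurement value is created on a miss
            Measurement m = readings[sensorId]; //second look-up happens only when the key exists
            //do something with m
        }
    }
}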

MagiWOL 3.30

Nothing to see here, just an evolution.

The new version of MagiWOL changes the import progress dialog a bit by adding a time-remaining text. As it always goes, it is precise for a full 0.1% of the time.

There are some internal changes in response to crash reports, but nothing too exciting to talk about.

Download and use.