Work Half-done


I love the VHD mounting feature in Windows 7. It makes playing with virtual machines much easier. And let us not forget installing Windows 7 inside a VHD.

You just go to More Actions and select Create VHD (finding it was the hardest part for me, since I am used to the right-click mentality). After that, a nice dialog appears.

This dialog presents everything in simple terms. But if you decide to create a fixed disk (as you should), Disk Manager just blocks. No progress bar, no message - plain and simple nothing. As you can imagine, creating a 20 GB file can take a while. The only signal that something is happening is your disk activity light.

I find this a little disappointing. Another great feature tarnished by a half-done user interface. :(

[2008-05-08: This is fixed in the release candidate.]

Open Packaging Convention

For its XML-based file formats (XML Paper Specification and Office Open XML), Microsoft used a ZIP file as storage (not a new thing - a lot of vendors used it before). If you are using the .NET Framework, support already comes built-in (I think from version 3.0, but I did not bother to check - if you have 3.5, you are on the safe side).

Since every package file can consist of multiple parts (think of them as files inside the package), it seemed just great for one project of mine.

ZipPackage


The class that handles all this is ZipPackage. It uses a somewhat newer revision of the ZIP specification, so support for files larger than 4 GB is not an issue (try to do that with the GZipStream class). Although the underlying format is ZIP, it is rare to see the .zip extension since almost every application defines its own. That makes associating files with a program a lot easier.
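
To give an idea of how the API looks, here is a minimal sketch that creates a package and writes one part into it (the file name and part path are just my examples):

    using System;
    using System.IO;
    using System.IO.Packaging; // lives in WindowsBase.dll (.NET 3.0 and later)

    class PackageWriteDemo {
        static void Main() {
            // Package.Open returns a ZipPackage for file-based packages.
            using (Package package = Package.Open("Sample.package", FileMode.Create)) {
                Uri partUri = PackUriHelper.CreatePartUri(
                    new Uri("/Content/Readme.txt", UriKind.Relative));
                PackagePart part = package.CreatePart(partUri, "text/plain");
                using (StreamWriter writer = new StreamWriter(part.GetStream())) {
                    writer.WriteLine("Hello from a package part.");
                }
            }
        }
    }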

But there is (in my opinion, of course) a huge stupidity in the way writing streams is handled. Every stream first goes to an Isolated Storage folder, which resides on the system drive. Why is that a problem?

Imagine a scenario where your system drive is C: and you want to create a file on drive D: (which may well be over the network). Everything is first created on drive C: and then copied to drive D:. That means that whatever amount of data you write, this class writes almost twice as much to disk - the uncompressed data first goes to drive C: and only afterwards gets compressed onto drive D:.

That also means that you not only need to watch the amount of free space on your final destination, but your system drive also needs enough space to hold the total uncompressed content.

Even worse, if the program crashes while the package is being created, you will get an orphaned file in Isolated Storage. That may not be an issue at that moment, but after a while the system may complain about not having enough space on the system drive. Deleting the orphaned files can also prove difficult, since it is very hard to tell which file belongs to which program (they have random-looking names).

Weird defaults

There is also the issue that when you first create a part, its compression option is set to NotCompressed. That did surprise me, since one of the advantages of ZIP packaging is small size. And since every disk access slows things down, having the file compressed is an advantage in itself.

Since every part can have a separate compression option, I tend to set most of them to Normal. Only if I know that the content is very random (encrypted data or sound) do I set it to NotCompressed. Reading compressed data is a little slower, but I have yet to find a computer so slow that decompression becomes an issue.
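
In code, that choice is just the third argument to CreatePart. A sketch of how I pick the option (the helper and its parameters are my own invention):

    using System;
    using System.IO.Packaging;

    static class PartHelper {
        // Hypothetical helper: compress by default, skip compression for
        // content that is effectively random (encrypted data, sound...).
        public static PackagePart CreatePart(Package package, string path,
                                             string contentType, bool isRandomContent) {
            Uri partUri = PackUriHelper.CreatePartUri(new Uri(path, UriKind.Relative));
            CompressionOption option = isRandomContent
                ? CompressionOption.NotCompressed
                : CompressionOption.Normal;
            return package.CreatePart(partUri, contentType, option);
        }
    }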

To use it or not

I do hope that the next version of the framework will optimize some things (e.g. just get rid of the temporary files). However, I will use it nevertheless, since it does make transferring related files a lot easier. Just make sure there is enough space on the system drive and everything should be fine.

If you want to check it out, there is simple sample code available.
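
For a taste of the reading side, here is a minimal sketch that lists all parts of an existing package (the file name is again just an example):

    using System;
    using System.IO;
    using System.IO.Packaging;

    class PackageReadDemo {
        static void Main() {
            // Open read-only and list every part with its content type.
            using (Package package = Package.Open("Sample.package",
                                                  FileMode.Open, FileAccess.Read)) {
                foreach (PackagePart part in package.GetParts()) {
                    Console.WriteLine("{0} ({1})", part.Uri, part.ContentType);
                }
            }
        }
    }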

Hyper-V Server 2008


Virtualization is a nice idea. You have one powerful PC and put multiple servers on it: one for file sharing (e.g. FreeNAS), one for your domain, one for testing - the list just goes on. It is even suitable for home use.

However, there is one hidden cost. You need an operating system to host all those virtual machines. If you are playing with VMware or VirtualBox, you can use Linux as the host. Although there is no licensing cost here, you must realize that you now have one more system to manage and patch… and it is very hard to minimize the attack surface.

You could go down the hypervisor path with VMware’s ESX Server, but driver support becomes a question if many of your components are not of premium quality. If you want to play with Microsoft’s own Hyper-V, you will not have that problem, since almost any imaginable piece of hardware has drivers for Windows. Here, cost is the issue: for this one you need Windows Server 2008, and that is not a free OS.


However, now there is Hyper-V Server 2008 to cover the free virtualization market. It is based on Windows, but it is not Windows as we know it - only a command-line interface is available. There is no way to do any real configuration on the machine itself; you need another machine with Windows Vista or Windows Server 2008 to manage it. I do not find this a big issue, since if you need virtualization, chances are you have more than one computer anyway.

There is the option of managing everything with script files, but that is not as nice a solution as having the MMC console on another computer. Even if you choose the graphical path, there are some manual steps required before everything works, or you can run a script that will enable it all for you.
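
Hyper-V exposes its management interface through WMI (the root\virtualization namespace), so the scripted path can also be driven remotely from code. A small C# sketch that merely lists the machines on a host (the host name is a placeholder, and this assumes your account has remote WMI permissions):

    using System;
    using System.Management; // add a reference to System.Management.dll

    class ListVirtualMachines {
        static void Main() {
            var scope = new ManagementScope(@"\\HYPERV-HOST\root\virtualization");
            var query = new ObjectQuery("SELECT * FROM Msvm_ComputerSystem");
            using (var searcher = new ManagementObjectSearcher(scope, query)) {
                foreach (ManagementObject system in searcher.Get()) {
                    // Note: the query returns the host itself as well, not only guests.
                    Console.WriteLine(system["ElementName"]);
                }
            }
        }
    }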

As you can see, there are some configuration issues to deal with, but once you get it running, this one is great. If you have a lot of Virtual Server / Virtual PC images, just create a new machine and add the existing disk (or use an import tool). Do not forget to uninstall the old Virtual Machine Additions and install the new Integration Services in order to get the best performance.

If you use other virtualization tools (VMware, VirtualBox…), there is no real need to switch. But if you have used Microsoft’s virtualization before, give this one a try. It is free.

Google Sync


If you want to use Google Sync to synchronize your mobile phone with Google’s address book and calendar, you will probably have a problem with the fact that upon synchronization, your mobile phone’s content gets deleted (both contacts and calendar items).

I personally use my mobile phone (HP iPAQ 514) as the most up-to-date storage of my contacts, so I had a big problem with that. However, I decided to give Google Sync a try anyway. In the process I cheated a little in order to preserve all contacts (and calendar items).

Please note that while this whole procedure worked fine for me, you may not be that lucky. Perform a backup!

Step 1

First we need to copy the pim.vol file from the root directory of your mobile phone. This is the place where most of your PIM data is stored. Before copying that file, you may want to turn your mobile phone off and on again in order to flush all cached data; the file is write-cached to improve performance and to avoid wear of the underlying flash. Save the file to an SD card - we will need it again later.

Step 2

Synchronize with Google Sync. The instructions are on Google’s site and I will not repeat them here.

Step 3

Once synchronization is done, you will notice that your contacts are gone, replaced with those defined on Google’s side. Now is a good time to turn your phone off and on again, and then copy the pim.vol you stored in step 1 over the pim.vol in the root directory. If you get a message that the file is in use, wait a little and try again. Once the file is copied, turn your phone off and on once more. At this point, you should have all your contacts back.

If you synchronize now, all your contacts will be recognized as new and transferred to Google. From this point on, your phone is synchronized.