Small Basic

My first programming steps were in GW-BASIC, but quite soon I switched to QBASIC. Those languages were simple to learn and they gave instant results. Yes, you couldn't do anything really big (or useful) in them, but they had the power to make your eyes glow when some simple idea became a real program.

When I started programming for money I switched to C/C++ for a while, until I discovered Visual Basic. Although it lacked the power and control of C/C++, no one can deny its development speed (the faster I create something, the more money I earn).


It is hard to recommend any high-level language for teaching kids programming. You just need to learn too much boring stuff before you can do anything that looks even remotely fun.

But now there is Small Basic. That language doesn't even try to be serious. The whole point is just to have fun - not productivity. Its interface is simple and nice. IntelliSense is not only useful and descriptive (it is more detailed than some help files I have seen) but it really gives you that "someone gave this some thought" feeling.

It does borrow some nice ideas from other "kid languages" (the turtle from LOGO comes to mind), but there are also ideas that will hook a kid instantly. For example, there is a special class for accessing the desktop (e.g. changing the wallpaper) and another for accessing photos on Flickr. If that doesn't enchant your kid, I don't know what will.


If you, as a big kid with big toys, feel left out of this story - you are wrong. You can extend this gem with your own code written in .NET, thus giving your kid access to a "the sky is the limit" world of ideas. You and your kid can each do your own part of the code and have fun together. I find this precious.
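To give you a rough idea of what such an extension looks like (this is only a sketch from my reading of the extension documentation - the class name DadsMath and its method are made up, so check the official docs for the exact attribute and deployment details), it is basically a public static class built against SmallBasicLibrary.dll that exposes Primitive-typed members:

    using Microsoft.SmallBasic.Library;

    // Illustrative only: "DadsMath" and "Larger" are names I made up.
    // A public static class marked with [SmallBasicType] is what
    // Small Basic picks up as a new object in its IntelliSense.
    [SmallBasicType]
    public static class DadsMath
    {
        // Returns the larger of two numbers.
        public static Primitive Larger(Primitive a, Primitive b)
        {
            return ((double)a > (double)b) ? a : b;
        }
    }

Compile it, drop the resulting DLL into the lib folder of the Small Basic installation, and DadsMath should show up next to the built-in objects - at least that is how I understand the extension model.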

Work Half-done


I love the VHD mounting feature in Windows 7. It makes playing with virtual machines much easier. And let us not forget installing Windows 7 inside one.

You just go to More Actions and select Create VHD (although this was the hardest part for me since I am used to the right-click mentality). After that comes a nice dialog.

This dialog presents everything in simple terms. But if you decide to create a fixed disk (as you should), Disk Manager just blocks. No progress bar, no message - plain and simple nothing. As you can imagine, creating 20 GB can take a while. The only signal that something is happening is your disk activity light.

I find this a little disappointing. Another great feature tarnished by a half-done user interface. :(

[2009-05-08: This is fixed in the release candidate.]

Open Packaging Conventions

For its XML-based file formats (XML Paper Specification and Office Open XML), Microsoft used the ZIP file format as storage (not a new thing - lots of vendors used it before). If you are using the .NET Framework, support already comes built-in (I think from version 3.0, but I didn't bother to check - if you have 3.5, you are on the safe side).

Since every package file can consist of multiple parts (think of them as files inside the package), it seemed just great for one of my projects.

ZipPackage


The class that handles it all is ZipPackage. It uses a slightly newer revision of the ZIP specification, so support for files larger than 4 GB is not an issue (try to do that with the GZipStream class). Although the underlying format is ZIP, it is rare to see the .zip extension since almost every application defines its own. That makes associating files with your program a lot easier.
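To give an idea of how the reading side looks, here is a minimal C# sketch (the file name is my own example, not anything prescribed, and you need a reference to WindowsBase.dll) that opens an existing package and lists its parts:

    using System;
    using System.IO;
    using System.IO.Packaging;

    class ListParts
    {
        static void Main()
        {
            // "Document.MyExt" is just an example name - the extension can be anything.
            using (Package package = Package.Open("Document.MyExt", FileMode.Open, FileAccess.Read))
            {
                // Every part has its own URI and content type.
                foreach (PackagePart part in package.GetParts())
                {
                    Console.WriteLine("{0} ({1})", part.Uri, part.ContentType);
                }
            }
        }
    }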

But there is (in my opinion, of course) a huge stupidity in the way writing of streams is handled. Every stream first goes to an Isolated Storage folder which resides on the system drive. Why is that a problem?

Imagine a scenario where your system drive is C: and you want to create the file on drive D: (which may as well be over the network). First everything is created on drive C: and only then copied to drive D:. That means that whatever amount of data you write, this class writes almost twice as much to disk - the uncompressed data first lands on drive C: and only afterwards gets compressed onto drive D:.

That also means that not only do you need to watch the amount of disk space on your final destination, but your system drive also needs enough free space to hold the total uncompressed content.

Even worse, if the program crashes while the package is being made, you get an orphaned file in Isolated Storage. That may not be an issue at that moment, but after a while the system may complain about not having enough space on the system drive. Deleting orphaned files can also prove difficult since it is very hard to tell which file belongs to which program (they have random names).

Weird defaults

There is also the issue that when you first create a part, its compression option is set to not compressed. That did surprise me since one of the advantages of ZIP packaging is the small size. Since every disk access slows things down, having the file compressed is an advantage.

Since every part can have a separate compression option, I tend to set most of them to Normal. Only if I know that something is very random (e.g. encrypted data or sound) do I set it to no compression. Reading compressed data is a little bit slower, but I have yet to find a computer so slow that decompressing the data becomes an issue.
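In code that simply means passing the compression option explicitly when creating each part. A small sketch (the file name, part names and content types are my own example, nothing prescribed by the format):

    using System;
    using System.IO;
    using System.IO.Packaging;
    using System.Text;

    class CreatePackage
    {
        static void Main()
        {
            using (Package package = Package.Open("Data.MyExt", FileMode.Create))
            {
                // Text compresses nicely, so Normal is a good default.
                PackagePart text = package.CreatePart(
                    PackUriHelper.CreatePartUri(new Uri("/content/readme.txt", UriKind.Relative)),
                    "text/plain",
                    CompressionOption.Normal);
                using (Stream stream = text.GetStream())
                {
                    byte[] bytes = Encoding.UTF8.GetBytes("Hello from a package part.");
                    stream.Write(bytes, 0, bytes.Length);
                }

                // Random-looking data (encrypted, already compressed) gains nothing
                // from compression, so skip it and save the CPU time.
                package.CreatePart(
                    PackUriHelper.CreatePartUri(new Uri("/content/secret.bin", UriKind.Relative)),
                    "application/octet-stream",
                    CompressionOption.NotCompressed);
            }
        }
    }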

To use it or not

I do hope that the next version of the framework will optimize some things (e.g. just get rid of the temporary files). However, I will use it nevertheless since it does make transferring related files a lot easier. Just be careful that there is enough space on the system drive and everything should be fine.

If you want to check it out, there is simple sample code available.

Hyper-V Server 2008


Virtualization is a nice idea. You have one powerful PC and put multiple servers on it. One for file sharing (e.g. FreeNAS), one for your domain, one for testing - the list just goes on. It is even suitable for home use.

However, there is one hidden cost there. You need an operating system to host your virtual machines. If you are playing with VMware or VirtualBox, you can use Linux as the host. Although there is no cost associated with that, you must know that you now have one more system to manage and patch… It is very hard to minimize the attack surface.

You could go down the hypervisor path with VMware's ESX Server, but there is the question of drivers if you have a lot of components that are not of premium quality. If you want to play with Microsoft's own Hyper-V, you will not have that problem since almost any imaginable piece of hardware has drivers for Windows. But cost is an issue: for this one you need Windows Server 2008 - and that is not a free OS.


However, now there is Hyper-V Server 2008 to cover the free virtualization market. It is based on Windows, but it's not Windows as we know it - only a command-line interface is available. There is no possibility of any real configuration on the machine itself. You need another machine with Windows Vista or Windows Server 2008 in order to manage it. I do not find this a big issue since, if you need virtualization, there is a great chance you have more than one computer anyway.

There is the option of managing everything with script files, but that is not as nice a solution as having the MMC console on another computer. Even if you choose the graphical path, there are some manual steps required for everything to work, or you can run a script that will enable it all for you.

As you can see, there are some configuration issues to deal with, but once you get it running, this one is great. If you have a lot of Virtual Server / Virtual PC images, just make a new machine and add the existing disk (or use an import tool). Do not forget to uninstall the old Virtual Machine Additions and install the new Integration Services in order to get the best performance.

If you use other virtualization tools (VMware, VirtualBox…), there is no real need to switch. But if you have used Microsoft's virtualization before, give this one a try. It is free.