Fountain Pens on the Plane

As a fountain pen user, I’ve always heard about precautions you must take before you board a flight. Most people I spoke with recommended cleaning the pen completely or, if you really want to use it, keeping it fully inked so as to minimize the possibility of air expanding and pushing the ink out.

At first glance all this seems logical, so I always took precautions. Considering that all my flights lately have been transatlantic, it seemed like a wise choice. However, on my last flight I decided to experiment a little.

On my trip from the USA to Croatia (Seattle-Frankfurt-Zagreb) I carried four of my pens: a Pilot Custom 74, a TWSBI Diamond 580, a Platinum Cool, and a Pilot Metropolitan. The Custom 74 was attached to my notebook while the remaining pens lived in a case in my backpack. All were fully loaded with different inks: Noodler’s Heart of Darkness, Diamine Oxblood, Private Reserve DC Supershow Violet, and Private Reserve Sherwood Green, respectively.

I used the Custom 74 with Noodler’s during the whole flight and, outside of the nib creep Noodler’s is famous for, I had no issues whatsoever. And yes, I used it both during takeoff and landing - just to be sure. The other pens I only scribbled with a few times for test purposes, but I didn’t notice anything wrong.

On the way back I expected slightly different results as I had used some ink and didn’t refill any pen. The Custom 74 was close to empty, the TWSBI was at around 50%, while the Cool and the Metropolitan were reasonably full at around 75%. I expected trouble.

Surprisingly, nothing happened. My pens operated just fine, with the TWSBI taking over the main pen role from the Custom 74. Absolutely no leakage occurred during either of the two flights (Zagreb-Frankfurt-Seattle).

Based on my, admittedly limited, test I don’t see any justification for additional pen preparation before a flight if you are bringing the pen into the cabin with you. Any pressure change in the cabin during flight is small enough that any modern fountain pen can handle it just fine. Yes, in the case of sudden decompression the pen would probably leak, but then you’d have more important things to worry about than 2 mL of ink.

If you are transporting a pen in an unpressurized cargo area, I would always go with cleaning out the pen completely. In all other cases, relax and write on. :)

Local Host Name Resolving Under Windows With Mikrotik's DNS

As I switched all DNS resolving to my Mikrotik router, a curious problem appeared: I couldn’t access my main file server by its short name anymore.

That’s fine, I thought - I’ll just go to IP -> DNS and add a static entry for it. And so I did, and everything worked when I tested it. From a Linux machine, that is. From a Windows 10 machine, you ask? Nope - I still couldn’t access it. I tried ping and it complained. I tried nslookup and it worked. Interestingly, an entry with a domain (e.g. server.thing) would work with both. It was just short names that wouldn’t behave.

To make a long story short, the fix is to force Windows to use fully qualified names even for single-word lookups. To do this, we can employ the magic of DHCP’s domain-name setting, conveniently available under the DHCP network setup. If this is provided to a Windows host upon IP address assignment, it will append that DNS suffix to all single-word host names and, provided you have defined a static DNS entry with that full name, Windows will work happily ever after.

The downside of this solution is that you need to have both the long and the short form (e.g. server and server.network) defined for mixed Windows/Linux environments. Yes, you can create a regex to cover both but it will look ugly (e.g. ^server(\.network)?$). I personally simply define the host twice - it looks nicer. ;)
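
For reference, here is a minimal sketch of the relevant Mikrotik commands. The host name server, the suffix network, and the address 192.168.1.10 are all placeholders - adjust them (and the [find] selector) to your own setup:

# hand out the DNS suffix via DHCP
/ip dhcp-server network
set [find] domain=network

# define the same host under both the long and the short name
/ip dns static
add name=server.network address=192.168.1.10
add name=server address=192.168.1.10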

The root issue is just another leftover from the NBNS/WINS resolver era - something nobody uses on any normal network, but somehow Windows still considers it an appropriate default behavior. Annoyingly, some components are built smarter so, depending on which tool you use, you can chase the damn Schrödinger’s cat all day long…

Why No Insider?

I am a bit crazy. Some say in general, but I will just use it to describe my choice to beta test Microsoft’s software. :)

If Microsoft had a release candidate of either Windows or Visual Studio, I would install it. And I wouldn’t pussy out and put it on some unimportant server. No, I usually got it running on my daily driver.

And yes, this is definitely not what Microsoft recommends. And yes, this approach has bitten me in the ass multiple times. However, I usually liked the new features enough to ignore the small issues. Well, no more. After what seems like ages on the Windows Insider track, I have given up.

The first issue I had with Windows Insider was the fact that it would obliterate custom drivers on every install. As I have an Asus N56VJ that needs such a driver for keyboard shortcuts (e.g. to disable the touchpad), this was an annoyance. And you couldn’t just reinstall the driver - it took a somewhat more involved process to recover it.

Another issue was the pushing of the damn Edge. Every freaking time I got a new Insider build, Edge would appear in the taskbar. And not only there - it would take over the file associations too. A bit hypocritical and a whole lot annoying. That is, when you could switch them back - there was a full month where a “bug” in the build prevented moving away from Edge as the http handler.

However, both of these issues, along with a few others, were just minor annoyances. The computer was still usable after them and I could get everything properly working within a day without losing too much time.

The straw that broke my back was the issues with VirtualBox. For the last few months every Insider build broke VirtualBox and VMware in one way or another. While some breakages were minor and easily solvable, others required either waiting for a new build or an update from the manufacturer. And, strangely, I found Microsoft’s Hyper-V re-enabled every time.

As someone who runs quite a few Linux-related virtual machines (not all of which are properly supported by Hyper-V), I simply cannot go days without a running system. And the pressure of knowing that, even if I did get it working, it was never more than a build away from breaking was simply too much.

A year ago, I would have written this off as just the cost of getting early features. But Insider updates have lately become just noise as no proper feature has been introduced in ages. It seems to me that the last few of them were just a sneaky way to reset my defaults to the damn Edge and shove some Hyper-V up my bum.

Due to all this, I have stopped Insider builds on both of my machines.

My main multimedia PC went to Linux Mint and my daily driver went back to the last official Windows 10 release. Frankly, if it weren’t for Visual Studio, I would have moved it to Mint too.

It is a sad day when a Linux distribution is a valid and less annoying choice than Windows…

Remote Passwordless SSH/RSA Login Into Mikrotik

It all started with the need for a backup. I had to do two things: first, create a backup user with read-only access, and then automate gathering of the exported configuration over SSH. And, as a twist, SSH would need to use RSA keys - something Mikrotik started supporting only recently (since 6.31).

The easiest way to configure this is to enter commands into New Terminal from WinBox. I will simply list the commands needed instead of going through the screens. The commands are actually quite descriptive and easy to “translate” into GUI actions if that is your preference.

Before creating the user itself, we need to create a group without any rights, followed by the user creation:

/user
group add name=backup policy=
add name=backup group=backup

With the user in hand, we should get key authentication going. Do notice that the key.txt contents should be the public key intended for login. How to generate it is out of scope, but just google PuttyGen and you can find plenty of information about it. In any case, we can set the public key for the user using the following commands:

/file
# printing into key.txt creates the file; then fill it with your public key
print file=key.txt
set key.txt contents="ssh-rsa ..."

/user
ssh-keys import public-key-file=key.txt user=backup

After assigning the key to the user, we can give it the appropriate rights - in my case those were ssh and read. Do notice that the policy could have been set while creating the group, but that would allow the user to log in without any password until the SSH key was set. While the window is short and the chance is really remote, I prefer to avoid it:

/user
group set [find name=backup] policy=ssh,read

If everything has been done correctly, you can log into the router using your RSA key and run the export command to gather the current configuration.
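
As a rough sketch of the gathering step - assuming the private key lives in backup.key and the router answers at 192.168.88.1, both of which are placeholders for your own values - the automated part can boil down to a single command run from the backup machine:

# run export on the router and store its output locally
ssh -i backup.key backup@192.168.88.1 /export > mikrotik-backup.rsc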

PS: If you are limiting the MACs to be used with SSH, beware that Mikrotik supports only hmac-sha1.
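
If your client is configured with a restricted MAC list, one way around it is to pin hmac-sha1 explicitly for this connection only (same placeholder key and address as above):

ssh -o MACs=hmac-sha1 -i backup.key backup@192.168.88.1 /export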

100 mA Is a Myth

A lot of electronics ends up being connected to USB - whether it needs a computer or not. USB and its 5 V have become the unofficial standard power supply for the world. And thus it makes me crazy when I hear people claim that you can only draw 100 mA from USB without going through USB enumeration, or else something horrible is going to happen. And that is bullshit.

Yes, there is such a thing as USB negotiation, where each device can request a certain amount of current from its host. Older USB devices could ask for up to 500 mA (in units of 100 mA) while USB 3.0 devices can go up to 900 mA (in units of 150 mA). If you go higher than that, you get into the category of USB Power Delivery 3.0 and the beauty of multiple voltages, which we’ll conveniently ignore here so we can deal only with 5 V.

A misconception when it comes to USB negotiation stems from the ability of USB devices to self-report their maximum usage and, likewise, the ability of the chipset to say no if multiple devices go over some internal limit. Ideally, if you have four USB ports on the same power bus with a total of 1 A available and you connect four devices drawing 300 mA each, three would get a positive response and the fourth would have its request denied.

What might not be completely clear from this story is that the bus has no means to either measure the current or enforce a device’s removal. The whole scheme depends on the device accurately reporting its (maximum) usage and actually turning itself off if it receives a “power denied” response. And yes, this self-reporting works as well as you can imagine. As there is no way for the computer to ensure either the accuracy of the data or the device’s compliance, everybody simply ignores it.

Computer manufacturers decided to save some money and not verify a device’s consumption. So what if a device that reported 300 mA is using 350 mA? Should you disconnect the device just because it uses a lousy cable with a big loss? Why would you even care if that is the only device connected? What to do if that device goes to 500 mA for just a fraction of a second? What to do with devices reporting nothing (e.g. coffee heaters)? Is nitpicking really worth the bad user experience (my damn device is not working!) and is it worth the extra cost to implement (and test)?

While there were some attempts at playing “power cop” in the early days, with time all manufacturers decided to over-dimension their power bus to handle more power than the specification requires and simply placed a cheap polyfuse on it to shut it down in the case of a gross overload.

Such friendly behavior has culminated in each port of any laptop made in the last five years being capable of at least 1 A. And you can test it - just short the data lines together (or use UsbAmps’ high-power option) - and you will see you can easily pull 1 A out of something officially specified for half of that. And that is without any power negotiation - courtesy of the USB Battery Charging specification.

This leniency from manufacturers in turn generated a whole category of completely passive devices. Why the heck would a USB light have a chip more expensive than all of its other components together just to let the computer know its power usage? That is just wasted money if you can get the power whether you inform the host or not.

Devices that have to communicate with the computer over USB kept their self-reporting habits just because they had to use somewhat smarter chips to interface with USB anyway. And all those chips had to have power negotiation built in to be certified. There was literally no cost in using this feature.

And even then they would fudge the truth. Quite often an external CD drive or hard disk would actually need more than 500 mA to spin up. And there was no way in the early days to specify more than 500 mA. So they lied.

Such (lying) behavior was essentially approved later by the USB Battery Charging specification, which uses not a USB message but voltage levels as the limiting factor. It essentially says that you can pull as much as you want while the voltage stays high enough. Once the voltage starts dropping, ease off a bit. That point can happen at 1 A, 1.5 A, or 2 A - it all depends on the power source/computer you are using.

Device manufacturers have become quite versed too. Since the USB Battery Charging specification was done in a hardware-friendly manner, there was no extra cost to bear. The only task was to determine whether the computer supported the, then new, specification. If yes, pull current until the voltage starts dropping. If not, limit yourself to 500 mA. How do they recognize it? You’ve guessed it - by checking whether the data lines are shorted.

Due to the powers of backward compatibility you can pull essentially as much current as you want from your PC. Yes, you are still limited by the fuse (quite commonly 2 A or more per port pair), you still have thin USB cables and their voltage drop (with the accompanying losses) to deal with, and yes, devices - especially phones - will still self-limit to avoid tripping the aforementioned fuse. But otherwise it is the wild west.

Unless you are making a device that has to be certified, stop worrying about power negotiation. If your device already has a USB transceiver onboard and you need to program it anyhow, go for the standard and configure the current you need. But if you don’t have such a chip, or programming it is just too much of a hassle, simply ignore it and the world is not going to self-destruct. I promise.