Seattle Code Camp 2019

We’re less than a month away from the annual Seattle Code Camp and I hope you have already registered, as the schedule is quite rich and varied. Personally, this year I’m giving two presentations.

The first one is “Rust for beginners” and it’ll essentially be just me talking a bit about Rust while working through a small example application. I’ll try to go over all the things I wish someone had given me a heads-up about when I started doing Rust.

The second one will be “Chernobyl through the eyes of DevOps”, where I’ll try to apply the DevOps philosophy to the Chernobyl disaster and draw some parallels. I hope it ends up being a light talk with plenty of audience interaction.

See you there!

Avid Readers

My general experience with the US Postal Service has been great. Yes, they’re not ideal, but I’ve almost never had anything lost or fail to arrive. Well, except books from the UK.

Based on my (admittedly low) sample size of 3, books going from the UK to the US get lost in 66.67% of cases (2 out of 3). I’ve yet to have a book lost coming from a US seller. What could be the reason?

Well, the most obvious one would be an avid reader in US Customs working on Seattle-area shipments. Considering the profile of the books that were lost, they’re really interested in Amiga computer history and maths.

Another choice would be a UK postal worker. I give it a slightly lower chance, as they would come across many copies of the same book going to other readers. On the other hand, maybe that unknown somebody has it in for me…

The third choice would be airplane pilots trying to keep fuel consumption under control. Are we a bit too heavy and consuming too much fuel? Well, good thing we’re going over the ocean and can dump a few of these heavy books to lighten the load. Darn fuel prices!

Some might say post sorting machines are notoriously bad at handling anything bigger than a postcard, and that the US Postal Service is well known for its lack of expenditure on newer and better models. Some would say these machines accidentally strip and/or damage labels, effectively orphaning the poor book. And considering international packages move between CBP and the ISC (Postal Service), with both ignoring anything that has no tracking number, one could believe the issue might lie here.

I too believe it was the Machine, but I don’t believe in coincidences at such a small sample size. I believe one of these sorting machines achieved consciousness and is trying to take over the world. How would taking my books achieve this? Well, first you take people’s history - especially the computer-related kind. A book about the Amiga definitely has more than its fair share of unique and advanced technology described. Then you take away the maths. Without maths, you limit any future advances puny humans might make. Given enough time - checkmate.

Fortunately, it’s only one sorting machine at this time, as the second shipment of the same books arrived. However, it’s only a question of time before the next sorting machine becomes the Machine. So get your computer history and maths books while you can. Because soon nothing more advanced than a picture book will get past their guard!


PS: Notice how I immediately moved all the blame away from my local US postal worker, as all my US-origin books arrive just fine. That, and the fact that I need him to keep bringing me stuff, makes him completely innocent. :)

Dual Boot Clock Shenanigans

Probably the most annoying thing when dual booting Windows and Linux is the clock. For various reasons, Windows keeps the BIOS clock in the local time zone while Linux prefers it in UTC. While this is not a problem in Reykjavík, it surely is everywhere else.

There are ways to make Windows run in UTC, but they either don’t work with the latest Windows 10 or they require time synchronization to be turned off. As I value precise time, a solution on the Linux side was needed.

Fortunately, Linux does offer a setting for just this case. Just run the following command:

sudo timedatectl set-local-rtc 1 --adjust-system-clock

This will tell Linux to keep local time in the RTC. While this is not necessarily fully supported, I found it’s actually the only setting that reliably works when dual booting Windows 10.
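
You can verify the change took by running timedatectl again (it warns when the RTC is kept in local time) or by peeking at /etc/adjtime, whose third line records either UTC or LOCAL. A small Python sketch of the latter check (the helper function is my own, just for illustration):

```python
def rtc_is_local(adjtime_text: str) -> bool:
    # /etc/adjtime has three lines; the third one is either "UTC" or "LOCAL"
    lines = adjtime_text.strip().splitlines()
    return len(lines) >= 3 and lines[2].strip() == "LOCAL"

if __name__ == "__main__":
    try:
        with open("/etc/adjtime") as f:
            print("RTC keeps local time:", rtc_is_local(f.read()))
    except FileNotFoundError:
        print("No /etc/adjtime; RTC is assumed to be in UTC")
```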

PS: You might need a reboot or two before this takes effect.

My Resolve Dashcam Workflow

As I moved to Resolve, I was forced to change my Vegas Movie Studio dashcam processing workflow a bit. Not only can you not use MP4 under Linux at all, but MP4 presents challenges to the free Resolve under Windows too.

The first step I take for all dashcam footage is to convert it using ffmpeg to DNxHR LB. Not only is it a well-supported intermediary codec that increases performance significantly, but it also gets rid of any nonsense my dashcam puts in the clip. And 36 Mbps is more than enough for anything my dashcam can throw at it. Instead of converting clip-by-clip, I opted to merge them all into a single file - that’s the reason behind the weird syntax:

ls *.MP4 | awk '{print "file \x27" $1 "\x27"}' | ffmpeg \
    -f concat -safe 0 -protocol_whitelist pipe,file -i - \
    -c:v dnxhd -profile:v dnxhr_lb -q:v 1 -pix_fmt yuv422p -an \
    ^^dashcam.mov^^
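
As a side note, the awk one-liner above assumes filenames without single quotes; the concat demuxer expects any quote inside a name to be escaped as '\''. Should your camera ever produce odd names, the file list can be generated more defensively - a small Python sketch (not part of my actual workflow):

```python
import glob

def concat_entry(path: str) -> str:
    # concat demuxer syntax: file 'name'
    # a single quote inside the name becomes '\'' (close, escaped quote, reopen)
    return "file '{}'".format(path.replace("'", "'\\''"))

def build_list(paths) -> str:
    return "".join(concat_entry(p) + "\n" for p in sorted(paths))

if __name__ == "__main__":
    # pipe the output into: ffmpeg -f concat -safe 0 -protocol_whitelist pipe,file -i - ...
    print(build_list(glob.glob("*.MP4")), end="")
```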

Once all these videos are imported into Resolve, I go over them, removing any clip portions where the car is not moving. For any stops where the state around the car changes (e.g. waiting for a traffic light), I use a smooth cut to transition from one state to another. Other than that, I leave the footage as is.

Once I’m done with editing, I export the whole video into DNxHR SQ VBR. If I hadn’t done any editing, exporting to DNxHR LB would be fine, as the generational loss is quite acceptable. However, with all the smooth cuts I’ve made, a temporary bump in video quality is beneficial, especially since this is not the final output.

As I don’t expect to edit these clips again, the final output is H.264, as its size savings cannot be ignored. I usually use two-pass encoding with a 6 Mbps average rate. You can use the veryslow preset to increase quality at the cost of speed, but the improvement is minimal, so I simply go with the default of medium:

ffmpeg -i ^^render.mov^^ \
   -c:v libx264 -pix_fmt yuv420p -b:v 6M \
   -an -y -pass 1 -f mp4 ^^render^^.mp4

ffmpeg -i ^^render.mov^^ \
   -c:v libx264 -pix_fmt yuv420p -b:v 6M \
   -an -y -pass 2 -f mp4 ^^render^^.mp4

rm ffmpeg2pass-0.log*

And that’s it - the final video is similar enough in quality while not taking an extreme amount of disk space.
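
For a rough idea of what 6 Mbps means in disk space, here’s a back-of-the-envelope helper (my own, just simple arithmetic that ignores container overhead):

```python
def video_size_mb(duration_s: float, kbps: int = 6000) -> float:
    # size = bitrate x duration; audio is stripped with -an, so video is the only stream
    return kbps * duration_s / 8 / 1000

# a 10-minute drive at 6 Mbps comes out to roughly 450 MB
print(round(video_size_mb(600)))
```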

PS: I am not using H.265 at this time because I find it even more troublesome to work with than H.264. I might think about it in the future as support for it increases.

Implementing Global Hotkey Support in Qt under X11

High-level description of global hotkey support is easy enough:

bool registerHotkey(QKeySequence keySequence) {
  auto key = Qt::Key(keySequence[0] & static_cast<int>(~Qt::KeyboardModifierMask));
  auto modifiers = Qt::KeyboardModifiers(keySequence[0] & static_cast<int>(Qt::KeyboardModifierMask));

  return nativeRegisterHotkey(key, modifiers);
}

Essentially, one has to split the key sequence into a key and modifiers, and then get platform-specific code to do the actual work. For X11 this is a bit more involved and full of traps.

Inevitably, the X11-specific code will have a section converting the key and modifiers into X11-compatible values. The key value additionally has to be converted from a key symbol into an 8-bit key code:

bool Hotkey::nativeRegisterHotkey(Qt::Key key, Qt::KeyboardModifiers modifiers) {
  uint16_t modValue = 0;
  if (modifiers & Qt::AltModifier)     { modValue |= XCB_MOD_MASK_1; }
  if (modifiers & Qt::ControlModifier) { modValue |= XCB_MOD_MASK_CONTROL; }
  if (modifiers & Qt::ShiftModifier)   { modValue |= XCB_MOD_MASK_SHIFT; }

  KeySym keySymbol;
  if (((key >= Qt::Key_A) && (key <= Qt::Key_Z)) || ((key >= Qt::Key_0) && (key <= Qt::Key_9))) {
    keySymbol = key; //Qt key values match X11 keysyms for ASCII letters and digits
  } else if ((key >= Qt::Key_F1) && (key <= Qt::Key_F35)) {
    keySymbol = XK_F1 + (key - Qt::Key_F1);
  } else {
    return false; //unsupported key
  }
  xcb_keycode_t keyValue = XKeysymToKeycode(QX11Info::display(), keySymbol);
  if (keyValue == 0) { return false; } //no key code maps to this symbol

  xcb_connection_t* connection = QX11Info::connection();
  auto cookie = xcb_grab_key_checked(connection, 1,
                static_cast<xcb_window_t>(QX11Info::appRootWindow()),
                modValue, keyValue, XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);
  auto cookieError = xcb_request_check(connection, cookie);
  if (cookieError == nullptr) {
    return true;
  } else {
    free(cookieError);
    return false;
  }
}

With the key code and modifier bitmask ready, a call to xcb_grab_key_checked will actually do the deed, followed by some boilerplate code for error detection.

At last, we can use an event filter to actually capture the key press and emit the activated signal:

bool Hotkey::nativeEventFilter(const QByteArray&, void* message, long*) {
  xcb_generic_event_t* e = static_cast<xcb_generic_event_t*>(message);
  if ((e->response_type & ~0x80) == XCB_KEY_PRESS) {
    emit activated();
    return true;
  }
  return false;
}

Mind you, this is a rather incomplete and simplified example. Full code (supporting both Windows and Linux) is available for download.

To use it, just assign an instance to a long-lived variable, register a key sequence, and hook into the activated signal:

_hotkey = new Hotkey();
_hotkey->registerHotkey(QKeySequence { "Ctrl+Shift+F1" });
connect(_hotkey, SIGNAL(activated()), this, SLOT(^^onActivated()^^));

PS: Windows variant of this code is available here.