BareCam

Illustration

I had an interesting problem. I wanted another monitor but I didn’t want to buy one. For one, I needed it only temporarily. Secondly, I needed it to be mobile, so any properly sized monitor was out of the question.

With that in mind, I took inventory of things I had and came upon an idea. I already had a USB HDMI capture card, and I already had a Surface Go, which surely looks like a mini monitor. If I connect my laptop via HDMI to the capture card and use a webcam application to show the capture output, I essentially have an HDMI-to-monitor connection.

I tried it and it worked wonderfully. Almost.

As my Surface Go runs Ubuntu, I tried multiple Linux webcam applications and none of them worked the way I wanted. I simply could not find any that would work in full screen. In addition to losing screen real estate, I also noticed another issue - the darn cursor was visible. None of the applications would hide the cursor while running.

After going over every webcam application I could find and finding a fault for each, I finally decided to build my own.

This is as simple as webcam software gets. When started, it will display the first webcam while keeping cursor hidden. If you press Space, it will switch to the next webcam. Essentially everything I needed.

And yes, I did complicate it a bit more later. I added support for windowed mode, going as far as to allow alignment to any screen corner (try keys 0-9) so you can have a “head-in-the-corner” effect. I also added a few configurable settings and will probably add a few more with time. However, the idea is to keep it as simple as possible.

The application supports both Linux and Windows. If you’re in need of something like that, give it a try.

Building a Gaming PC

This year’s Christmas project for my son and me became building a PC. He got eager to game a bit more and the laptop he has wasn’t sufficient anymore. So, after more than 20 years of being a laptop-only household (yes, not counting servers), we set out to build a PC.

Our goal was to build something that would give better gaming performance than his 940M-based laptop (ok, that wasn’t hard), allow playing most titles at 1080p, but still not drain the bank. As we went on to component selection, we went back and forth multiple times as we analyzed how things fit together, whether they gave enough bang for the buck, and whether they were available at a decent price.

Literally the first decision was selecting a case. After looking at prices and motherboard support, we decided to go with micro-ATX. Every single manufacturer had a bunch of motherboards in this format, and we found many cases in that size that were both reasonably sized and reasonably priced. After watching a lot of YouTube videos (e.g., ScatterVolt and Gamers Nexus) we decided on the darkFlash DLM21 with a mesh front.

This case had the right size, the right features, and the right looks. Of course, you cannot expect wonders from a case that’s under $100, so there are a lot of things that could be done better. However, it does offer enough airflow and it has ample space for both the current build and future expansions.

Unfortunately, this case doesn’t come with any fans, so those looks ended up costing a bit more than expected. I don’t consider it a deal breaker as most stock case fans are really shitty when it comes to noise, so we would probably have replaced them anyway. We longed for Noctua but decided on Arctic P12/P14 in the end. They’re almost as quiet as Noctua but at a fraction of the cost. Even though the case has enough space for 5 fans, we decided on just one push-pull pair. We can always add another front fan later. While pressure-optimized fans might not be needed in this case, they were a bit cheaper and came in black.

The power supply was an easy decision. The cheapest one from a trusted source with an 80+ rating would do. We opted for EVGA as it was the cheapest 500 W power supply that had a single 12V rail and over-temperature protection. We did toy with the idea of grabbing something more powerful for the sake of upgradability but decided against it. Any future upgrade needing more power will probably also include newer devices and maybe a new motherboard. With Intel’s 12V standard creeping in, spending too much money on a “maybe” seemed unnecessary. Unfortunately, the old-style colored cables (albeit in black mesh) are not the best sight. But we’ll survive. :)

All this was selected before we had to decide whether we wanted to be team Intel or team AMD. And we easily went with AMD. Intel does have offerings in our desired price range, but there was a slightly cheaper or more performant AMD option no matter where we looked. While processor availability proved to be an issue with AMD, we had enough time for that not to matter much.

We spent hours and hours browsing all the budget motherboards only to decide between two ASRock offerings. One was the B450M Pro4, due to its better-than-average VRMs and a really reasonable cost. It works perfectly with Ryzen 2000 CPUs and (after a BIOS upgrade) with Ryzen 3000 ones. Based on recent reviews, the BIOS update is pretty much done for any board currently on sale. No wonder it’s part of many budget builds.

But we decided to splurge a few dollars more and go for the B550M variant of the same board. The major reason was the newer chipset, which should give us a bit more lifetime while keeping a reasonably good (albeit simplistic) VRM. Since most boards in that price range either don’t include wireless or have just a basic one, we appreciated this one including antenna cutouts alongside an M.2 E-key slot. This essentially allows us to use any laptop M.2 WiFi card we choose and upgrade it over the board’s lifetime. Unfortunately, this board doesn’t support Ryzen 2000, so no 2600 here.

We did spend some time looking at other motherboards too - especially Gigabyte and MSI offerings - but they were always more expensive and with fewer features compared to ASRock. Yes, build quality was better for many of them, and ASRock is known for its “optimism” when it comes to board shutdown protection. In the end we decided that going with a slightly higher-end ASRock was better than going with a low-end Gigabyte/MSI for the same price.

We also looked at A520 chipset boards as, on paper, they would have been a good fit for a budget PC. Alas, the time was not right as availability was spotty, features were limited, and prices were comparable with B550 boards. At the same price point, B550 wins every time.

The processor decision was hard as the 3100 was realistically good enough for what we needed. It’s a proven processor that can handle pretty much anything this computer would be used for. We decided to go slightly higher with AMD’s 3300X, counting on two months being enough time to get one. Unfortunately, that wasn’t true, as only scalpers had it in stock. In the end we went with the 3600 because we could get it at MSRP.

It took us a long time to decide what to do about the CPU cooler. On one hand, AMD CPUs come with a more than capable stock cooler. On the other hand, that thing is not the quietest out there. In the end we opted to go with the Arctic Freezer 7 X. Realistically, it’s a minimal upgrade when it comes to temperatures. When it comes to noise levels, things are slightly different. If you value reducing noise on a budget, this one is a great deal. And yes, for a budget build, this is probably the most dubious choice as we could just as easily have gone with the stock cooler.

The choice of memory was annoying at best. AMD is a bit picky about memory, and any incompatibility is usually accompanied by crashes. My Epyc-based file server crashed a few times a day no matter how I adjusted timings, until I finally gave up and bought new memory. Yes, the newer generations have brought some improvements, but selecting memory is still a task that needs to be considered carefully. While on motherboard pages you can see explicitly validated modules, I found these are both way too expensive for what they are and way too limited in selection. Finding an exact match was an exercise in futility. I felt like Goldilocks as my bed was either too soft or too hard. And I wasn’t as fortunate as she was to find one just right.

On micro-ATX boards, memory selection is also restricted by height. While our current CPU cooler left quite a bit of space for memory, changing it in the future might cause clearance problems. To future-proof the system, we self-restricted to modules on the shorter side. Modules also had to come in pairs to make use of dual channel, and having just two modules was slightly favored over populating all four: most budget boards daisy-chain their DIMM slots, so there is a latency increase when all four are occupied.

Finding memory based on the QVL was easier for some brands than for others. Crucial was impossible to correlate while Thermaltake was as easy as it gets. Unfortunately, as often happens with Thermaltake, the modules were “almost” good. One kit was annoyingly flashy and the other was intended for water cooling.

The final selection wasn’t directly from the ASRock QVL, but we came close. The list contains both HyperX Fury (HX432C16FB3/8) and HyperX Predator (HX436C17PB4/8) in their single-module configurations. We narrowed our choice to kit versions of the same. From the motherboard’s perspective, there should be literally no difference, so one might say we followed the rules in spirit. The final selection between the Fury (3200 MHz) and Predator (3600 MHz) modules was a difficult one. So we selected neither. :)

After watching a few videos about terminology and overclocking, we decided on G.Skill Ripjaws V (3600 MHz). Yes, it wasn’t in the list of officially supported memory on ASRock’s pages - but the B550M Pro4 is listed on G.Skill’s page. Yes, that memory is taller than either HyperX (42 mm vs 32 mm) - but our CPU cooler would have enough clearance. Since the price (lower than HyperX Predator) and coolness factor were acceptable, we decided to screw our own rules and go for it.

And no. We didn’t install that memory either, as the Newegg package got lost in the mail - literally. So, after another round of research, we switched to a Crucial Ballistix (BL2K8G36C16U4B) kit. And no, this memory is not in the motherboard’s QVL. However, Crucial does claim it’s compatible, so we decided to give it a try. And its CL16-18-18-38 timings are actually not too bad, making it a nice fit with Ryzen out of the box.

For now, 16 GB will be enough, and there are 2 empty DIMM slots still remaining for further upgrades at the cost of one clock of added latency.

For storage, we toyed with the idea of a PCIe 4.0 NVMe drive as both the selected motherboard and CPU would support it. But considering the price was double compared to more standard 3.0 drives, we decided to cheap out. We selected a reasonably decent NVMe SSD that was the cheapest at the time of purchase - more specifically, the Samsung 970 Evo in 500 GB capacity. Since we bought it after the 970 Evo Plus was already out, we managed to grab it at a reasonable price. As a secondary drive we went with spinning rust - an old 2 TB SpinPoint I had lying around. And yes, I didn’t include this in the price. :)

The graphics card selection essentially came down to the Radeon RX 580 and the GeForce GTX 1650 Super. Both are close in performance and price. However, at the time of buying, prices for the RX 580 were going into the stratosphere while the GTX 1650 Super remained in the sub-$200 range. Out of all the GeForce cards we ended up going with MSI’s Gaming X, as it was one of the quietest graphics cards available under load and it even turns its fans off completely when not gaming.

In the M.2 E-key slot we placed Intel’s 9260NGW. We selected this card purely because we already owned it and would thus save the $20 we would need to pay for a new AX200. We also had a Killer 1535 card, but the Intel one has Bluetooth 5.1. To complete the wireless setup, we had to buy a pigtail reaching the M.2 slot. A length of 30 cm was sufficient to reach the rear panel, and from there a cable goes to the external antenna.

All in all, from the moment of decision to having the computer running, it took us 2 months. A solid week was spent selecting the desired components alongside the first and second runners-up. And then we just waited for components to drop in price or, in the case of Ryzen CPUs, to become available. Not everything was bought at an optimal cost. For example, we overpaid for the CPU just because our desired model wasn’t available. A graphics card with the same performance profile was available at a lower cost if we went with a slightly louder fan configuration. Furthermore, we could have saved another $65 or so by skipping the CPU cooler, downgrading the motherboard (to the B450M), and downgrading memory speed (to DDR4-3200). But we didn’t. :)

All said and done, it was a fun project, a decent machine to boot, and it will hopefully serve well into the future.

Here is the full table of components used:

Component            Selected                                Price
Case                 darkFlash DLM21 MESH                    $60
Front case fan       Arctic P14 PWM                          $15
Rear case fan        Arctic P12 PWM                          $10
Power supply         EVGA 500 W1                             $40
CPU                  AMD Ryzen 5 3600                        $200
CPU fan              Arctic Freezer 7 X                      $25
Motherboard          ASRock B550M Pro4                       $85
Memory               Crucial Ballistix DDR4-3600 (2x8 GB)    $75
GPU                  MSI GeForce GTX 1650 Super Gaming X     $190
Storage (1)          Samsung 970 Evo 500 GB                  $60
Storage (2)          Seagate Spinpoint M9T 2 TB              $0
Wireless             Intel 9260NGW                           $0
Wireless (cable)     NGFF pigtail cable                      $5
Wireless (antenna)   NGFF antenna                            $15
TOTAL                                                        $780

Storing Settings on PIC16F1454

As I was playing with the PIC16F1454, I came to the point where some configurability would be in order. You know how it goes with PIC microcontrollers - just write it to EEPROM and you’re good. Unless there is no EEPROM, as is the case with the PIC16F1454.

Never mind; I’ve had this issue before, so I can just copy my own code (ab)using program memory for the same purpose. Guess what? There are some issues with this too.

First of all, my old code was for a different microcontroller. While the principle is the same, it’s not an exact match. The second reason was changes to XC8. My old code doesn’t properly compile on XC8 2.00 - they changed how a variable’s location is defined. The third (and last) reason is the high-endurance flash the PIC16F1454 supports. Unlike normal flash that’s rated for 10K writes, the last 128 words of this PIC’s program memory are rated for 100K. While 10K is nothing to frown at, 100K is much nicer - especially if I end up changing data a lot.

The second and third reasons share the same fix. The memory definition looks like this:

#define _SETTINGS_FLASH_RAW { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                              0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
#define _SETTINGS_FLASH_LOCATION 0x1FE0

const uint8_t _SETTINGS_PROGRAM[] __at(_SETTINGS_FLASH_LOCATION) = _SETTINGS_FLASH_RAW;

This will use the last 32 bytes, starting at 0x1FE0. This address is conveniently 32 bytes before the end, falling comfortably within “the last 128 bytes” high-endurance category. Now, if you need more memory, just make the array bigger and move its start address lower. Just remember to do so in 32-byte increments, as this is the block size for the flash erase operation. If you don’t reserve all that memory, you might end up erasing your own code - and we wouldn’t want that. I personally never had a need for more than 32 bytes (i.e., one flash row), but your use case might differ.

All settings can be held in a structure. Here I have two settings - Address and DisplayHeight:

typedef struct {
    uint8_t Address;
    uint8_t DisplayHeight;
} SettingsRecord;

SettingsRecord Settings;

To read the settings, we just need to copy our reserved data (accessible via the _SETTINGS_PROGRAM array) into the structure:

uint8_t* settingsPtr = (uint8_t*)&Settings;
for (uint8_t i = 0; i < sizeof(Settings); i++) {
    *settingsPtr = _SETTINGS_PROGRAM[i];
    settingsPtr++;
}

Writing is a two-step process. It starts by erasing the WHOLE 32-word block. Following that, we get to write each byte separately:

bool hadInterruptsEnabled = (INTCONbits.GIE != 0);
INTCONbits.GIE = 0;
PMCON1bits.WREN = 1;  // enable writes

uint16_t address = _SETTINGS_FLASH_LOCATION;
uint8_t* settingsPtr = (uint8_t*)&Settings;

// erase
PMADR = address;         // set location
PMCON1bits.CFGS = 0;     // program space
PMCON1bits.FREE = 1;     // erase
PMCON2 = 0x55;           // unlock flash
PMCON2 = 0xAA;           // unlock flash
PMCON1bits.WR = 1;       // begin erase
asm("NOP"); asm("NOP");  // forced

// write
for (uint8_t i = 1; i <= sizeof(Settings); i++) {
    unsigned latched = (i == sizeof(Settings)) ? 0 : 1;
    PMADR = address;            // set location
    PMDATH = 0x3F;              // same as when erased
    PMDATL = *settingsPtr;      // load data
    PMCON1bits.CFGS = 0;        // program space
    PMCON1bits.LWLO = latched;  // load write latches
    PMCON2 = 0x55;              // unlock flash
    PMCON2 = 0xAA;              // unlock flash
    PMCON1bits.WR = 1;          // begin write
    asm("NOP"); asm("NOP");     // forced
    address++;                  // move write address
    settingsPtr++;              // move data pointer
}

PMCON1bits.WREN = 0;  // disable writes
if (hadInterruptsEnabled) { INTCONbits.GIE = 1; }

The first and last steps deal with interrupts. During the write, interrupts must be disabled. The code disables them before writing and re-enables them afterward if needed.

Erasing is easy enough. Just set the FREE bit in the PMCON1 register, follow with the magic incantation (0x55, 0xAA, WR=1), and wait a millisecond or two. Do note that the NOP instructions are mandatory due to how self-writing of program memory works. It’s one of the rare instances where NOP actually serves a purpose in C code.

Writing data follows a similar process. Load all the bytes you wish to write, using the PMADR and PMDAT registers to set address and data. All bytes except the last will have the LWLO bit set and will just load data into the latches. The last byte must have LWLO cleared, signaling we’re done, which triggers the actual write. After a millisecond or two, the bytes are written.

Two things are slightly curious here. The first is setting PMDATH to 0x3F. This is the same value as an erased cell’s, which just means we’re not changing it. Note that the upper byte is not part of the high-endurance flash and holds only a 6-bit value (words are 14 bits on this PIC), so we really shouldn’t use it. The second strange decision is starting the loop from 1 instead of the more conventional 0. This is so we can determine whether we’re at the last byte without subtracting one.

In any case, this is all you need to make your program memory work as storage for your settings.


PS: The procedure is the same on PIC16F1454, PIC16F1455, PIC16F1459, and probably quite a few more.

PPS: Whole code is available in Git repository.

PPPS: There is a quite useful application note from Microchip (AN1673A) dealing with high-endurance flash. Their code takes a similar but slightly different approach. If you don’t like this code, maybe theirs will tickle your fancy.

Duplicating Non-Reentrant Functions

Illustration

As I was playing with the PIC16F1454’s USB, I kept getting warnings like this: Microchip/usb_device.c:277:: advisory: (1510) non-reentrant function "_USBDeviceInit" appears in multiple call graphs and has been duplicated by the compiler

This was due to the function being called both from the main function and from the interrupt handler. Since the function could be interrupted at any point in time, this was definitely a problem, and the compiler did find a valid solution. However, it was a bit suboptimal in my case.

Since I had this issue with only a few functions, I decided to make use of the Hybrid option in XC8’s compiler stack options. With this option the warnings were gone. Surprisingly, it also made my code smaller: the Hybrid stack compiled to 7181 words while the standard Compiled stack came in at 7398.

If you have reentrancy happening in just a few functions, Hybrid option might be good for you.*


* Some restrictions apply. Please contact your fellow developers if your compile lasts longer than 4 hours.

Case-insensitive ZFS

Don’t.

Well, this was a short one. :)

From the very start of its existence, ZFS has supported case-insensitive datasets. In theory, if you share a disk with a Windows machine, this is what you should use. But reality is a bit more complicated. It’s not that the setting doesn’t work. It’s more a case of it working too well.

Realistically, you’re going to be running ZFS on some *nix machine, and if you access it from Windows, it’ll be via Samba. As *nix APIs generally expect case sensitivity, Samba dynamically converts what it shares from the case-sensitive world into the case-insensitive one. If the file system itself is case-insensitive, Samba gets confused, and you will suddenly have issues renaming files that differ only in case.

For example, you won’t be able to rename test.txt to Test.txt. Before doing the rename, Samba checks whether the new name already exists - a step needed on a case-sensitive file system to avoid overwriting an unrelated file. On a case-insensitive dataset this check misfires: ZFS reports that Test.txt exists because it matches test.txt itself. Samba incorrectly concludes the destination is already taken and refuses the rename. Yep, any rename differing only in case will fail.

Now, this could be fixable. If Samba recognized the file system as case-insensitive, it could skip that check. But what if you have a case-sensitive file system mounted within a case-insensitive dataset? Or vice versa? Should Samba check on every access, or cache the results? For something that doesn’t happen on *nix often, this would mean either a flaky implementation or a big performance hit.

Therefore, Samba assumes the file system is case-sensitive. In 99% of cases, this is true. Unless you want to chase ghosts, just give it what it wants.