Type-C Power Delivery as Passive PoE Source

This is part 1 of 3. PCBs were sponsored by PCBWay.


A long time ago, I decided upon passive 48 V PoE for my network. The choice was made because both my hAP ac and Audience access points support up to 57 V, so 48 V was the closest lower standard value with reasonably priced power supplies. However, this decision came back to bite me in the ass as my new hAP ax³ supports only up to 28 V. Placing it into my network would lead to a lot of sparking fun. What I needed was a lower voltage.

Well, the next logical step was switching to 24 V. However, buying two 24 V power supplies (one to use, and one for backup) capable of 60 W was actually not that cheap. Since I had a bunch of leftover power supplies, I started to wonder if I could use something I already had.

Yes, I could have used one of the vendor-specific laptop power supplies, but that would involve cutting cables as their connectors were anything but standard. The only standard connector I had was type-C. And then it hit me: why not simply use power delivery to get 20 V out of it?

Realistically, probably the easiest in-place replacement was one of the many prebuilt cables out there. And I think that is the best way to go if you just need power. But for my home setup, I connect PoE through a ResetBox device, which allows for an easy power reset. Wouldn’t it be really nice if I updated ResetBox to use type-C directly?

And no, ResetBox is not just a simple switch, although it offers the essential functionality of one. It’s a bit smarter than that, as it contains a microcontroller that controls a relay. For example, it will ignore short, accidental presses and allow a reset only when the button is held for 3 seconds. Ideal for controlling things you don’t want to reset by accident, such as your home wireless network.

For this update, it would be ideal to switch the input to type-C, allow for on-board voltage selection (5 V, 9 V, 12 V, 15 V, and 20 V), and lastly, have a nice way to indicate the selected voltage.

With that in mind, the first task became the selection of a PD chip. It is possible to handle PD negotiation on your own, but I quickly decided against it, as getting it to function correctly with various PD and non-PD type-C devices would take ages. No, I wanted something standard.

A quick search pointed toward the IP2721 as a reasonably easy to (hand)solder device used in many existing triggers. However, its package was a bit on the large side, and its voltage output was quite limited since it only supported 5 V, 15 V, and 20 V operation. Definitely not the full set of voltages I wanted. And yes, you could get lower voltages by using the IP2721_MAX12 variant, but then you lose the higher voltage options.

The second device I found was the HUSB238, and I immediately liked its SOT33-6L variant. Unfortunately, this easy-to-solder variant was nowhere to be found for purchase. Even worse, the chip didn’t support full PD 3.0 5 A operation. Yes, probably not a deal-breaker for this particular scenario, as 60 W was plenty, but still not ideal for a new design.

After quite a lot of additional searching, I stumbled across the CH224K. Not only did it come in a (seemingly) easy-to-solder ESSOP-10 package, but it also supported easy voltage control. Based on a few examples I found, it seemed possible to tease the full 100 W out of it. The datasheet also mentions an even more appealing CH221K in the SOT23-6L package, but I had difficulty finding it on the market. On the other hand, the ESSOP-10 variant was readily available all over AliExpress.

With the chip selected, the second order of business became figuring out how to connect all of this. With only 6 I/O pins available, the existing PIC12F1501 was a bit crowded. In addition to the 1 button input, 1 LED output, and 1 relay output it already handled, I would add 3 outputs for voltage control (also used to drive the status LEDs). A total of 1 input and 5 outputs. Not comfortable, but just enough.

Originally, ResetBox had the option to handle AC input. However, with type-C, this is no longer necessary, so we can replace the relay with a small P-MOSFET and completely bypass the diode bridge. We don’t control this MOSFET directly, but rather via a small NPN transistor. This approach serves two purposes: first, it prevents our PIC from seeing 20 V at any given time, and second, it allows the PG signal to override the output (i.e., no PG, no voltage).

The power supply for the main PIC is managed by an LDO. While this approach does waste some energy, it’s not too significant as the PIC won’t need more than roughly 20 mA. For such a small current, it simply wasn’t worth opting for a switched-mode regulator. I considered using a 5 V LDO despite the input being at 5 V, but chose to make the smarter decision and go with 3.3 V instead. And yes, I did test the 5 V LDO in the same circuit and it worked, albeit with a slight voltage drop.

Illustration

For PCB creation, I chose PCBWay, and they generously provided me with free PCBs. As is common with my projects, I don’t really push the boundary of what’s possible, but there were a few non-standard things about this PCB. The first was its thickness, as I needed 0.8 mm due to the type-C connector. The second was the really small holes for that same connector. And yes, this is quite well within the specification, but I have actually had issues with it when it came to other PCB manufacturers.

But more about that next time, when I go over all the things I’ve botched during the PCB design.


Latest design is available on GitHub

Native ZFS Encryption Speed (Ubuntu 23.04)

There is a newer version of this post

Well, Ubuntu 23.04 is here, and it’s time for a new round of ZFS encryption testing. A new version, minor ZFS updates, and slightly confusing numbers at some points.

First, Ubuntu 23.04 brings us to ZFS 2.1.9 on kernel 6.2. It’s a minor change on the ZFS side (up from 2.1.5 in Ubuntu 22.10), but the kernel bump is bigger than what we’ve had in a while (up from kernel 5.19).
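
If you want to double-check what your own system is running, both versions are easy to confirm from the command line:

zfs version
uname -r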

The good news is that almost nothing has changed compared to 22.10. For both AES-GCM and AES-XTS (on LUKS), the numbers are close enough to what they were before that the difference might just be a statistical error. If that’s what you’re using (and you should be), you can stop here.
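
And if you’re not sure which algorithm a given dataset uses, the encryption property will tell you (the dataset name here is just an example):

zfs get encryption rpool/data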

Illustration

However, if you’re using AES-CCM, things are a bit confusing, at least on my test system. For writes, all is good. But when it comes to reads, gremlins seem to be hiding somewhere in the background.

Every few reads, the speed would simply start dropping. After a few slower measurements, it would come back to where it was. I repeated the test multiple times, and it was always the reads that dropped while the writes stayed stable.

While that might not be a reason to skip the upgrade if you’re using AES-CCM, you might want to perform a few tests of your own. Mind you, you should be switching to AES-GCM anyhow.
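
Keep in mind that encryption can only be set when a dataset is created, so switching means making a new dataset and moving the data over. A minimal sketch, with the dataset name as a placeholder:

zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data-new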

As always, the raw data I gathered during my tests is available.

Adding Tools to .NET Container

When Microsoft provides you with a container image, they provide everything you need to run a .NET application. And no more. But what if we want to add our own tools?

Well, there’s nothing preventing you from using the standard Docker tooling. For example, enriching the default Alpine Linux image just requires creating a Dockerfile with the following content:

FROM mcr.microsoft.com/dotnet/runtime:7.0-alpine
RUN apk add iputils traceroute curl netcat-openbsd

Essentially we tell Docker to use Microsoft’s image as our baseline and to install a few packages. To “execute” those commands, simply use the file to build an image:

docker build --tag dotnet-runtime-7.0-alpine-withtools .

To see if all works as intended, we can simply test it with Docker.

docker run --rm -it dotnet-runtime-7.0-alpine-withtools sh

Once happy, just tag and push it. In this case, I’m adding it to the local registry.

docker tag dotnet-runtime-7.0-alpine-withtools:latest localhost:5000/dotnet-runtime:7.0-alpine-withtools
docker push localhost:5000/dotnet-runtime:7.0-alpine-withtools

In our .NET project, we just need to change the ContainerBaseImage value and publish it as usual:

<ContainerBaseImage>localhost:5000/dotnet-runtime:7.0-alpine-withtools</ContainerBaseImage>
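
Publishing then works the same as for any other container-enabled project. Assuming a project file named Test.csproj (the name here is just a placeholder), something like:

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj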

PS: If you don’t have a local registry running, don’t forget to start it first:

docker run -d -p 5000:5000 --name registry registry:2

Using Alpine Linux Docker Image for .NET 7.0

With .NET 7, publishing a Docker image became trivial. Really, all that’s needed is to add a few entries into the .csproj file.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

With those in place, and assuming we have Docker working, we can then “publish” the image.

dotnet publish -c Release --no-self-contained \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And there’s nothing wrong with this. However, what if you want an image that’s smaller than the 270 MB this method offers? Well, there’s always Alpine Linux. And yes, Microsoft offers an image for Alpine too.

So I changed my project values.

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:7.0-alpine</ContainerBaseImage>
<ContainerRuntimeIdentifier>linux-x64</ContainerRuntimeIdentifier>
<ContainerImageName>test</ContainerImageName>
<ContainerImageTags>0.0.1</ContainerImageTags>

And that led me to a dreadful Error/CrashLoopBackOff state. My application simply wouldn’t run, and since the container crashed, it was really annoying to troubleshoot anything. But those familiar with .NET and Alpine Linux might see the issue. While almost any other Linux is happy with the linux-x64 moniker, our Alpine needs the special linux-musl-x64 value due to using a different libc implementation. And no, you cannot simply put that in .csproj, as you’ll get an error stating that The RuntimeIdentifier 'linux-musl-x64' is not supported by dotnet/runtime:7.0-alpine.

You need to add it to the publish command line as an option:

dotnet publish -c Release --no-self-contained -r linux-musl-x64 \
    /t:PublishContainer -p:PublishProfile=DefaultContainer \
    Test.csproj

And now, our application works on Alpine without any issues, and with considerable size savings.
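
If you want to see the actual numbers, comparing image sizes is enough (the test name comes from the ContainerImageName value above):

docker images test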

Quickly Patching a Failing Ansible Setup

In my network, I use Ansible to configure both servers and clients. And yes, that includes Windows clients too. And it all worked flawlessly for a while. Out of nowhere, one Wednesday, my wife’s Surface Pro started failing its Ansible setup steps with Error when collecting bios facts.

For example:

[WARNING]: Error when collecting bios facts: New-Object : Exception calling ".ctor" with "0" argument(s):
"String was not recognized as a valid DateTime."
At line:2 char:21
+ ...         $bios = New-Object -TypeName Ansible.Windows.Setup.SMBIOSInfo
+                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodInvocationException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand
    at <ScriptBlock>, <No file>: line 2

And yes, the full list of exceptions was a bit longer, but they all had one thing in common. They were pointing toward SMBIOSInfo.

The first order of business was to find what the heck was being executed on my wife’s Windows machine. It took some process snooping to figure out that setup.ps1 was the culprit. Interestingly, this was despite ansible_shell_type being set to cmd. :)
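
If you want to reproduce the failure on demand, running the fact-gathering module ad hoc against the affected host should do it (the host name here is a placeholder):

ansible surface -m ansible.windows.setup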

On my file system, I found that file in two places. However, you’ll notice that if you delete the one in the .ansible directory, it will be recreated from the one under /usr/lib.

  • /usr/lib/python3/dist-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
  • /root/.ansible/collections/ansible_collections/ansible/windows/plugins/modules/setup.ps1

Finally, I was ready to check the script for errors, and it didn’t take me long to find the one causing all the kerfuffle I was experiencing.

The issue was with the following code:

string dateFormat = date.Length == 10 ? "MM/dd/yyyy" : "MM/dd/yy";
DateTime rawDateTime = DateTime.ParseExact(date, dateFormat, null);
return DateTime.SpecifyKind(rawDateTime, DateTimeKind.Utc);

That code boldly assumed the BIOS date uses a slash (/) as a separator. And that is true most of the time, but my wife’s laptop reported its date as 05.07.2014. Yep, those are dots you’re seeing. Even worse, the date was probably in DD.MM.YYYY format, albeit that’s a bit tricky to prove conclusively. In any case, ParseExact was throwing the exception.

My first reaction was to simply return null from that function and not even bother parsing the BIOS date, as I didn’t use it. But then I opted to just prevent the exception, as maybe that information would come in handy one day. So I wrapped the parsing in a TryParseExact call.

DateTime rawDateTime;
if (DateTime.TryParseExact(date, dateFormat, null,
    System.Globalization.DateTimeStyles.None, out rawDateTime)) {
    return DateTime.SpecifyKind(rawDateTime, DateTimeKind.Utc);
} else {
    return null;
}

This code retains the status quo. If it finds the date in either MM/dd/yyyy or MM/dd/yy format, it will parse it correctly. Any other format will simply return null, which is handled elsewhere in the code.

With this change, my wife’s laptop came back into the fold, and we lived happily ever after. The end.


PS: Yes, I have opened a pull request for the issue.