Sendmail via GMail on Ubuntu Server

I finally decided to migrate my WordPress onto Ubuntu 20.04 only to discover the e-mail I had configured via Smtpmail stopped working. That meant my WordPress and various PHP and command-line tools couldn’t use Google’s e-mail relay to deliver messages to my inbox. It was time to set up my server to use Google’s SMTP again. And this time I decided to go with Postfix.

Installing postfix usually involves an interactive dialog asking a few questions. However, you can use debconf-set-selections to preload the answers. Make sure to be the root user (sudo su -).

debconf-set-selections <<< "postfix postfix/main_mailer_type string 'Internet Site'"
debconf-set-selections <<< "postfix postfix/mailname string ''"
apt-get install --assume-yes postfix libsasl2-modules

Once installed, we need to provide credentials in /etc/postfix/sasl/sasl_passwd.

unset HISTFILE
echo "[smtp.gmail.com]:587 ^^relay@gmail.com^^:^^password^^" > /etc/postfix/sasl/sasl_passwd
postmap /etc/postfix/sasl/sasl_passwd
chmod 0600 /etc/postfix/sasl/sasl_passwd /etc/postfix/sasl/sasl_passwd.db

Finally, we need to update /etc/postfix/main.cf with the authentication options.

sed -i 's/relayhost = /relayhost = [smtp.gmail.com]:587/' /etc/postfix/main.cf
cat <<EOF >> /etc/postfix/main.cf
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
smtp_sasl_security_options = noanonymous
EOF
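
Depending on your Postfix defaults, you may also need to enable TLS explicitly, since Gmail requires STARTTLS on port 587 before it offers authentication. A minimal sketch, assuming the usual Ubuntu CA bundle location:

cat <<EOF >> /etc/postfix/main.cf
smtp_tls_security_level = encrypt
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
EOF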

And that’s pretty much it. The only remaining thing is to restart postfix:

systemctl restart postfix

To test if it works, just use sendmail.

echo "Subject: Test via sendmail" | sendmail -v ^^youremail@example.com^^

[2022-06-16: As of 2022-06-01, it’s not possible to use your Google e-mail and password directly. However, you can still follow this guide and use an App Password instead.]

The Cost of CyberCard

After publishing the post about the CyberCard project, I got a question from a friend. Wasn’t it cheaper to buy Jeff Mayes’ interface driver than to build my own?

The answer is yes - at $30 that board is cheap. But that’s not all. Even the original RMCARD205 at $150 is cheaper than what I spent.

First of all, there were four revisions. The first revision was a bit too large. Manually filing the PCB did the trick for troubleshooting, but I wanted revision B to have the correct width. While the width was now correct, I accidentally made the board a bit too short. And yes, this brought me to the third revision. For that revision I also changed the MCP2221A to a SOIC package. It wasn’t strictly necessary, but I figured having all three ICs in SOIC looked nicer than having different package styles on the same board. The last revision, D, was just a bit more fiddling with the design without any major change. Yes, there were some other changes, but this was the gist of it.

Considering each revision was around $25 in PCB cost (OSHPark) and I spent about $50 in parts for them, the project was more expensive than the official RMCARD205 even without accounting for my time. Since the first version actually worked, you could view all the time and money spent afterward as wasted.

But I disagree. From the moment I started working on it, I knew it would end up more expensive than the original part. Even for the first board I spent more money on the PCB and parts than what Jeff’s adapter would cost with shipping. I found this board to be the perfect project: it would result in something useful, it was simple enough that I could work on it whenever I had some spare time, cheap enough that it wouldn’t break the bank, and an excellent chance to set up the PIC16F1454 as a USB device.

I had been eyeing the PIC16F1454 for a few years (I still have a sample from Microchip from when it was originally announced) but I never got around to it. When I first started with the board design, I noticed the MCP2221A USB-to-serial bridge was compatible with the 16F1454’s footprint. If I were a betting man, I would have said that the MCP2221A was nothing other than a PIC16F1454 with custom code. This project gave me a reason to get into this interesting PIC and do some USB programming.

In the end, I didn’t really pay for the final board - no matter how well it works. I paid good money to keep myself entertained and to fill my free time. And it was worth every penny.

SignTool Failing with 0x80096005

After creating a new setup package, I noticed my certificate signing wasn’t working. I kept getting an error while running the same signing command I had always used.

sign -s "My" -sha1 $CERTIFICATE_THUMBPRINT -tr ^^http://timestamp.comodoca.com/rfc3161^^ -v App.exe
 SignTool Error: An unexpected internal error has occurred.
 Error information: "Error: SignerSign() failed." (-2146869243/0x80096005)

A bit of troubleshooting later, I narrowed my problem down to the timestamping server, as removing the -tr option made it work as usual (albeit without the timestamping portion). There were some certificate changes for the timestamp server, but I don’t believe this was the issue as the new certificate was fine, and I remember their server occasionally not working for days even before this.

And then I remembered what I did the last time Comodo’s timestamp server crapped out. Quite often you can simply use another, more reliable, timestamp server. In my case, I went with timestamp.digicert.com.

sign -s "My" -sha1 $CERTIFICATE_THUMBPRINT -tr ^^http://timestamp.digicert.com^^ -v App.exe
 Successfully signed: App.exe

PS: This same error might also happen due to the server refusing SHA-1.
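
If that’s the cause, explicitly selecting SHA-256 for both the file digest and the timestamp digest usually helps; a sketch along the lines of the command above:

signtool sign -s "My" -sha1 $CERTIFICATE_THUMBPRINT -fd sha256 -tr ^^http://timestamp.digicert.com^^ -td sha256 -v App.exe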

Connecting to CyberPower OR500LCDRM1U UPS Serial Port

To keep my file server and networking equipment running a bit longer in the case of a power outage, I have them connected to a CyberPower OR500LCDRM1U UPS. It’s a nice enough 1U UPS but with a major issue - no USB connection.

Well, technically there is a USB connection, but it doesn’t work under anything other than Windows. If you want it working under Unix, the only option is the RMCARD205, an optional network module costing upward of $150 - essentially doubling the price of the UPS.

The card slot, however, exposes the UPS’ internal serial connections, and it’s those connections Jeff Mayes took advantage of for a simple serial interface. If the only thing you want is a serial interface, you might as well go with his interface driver as the price is really reasonable.

However, his boards require you to either have a serial port or a USB-to-serial cable. What I wanted was a direct USB connection. Since there was nothing like that out there, I decided to roll my own.

Since I had a UPS locally, it was easy enough to get the physical dimensions. Unfortunately, just measuring wasn’t sufficient as the slot narrows as you go deeper, so my first assumption of 3.1x1.7 inches was a bit off. Due to that, and a bottom connector that was a bit shallower than expected, the final board dimensions came out closer to 71x43 mm. It took a bit of probing to find the 4 signals I needed; they were grouped together, with GND and RX on the bottom while TX and 12 V were on the top.

Connecting the appropriate serial signals to a UART-to-USB converter like the MCP2221A was the minimum required, but I felt a bit queasy about connecting the UPS directly to my computer. Therefore, I decided to isolate the UPS interface from the computer. For this purpose I used the Si8621 digital isolator offering 2,500 V of isolation, which was probably overkill but allowed me to sleep better.
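
Once assembled, the card looks like an ordinary serial port to the computer, so any terminal program works for a quick sanity check; a sketch, assuming the MCP2221A enumerates as /dev/ttyACM0 and the usual 2400 baud CyberPower serial protocol:

picocom --baud 2400 /dev/ttyACM0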

The last physical piece needed was a cover for the card, to avoid having a large opening in the back of my rack. While the risk of anything getting inside is reasonably low, making a 3D-printed cover was easy enough. It took a few tries to get the cover design right in TinkerCAD, but it beat having a gaping hole.

If you are interested in making one for yourself, check the project page for all the files.

Testing Native ZFS Encryption Speed

[2020-11-02: There is a newer version of this post]

As I wrote about installing ZFS with native encryption on Ubuntu 20.04, it got me thinking… Should I abandon my LUKS setup and switch? Well, I guess some performance testing was in order.

For this purpose I decided to go with Ubuntu Server (to minimize the impact a desktop environment might have) inside a 2-CPU virtual machine with 24 GB of RAM. Two CPUs should be enough to show any multithreading performance difference, while the 24 GB of RAM is there to give a home to our ZFS disks. I didn’t want to depend on disk speed and the variation it brings. For testing purposes I only care about the relative speed difference, and using RAM instead of real disks gives more repeatable results.

For the OS I used Ubuntu Server with the ZFS packages, carved out a chunk of memory for the RAM disks, and limited the ZFS ARC to 1 GiB.

sudo -i << EOF
    apt update
    apt dist-upgrade -y
    apt install -y zfsutils-linux
    grep "/ramdisk" /etc/fstab || echo "tmpfs  /ramdisk  tmpfs  rw,size=20G  0  0" \
        | sudo tee -a /etc/fstab
    grep "zfs_arc_max" /etc/modprobe.d/zfs.conf || echo "options zfs zfs_arc_max=1073741824" \
        | sudo tee /etc/modprobe.d/zfs.conf
    reboot
EOF
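
After the reboot, it’s worth confirming the ARC limit actually took effect:

cat /sys/module/zfs/parameters/zfs_arc_max
 1073741824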

With the system in a pristine state, I created the data used for testing (2 GiB of random data).

dd if=/dev/urandom of=/ramdisk/data.bin bs=1M count=2048

The data disks are just a bunch of zeros (3 GB each), and the (RAID-Z2) ZFS pool has the usual options, but with compression turned off and sync set to always in order to minimize their impact on the results.

for I in {1..6}; do dd if=/dev/zero of=/ramdisk/disk$I.bin bs=1MB count=3000; done
echo "12345678" | zpool create -o ashift=12 -O normalization=formD \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off \
    -O encryption=^^aes-256-gcm^^ -O keylocation=prompt -O keyformat=passphrase \
    -O compression=off -O sync=always -O mountpoint=/zfs TestPool raidz2 \
    /ramdisk/disk1.bin /ramdisk/disk2.bin /ramdisk/disk3.bin \
    /ramdisk/disk4.bin /ramdisk/disk5.bin /ramdisk/disk6.bin
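
Before any measurements, a quick check that the pool options stuck:

zpool status TestPool
zfs get encryption,compression,sync TestPool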

To get the write speed, I simply copied the data file multiple times and took the time reported by dd. To get a single figure, I removed the highest and the lowest value and averaged the rest.

sudo -i << EOF
    sudo dd if=/ramdisk/data.bin of=/zfs/data1.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data2.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data3.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data4.bin bs=1M
    sudo dd if=/ramdisk/data.bin of=/zfs/data5.bin bs=1M
EOF
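
The trimming itself is trivial to script; a sketch with made-up speeds (in MB/s, as reported by dd) that drops the lowest and the highest value and averages the rest:

printf '%s\n' 710 716 722 705 730 | sort -n | sed '1d;$d' \
    | awk '{ sum += $1 } END { print sum / NR, "MB/s" }'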

For the reads, I took the files that were written and dumped them to /dev/null. The averaging procedure was the same as for the writes.

sudo -i << EOF
    sudo dd if=/zfs/data1.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data2.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data3.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data4.bin of=/dev/null bs=1M
    sudo dd if=/zfs/data5.bin of=/dev/null bs=1M
EOF

With all that completed, I had my results.

I was quite surprised how close the different key sizes were in performance. If your processor supports the AES instruction set, there is no reason not to go with 256 bits. Only when you have an older processor without encryption support does the 128-bit crypto make sense. There was a 15% difference in read speeds in favor of the GCM mode, so I would probably go with that as my cipher of choice.

However, once I added measurements without the encryption and for the LUKS-based crypto, I was shocked. I expected things to go faster without the encryption, but I didn’t expect such a huge difference. Also surprising was seeing the LUKS encryption have triple the performance of the native one.

Now, this test is not completely fair. In real life, with a more powerful machine and on proper disks, you won’t see such a huge difference. The sync=always setting is a performance killer and results in more encryption calls than you would normally see. However, you will still see some difference, and good old LUKS seems like the winner here. It’s faster out of the box, it uses less CPU, and it encrypts all the data (not leaving metadata in the plain as ZFS does).

I will also admit that the comparison leans toward the apples-to-oranges kind. The reason to use ZFS’ native encryption is not its performance but the extra benefits it brings. Part of those extra cycles goes into authenticating each written block with a strong MAC. Leaving metadata unencrypted does leak a bit of (meta)data, but it also enables send/receive without either side ever being decrypted - ideal for a backup box in an untrusted environment. You can back up the data without ever needing to enter the password on the remote side. Lastly, let’s not forget that giving ZFS direct access to the physical drives allows it to shine when it comes to fault detection and handling. You will not get anything similar if it’s interfacing through a virtual device.
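
For example, a raw send transfers the blocks still encrypted, so the receiving side never needs the key; a sketch with hypothetical dataset and host names:

zfs snapshot TestPool/data@backup
zfs send --raw TestPool/data@backup | ssh ^^backuphost^^ zfs receive BackupPool/data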

Personally, I will continue using LUKS-based full-disk encryption for my desktop machines. It’s just much faster. And I probably won’t touch my servers for now either. But I have a feeling that really soon I might give the native ZFS encryption a spin.

[2020-11-01: Newer updates of 0.8.3 (0.8.3-1ubuntu12.4) have greatly improved GCM speed. With those optimizations, GCM mode is now faster than LUKS. For more details check the 20.10 post.]


PS: You can take a peek at the raw data if you’re so inclined.