All things related to XigmaNAS (previously NAS4Free)

Microsoft Accounts on NAS4Free Samba


With Windows 10 you can get certain advantages if you opt to use your Microsoft login. However, if you have something Unixoid (e.g. NAS4Free) as your file server, you will quickly notice you need to remind it of your user name. And no, the solution is not as simple as renaming your user to your e-mail address since the at character (@) cannot be used.

What needs to be done instead is to create a simple mapping file (I named mine samba.map):

anita = anita@medo64.com
anita = anita
josip = josip@medo64.com
josip = josip

In that file you map the user name received over the network to the Unix user. Notice that multiple different names can be mapped to the same user.

The last step is getting Samba to actually use that file. How that is configured varies with your system. For NAS4Free, you want to visit Services -> CIFS/SMB -> Settings and, under Auxiliary parameters, add the path to the mapping file:

username map = /mnt/Config/samba.map

Now you can use your Windows 10 user e-mail to your heart’s desire.
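If you want to verify the mapping without touching a Windows box, smbclient from any Unix machine will do. Both the host and the share name below are just examples - substitute your own:

smbclient //nas/Family -U 'anita@medo64.com'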

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Retrieving NAS4Free Build

For my NAS’ e-mail report I wanted to display the NAS4Free version. While uname does give quite a few options, none of them returned all the details I could see on the Status page. And this is where open source shines - source code to the rescue!

A quick investigation showed that all the necessary details are in [util.inc](https://github.com/nas4free/nas4free/blob/master/etc/inc/util.inc). There, in the get_product_name function, I found the first hint - a set of prd.* files in the /etc directory. Those files get built with the OS and have all the details needed.

To keep a long story short, this is what I’ve decided upon:

VERSION_TEXT="`cat /etc/prd.name` `cat /etc/prd.version` `cat /etc/prd.version.name` (revision `cat /etc/prd.revision`)"
echo "$VERSION_TEXT"

When run on my current system, I get “NAS4Free 10.2.0.2 - Prester (revision 2118)”. Exactly what I needed. :)
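The variable can then be dropped into the rest of the report script like any other; for example, piping a status line into mail(1) might look something like this (the recipient and subject are placeholders, not part of the original setup):

echo "Status report from $VERSION_TEXT" | mail -s "NAS status report" admin@example.com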

Accessing NAS4Free Web Interface With SSH


I was fortunate enough to colocate my backup machine at a friend’s place. Unfortunately, I wasn’t smart enough to actually set everything up. :) The only thing I had working was SSH.

Yes, my first thought was to use SSH forwarding. Configuring a local SSH tunnel with any free source port, and the remote machine’s IP address with 443 (https) or 80 (http) as the destination, would allow accessing the remote web interface. Just access 127.0.0.1:localport and the web interface would appear as if it was accessed through the local network.

For my example I configured local port 62443 toward destination 192.168.0.1:443, and accessing 127.0.0.1:62443 should have been enough to show the NAS4Free web interface. However, that didn’t work as smartass me didn’t enable port forwarding on the remote box. Duh!
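For reference, the equivalent tunnel from a command-line ssh client is a single -L option (the remote host name here is just a placeholder):

ssh -L 62443:192.168.0.1:443 root@remote.example.com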

To get out of this hole, the first step is to allow editing of config.xml, where all settings are saved and which is mounted read-only by default:

umount /cf
mount -o rw /cf

After that, use vi to edit /cf/conf/config.xml and add the tcpforwarding configuration parameter:

<sshd>
    <port>22</port>
    <pubkeyauthentication/>
    <permitrootlogin/>
    <enable/>
    <private-key/>
    <subsystem/>
    <tcpforwarding/>
</sshd>

Unsurprisingly, that alone doesn’t really help as the configuration isn’t applied automatically. The easiest way to apply it is a restart:

init 6

Once the machine booted, I could access the web interface via the magic of SSH port forwarding.

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Removing \"Custom Script Entries\" From NAS4Free Status E-mail

One annoyance I have with NAS4Free is how every custom report has a prefix - even when you fully customize it:

Custom script entries:
----------------------
All local pools are healthy
• Nenya: not reachable
• Narya: not reachable
...

I find the “Custom script entries:” header followed by dashes completely unnecessary and, if you read the message on a small screen (e.g. a Pebble watch), it just takes space away from the more important information.

The culprit can be found in /etc/inc/report.inc, where the following line creates that header:

$statusreport->AddArticle(new StatusReportArticleCmd("Custom script entries","{$config['statusreport']['report_scriptname']}"));

Good old sed can help us with removing this:

sed -i -e 's^Custom script entries^^g' /etc/inc/report.inc

If you have an embedded installation, this will work only until restart. To make it “permanent”, just add it to System -> Advanced -> Command scripts.

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Creating a ZFS Backup Machine

With my main ZFS machine completed, it is now time to set up a remote backup. Unlike the main server with two disks and an additional SSD, this one will have just a lonely 2 TB disk inside. The main desire is to have a cheap backup machine that we’ll hopefully never use for recovery.

The OS of choice is NAS4Free and I decided to install it directly on the hard drive, without a swap partition. Installing on the data drive is a bit controversial but it does simplify setup quite a bit if you move the drive from machine to machine. And the swap partition is pretty much unnecessary if you have more than 2 GB of RAM. Remember, we are just going to sync to this machine - nothing else.

After NAS4Free is installed (option 4: Install embedded OS without swap), the disk will contain a single boot partition with the rest of the space flopping in the breeze. What we want is to add a simple partition on a 4K boundary for our data:

gpart add -t freebsd -b 1655136 -a 4k ada0
 ada0s2 added

The partition start location was selected to be the first one on a 4 KB boundary after the 800 MB boot partition. We cannot rely on gpart’s automatic placement as it would select the next available location, and that would destroy performance on 4K drives (pretty much any spinning drive these days). And we cannot use freebsd-zfs for the partition type since we are playing with MBR partitions and not GPT.
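If you want to double-check the alignment, the start sector just needs to be divisible by 8 (eight 512-byte sectors per 4 KB block):

echo $((1655136 % 8))
 0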

To make the disk easier to reach, we label that partition:

glabel label -v disk0 ada0s2

And we of course encrypt it:

geli init -e AES-XTS -l 128 -s 4096 /dev/label/disk0
geli attach /dev/label/disk0

The last step is to actually create our backup pool:

zpool create -O readonly=on -O canmount=off -O compression=on -O atime=off -O utf8only=on -O normalization=formD -O casesensitivity=sensitive -O recordsize=32K -m none Backup-Data label/disk0.eli
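As a quick sanity check, the pool-level defaults can simply be read back afterwards:

zfs get readonly,compression,recordsize,atime Backup-Data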

To back up the data we can then use zfs send/receive for the initial sync:

DATASET="Data/Install"
zfs snapshot ${DATASET}@Backup_Next
zfs send -p ${DATASET}@Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs rename ${DATASET}@Backup_Next ${DATASET}@Backup_Prev

And a similar one for incremental syncs from then on (the old Backup_Prev snapshot has to be destroyed before the fresh one can be renamed into its place):

DATASET="Data/Install"
zfs snapshot ${DATASET}@Backup_Next
zfs send -p -i ${DATASET}@Backup_Prev ${DATASET}@Backup_Next | ssh $REMOTE_HOST zfs receive -du Backup-Data
zfs destroy ${DATASET}@Backup_Prev
zfs rename ${DATASET}@Backup_Next ${DATASET}@Backup_Prev

There are a lot more details to think about, so I will share the script I am using - adjust at will. A rough sketch of the idea follows below.
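As an illustration only - not the actual script - the per-dataset logic could look something like this, with the dataset list and the remote host as placeholders:

#!/bin/sh
# Sketch: snapshot each dataset, send it to the backup box, then rotate the snapshot names.
set -e

REMOTE_HOST="backup.example.com"

for DATASET in "Data/Family" "Data/Install" "Data/Media"; do
    zfs snapshot "${DATASET}@Backup_Next"
    if zfs list -t snapshot "${DATASET}@Backup_Prev" > /dev/null 2>&1; then
        # A previous backup exists - send only the difference
        zfs send -p -i "${DATASET}@Backup_Prev" "${DATASET}@Backup_Next" \
            | ssh "$REMOTE_HOST" zfs receive -du Backup-Data
        zfs destroy "${DATASET}@Backup_Prev"
    else
        # First run - send the whole dataset
        zfs send -p "${DATASET}@Backup_Next" \
            | ssh "$REMOTE_HOST" zfs receive -du Backup-Data
    fi
    zfs rename "${DATASET}@Backup_Next" "${DATASET}@Backup_Prev"
done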


[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

Adding Cache to ZFS Pool

To increase the performance of a ZFS pool I decided to use a read cache in the form of an SSD partition. As always with ZFS, a certain amount of micromanagement is needed for optimal benefits.

The usual recommendation is to have up to 10 GB of cache for each 1 GB of available RAM, since ZFS always keeps the headers for cached data in RAM. As my machine had 8 GB in total, not all of which is actually available for caching purposes, this pretty much restricted me to a cache size somewhere in the 60 GB range.

To keep things sane, I decided on 48 GB myself. As sizes go, this is quite an unusual one and I doubt you can even get such an SSD. Not that it mattered, as I already had a leftover 120 GB SSD lying around.

Since I already had NAS4Free installed on it, I checked the partition status

gpart status
  Name  Status  Components
  da1s1      OK  da1
 ada1s1      OK  ada1
 ada1s2      OK  ada1
 ada1s3      OK  ada1
 ada1s1a     OK  ada1s1
 ada1s2b     OK  ada1s2
 ada1s3a     OK  ada1s3

and deleted the last partition:

gpart delete -i 3 ada1
 ada1s3 deleted

Then we have to create the partition and label it (labeling is optional):

gpart add -t freebsd -s 48G ada1
 ada1s3 added

glabel label -v cache ada1s3

As I had an encrypted data pool, it only made sense to encrypt the cache too. For this it is very important to check the physical sector size:

camcontrol identify ada1 | grep "sector size"
 sector size           logical 512, physical 512, offset 0

Whichever physical sector size you see there is the one you should give to geli, as otherwise you will get a permanent ZFS error status once you add the cache device. It won’t hurt the pool but it will hide any real errors going on, so it is better to avoid it. In my case, the physical sector size was 512 bytes:

geli init -e AES-XTS -l 128 -s 512 /dev/label/cache
geli attach /dev/label/cache

Last step is adding encrypted cache to our pool:

zpool add Data cache label/cache.eli

All that’s left is to enjoy the speed. :)
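To see whether the cache actually gets used over time, zpool iostat can break statistics down per device (the trailing 5 is just a refresh interval in seconds):

zpool iostat -v Data 5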


[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]

My Encrypted ZFS Setup

For my NAS4Free-based NAS I wanted to use full-disk encrypted ZFS in a mirror configuration across one SATA and one USB drive. While it might not be optimal for performance, ZFS does support this scenario.

On booting NAS4Free I discovered my disk devices were all over the place. To identify which one is which, I used diskinfo:

 diskinfo -v ada0
 ada0
         512             # sectorsize
         2000398934016   # mediasize in bytes (1.8T)
         3907029168      # mediasize in sectors
         4096            # stripesize
         0               # stripeoffset
         3876021         # Cylinders according to firmware.
         16              # Heads according to firmware.
         63              # Sectors according to firmware.
         S34RJ9AG212718  # Disk ident.

Once I went through all the drives (USB drives are named da*), I found my data disks at ada0 and da2. To avoid any confusion in the future and/or potential re-enumeration if I add another drive, I decided to give them names. The SATA disk would be known as disk0 and the USB one as disk1:

glabel label -v disk0 ada0
 Metadata value stored on /dev/ada0.
 Done.

glabel label -v disk1 da2
 Metadata value stored on /dev/da2.
 Done.

Do notice that you lose the last drive sector to the device name. In my opinion, a small price to pay.

On top of the labels we need to create the encrypted devices. Be careful to use the labels and not the whole disk:

geli init -e AES-XTS -l 128 -s 4096 /dev/label/disk0
geli init -e AES-XTS -l 128 -s 4096 /dev/label/disk1

As initialization doesn’t make the devices readily available, both have to be manually attached:

geli attach /dev/label/disk0
geli attach /dev/label/disk1

With all that dealt with, it was time to create the ZFS pool. Again, be careful to use the inner devices (ending in .eli) instead of the outer ones:

zpool create -f -O compression=on -O atime=off -O utf8only=on -O normalization=formD -O casesensitivity=sensitive -m none Data mirror label/disk{0,1}.eli

While both the SATA and the USB disk are advertised as the same size, they do differ a bit. Due to this we need to use -f to force ZFS pool creation (otherwise we would get a “mirror contains devices of different sizes” error). Do not worry about the data, as the maximum available space will be restricted to the smaller device.

I decided that the pool is going to have compression turned on by default, there will be no access time recording, it will use UTF-8, it will be case sensitive (yes, I know…) and it won’t be “mounted”.

Lastly I created a few logical datasets for my data. Yes, you could use a single dataset, but quotas make handling multiple ones worth it:

zfs create -o mountpoint=/mnt/Data/Family -o quota=768G Data/Family
zfs create -o mountpoint=/mnt/Data/Install -o quota=256G Data/Install
zfs create -o mountpoint=/mnt/Data/Media -o quota=512G Data/Media
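To check the quotas and usage at a glance later on, zfs list can be told to show just the interesting columns:

zfs list -o name,quota,used,avail -r Data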

As I am way too lazy to log in after every reboot, I also saved my password into the password.zfs file on a TmpUsb self-erasable USB drive. A single addition to System -> Advanced -> Command scripts as a postinit step was needed to do all the necessary initialization:

/etc/rc.d/zfs onestop ; mkdir /tmp/TmpUsb ; mount_msdosfs /dev/da1s1 /tmp/TmpUsb ; geli attach -j /tmp/TmpUsb/password.zfs /dev/label/disk0 ; geli attach -j /tmp/TmpUsb/password.zfs /dev/label/disk1 ; umount -f /tmp/TmpUsb/ ; rmdir /tmp/TmpUsb ; /etc/rc.d/zfs onestart

All this long command does is mount the FAT12 drive containing the password (since it was recognized as da1, its first partition was at da1s1) and use the file found there for attaching the encrypted devices. A small restart of the ZFS subsystem is all that is necessary for the pool to reappear.

As I wanted my TmpUsb drive to be readable under Windows, it is not labeled, and thus a manual script correction might be needed if further USB devices are added.

However, for now, I had my NAS box data storage fully up and running.


NUC NAS

Every few years I update my home NAS server and try to do the best within my restrictions.

The first condition is that it has to use the hardware I already have. Yes, I might buy something new so that I free up existing HW for its bright NAS future, but I don’t want to buy anything specifically for the NAS. While I am sure there are prebuilt systems that are much better than what I am planning, I am not building a NAS only for my data. I am also building it to learn and have fun.

The second condition is that the data has to be reasonably safe. That doesn’t exclude a single-drive NAS setup - I’ve been running one for the last two years. However, together with a backup process, it has to allow for a full hardware loss while keeping data loss at a minimum. It also has to cover remote backup - even if that is just an HDD I keep carrying with me. And I do not have a lot of data on my NAS - currently all the things I hold dear are under 1 TB in size.

It also has to be physically small enough that I could take it on a plane within my clothes (good padding is important). As I am currently in the US on a non-permanent visa, that scenario is as likely as any hard drive failure. The cheap bastard in me definitely doesn’t want to pay hundreds of dollars for shipping if I can just snuggle the NAS in my luggage.

The last condition is that data has to be encrypted at rest. While the NAS is at home I might make some things easy on myself (e.g., auto-decryption at startup) but it has to be possible to keep data encrypted during transport. I am not saying TSA might be stealing stuff from luggage, I just want to be cautious.

All these things taken into consideration, I decided to use my old Intel NUC D34010WYKH as a new data storage. It is a two-core (4 logical processors) i3 device running at 1.7 GHz accompanied by 8 GB RAM and enough room for one SSD (mSATA) and one small 2.5" HDD (SATA). This nicely covered using old hardware (this was my ex-HTPC) and a small size.

For the OS I decided upon XigmaNAS as it supports ZFS and can be installed on a USB drive, thus leaving the other drives fully available for data. I did consider FreeNAS but NAS4Free just felt better. With ZFS I also had the option of using FreeBSD or Solaris directly, but I decided not to deal with OS updates myself. And yes, I know Linux supports both ZFS and its deranged brother BTRFS, but there are too many issues with getting either of them to work reliably.

As you could deduce, ZFS is going to be in charge of all data, with encryption taken care of by GELI. I did lose a bit of comfort as encryption makes web management a bit more difficult but, once the scripts are in place, you don’t need the GUI anyhow. To allow for quick disabling of auto-decryption I would use TmpUsb drives with auto-delete. If the server gets stolen, this ensures nobody can get my data.

As I wanted to have a mirror and the NUC only has room for one 2.5" 2 TB drive, I decided to have an external 2 TB USB 3.0 drive as its partner. To make backups work I would sync daily snapshots to another local machine (manual dual boot) and to a second one at a remote site. In addition to this, I would also do a weekly backup onto an external USB drive.

Let me be the first to say I know this setup is far from ideal, with two obvious (and big) faults. The first one is not having ECC RAM, as this diminishes the data security ZFS has to offer. It is not a catastrophe but not what you might want for your NAS either. The second is the need for 2.5" drives due to the NUC’s size. Those drives are more expensive, offer less capacity, and are slower than their bigger 3.5" brethren. This is made even worse by having an external USB drive as part of the pool, which makes the performance worse than it should be. And let’s not even go thinking about accidental unplugging…

Regardless of all its limitations, I believe this setup is going to work well for my purpose. If everything else fails it will at least give me endless hours of scripting fun needed to make all this work.


PuTTY Doesn't Work With NAS4Free

Every few years there comes a time to refresh my NAS hardware and the choice usually falls upon the latest NAS4Free release. As I do a fair amount of customizing, SSH access is mandatory. With NAS4Free 10 I stumbled upon trouble: my trusty PuTTY could not connect and there was no obvious reason why.

The only potential culprit I could find was NAS4Free’s use of DSA keys, but PuTTY has supported those for ages, so that was obviously not the full story. And I could connect from Linux, so it was really PuTTY doing funny stuff and not a misconfiguration. As I wanted my project to move forward, I decided to find a replacement for PuTTY. And that search pretty much boiled my choices down to two.

The first candidate was MobaXterm. Not only does it replace PuTTY but it also offers much better session management and a reasonable tabbed interface. However, it has a $70 price tag attached. Yes, there is a free version too, but its restrictions make it unsuitable for anybody dealing with SSH regularly. Call me a cheap bastard but I don’t want to give that kind of money for an SSH client. All the other functionality MobaXterm has is a nice touch and might make it worth that money, but I didn’t really have any use for it.

Another program worth considering was mRemoteNG. While this one also worked well as an SSH client toward my NAS4Free machine, and it does come as a free download, I found its interface simply too annoying to deal with. Yes, I would use it in a pinch, but most of the time it was making me think MobaXterm might be worth it.

And then I went to PuTTY’s page and saw there was a new release available (0.65). Guess what? That release worked without a hitch. Yes, a sane person would check for a new version before spending time testing replacements, but I got so used to PuTTY being developed in a lazy fashion that I honestly didn’t expect a new version to be there.

So, after a long search I came back to PuTTY and its abysmal session management. And I couldn’t be happier about it.

NAS4Free in the Role of Syslog Server


In my network there are multiple *nix devices, the most notable of them being my file server (NAS4Free) and my router (Asus RT-AC56U). The nice thing about their common ancestry is that both support syslog logging. Since I already have proper reporting in place for my file server, I started thinking about getting my router’s messages there too.

Well, as luck would have it, there is already a syslog server present within NAS4Free. The only reason it doesn’t work is that it is explicitly disabled in /etc/rc.d/syslogd. The following line is the culprit:

syslogd_flags="-8 -ss"

In a full NAS4Free installation it is simple to edit that file. In embedded, some “trickery” is needed. In System -> Advanced -> Command scripts I added a new PostInit entry:

sed -i -e 's^syslogd_flags=".*"^syslogd_flags="-8 -a 192.168.1.0/24:*"^g' /etc/rc.d/syslogd ; /etc/rc.d/syslogd restart

The purpose of this rather long command (ok, two commands) is to do a string replace of the default flags with ones allowing the whole 192.168.1.x range to use the box as a syslog server (you could allow just a single host too) and to restart syslogd afterward.

There are additional steps that could be taken, e.g. adding the router’s host name into /etc/hosts or getting syslog to save its messages into a separate log file (configurable in /etc/syslog.conf). However, as far as my needs went, I was perfectly fine with this.
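Just as an illustration of that last idea, diverting one host’s messages into its own file takes a host filter in /etc/syslog.conf along these lines (the host name is a placeholder and assumes a matching /etc/hosts entry; the log file has to be created with touch first since syslogd won’t create it):

+router
*.*                                     /var/log/router.log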

[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]