(I originally posted this over on cohost, but cohost will be shutting down, so I'm reposting it here.)

I recently picked myself up a TerraMaster F2-221 (https://www.terra-master.com/global/products/homesoho-nas/f2-221.html) to replace my old NAS, and I figured I'd write up my notes on what I did for that. It's based a bit on information from Joel Duncan's TrueNAS on TerraMaster F2-221 article but, uh, less TrueNAS and more Debian, I guess.
Rationale
I had a couple goals here:
- Physically smaller-- My existing NAS is six drives in a gigantic 4U case, because slab of 2008 thought that the thing to optimize for was "plenty of room to stick a bunch of drives in" but slab of 2022 wanted something that would not be a pain to relocate across state lines (a goal for 2023).
- Increased capacity-- the old NAS had 2x600GB drives in RAID1 and 4x2TB drives in RAID1, which is pretty trivial to exceed capacity-wise with a single drive these days.
- Improved power usage-- the old NAS wasn't terribly power-efficient (Athlon 4850e with 6 HDDs idles at about 200W), whereas the new one seems to idle at about 10W, so that ought to be slightly friendlier on my utility bill.
Installation
Joel's article covers the basics (and has pictures), but just to note them here:
- The F2-221 has 2GiB of memory soldered on, but has a DDR3L SODIMM slot. I wasn't able to find the same memory module, but the TED3L8G1600C11 is compatible and brings the total system memory up to 10GiB.
- There is an internal USB port for the boot volume. It comes populated with a 120MB USB flash drive; I replaced this with a 64GB Sandisk Cruzer Fit.
At that point, you've got yourself a tiny little x64 box; installing Debian _almost_ goes like you'd expect, except I needed to use an installer image containing firmware for the sake of the Realtek NICs. :(
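(If you do end up with an installed system that's missing the firmware, it can also be added after the fact; the package name below is real, but this assumes you've enabled the non-free component in your apt sources -- non-free-firmware on Debian 12 and later.)

```shell
# Realtek NIC firmware blobs ship in Debian's non-free / non-free-firmware component
apt install firmware-realtek
```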
(This being a piece of x64 hardware was a requirement for me-- Terramaster does have some ARM64 units in their lineup, but past experience is that generic support for Linux distros on ARM is still non-ideal and I wanted something that didn't feel like my dayjob.)
Disk layout
I installed Debian onto the 64GB USB flash, with the requisite vfat EFI system partition and then a single large ext4 partition for the main filesystem.
In the two drive bays, I installed a pair of Seagate IronWolf 12TB drives, and set them up as a btrfs `raid1`, on which I created two subvolumes, `home` (home directories) and `share` (a big subvolume for shared media to be shared out over SMB). My `/etc/fstab` looks like so:
# / was on /dev/sdd2 during installation
UUID=f8c56311-488e-442d-920c-a021929dc8d3 / ext4 errors=remount-ro,lazytime,noatime,commit=120 0 1
# /boot/efi was on /dev/sdd1 during installation
UUID=EF2C-FE9E /boot/efi vfat umask=0077 0 1
tmpfs /tmp tmpfs defaults 0 0
UUID=bf4e94fa-1a8f-42bc-aff8-630c1ad18891 /home btrfs defaults,noatime,compress=zstd,commit=120,subvol=home 0 2
UUID=bf4e94fa-1a8f-42bc-aff8-630c1ad18891 /srv/share btrfs defaults,noatime,compress=zstd,commit=120,subvol=share,bsdgroups 0 2
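For reference, a layout like the above can be created with something along these lines (the device names `/dev/sda` and `/dev/sdb` are assumptions; adjust for your hardware):

```shell
# Mirror both data and metadata across the two drives
mkfs.btrfs -L nas -d raid1 -m raid1 /dev/sda /dev/sdb

# Mount the top-level volume temporarily to create the subvolumes
mount /dev/sda /mnt
btrfs subvolume create /mnt/home
btrfs subvolume create /mnt/share
umount /mnt

# From here on, each subvolume is mounted individually via subvol= in /etc/fstab
```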
I upped the `commit` interval to reduce the impact of writes (especially on that USB flash!), and I use the `bsdgroups` option to help keep any content that users add to `/srv/share` in the `share` usergroup.
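The effect of `bsdgroups` (a.k.a. `grpid`) is BSD-style group inheritance: new files take the group of their parent directory rather than the creating user's primary group. Roughly, assuming `/srv/share` is group-owned by `share`:

```shell
# Give the shared directory to the 'share' group
chgrp share /srv/share

# With bsdgroups in effect, a new file picks up the directory's group,
# not the creating user's primary group
touch /srv/share/example.mkv
stat -c '%G' /srv/share/example.mkv
```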
I'm not sure what other subvolumes to have, but I might think up more someday.
Network setup
The first Ethernet port is serviced by a PCIe-attached RTL8125. Despite the F2-221 specifications saying that it's a 1GbE port, this seems to be 2.5GbE-capable? I do not have any other 2.5GbE network equipment to test this though.
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
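Whether the NIC at least *claims* 2.5GbE can be checked without any 2.5GbE gear on the other end, since `ethtool` reports the advertised link modes (the interface name here is the pre-rename `enp2s0`):

```shell
# List the link modes the RTL8125 advertises; 2500baseT/Full should appear
ethtool enp2s0 | grep -A5 'Supported link modes'
```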
The second Ethernet port, curiously, is internally USB-attached and is an RTL8153:
Bus 002 Device 002: ID 0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter
Unfortunately, in systemd's fine estimation, this means that we have an `enp2s0` and an `enx6cbfb502543c`, and that lack of consistency annoyed me, so I set up link files to rename them to `eno1` and `eno2`, which is how they would be named as on-board ports.
# File: /etc/systemd/network/10-eno1.link
# The first network port is PCIe-attached, but is not indicated as
# an onboard port in the BIOS.
[Match]
Property=ID_VENDOR_ID=0x10ec ID_MODEL_ID=0x8125 ID_BUS=pci
[Link]
Description=Onboard Port 1
Name=eno1
# File: /etc/systemd/network/10-eno2.link
# There is a permanently-affixed but USB-attached NIC for the second port.
[Match]
Property=ID_VENDOR_ID=0bda ID_MODEL_ID=8153 ID_BUS=usb
[Link]
Description=Onboard Port 2
Name=eno2
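Link files are only applied when udev enumerates the device, so the simplest way to pick up the new names is a reboot; you can preview what udev would do first with the standard tooling (using the pre-rename interface name):

```shell
# Dry-run udev's link-file processing for the first port; no changes are made
udevadm test-builtin net_setup_link /sys/class/net/enp2s0

# After a reboot, confirm the renames took effect
ip link show eno1
ip link show eno2
```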
Extra Samba configuration for the sake of MacOS
I have, amongst the hardware in my house, a Mac Mini that is still serving as an HTPC. Did you know that MacOS in 2022 _still_ displays non-Mac machines (which it deduces to be Windows) as CRT monitors with a Win9x-era BSOD? Despite CRTs falling out of fashion in the mid-2000s, and the last operating system with that style of bluescreen going completely out of support in 2006?

Come on, Apple. It's not funny anymore, it's just petty. And it's made even more ridiculous by this icon being used for Linux machines too! Sigh.
Anyway. We can fix that by advertising a couple services over mDNS with Avahi:
<!-- File: /etc/avahi/services/devinfo.service -->
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=Xserve1,1</txt-record>
</service>
</service-group>
<!-- File: /etc/avahi/services/smb.service -->
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
</service-group>
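Avahi normally notices new service files on its own, but a restart doesn't hurt, and the advertisements can then be checked from the NAS itself with `avahi-browse` (from the `avahi-utils` package on Debian):

```shell
# Reload the service definitions
systemctl restart avahi-daemon

# Resolve both records once and exit (-r resolve, -t terminate after the dump)
avahi-browse -rt _device-info._tcp
avahi-browse -rt _smb._tcp
```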
Unfortunately we can't come up with our own custom icons here (the icons are specified for each model code on the MacOS side, in `/System/Library/CoreServices/CoreTypes.bundle/Contents/Info.plist`) but `Xserve1,1` is an okay fit. But hey, now we show up as not-a-CRT monitor!
LEDs
Terramaster supposedly has an LED driver for Intel J33-based platforms which is pretty gross (it just pokes PCI configuration space to find the GPIO controller, then creates its own device files instead of using `/sys/class/leds`), and while there have been some slight modifications by others (such as one that is DKMS-enabled), none of them really do anything to alleviate the grossness. However, as far as I can tell, on my F2-221 the HDD LEDs are not actually connected via these GPIOs-- neither of these drivers affects the visual state of the LEDs, nor does exporting the GPIOs to userspace via `/sys/class/gpio/export` (I believe GPIO23 through GPIO32 would be 457 through 466) and twiddling them that way.
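Where those 457-466 numbers come from: under the legacy sysfs GPIO interface, a line's global number is the controller's base (readable from `/sys/class/gpio/gpiochipN/base`) plus the line's offset on that controller. Assuming a base of 434 for this platform's GPIO controller:

```shell
# Global sysfs GPIO number = gpiochip base + line offset
echo $((434 + 23))   # GPIO23 -> 457
echo $((434 + 32))   # GPIO32 -> 466
```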
The LEDs do turn on or off based on the presence or absence of a drive in a bay, so they're hooked up somewhere in hardware, I think? This unfortunately probably means that I cannot make them turn red on drive failure or any other interesting condition.

(Terramaster even making a driver for GPIO-connected LEDs in the first place also boggles me-- those can be specified in ACPI!)