Filesystem and virtualization decisions for homeserver build
from thecoffeehobbit@sopuli.xyz to selfhosted@lemmy.world on 03 Apr 14:57
https://sopuli.xyz/post/24848457

Hi Lemmy! First post, apologies if it’s not coherent :)

I have a physical home server for hosting some essential personal cloud services like smart home, phone backups, file sharing, kanban, and so on. I’m looking to reinstall the platform as there are some shortcomings in the first build. I loosely followed the FUTO wiki, so you may recognise some of the patterns from there.

For running this thing I have a mini PC with 3 disks: a 240GB disk and 2x 960GB SSDs. This is at capacity, though the chassis and motherboard would in theory fit a fourth disk with some creativity, which I’d like to make happen at some point. I also have a Raspberry Pi in the house and a separate OPNsense box for firewall/DNS blocking/VPN etc. that works fine as-is.

In the current setup, I have Ubuntu Server on the 240GB disk with ext4, which hosts the services in a few VMs with QEMU and takes daily snapshots of the qcow2 images onto the 960GB SSDs, which are set up as a mirrored zfs pool with frequent automatic snapshots. I copy the zpool contents periodically to an external disk for offsite backup. There’s also a simple samba share set up on the pool, which I thought I’d use for syncthing and file sharing somehow. This is basically where I’m stopping to think about whether what I’m doing makes sense.
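For reference, the flow is roughly along these lines (pool names and paths here are illustrative, not the exact setup):

```bash
# daily: copy the qcow2 images from the ext4 root onto the mirrored pool
rsync -a /var/lib/libvirt/images/ /tank/vm-images/

# frequent: recursive automatic snapshot of the pool
zfs snapshot -r tank@auto-$(date +%Y-%m-%d_%H%M)

# periodically: replicate the pool to the external disk for offsite backup
zfs send -R tank@auto-2025-04-03_1200 | zfs receive -F external/tank
```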

Problems I have with this:

Some additional design pointers:

My current thoughts revolve around the following - comments most welcome.

I’m not afraid of some complex initial setup. I’m a full stack web developer, though, not a professional sysadmin, so advice is welcome. I don’t want to buy tons of new shit, but I’m not severely budget limited either. I’m the only admin for this system but not the only user (family setting).

What’s the 2025 way of doing this? I’m most of all looking for inspiration as to the “why”; I can figure out ways to get it done if I see the benefits.

tldr: how best to have reliable, super-frequent snapshots of a home server’s data, with encryption, preferably making use of zfs.

#selfhosted


nesc@lemmy.cafe on 03 Apr 16:30

Don’t fret about SSD lifespan: unless you are planning on writing terabytes a day, they will outlive your setup. I wouldn’t personally use zfs for this unless you have a lot of memory just lying around.

thecoffeehobbit@sopuli.xyz on 03 Apr 16:40

Fair about the SSD life. How would you go about achieving the frequent backups without zfs? I wouldn’t want to implement it separately for every app I use, though I’m open to it if this doesn’t work out.

I’ll easily buy more memory if needed, the box now has 8GB and isn’t struggling in any way.

nesc@lemmy.cafe on 03 Apr 17:30

I wouldn’t use fs snapshots as backups, especially a filesystem as poorly supported on Linux as zfs. I would go with external qcow2 disk snapshots; they can be pretty easily automated.
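With libvirt, for example, a disk-only external snapshot can be taken while the VM keeps running and merged back after the base image has been copied; a rough sketch (domain name, disk target and paths are placeholders):

```bash
# create an external overlay; the original image becomes a quiet backing file
virsh snapshot-create-as myvm backup-snap \
  --disk-only --atomic --no-metadata \
  --diskspec vda,file=/var/lib/libvirt/images/myvm-overlay.qcow2

# copy the base image somewhere safe, then merge the overlay back into it
cp /var/lib/libvirt/images/myvm.qcow2 /backups/
virsh blockcommit myvm vda --active --pivot --wait
```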

thecoffeehobbit@sopuli.xyz on 03 Apr 17:46

This is what I’m doing currently, but it’s not really feasible to have the services shut down hourly for snapshots. This is indeed why I started looking towards filesystem-level snapshotting. Obviously I will have other types of backups as well; I’m simply looking to have on-the-fly immutable snapshot capability here somehow.

nesc@lemmy.cafe on 03 Apr 19:47

You do not need to shut down services to make snapshots, why would you?

thecoffeehobbit@lemmy.world on 03 Apr 20:25

Uhh, from what I have gathered from self-hosting so far, doing that is not trivial, as you’d basically need to flush the RAM contents to disk first. I’m starting to realize, though, that the same holds equally for filesystem-level snapshotting. What I’m really after is making my data live on separate pass-through storage that has all the fancy filesystem-level stuff, so I can just relax about the VM backups.
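If the data ends up on a zfs dataset on the host, I imagine something like the following could give a consistent snapshot without shutting the VM down (names are placeholders, and it assumes qemu-guest-agent is running inside the guest):

```bash
virsh domfsfreeze myvm                              # guest agent syncs and freezes guest filesystems
zfs snapshot tank/vm-data@$(date +%Y-%m-%d_%H%M)    # near-instant host-side snapshot
virsh domfsthaw myvm                                # thaw the guest again
```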

nesc@lemmy.cafe on 04 Apr 06:08

You are overthinking it; without flushing RAM everything works fine. The OS inside the VM would just boot as normal and that’s it.

InvertedParallax@lemm.ee on 03 Apr 17:10

ZFS, hands down. It doesn’t even begin to hurt the SSDs and it’s basically the best choice; just try not to fill the volumes completely or it starts thrashing like crazy.

ZFS has encryption, but LUKS is fine too.
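Native encryption is per-dataset; a rough sketch (pool/dataset names are placeholders):

```bash
# create an encrypted dataset with a passphrase-based key
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
  -o keylocation=prompt tank/secure

# after a reboot, load the key before the dataset can be mounted
zfs load-key tank/secure && zfs mount tank/secure
```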

I’ve run raidz2 for well over a decade, never had data loss that wasn’t entirely my fault, and I recovered from that almost immediately from a backed-up snapshot.

thecoffeehobbit@sopuli.xyz on 03 Apr 17:17

Thanks! Can I ask what your setup is like? ZFS on bare metal? Do you have VMs?

InvertedParallax@lemm.ee on 03 Apr 19:03

ZFS on Debian on bare metal with an NFS server. Edit: and it hosts the worker VMs.

VLAN for services with a routed subnet.

SR-IOV ConnectX-4 with 1 primary VM running FreeBSD and basically all my major services in their own jails. Won’t go into details, but it has like 20 jails and runs almost everything. (Had full VNET jails for a while, which was really cool, but performance wasn’t great.)

1 VM for external nginx and BIND on Debian, on an isolated subnet/VLAN and DMZ for exposed services.

1 VM for mailinabox on the DMZ subnet/VLAN.

1 Debian VM on the services VLAN/net for apps that don’t play well with FreeBSD, mostly Docker containers. I do not like this VM; it’s basically unclean and mostly isolated.

A few other VMs for stuff.

It’s a Dell R730 with 2x 2697 (or 2698? 20c/40t each) and 512GB. Edit: v4, so Broadwell.

12x 16TB HGST H530s with 2 NVMe drives and 2 SATA SSDs; somewhere in there is a SLOG and an L2ARC.

Can’t figure out how to fit a decent GPU in there, so currently it’s living on my dual Rome workstation. This system is due for an upgrade; thinking about swapping the workstation for a much lighter one and pushing the work to the server, while moving the storage to a dedicated system, but not there yet.

Love FreeBSD though. Don’t use it as my daily driver; tried it a bit and it worked, but there was just enough trouble that it didn’t stick. FreeBSD has moved on and so have I, so it’s worth a shot again.

Decent I/O, but nothing to write home about; I think it saturates the 10G, but only just. I have gear for full 100G (I do a LOT of chip startups, and worked at a major networking chip firm for a while), but it takes a lot more power, and I have PG&E, so I can’t justify it till I can seriously saturate it.

Also, I’m in the process of moving to Europe; built a weak network here and linked it via WireGuard, but shit is expensive here and I’m not sure how to finish the move just yet, so I’m basically 50/50, including time at work in the Valley.

thecoffeehobbit@sopuli.xyz on 03 Apr 18:37

Ok, so wrapping my head around this, what I think I need to be clear about is the separation between applications and data. Applications get the nightly VM snapshot way of backing up, and data will get the frequent zfs snapshots (and other backups). Kinda what I tried to do to begin with, so I will look more into how to do this separation for the applications I intend to use.
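Concretely, I’m picturing separate datasets with different snapshot cadences, something like this (names and schedules are just a sketch):

```bash
zfs create tank/vm-images   # qcow2 images, backed up nightly
zfs create tank/data        # syncthing/samba data, snapshotted frequently

# e.g. from root's crontab (cron needs the % signs escaped):
# */15 * * * *  zfs snapshot tank/data@auto-$(date +\%Y\%m\%d-\%H\%M)
# 30 3 * * *    zfs snapshot tank/vm-images@nightly-$(date +\%Y\%m\%d)
```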

Still unsure if samba is the way to go for linking it together on the same physical machine.

Should I just run syncthing on the bare metal host…? Will sleep on it.