from theorangeninja@sopuli.xyz to selfhosted@lemmy.world on 06 Aug 07:38
https://sopuli.xyz/post/31590751
Hello everyone,
I am about to renovate my selfhosting setup (software-wise), and that got me thinking about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far from being a sysadmin, I don’t (just) want to tell my own point of view, but instead thought about a series of posts:
Your favourite piece of selfhosting
I thought about asking every one of you for your favourite piece of software for a specific use case. But we have to start at the bottom:
Operating systems and/or type 1 hypervisors
You don’t have to be an expert or a professional. You don’t even have to be using it. Tell us your thoughts on one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn’t? Where are you stuck right now? What are your next steps? Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?
I am eager to hear about your thoughts and stories in the comments!
And please also give me feedback on this idea in general.
Linux
OS: Unraid
It’s primarily NAS software, with a form of software raid functionality built in.
I like it mainly because it works well and the GUI makes it very easy to use and work with.
On top of that you can run VMs and docker containers, so it is very versatile as well.
I use it to host the following services on my network:
It costs a bit of money up-front, but for me it was well-worth the investment.
Love Unraid. Been using it for a few years now on an old Dell server. I’m about to transform my current gaming PC into the main server so I can utilize the GPU pass-through and CPU pinning for things like running a VM just for LLM/AI and a VM for EndeavourOS for gaming. I just need to figure out how to keep my old server somehow working still bc of all the drive storage I have already setup, which my PC doesn’t have space for without a new case.
For anyone looking to setup Unraid, I highly recommend the SpaceInvaderOne YouTube channel. It helped tremendously when I got started.
+1 for unraid. Nice OS that lets me easily do what I want.
I’m interested in learning more about NixOS, but until I get there, Proxmox all day.
I’m pretty happy with Debian as my server’s OS. I recently gave in to temptation and switched from stable to testing. On my home systems I run Arch because I like to have the most up-to-date stuff, but with my servers that’s a bit less important. Even so, Debian testing is usually pretty stable itself, so I’m not worried much about things breaking because of it.
I think this is a great idea. With something as foundational as the OS, there are so many options, and each can change the very core of one’s selfhosted journey. And then expanding to different services and the different ways to manage everything could be a great discussion for every experience level.
I myself have been considering Proxmox with LXCs deployed via the Community Scripts repo versus bare metal running a declarative OS with Docker compose or direct packages versus a regular Ubuntu/Debian OS with Docker compose. I am hoping to create a self-documenting setup with versioning via the various config and compose files, but I don’t know what would end up being the most effective for me.
I think my overarching deployment strategy is portability. If it’s easy to take a replacement PC, get a base install loaded, then have a setup script configure the base software/user(s) and pull config/compose files and start services, and then be able to swap out the older box with minimal switchover or downtime, I think that’s my goal. That may require several OS tools (Ansible, NixOS config, Docker compose, etc.) but I think once the tooling is set up it will make further service startups and full box swaps easier.
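To make that concrete, here is a minimal sketch of such a bootstrap script (the repo URL, paths, and package names are all placeholders, not a finished tool):

```bash
#!/usr/bin/env bash
# Hypothetical bootstrap: install base tooling, pull versioned configs, start services.
set -euo pipefail

sudo apt update
sudo apt install -y docker.io docker-compose git   # package names vary by distro

# Clone the versioned config/compose repo (URL is a placeholder)
git clone https://example.com/me/homelab-config.git /opt/homelab

# Bring up every service defined in the repo
for dir in /opt/homelab/services/*/; do
  (cd "$dir" && sudo docker-compose up -d)
done
```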
Currently I have a single machine that I started spinning up services with Docker compose but without thought to those larger goals. And now if I need to fiddle with that box and need to reboot or take it offline then all my services go down. I think my next step is to come up with a deployment strategy that remains consistent, but I use that strategy to segment services across several physical machines so that critical services (router, DNS, etc.) wouldn’t be affected if I was testing out a new service and accidentally crashed a machine.
I love seeing all the different ways folks deploy their setups because I can see what might work well for me. I’m hoping this series of discussions will help me flesh out my deployment strategy and get me started on that migration.
This sounds very interesting! I came from DietPi to MicroOS and am now thinking about NixOS, also because of the portability aspect.
I skipped Ansible for now but maybe I have to try that out together with NixOS.
Are you using a VM manager of some sort? I saw libvirtd mentioned in this thread a couple of times.
Cool idea with that thread series! I tried a similar thing with the Selfhosting Sunday posts and I always enjoy seeing what everyone’s up to.
I’ve been running Docker containers on plain Linux (Debian mostly) for a long time (and native applications before that, but I’m glad I migrated most of it), but last year I switched to Proxmox for my own hardware. I was mostly interested in the super comfortable automated VM snapshots, but after adding a second node I’m also glad to have High Availability. To maintain a proper quorum (at least 3 votes for decisions) I run corosync on a Raspi. It’s been super reliable once set up properly. I have a NAS for backups/snapshots which is native TrueNAS (it simply had the best GUI for ZFS and NFS).
I thought back and forth about setting up K3s and migrating everything, but I decided it’s not worth the effort and would just be for practice, and I can’t be arsed to set it up just for that. (I do K8s at work, but we have managed clusters, so there’s barely any low-level tinkering.)
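For anyone wanting to copy the Raspi quorum trick, it boils down to roughly this (the IP is a placeholder, and the Pi needs the corosync-qnetd package installed first):

```bash
# On one existing PVE node: install the qdevice tooling and register the Pi
sudo apt install corosync-qdevice
pvecm qdevice setup 192.168.1.50      # IP of the Raspi running corosync-qnetd
pvecm status                          # should now show an extra quorum vote
```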
My setup is PVE on the bottom with TrueNAS core for NAS functions as a VM (with a passed through HBA)
This has been pretty sweet! I just wish the HBA didn’t take so long to boot.
I’ve been using NixOS on my server. Having all the server’s config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.
I don’t use it on my personal machine because the lack of an FHS (standard filesystem hierarchy) feels like it’d be a problem, but when selfhosting, most things are popular enough to have a NixOS module already.
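The rebuild-from-scratch workflow is essentially one command once the config lives in a repo (the flake URL and hostname here are placeholders):

```bash
# Rebuild and activate the system from a declarative config kept in git
sudo nixos-rebuild switch --flake github:me/server-config#myserver

# Or try the new generation without making it the boot default
sudo nixos-rebuild test --flake github:me/server-config#myserver
```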
Proxmox and TrueNAS for all my physical boxes, and then Debian for all my VMs and LXCs. I’m not all that adventurous when it comes to OS choice; I found things that worked years ago and I’ve stuck with them ever since, as I’ve not seen anything that really does something interesting or new enough to make it worth switching.
Ubuntu Server. It just works.
I’ve several Debian stable servers operating in my stack. Almost all of them host a range of VMs in addition to a plethora of containers. Some house large arrays, others focus on application gruntwork. I chose Debian because I know it, been using it since the early 00s. It’s👌.
I use TrueNAS SCALE at home on my NAS and since they ditched kubernetes (and Truecharts, which was a happy little accident) it’s been great.
It’s free.
New hardware is incorporated into the kernel reasonably regularly, IMO.
ZFS file system.
Pretty easy to control exclusively with the GUI.
Docker is now very easy to use; images are mostly community-supported, but I’ve not had issues with Jellyfin, *arr, Pi-hole, reverse proxy, etc.
Rocky Linux. Been using Debian, but I like firewalld a bit more than ufw, and I don’t trust myself enough to touch iptables directly.
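For anyone comparing, typical firewalld usage looks like this (the services shown are just examples):

```bash
# Permanently allow HTTP/HTTPS in the default zone, then apply the change
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

# Inspect what is currently allowed
sudo firewall-cmd --list-all
```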
You can run Firewalld anywhere
I know. But coming out of the box is nicer.
I’m new to all this.
Synology: I was using Synology before and getting started with trying some Docker containers. The Synology was very underpowered and containers kept crashing or being shut down (from resources running out I guess) so I wanted to upgrade.
Comments seemed to suggest it is best to keep the Synology as purely a NAS and use a mini PC for compute, so that’s what I went for. Got a 12th Gen Intel mini PC pretty cheap on eBay to play around with.
Debian - I’ve put Debian with KDE on the mini PC server. I was looking into TrueNAS and Unraid, trying to decide which I should learn. My brother (rightly) said there’s no reason to overcomplicate things when I don’t need the functions of those OSes and don’t understand them. The one place the Linux community seems united is in recommending Debian for a server, for being rock solid and stable. I’ve been very happy with it.
Spent my week off figuring out Docker, mounting NAS drives on the server PC, troubleshooting the problems. Got a setup I’m really happy with and I’m really happy I went with Debian.
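In case it saves the next person some troubleshooting, mounting a NAS share on the Debian box looks roughly like this (the IP and paths are examples for a Synology NFS export, and Debian needs the nfs-common package):

```bash
sudo apt install nfs-common
sudo mkdir -p /mnt/nas/media
sudo mount -t nfs 192.168.1.10:/volume1/media /mnt/nas/media

# Persist the mount across reboots
echo '192.168.1.10:/volume1/media /mnt/nas/media nfs defaults,_netdev 0 0' \
  | sudo tee -a /etc/fstab
```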
I have pretty much the same setup. Works like a charm.
What are you running on your server? I’m looking for more ideas.
I’ve got loads of stuff up and running, but now it is all quietly functional and I’m in withdrawal from the enjoyment of setting up something new. I’ve recently had to delete a couple of Docker apps which weren’t really very useful for me, but I enjoyed setting them up and liked seeing a long list of healthy containers in Dockge.
Immich, paperless, Bitwarden, and a static website with recipes. I am very happy with all of them. Next projects are Forgejo, obsidian live sync (via CouchDB) and a budgeting software (not decided yet)
Notes app is a good idea. I might have a look at options.
Actual is working really well for me for budgeting.
Save resources on the mini PC by getting rid of KDE; desktops can take quite a lot of resources to run!
If you aren’t familiar with the Bash shell, it’s essentially the heart of every Linux/GNU-based operating system; no need for a clunky GUI on a server.
Key commands:
cd == Change directory
sudo == Run with root privileges
mkdir == Make directory
rm -f == Remove file/directory with force
touch == Make a new file
nano == Text/file editor
cat == Read file contents and print to shell
Commands don’t need to be complicated! For example
nano /home/SomeUser/Downloads/SomeRandom.txt
will open the text editor on SomeRandom.txt in the /Downloads directory of SomeUser.
Thanks. I do know almost all those commands, but I’m not quite comfortable with using konsole/SSH exclusively yet. KDE is what I’m most familiar with from my desktop PC, and I thought it would be easier to set up knowing where the settings etc. are. Also, I use a Guacamole Docker app to access the server’s desktop (my personal machine) when I need to do some personal task while at work. That may change as I get better at this and learn more.
Edit: I don’t want to mess with the server now, but I’ll try LXQt at some point to save some resources. I don’t trust myself to remove KDE cleanly and install a different DE without destroying the setup.
Stage 1: Ubuntu Server
Stage 2: Ubuntu Server + Docker
Stage 3: Ansible/OpenTofu/Kubernetes
Stage 4: Proxmox
oops straight to stage 4.
but wait stage 3 looks daunting
Don’t get me wrong, I use libvirt where it makes sense, but why would anyone go to Proxmox from a full IaC setup?
I do 2 at home, and 3 at work, coming from 4 at both and haven’t looked back.
Because it is much simpler to provision a VM
Maybe for the initial setup, but nothing is more repeatable than automation. The more manual steps you have to build your infra, the harder it is to recover/rebuild/update later
You automate the VM deployments.
If you’re automating the creation and deployment of VMs, and the downstream operating systems, and not doing some sort of HA/failover meme setup… Proxmox makes things way more complicated than raw libvirt/qemu/KVM.
Can you please elaborate on this? I am currently using MicroOS and am thinking about NixOS because of the quick setup, but also about Proxmox with NixOS on top. Where would libvirt fit into this scenario?
Kubernetes is overkill for most things not just self hosting. If you need to learn it great otherwise don’t waste your time on it. Extremely complicated given what it provides.
fr, unless you’re horizontally scaling something or managing hundreds of services what’s the point
I agree with this thread, but to answer your question, I think the point is to tinker with it “just because”. We’re all in this for fun, not profit.
I’ve been using Alpine Linux. I’ve always leaned towards minimalism in my personal life so Alpine seems like an appropriate fit for me.
Since what is installed is intentional, I am able to keep track of changes more accurately. I keep a document for complete setup by hand, then reduce that to an install script so I can get back to the same state in a minimal amount of time if needed.
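A toy version of such an install script, assuming example package and service names (Alpine uses apk and OpenRC, not systemd):

```sh
#!/bin/sh
# Hypothetical Alpine re-setup: every installed package is listed intentionally.
apk update && apk upgrade
apk add openssh docker curl

# Enable and start services via OpenRC
rc-update add sshd default
rc-update add docker default
rc-service sshd start
rc-service docker start
```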
Since I only have a Laptop and two Raspberry Pi’s with no intention of expanding or upgrading, this works for me as a personal hobby.
I’ve even gone as far as to use Alpine Sway as a desktop to keep everything similar as well.
I wouldn’t recommend it for anyone who doesn’t have the time to learn. It doesn’t use systemd and packages are often split meaning you will have to figure out what additional packages you may need beyond the core package.
I appreciate the approach Alpine takes because, from a security point of view, fewer moving parts means less surface area to exploit. In today’s social climate, who knows how or when I’ll become a target.
Kinda dumb but I run DietPi on a mini PC. Just nice and simple
+1. Very easy, very stable.
I also started with DietPi on every device; works like a charm. But I personally want to try something else to learn a bit more.
Edit:
I think about trying NixOS in the near future.
archlinux + podman / libvirtd + nomad (libvirt and docker plugins) + ansible / terraform + vault / consul sometimes
UPD:
archlinux - the base OS. You never need to change major versions, and that is great. I update core systems every weekend.
podman / libvirtd - the two types of core abstractions: podman for Docker container management, libvirtd for VM management.
nomad - HashiCorp’s orchestrator. You can run an exec, a Java application, a container, or a virtual machine in one uniform way with it. It can integrate with podman and libvirtd.
ansible - VM configuration playbooks + core system updates
terraform - engine for deploying nomad jobs (Docker containers, VMs, execs, or something else)
Vault - K/V storage. I keep secrets for containers and VMs here.
consul - service networking solution if you need a really complex network layer
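Day to day, driving Nomad from the CLI is pleasantly uniform regardless of whether the job is a container, a VM, or an exec (the job file name is a placeholder):

```bash
nomad job plan myservice.nomad.hcl     # dry run: diff the job against the cluster
nomad job run myservice.nomad.hcl      # submit the job
nomad job status myservice             # check allocations and health
```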
As a result, I’m not really sure if it’s a simple level or a complex one, but it’s very flexible and convenient for me.
UPD2: As a result, I described the application level, but in fact it is all one very thick server on an AMD Epyc with archlinux. XD By the way, the lemmy node from which I write is on it. =) And yes, it’s still selfhosted.
No love for Open Media Vault? I run it virtualized under Proxmox and I’m quite happy with it, not very fancy but super stable.
I run about twenty containers on OMV, with 4x 8TB drives in a ZFS RAIDZ1 (the RAID5 equivalent) setup. I love how users can be shared across services; for example, the same user may access SMB shares or connect via OpenVPN.
+1 for OMV. I use it at work all the time to serve Clonezilla images through an SMB share. It’s extremely reliable. The Clonezilla PXE server is a separate VM, but the toolkit is available in the clonezilla package, and I could even integrate the two services if I felt particularly masochistic one day.
My first choice for that role was TrueNAS, but at the time I had to use an old-ass Dell server that only had hardware RAID, and TrueNAS couldn’t use ZFS with it.
Hypervisor: Proxmox (fuck Hyper-V: It’s good but soo annoying. Fuck ESXi cuz Broadcom).
General purpose OS (for servers): Debian (and OMV)
TrueNAS CORE, because I’m a BSD guy at heart. With that all but dead, I’m trying to decide between bare FreeBSD and XigmaNAS.
I have an Arch Linux box for things that don’t run on BSD.
I’m gonna keep it simple: Syno DSM with Portainer.
Hardware and software. Simple, for my simple needs.
My old DS916+ is great at the file services but too weak for computing, so I have a reclaimed business laptop for the services. I could not imagine running anything on the DS.
I run jellyfin, freshrss, actualbudget and a few others services.
Just what I need :)
I use openSUSE MicroOS as the container host, with podman. It was a bit tricky to install it on my Hetzner VPS and get used to how MicroOS handles system updates (it’s an immutable system), but I am quite happy with it. I found it interesting and decided to try it out so I could learn how to use the system.
Debian, very simple and classic, but I started using BSDs recently.
I use Debian as well for all my servers, whether they are a VM or a container. It is lightweight, well supported, and dead stable.
Proxmox+Almalinux
I also have a few Debian VMs kicking around
PVE running on a pile of e-waste. Most of the parts are leftovers from my parents’ old PC that couldn’t handle Win10. Proxmox loves it. Even the 10GB mis-matched DDR3 memory. The only full VM is OPNSense (formerly pfSense), everything else runs inside Debian containers. It only struggles when Jellyfin has to transcode something because I don’t have a spare GPU.
Best type of homelab! Just use what’s there
I used to really like esxi, but broadcom screwed us on that.
Hyper-v sucks to run and manage. It’s also pretty bloated.
Proxmox is pretty awesome if you want full VMs. I’m gonna move everything I have onto it eventually.
For ease of use, if you have Synology that can run containers, it’s okay.
I also like and tend to use unraid at my house, but that’s more because of my insane storage requirements and how I upgrade with dissimilar disks fairly frequently. (I’m just shy of 500tb and my server holds 38 disks.)
Damn, 38 disks! How do you connect them all? Some kind of server hardware?
Curious because I’m currently using all 6 SATA ports on an old consumer motherboard and not sure how I’ll be able to expand my storage capacity. The best option I’ve seen so far would probably be adding PCIe SATA controller(s), but I can’t imagine having enough PCIe slots to reach 38 disks that way! Wondering if there’s another option I haven’t seen yet.
Yep. It’s a 4U Supermicro chassis with the associated backplanes.
I had some servers left over from work. It’s set up to also take JBOD cards with mini-SAS to expand into additional shelves if I need that.
My setup really isn’t much of an entry setup. It’s similar to this: …supermicro.com/…/4u-superstorage-ssg-641e-e1cr36…
That means every one of your disks is >13TB? That’s expensive!
It’s been a long term build. With unraid it’s been pretty easy to slowly add disks one disk at a time.
I’m moving everything towards 22tb disks right now. It’s still got a handful of 4 and 5tb disks in it. I’ve ended up with a pile of smaller disks that I’ve pulled and just… sit around.
I also picked up a Synology recently that houses 12x 12tb disks that goes into that total count. I’ve got another couple Synologys just laying around unused.
I’ve got 30x4TB disks, just because second hand enterprise gear is so cheap. I’ll slowly replace the 4TB SAS with larger capacity SATA to make use of the spin down functionality of unraid. I don’t need the extra speed of SAS and I wouldn’t mind saving a few watt-hours.
I’ve been using Ubuntu server on my server for close to a decade now and it has been just rock solid.
I know Ubuntu gets (deserved) hate for things like snaps shenanigans, but the LTS is pretty great. Not having to worry about a full OS upgrade for up to 10 years (5 years standard, 10 years if you go Ubuntu pro (which is free for personal use)) is great.
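Attaching the free personal Pro entitlement is quick (the token placeholder comes from your Ubuntu One account at ubuntu.com/pro):

```bash
sudo pro attach <your-token>     # free for personal use on up to 5 machines
sudo pro enable esm-infra        # extends LTS security updates toward 10 years
```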
A couple times I’ve considered switching my server to another distro, but honestly, I love how little I worry about the state of my server os.
Been using debian for 25 years.
I’m an old fart using FreeBSD and jails, mostly bare-metal installs inside the jails. Home Assistant runs in bhyve, and one Docker app (Audiobookshelf) runs in bhyve as well (Alpine Linux and Docker).
Proxmox all day, every day.
Generally speaking I start with Debian and install Proxmox on top rather than use their installer; this way I can configure things as I want them before getting Proxmox going. I guess that counts as a more advanced use case, though it’s not really complicated.
Edit: and if it wasn’t obvious, everything is Debian, even those not on proxmox (which is just debian anyway, and isn’t much tbh).
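For reference, the Debian-first route is roughly the following sketch (shown for Debian 12 “bookworm”; the repo key import and kernel steps from the official wiki are omitted here):

```bash
# Add the no-subscription Proxmox repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  | sudo tee /etc/apt/sources.list.d/pve-install-repo.list

# (Import the Proxmox release key first; see the official wiki)
sudo apt update && sudo apt full-upgrade
sudo apt install proxmox-ve postfix open-iscsi
```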
Debian on the host and everything else in containers
Debian on the servers, DietPi on the SBCs, all containerized.
Maybe crazy, but I’ve been running flatcar lately. Automatic OS updates are nice and I pretty much exclusively use most of my machines to run containers.
openSUSE MicroOS
I’ve only tried it out on a VPS, so I’m not completely sold on it yet, but I do think I’ll be switching to it eventually. I’m currently on Leap, but since almost everything is containerized, I’m not getting much benefit from the slow release cycle.
For your questions:
The main appeal is unattended, atomic updates using bleeding edge packages. You keep your apps as separate from the base system as possible (containerized), and the base handles itself.
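On MicroOS that mostly reduces to the transactional-update tooling, which stages changes in a new snapshot (commands as I understand them; worth double-checking against the docs):

```bash
sudo transactional-update dup       # atomic distro update, staged in a new snapshot
sudo transactional-update reboot    # activate it by booting into the new snapshot
sudo transactional-update rollback  # return to the previous snapshot if it broke
```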
My main issue is with the toolbox utility, which runs a container holding userland utilities for debugging. So far it has been buggy with the unprivileged user I configured, and I’d really rather not log in as root. I’ve worked around it for now, but it leaves a lot to be desired.
Mostly figuring out how I want to handle my VPN (for exposing LAN services to the outside world) config. My options are:
The main sticking point is that I need HAProxy in front to route traffic to the given device, so the VPN and HAProxy need to talk. The easiest solution is to put both on the host, but that breaks the whole point of MicroOS. The ideal is to have both the VPN and HAProxy containerized, but I ran into some issues with podman.
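For illustration only, one containerized layout that lets HAProxy and the VPN talk is to run both in a single podman pod so they share a network namespace (the images, ports, and VPN choice are placeholders, not what this setup currently uses):

```bash
# Create a pod that owns the published ports
podman pod create --name edge -p 80:80 -p 443:443

# VPN container (example image); needs tun access
podman run -d --pod edge --name vpn \
  --cap-add NET_ADMIN --device /dev/net/tun \
  docker.io/tailscale/tailscale

# HAProxy in the same network namespace can reach the VPN on localhost
podman run -d --pod edge --name haproxy \
  -v ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  docker.io/library/haproxy
```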
This is definitely a veteran system right now, but I think it’s ideal because it means I can completely automate system updates and not worry about my apps breaking. It also means I can automate setting up a new server (say, if I move to a different VPS) or even new OS since I only need to deploy my containers and don’t need anything special from the OS setup.
I’m also playing with Aeon on my laptop, but that’s going a lot less smoothly than MicroOS on the server.
I have been using Proxmox VE with Docker running on the host (not managed by Proxmox), plus Cockpit to manage NFS shares, with Home Assistant OS running in a VM. It’s been pretty rock solid. That was until I updated to version 9 last night; it’s been a nightmare getting the Docker socket to be available. I think Debian Trixie may have some sort of extra layer of protection. I haven’t investigated it too much, but my plan for tomorrow and this week is to migrate everything to Debian 12, as that’s the tried and true OS for me and I know it’s quite stable with Cockpit, Docker, and so forth, with KVM for my Home Assistant installation.
One other OS for consideration, if you want to check it out, is XCP-ng, which I played with; Home Assistant on it was blazing fast, but they don’t allow NFS shares to be created, and using the existing data on my drives was not possible, so I would’ve had to format them.
Proxmox Virtual Environment (PVE, Hypervisor), my beloved. Especially in combination with Proxmox Backup Server (PBS).
My homelab would not exist without Proxmox VE, as I’m definitely not going to use Nutanix or VMware. I love working with Linux, and Proxmox VE is literally Debian with a modified kernel and a management web interface on top.
I first learned about Proxmox VE in my company, while we still had VMWare for us and all of our customers. We gradually switched everyone over to Proxmox VE and now I’m using it at home too. Proxmox is an Austrian (my country) company, so I was double hyped about this software.
A few things I like most about Proxmox VE
(*) What I mean by ease of access to the correct part of the documentation is: Whenever you’re in the WebUI and need to decide on some settings, there’s a button somewhere on the same page which is going to lead you directly to the portion of the documentation you need right now. I don’t know why this seems like such a great luxury, every software should have something like this.
Next steps
My “server” (some mini PC with spare parts I already had) is getting too weak for the workload I put it through, so I’m going to migrate to a better “server”. I already have a PC and most of the necessary parts, I just need some SSDs and an AMD CPU.
Even migrating from PVE (old) -> PVE (new) couldn’t be easier:
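One typical route, sketched with placeholder node names and VM IDs (assuming both machines temporarily join one cluster):

```bash
# On the new node: join the existing cluster (IP of the old node)
pvecm add 192.168.1.20

# Then move each guest over, live if it is running
qm migrate 100 pve-new --online
```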
I think it’s great to have a series posting about personal achievements and troubles with selfhosting. There’s so much software out there, you always get to see someone doing something you didn’t even know could be done or using a software you didn’t realize even existed. Sharing is caring.
I run several different ones. Debian is the most common, Ubuntu Server runs a few, and I have a couple of TrueNAS SCALE instances, simply because they have run TrueNAS for years and work well. One is local network only; another is reachable but is used for storage and storage alone, via S3/MinIO, SFTP, and Duplicati.
I have a NUC with Linux Mint and host everything in Docker containers. I expose any service I need through Caddy.
Anything that can run proxmox is running proxmox. Even if it’s a single OS running on it, it’s still running proxmox
A friend recommended openSUSE MicroOS to me, and it has been a great experience!
It’s an atomic OS designed to be just enough to run containers, and it does that perfectly. It updates and reboots itself automatically, so I never have to worry about it.
IMO, perfect for a home environment, just wish the documentation was better.
I tried MicroOS for a while now, and I don’t know if it was my fault, but it did not work smoothly all the time. Maybe because the machine was turned off for a few days in a row, but a couple of times I just couldn’t SSH into the machine, or it would not start up at all. Luckily, you can roll back, and I used that to copy my Docker volumes and compose files over. I’m thinking about trying NixOS next.
Favorite heavyweight Type 1 hypervisor: XCP-ng. It’s open source, runs on a ton of enterprise and consumer-grade hardware, and has always been rock stable for me. Even when I forgot to update it for like 6 months, it still ran everything like a champ.
I need to try Proxmox; it has some cool features. XCP-ng is pretty intuitive though; the UI makes sense and is cleaner than Proxmox’s. The integration in Proxmox with the Incus project is pretty cool though, especially being able to run VMs and containers and manage them together. I’ve been thinking of trying that and seeing how it goes.
For containers, I just install Debian and run Docker on there. Stable, simple, nothing fancy. If I need something more up to date, I typically use Ubuntu Server.
Hypervisor
Gotta say, I personally like a rather niche product. I love Apache CloudStack.
Apache Cloudstack is actually meant for companies providing VMs and K8S clusters to other companies. However, I’ve set it up for myself in my lab accessible only over VPN.
What I like best about it is that it is meant to be deployed via Terraform and cloud init. Since I’m actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.
Standing up a K8S cluster on it is incredibly easy. Basically it is all done with cloud init, though that process is quite automated. In fact, it took me 15m to stand up a 25 node cluster with 5 control nodes and 20 worker nodes.
Let’s compare it to other hypervisors though. Well, CloudStack is meant to handle global operations. Typically, CloudStack is split into regions, then into zones, then into pods, then into clusters, and finally into hosts. Let’s just say that it gets very, very large if you need it to. Only it’s free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware. And none of that even costs any licensing.
Technically speaking, CloudStack Management is capable of handling a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. I think it is interesting because even if you choose to use another hypervisor that you prefer, it will still work. This is mostly meant as a transition path to KVM, but it should still work, though I haven’t tested it.
I have, however, tested it with Ceph for storage, and it does work. Perhaps doing that is slightly more annoying than with Proxmox. But you can actually create a number of different types of storage (HDD vs. SSD) if you wanted to take the cloud-provider route.
Overall, I like it because it works well for IaaS. I have 2000 VLANs primed for use with its virtual networking. I have 1 host currently joined, with a second host in line for setup.
Here is the article I used to get it initially set up, though I will admit that I personally used a different VLAN for the management IP than for the public IP VLAN. rohityadav.cloud/blog/cloudstack-kvm/