Splitting Docker between SSD and HDD
from Naate@lemmy.world to selfhosted@lemmy.world on 20 Jun 19:53
https://lemmy.world/post/16749017

I’m in the process of planning upgrades for my server, and I’m trying to figure out the “best” drive configuration for Docker. My general understanding is that the containers should run from an SSD, and any bulk storage (images, videos, documents) should use a volume on an HDD.

Is it as simple as changing the data-root to point to the SSD and keeping my external volumes on the HDD as defined in my existing compose files? I’ve already moved data-root once because the OverlayFS layers were chewing up the limited space on the OS drive, so that process isn’t daunting.
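
For anyone curious, the move is just a daemon.json change plus a copy. A rough sketch of what I did, assuming the SSD is mounted at /mnt/ssd (paths here are examples, not my actual layout):

    # /etc/docker/daemon.json
    {
      "data-root": "/mnt/ssd/docker"
    }

    # then, with everything stopped, copy the old root over and restart:
    sudo systemctl stop docker
    sudo rsync -aP /var/lib/docker/ /mnt/ssd/docker/
    sudo systemctl start docker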

If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

For reference, the server is running Debian Bookworm on an older i5 3400 with 32GB RAM.

#selfhosted


retrieval4558@mander.xyz on 20 Jun 20:02

Unless I’m misunderstanding, I do believe it is that simple, yes.

Naate@lemmy.world on 20 Jun 20:31

I think you’re right. I’m just trapped in the cycle of over-thinking and second-guessing my knowledge and capabilities.

possiblylinux127@lemmy.zip on 20 Jun 21:02

Your CPU may be a bottleneck depending on what you are doing, and the slow RAM speeds will mean anything memory-heavy goes slowly.

In my homelab I run all SSDs; they are cheap enough that I can afford them without a problem. However, if you use a mix of spinning rust and SSDs, you should separate them into different pools. I would personally put the containers on SSDs and the data storage on HDDs. In the docker compose you can do a directory (bind) mount pointing at the HDD pool.
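
Something like this in the compose file (service name and mount points are placeholders, not a prescription):

    services:
      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - /mnt/ssd/jellyfin/config:/config   # config and database stay on the SSD
          - /mnt/hdd/media:/media:ro           # bulk media lives on the HDD pool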

Depending on what you are doing, I would say you should get into Proxmox early. Get a small boot SSD, then create a larger SSD ZFS pool and an HDD ZFS pool. From there you can set up your VMs to use either; you could have a VM with one disk on the HDD pool and a second disk on the SSD pool, as sketched below. This setup also gives you the flexibility to move things around dynamically. Proxmox will not work well on a CPU that old, though, so if you wanted to get fancy like I’m describing you would need to upgrade to something newer.
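
Roughly what I mean (device names, pool names, and the VM id are all hypothetical):

    # on the Proxmox host: one pool per drive type
    zpool create fast /dev/sdb                  # SSD pool
    zpool create tank mirror /dev/sdc /dev/sdd  # HDD pool
    # add each pool as ZFS storage (Datacenter -> Storage -> Add -> ZFS),
    # then give a VM one disk from each:
    qm set 100 --scsi0 fast:32    # 32G OS disk on the SSD pool
    qm set 100 --scsi1 tank:500   # 500G data disk on the HDD pool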

CameronDev@programming.dev on 20 Jun 21:18

The i5 3470 is old, but it’s not that bad. Lots of people are homelabbing on NUCs, which are only very slightly faster. Performance per watt will be terrible, though. (I am on an i7-10710U, and I’ve yet to run out of steam so far - cpu.userbenchmark.com/Compare/…/m900004vs2771 )

It has VT-x/VT-d, so it should be okay for Proxmox. What makes you think it won’t work well?
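
You can check on the box itself; this is generic Linux, nothing Proxmox-specific:

    # non-empty output means the CPU advertises hardware virtualization
    # (vmx = Intel VT-x, svm = AMD-V)
    grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u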

possiblylinux127@lemmy.zip on 20 Jun 21:36

I had it in my head that it didn’t have the proper extensions for virtualization.

However, the memory and core count will be a bottleneck with virtualization. Only having 4 cores makes it hard to dedicate resources, and the slower RAM means you could have performance issues. It really depends on what you are doing, I suppose. It does have 6 MB of cache, which will help some.

If you got an i5 6500 with DDR4 memory you would have much better performance.

CameronDev@programming.dev on 21 Jun 00:48

4 cores is a bit limiting, but it definitely depends on the usage. I only have 1 VM on my NUC; everything else is Docker.

I thought all the Core processors had VT* extensions; I was using virtualization on my first-gen i7. They are very old and inefficient now, though.

Naate@lemmy.world on 22 Jun 16:50

For the most part, this old bucket is doing just fine with probably more than I should be throwing at it.

I’m curious as to why Proxmox and VMs over a minimal Debian install with Docker containers, though? From my understanding, Proxmox would add a lot more hardware overhead when I’m mostly just running Emby/Jellyfin, Nextcloud, Home Assistant (and related services) and Frigate (with a Coral).

It’s definitely a lot, but I also rarely see CPU use over 70% (typically much lower), though Frigate likes to cause problems occasionally. And I’ve never seen a concerning amount of RAM usage.

Definitely getting one of those little N100s soon; I’ll probably move the home automation stuff over there and slowly transition the current box into being a NAS and nothing more.

possiblylinux127@lemmy.zip on 22 Jun 17:45

Segmentation, really. KVM doesn’t have a lot of overhead, and with Proxmox you can separate everything out more easily. It also makes moving services between machines very easy: you might see a few dropped packets during the move, but the VM stays running as it migrates to a different machine.
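
With a cluster and shared (or replicated) storage, the live migration is basically one command; the VM id and node name here are made up:

    qm migrate 100 pve2 --online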

Naate@lemmy.world on 22 Jun 20:46

Interesting. I’ll dig a little more then. Most of my VM experience has been on the desktop for various reasons, and it’s almost always been a pain in the ass and not worth the effort.

I assume the KVM stuff can be running a minimal OS, sort of like an Alpine Docker image?

possiblylinux127@lemmy.zip on 22 Jun 20:56

It runs Proxmox as the base

Decronym@lemmy.decronym.xyz on 20 Jun 21:45

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters  More Letters
NUC            Next Unit of Computing brand of Intel small computers
SSD            Solid State Drive mass storage
ZFS            Solaris/Linux filesystem focusing on data integrity

3 acronyms in this thread; the most compressed thread commented on today has 14 acronyms.

[Thread #819 for this sub, first seen 20th Jun 2024, 21:45] [FAQ] [Full list] [Contact] [Source code]

tal@lemmy.today on 21 Jun 00:57

If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

You could try using lvmcache (block-device-level caching) or bcachefs (filesystem-level caching) or something like that: have rotational storage be the primary form of storage, but let the system use the SSD as a cache. Dunno what kind of performance improvements you might expect, though.
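
For the lvmcache route, the rough shape is something like this (the volume group, LV, and device names are all just examples):

    # add the SSD to the volume group that holds the HDD-backed LV
    sudo vgextend vg0 /dev/nvme0n1
    # carve out a cache volume on the SSD
    sudo lvcreate -n cache0 -L 100G vg0 /dev/nvme0n1
    # attach it as a cache in front of the big data LV
    sudo lvconvert --type cache --cachevol cache0 vg0/data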