What's up, selfhosters? It's self hosting Sunday!
from tofu@lemmy.nocturnal.garden to selfhosted@lemmy.world on 03 Aug 21:03
https://lemmy.nocturnal.garden/post/166542
What’s up, what’s down and what are you not sure about?
Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
Personally I’m finally reaping the fruits of my labour and enjoying my stable homelab without doing much. One node went down recently and the other took over until I restarted it, so I was not in a hurry to fix things. Enjoying family time and only running updates that aren’t automated (yet). I’m about to dig a bit deeper into logging, probably setting up central log collection like Loki at some point, but not yet.
I was excited to learn that homeassistant lets me bypass the atrocious Sonos app for controlling all my speakers from various music sources.
Though at the same time, I’m a little disappointed that offTikTok is broken.
I wanna get into it but man, the mountain of knowledge I need to even understand what people are talking about is hard to climb. I’m trying to just get some stuff running in docker and it fails to launch and I’m like… How?! Isn’t that the whole point of docker lol. Baby steps I guess
I learnt it from scratch during a week off, spending 2 or 3 hours on it every night (although this might be underselling it, as I’d become familiar with desktop Linux over the past year and had a superficial idea of Docker containers from my Synology NAS). But still, it’s not as big a deal as you think once you find some good resources. I’m going to comment about my setup in this thread after this… have a look.
The main resource that helped me was Marius Hosting, and ChatGPT got me out of trouble by deciphering logs for me when things didn’t work.
Thanks. Yeah I’m just trying to work at it slowly in my downtime instead of just watching YouTube all night.
Are you doing things through docker compose? If so, feel free to PM me or reply here with your compose file and I’ll help as best I can
Docker should be trivial to run. Hopefully it gives you some useful messages in the logs.
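If it helps, the usual first diagnostic steps look something like this, assuming a compose-based setup (“myservice” is a placeholder for whatever is failing):

```
docker compose ps                  # is the container actually running, or restart-looping?
docker compose logs -f myservice   # follow the logs of the failing service
```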
Sometimes you just need to start small and not worry about overcomplicating things. I started my journey in 2011 running Plex on a crappy laptop.
Check out Cosmos. I struggled piecing things together, but when I restarted from scratch with this as the base it has been SO much easier to get services working, while still being able to see how things work under the hood.
It’s basically a docker manager with integrated reverse proxy and OpenID SSO capability, with optional VPN and storage management
I’m at the level where I don’t know what SSO means. I can follow instructions to change a DNS setting, but what DNS actually is, I don’t know. Which is fine, until I need to work out what’s broken.
SSO is “single sign on”. DNS is the “Domain Name System”, which is just a way to turn a hostname (like www.google.com) into an IP address. It’s sort of like a phone directory, but for the Internet.
SSO is single sign on, so you don’t need individual username and password for every service. It’s a bit more advanced so don’t worry about it until you have what you want working properly for a while.
DNS is like the yellow pages of the internet - when you type www.google.com your computer uses a DNS server to look up what actual IP address corresponds to the website name. The point of Adguard or pihole is that when a website tries to load an ad your custom DNS server just says it doesn’t recognize the address
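For example (the 192.168.1.2 here is just a stand-in for wherever your Pi-hole lives):

```
nslookup www.google.com               # ask your current DNS server for the IP behind the name
dig @192.168.1.2 ads.example.com      # query the Pi-hole directly; blocked ad domains come back as 0.0.0.0
```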
Oh like a custom yellowpages, sick!
I felt exactly the same when I started - the learning curve is real! Try TrueCharts.org or linuxserver.io for reliable docker templates with good docs that actually work, saved me so much troubleshooting headache.
Thanks will do!
It’s messy. Docker’s superpower: you can write a crazy-ass Python application that needs dozens of dependencies and weird software configured. You put it into a container, and you can update and publish the container with a single script call. Other people can install it, set some variables, and not have to install the dozens of other pieces of software. They also don’t have to worry about updates.
But that’s not to say you don’t have to worry about networks, storage and ports.
Then the simplicity of a container’s configuration depends on the person who made it. Maybe they wanted it to be very flexible and there are dozens of things you need to set. Maybe they didn’t include the data store inside the container and you need your own data store in another container.
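As a rough illustration of “set some variables and go” (the image and every name here are made up):

```
# run a hypothetical app: variables via -e, ports via -p, persistent storage via -v
docker run -d --name myapp \
  -e DB_HOST=mydb -e TZ=Europe/Berlin \
  -p 8080:8080 \
  -v /srv/myapp/data:/data \
  ghcr.io/example/myapp:latest
```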
I installed a new server at home and went with NixOS. It looks super cool but it takes so much time to learn everything. The only thing keeping me from going back to Debian is how easy it was to permanently mount drives (and save a configuration for any future install or mishaps).
(I.e. mount, `nixos-generate-config`, `nixos-rebuild switch`, and done!)

You might have some luck with SUSE; their YaST configuration is very easy and was stable for years for me. Now I’m running on an M1 Mac mini, which was more of a pain than a regular setup for sure. Unfortunately the Linux support just isn’t there yet.
I don’t think it’s possible to learn everything for NixOS as a casual user / admin. It’s massive. I was luckily able to sneak a NixOS project into work which gave me some paid time on the topic. But there’s always room to learn more about it. Which is a good thing - by its nature, it’s just more powerful than conventional distributions.
More powerful = more mental burden, and more capacity spent on knowing how to run and manage its unique syntax and structure.
Sincerely, a daily Nix user. Switching away from Nix and off to Fedora Kinoite.
I meant it more figuratively. Finally managed to move my compose files to Nix files thanks to compose2nix. One thing that threw me for a loop was that podman (or perhaps Nix) creates a service named <backend>-<service>. Compose2nix handles this accordingly, but I had a few hours of head scratching until I figured that out.
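E.g. with the podman backend and a hypothetical `jellyfin` service, the unit you end up poking at is:

```
systemctl status podman-jellyfin.service   # generated unit name is <backend>-<service>
```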
Good luck on the journey! What I meant is that over time, you’ll realize that what you did was probably not the most elegant way to do something; at least that’s my experience with my config. Like, I started with a flake with an explicit config for each machine (basically multiple nixosConfigurations) and then turned it into a lib with functions to turn a set of hosts from json into an attribute set (kind of a simple inventory done). My last efforts that are still ongoing (cough) are splitting my NixOS modules off into a separate flake using flake-parts.
I do understand you meant getting the stuff you need to work; I just wanted to hint that the language is very powerful, and as such most configurations have room for improvement, as in learning to do things more efficiently or doing things that weren’t possible before.
Have had an OPNsense router, a Ubiquiti switch, and a shitty ASUS router in AP mode for years. Got new Ubiquiti APs to improve WiFi speed and coverage.
Can’t get VLANs to work to save my soul. Feels bad.
I’ll paste a comment I made about this recently (with updates). My question is: what is a good solution to keep a music folder backed up? (It lives on a partition of my server’s NVMe boot drive, but I want it backed up automatically to my NAS HDD.) Also: how can I back up my Docker setup in case I screw it up and need to set it all up again?
I used just a Synology NAS with Docker containers to begin with but outgrew that. Now I have a mini PC with a 12th Gen i5 (picked up cheap on eBay) for computing and the Synology NAS is just a NAS.
Docker containers:
Gluetun (VPN), qBittorrent, media managers (Sonarr, Radarr, Prowlarr, FlareSolverr), Jellyfin (video streaming), Paperless-ngx (document upload), Immich (photo upload), Watchtower (auto-updates Docker containers), Plex (because my wife+friends aren’t used to Jellyfin yet and it takes a while to transition them to unfamiliar technology), Actual (budgeting), Syncthing (file sync; update: removed this, not needed, actually need a backup solution), Element server (chat server just for myself; I make channels to cross-share snippets of text/links/images to myself, accessible on any device).
Still need to set up Lidarr and Beets for my music management (update: tried these last night and don’t really need them). Also need to find a good exercise logger, set up the Guacamole remote access interface (update: done, happy with this), learn to use Dockge to replace Portainer (done, happy with this), set up an RSS Docker app (update: done, still messing around with FreshRSS) and Audiobookshelf for podcasts and audiobooks. Haven’t got the guts to approach Home Assistant yet.
I stopped looking for a notes app and use Joplin to sync with my Mailbox.org account, but I might look for a Docker solution for notes.
NoMachine runs on my server PC for remote desktop. The server PC runs Debian with KDE (because I’m familiar with setting up what I need in KDE, which is the most superior of all desktop environments).
Synology handles making my apps accessible externally (from Synology.me reverse proxy addresses).
I used to use the Marius Hosting site to set up Synology Docker containers. Now I just copy his YAML data and edit it for my server. So I still use those guides.
I’ve written noob-guide notes for myself so I can set this all up again in case I destroy it somehow (already happened once). Really enjoyed using my week off to learn all this.
[photo of the setup]
Tons of services used daily.
Piled on the ground under a board.
Like many things in my life, this remains 75% complete “good enough”. This lives behind a huge backboard behind my TV. Said backboard is slanting because it is leaning against the wall and I still haven’t mounted it to the wall properly. You can even see some glass panels leaning against the wall, those are some shelves I’ve been meaning to put in… For the past 6 years.
The router, fibre internet entry point and LAN connection in the wall (to upstairs) are all behind the TV there… So everything is just dumped there.
You’re gonna break my OCD brain.
For a backup solution you could use Borg or Restic. They are CLI tools, but there are also GUIs for them.
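A minimal Restic sketch, assuming the NAS is mounted at /mnt/nas (paths are placeholders; it will prompt you to set a repository password):

```
restic init --repo /mnt/nas/restic-repo              # one-time repository setup
restic -r /mnt/nas/restic-repo backup /srv/music     # back up the music folder
```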
If you did the switch to Dockge, it might be because you prefer having your docker compose files easily accessible on the filesystem. The question is whether you have the persistent data of your containers in bind mounts as well, so they are easy to back up.
I have a git repo of my stacks folder, with all my docker compose files (secrets in env files that are git-ignored), so that I can track all changes made to them.
Also, I have a script that stops every container while I’m sleeping and triggers backups of the stacks folder and all my bind mount folders, that way I have a daily/weekly backup of all my stuff, and in case something breaks, I can roll back from any of these backups and just docker compose up, and I’m back on track.
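Roughly like this, if you’re curious (paths and the backup tool are placeholders for my setup):

```
#!/bin/sh
# nightly stop-and-backup job: stop every stack, copy compose files + bind mounts, restart
cd /srv/stacks || exit 1
for d in */ ; do (cd "$d" && docker compose stop); done        # stop every stack
rsync -a --delete /srv/stacks/  /mnt/nas/backups/stacks/       # back up compose files
rsync -a --delete /srv/appdata/ /mnt/nas/backups/appdata/      # back up bind-mount data
for d in */ ; do (cd "$d" && docker compose start); done       # bring everything back up
```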
An important step is to frequently check that backups are good. I do this by stopping my main service and running from a different folder with the backed-up compose file and bind mounts.
Rsync in a cron job would do it, no?
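Something like this in the crontab would cover the music folder (paths are placeholders):

```
# mirror the music folder to the NAS every night at 03:00
0 3 * * * rsync -a --delete /srv/music/ /mnt/nas/music-backup/
```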
I finished setting up my personal computer with Sway on Alpine so now I can’t procrastinate anymore on getting TLS working with Caddy for my RPi 5.
I decided to ditch Cloudflare since using that service makes me feel uncomfortable. TLS is a bit of a pain because I am using an uncommon port, so I need to do a DNS challenge. I still haven’t been able to get it working with deSEC.io, but hopefully sometime this week.
I might look into using a tunnel service in the future, but if I can figure this out, I’ll at least be able to adapt if I need to deal with any changing situations.
When I figure that out, I’ll look into the Gemini protocol and host something there. I don’t want anything big, just a little space of my own in the corner of the internet. Maybe I’ll look into hosting an IRC server for a small group of people too.
I’m also using caddy with desec.io. When first triggering the challenge for an entry, it can fail a couple of times. I think it just takes a while for the DNS entry to be available.
Another thing that I’ve experienced is that I can’t use wildcard subdomain entries. My guess is that it’s somehow because I only have public IPv6 addresses (but I don’t remember the details). I have configured an internal DNS with the wildcard entry since I’m only ever connecting to that host via wireguard from outside my network. For the host itself I’ve created a regular AAAA record.
Realized today that borgbackup failed for almost 2 months straight on one of my servers (it was a simple case of a stuck lock). Finally set up push notifications via Pushover to notify on success/fail.
This is worth it. Had this happen on an OS backup. Lost my data. Notifs should be default.
Healthchecks is incredibly nice for this kind of thing, it’ll notify you if it doesn’t receive a ‘success’ ping on whatever interval you specify.
I use it for all my Restic backups.
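The wiring is simple; a sketch with a placeholder self-hosted URL and check UUID:

```
#!/bin/sh
# ping Healthchecks after a Restic run; hc.example.com and the UUID are placeholders
if restic -r /mnt/nas/restic-repo backup /srv/data; then
  curl -fsS -m 10 --retry 3 https://hc.example.com/ping/aaaabbbb-cccc-dddd-eeee-ffff00001111
else
  curl -fsS -m 10 --retry 3 https://hc.example.com/ping/aaaabbbb-cccc-dddd-eeee-ffff00001111/fail
fi
```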
Same. I’d rather be alerted because something expected didn’t happen, not silence because something failed so hard it didn’t even send an alert.
Yeah that sounds even better. What service do you use?
Backblaze B2 for storage, and I host Healthchecks myself at home.
Recently set up a Maloja container and a Multi-scrobbler container so I can finally ditch last.fm!
I've been thinking about setting up a scrobble server, but haven't been sure what I would do with it. What do you use the information for? Does it affect how you listen?
I use it mostly just for myself, to look at my listening habits. When I was younger I used last.fm to find new artists and talk to people who had similar interests and took their suggestions. These days it’s just to keep a log of my music listening.
Also, I have noticed a pattern of listening to less music for myself and being a lot more deferential to my partners when I am in a relationship than when I am single. This year is 20 years since I started using last.fm, and part of my goal this year is to listen to more music than I have in any year over the past 20. 2005 was my highest listening year ever, and while I’m not on pace to break that record, I’m on pace to come close to it and shatter every other year in between.
Anyway, that’s just a me thing, it’s helped me feel like myself again, re-embracing really enjoying music in a way I have not over the past 16 years of being with two different partners (one for 2 years, another for 13 years). It’s like a celebration of 20 years of loving music and coming back into my own and becoming more me again, and less codependent on others and letting them drive the music.
Wow, thank you for this response. I hadn't thought of tracking music preferences as a tool for self discovery.
Finally picked up a bunch of cheap 2.5" SAS drives to turn my dumpster server (ProLiant DL380 G7 with 16 hot-swap bays) into a backup server.
Still trying to work out the specifics, but the idea is: because it’s a power hog and LOUD, I want to use Wake-on-LAN to turn it on, run a backup task, and turn it off automatically.
I can turn it on via LAN, so I’m halfway there, right? ……
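The remaining half might be something like this (the MAC address, host name, and backup script are all placeholders):

```
#!/bin/sh
wakeonlan AA:BB:CC:DD:EE:FF                           # wake the DL380 over the LAN
sleep 180                                             # give it time to boot
ssh backup-server 'run-backup.sh && sudo poweroff'    # run the (hypothetical) backup job, then power down
```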
I run MECM at home so that my Windows machines don’t reboot themselves on their own. Apparently it corrupted its database some weeks ago, and I didn’t notice until the backups had aged out. So I had to build a new one from scratch, and holy fuck I hate it every time. So many steps not documented or automated in the installation, to say nothing of all the items scattered across the MECM console that you have to configure before it does anything.
So I guess I should get around to log and system monitoring.
I looked into VyOS to replace my main firewall/IPS system (IPFire), as I would like to switch to running it in a VM, which is not recommended with IPFire. Seems pretty good so far with the new gratis semi-stable Stream releases.
And I set up Unified Push notifications with my Ejabberd server. Works great.
OPNsense is also great, and has a web UI for easier setup.
That is what I started with originally, and I don’t want to go back. The WebUI is super convoluted and for anything other than the basics it does more harm than good in my experience… and well, FreeBSD is just not my thing.
Any particular reason you are looking for a virtualized VM? Just to be less reliant on a single piece of hardware?
That and power savings, since I have sufficient capacity to run it on one of my main servers with an extra NIC to pass through.
My homelab has been mostly on autopilot for a while. Synology 6 bay running most lighter-weight Docker stuff (arrstack, Immich, etc.) and an Intel NUC running heavy stuff (QuickSync transcodes for Plex+JF, Ollama). Both connected to DigitalOcean via WireGuard for reverse proxy due to CGNAT.
I had my router SSD either die or get corrupted this past week; haven’t looked much at the old SSD besides trying to extract the config off of it. I ended up just fresh-installing OPNsense because I didn’t have any recent backups (my Synology and NUC back up to rsync.net, but I haven’t gotten around to automated backups for my router, since it’s basically a plain config, and my cloud reverse proxy is just a basic docker compose + small HAProxy config). Luckily, my homelab reaching out to the cloud reverse proxy means there’s basically no important config on my router anymore; it just needs DHCP and a connection.
Besides that the arrstack just chugs along on its own.
I recently figured out I can load Jellyfin playback URLs into VRChat video players, either as a direct stream or through the transcoding pipeline as an m3u8 that live-transcodes based on the URL parameters you set. This is great because the way watch parties in VRChat work is that everyone in an instance loads the same URL pasted into media players and syncs the playback. That means you need a publicly accessible URL (preferably with a token of some sort) that can be loaded by an arbitrary number of unique IP addresses simultaneously, which I don’t think is doable with Plex.
I’m now working on a little web app to let me log into Jellyfin, search/browse media, and generate the links with arbitrary or pre-set transcode settings for easy copy/pasting into VRChat. It’s needed because Jellyfin only provides the original file without transcoding when you use the “copy stream” option, so I believe the only way to get a transcoded stream URL currently is to set the web interface to specific settings and grab the URL from the network tab. But that doesn’t let you set arbitrary stuff like codecs, subtitle burn-in, and overriding what it thinks you support. So a simple app to construct the URL will make VRChat watch parties a lot easier.
I made some more tweaks to my Renovate bot which runs on a Woodpecker CI instance on my own hardware. Now it merges green PRs automatically. And I have it running every hour so all my software projects stay up-to-date and it responds quickly when I request a rebase.
I’ve also been cleaning up my Home Assistant automations and devices and trying to think up some useful things I can do for myself in an apartment where I can’t replace switches or the thermostat.
Realized last week that my fail2ban settings are too strict -- I get banned immediately if I visit my funkwhale (music server) domain without being logged in. In fact, I think much of my "downtime" might have actually just been me banning myself for 15 minutes now and then...
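Un-banning and whitelisting myself looks something like this (the jail name and IP are placeholders for my actual ones):

```
sudo fail2ban-client status funkwhale                         # list currently banned IPs for the jail
sudo fail2ban-client set funkwhale unbanip 203.0.113.5        # lift the ban
sudo fail2ban-client set funkwhale addignoreip 203.0.113.5    # never ban this address again
```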
I was thinking about getting rid of Grafana, which is overkill for my server, and replacing it with Logdy this weekend, but didn't get around to it.
I’ve set up Pangolin on my VPS and had no problems accessing docker services on my homelab remotely. However, I don’t know how I am supposed to SSH or SFTP to my homelab. Will I connect to my VPS instead? Would I need to break Pangolin or expose a vulnerability to do so?
Honestly I am in need of a proper networking tutorial at this point.
According to the Pangolin docs it supports raw TCP and UDP connections.
For SSH you can also try to use the VPS as a jump host like this:
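A minimal sketch, with vps.example.com and homelab standing in for your actual hosts:

```
# SSH to the homelab, hopping through the VPS as a jump host
ssh -J user@vps.example.com user@homelab
```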
I would never have found this on my own otherwise. I feel any amount of gratitude would fall short of compensating for how much time and effort it has saved me. Thank you regardless.
If possible, can you share how I can achieve the same effect with SFTP?
Either use the `sftp` command (it also supports the `-J` option), or use SSH tunneling. For example, I bind the homelab port 4533 to my local port 8080, then open a new shell and talk to the forwarded port. A rough sketch of both, with placeholder hostnames:
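```
# SFTP directly, using the VPS as a jump host:
sftp -J user@vps.example.com user@homelab

# or forward homelab port 4533 to local port 8080:
ssh -J user@vps.example.com -L 8080:localhost:4533 user@homelab
# ...then, in a new shell, talk to the forwarded port:
curl http://localhost:8080
```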
Thanks a ton!
Another glorious day of not having to worry about my nice and stable Debian server. It runs on an old Dell thin client I got on ebay, which isn’t much, but it gets the job done.
Everything is just peachy this week except that I’m still trying to sort out why I’m unable to access the internet when I’m connected to my Unraid WireGuard instance.
I am also finally ready to ditch my plex instance, too. Got some self-inflicted permissions issues sorted and it’s been smooth sailing for long enough that I’m ready to make the switch
I noticed that my link collector nears perfection (for my use case) - not much stuff required to be done lately. Which is a good thing.
Does anyone know how to get a static IP for their server when their ISP doesn’t allow it? I’ve found out how to use DuckDNS, but I want to set up my own DNS server usable from anywhere, and I’m pretty sure that requires a static IP.
Dynamic DNS is the usual way. Your ISP assigns the IP, so they’re the only ones who can make it static.
You might be able to do it with some VPN shenanigans, but generally dynamic DNS is what you want. It’s basically a script that runs on your server that will periodically update the IP on the DNS entries.
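With DuckDNS it can be as simple as a crontab entry hitting their documented update URL (domain and token here are placeholders):

```
# refresh the dynamic DNS entry every 5 minutes; leaving ip= empty uses your current public IP
*/5 * * * * curl -fsS "https://www.duckdns.org/update?domains=mylab&token=YOUR-TOKEN&ip=" >/dev/null
```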
I use DuckDNS, and thus have an xyz.duckdns.org domain that points to my server’s dynamic IPv4 address. I do not host my own DNS server; rather, I rely on a cheap website/mail/domain bundle. There I can enter my DuckDNS domain as a CNAME DNS entry, so every DNS lookup that is not for the remotely hosted website resolves through the DuckDNS domain and finally ends at my server.
I am not sure where you want to host your DNS server, or for what specific reason… If you don’t have a domain, you kind of don’t need to host a DNS server, and every domain provider I’ve had also offered a DNS server with it.
I wanted to run a Pi-hole to use as DNS so that I can be ad-free on any device. The problem is that on my computer or my phone, I need to put in a specific IP address when I want to change the DNS on that device.
If you drop the “from anywhere” part, you can set up a pihole with a static address that you can use from within your LAN, without any involvement from your ISP.
Read section “Assign your Raspberry Pi a static IP address” of raspberrypi.com/…/running-pi-hole-on-a-raspberry-…
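On recent Raspberry Pi OS (which uses NetworkManager), the gist is something like this; the connection name and addresses are examples for a typical 192.168.1.0/24 LAN:

```
# pin a static LAN address; DNS points at the Pi-hole running on this same box
sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 192.168.1.2/24 ipv4.gateway 192.168.1.1 \
  ipv4.dns 127.0.0.1 ipv4.method manual
sudo nmcli con up "Wired connection 1"
```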
Yes, exactly. Additionally, you probably don’t want to host your Pi-hole for external use (mobile phone or laptop in a different network) because of latency.
The delay imposed by going through your Pi-hole at home for each DNS request is going to be very unpleasant.
Rather, rely on an external DNS provider that offers Pi-hole-like functionality.
But this does not mean you can’t also host your Pi-hole for internal use. I use it not just for removing ads, but also to resolve local domains.
Honestly I never thought about the latency issue, but I probably won’t do it because of that now that you mention it. Much appreciated.
So I wouldn’t put Pi-hole on the internet, but instead set up a WireGuard VPN on your devices and access Pi-hole via that.
Then you can use the dynamic DNS hostname for WireGuard, and a direct IP for Pi-hole.
Alternatively, you could run AdGuard Home instead, as it supports being a DoT and DoH server, both of which work over a hostname on your devices (e.g. Android uses DoT for its secure DNS option).
Ooh wireguard sounds like a great option
Finally retired proxmox (actually I just removed pve packages and repos). Left the nfs export on there and hardened the whole thing.
Now I’m slowly working to get all my installs into layered ansible playbooks. Fortunately, there exists an incus ansible module.
With separate, mounted, persistent data, it’s getting very close to docker in easy deployment.
Getting ready to move out of the woods and back to civilization with my partner.
Not looking forward to having neighbors above or below me, but I’m very excited to have internet that doesn’t fucking suck.
Once we’re moved and a bit more settled, I’m gonna start really digging into self-hosting things. I have the hardware: a couple of HP mini PCs that will run Home Assistant and probably a server for various Docker things. Nextcloud and Immich seem to be the things I wanna use so far. I already have a NAS set up, but was having an issue with it not booting if a monitor isn’t plugged in. I bought a dummy plug for it but haven’t tried it out yet.
Will also be setting up an AI server for local LLM use. Hope to train one to fit my needs once I pull the trigger on a 3060 12GB card, but need to figure out what other parts I’ll use. Might upgrade my main rig and use the parts from that, or maybe I’ll buy an old Dell and fix it up. Not sure yet.
Lots of ideas, so little time lol.
Might want a bigger GPU. I have a 3080 Ti and the 12 GB is pretty limiting in terms of how large a model you can use. One thing I was hoping to do was essentially replace Google Assistant/Gemini, and I can’t realistically run a good model plus the STT/TTS off the one GPU.
That’s why I was considering training my own model if possible. I’ve been toying around with KoboldCpp and GPT4All, which both have RAG implementations.
My idea is to essentially chat with documentation and, as a separate use case, have it potentially be an AI search engine, but locally hosted. I do still prefer to search myself, but fuck man, searches have gotten so bad, and the KoboldCpp web lookup feature was pretty neat IMO.
So yeah, you’re not wrong. I’m just hoping that if I train it and/or give it documentation it can reference when answering, it will be suitable. Mostly AI has been good for me as kind of a rubber ducky when troubleshooting, and for helping me search for things when I have some specific question and don’t want “top 5 things vaguely related to your question” results.
Interesting, I mainly have used text generation webui which has a search support plug in, kinda nifty to use my searxng instance for it. It’s a bit finicky though.
Another thing to keep in mind then (apologies if this is just repeating info you already know): your total potential context size in relation to the model size, since both take up VRAM. Reading search results/pages can eat up a lot.
Yeah, I’m aware, but I appreciate the insight :) So far my local AI experience has been lackluster, so I’m hoping that training and RAG will make up for the context size at least a little. If it can answer accurately in the first place, it may not need as big of a context window.
If you haven’t tried using RAG in some form, I would recommend giving it a go. It’s pretty cool stuff; it helps make models answer more accurately based on the documentation you give them, though in my case I’ve had limited success. Tbh, ChatGPT has become my last resort when I just wanna get something done, but I don’t like using it due to the privacy concerns, not to mention the ethical issues I have with AI training in general by big tech.
How is SearXNG BTW? Would you say it’s good to host, or do you use a normal search engine more often? Or do you just use it for the AI search plugin?
I’ve actually been thinking about using it rather than DuckDuckGo, but was also hopeful the search index they are working on would be enough to satisfy my needs, or that a self-hosted AI-enabled search engine would work well enough when I need it.
I’ve completely replaced my searching with searxng, it is a little slower and ofc if I have an outage or something at home I have to go back to a different search temporarily but overall I like it a lot.
It was one of the first things I set up last year with my homelab, because I am attempting to degoogle a fair amount. The AI search stuff was just a fun test.
That’s rad, thanks for the info. I may follow suit, been trying to degoogle myself lately.
For sure, good luck and have fun :D
Found out Ghost 6.0 is out today and now it supports ActivityPub. It’s time to set up a new blog I’ll never write once more!
Oh exciting, finally!
Tried building a storage box out of a bunch of old parts, it looks alright and has all the parts I want. Doesn’t boot though, that’ll be a tomorrow thing :(
I’ve been hosting immich for a long time and finally decided to make a website so people could sign up for paid monthly accounts and upload their stuff to the server that I’m going to run anyway. Maybe it’ll make me beer money.
Ngl, this would skeeve me out. The chances of someone uploading CSAM, while slim, scare the fuck outta me.
Oh, wow, I was expecting a comment about privacy from the preview I got of your message, and then it went on to talk about risk to the provider instead! Yes, you’ve definitely identified a risk, and I hope to mitigate it with hopes, prayers, and access logs as anonymous as I can get while still identifying public, popular images.
Straight-up CSAM would be pretty brazen. You could probably reduce the chances to zero by just saying that if anything like that is uploaded, it will go straight to the police. You probably wouldn’t need to invade anybody’s privacy. The warning itself would set the bar.
I’m kind of surprised there’s not an open source model out there capable of identifying it. Cloudflare has it as a free service if you use them, but I’m not seeing anything you could just self-host.
Models need content for training.
Models that can identify things can generate things if run in reverse. (This is blatant oversimplification.)
Those models already exist. It’s one of the things that everybody’s worried about trying to stop.
With the amount of companies out there willing to fund stopping it I’m surprised somebody hasn’t stepped up to spend a few million dollars to train one specifically to catch people and make it available.
Turns out making money off of it is more important I guess.
Recently set up my own Nextcloud instance!
Nextcloud AIO?
Yup!
I recently started setting up a home server on a Raspberry Pi 5. Having issues with RAID1. I have 2 NVMe PCIe Gen 4 SSDs. There were power outages while writing. Now the second disk keeps randomly failing. Though I’m not sure if that’s the reason, because I don’t know what the RAID status was before the outage; also, the disk passes checks. The first time it degraded, it tried to recover and failed. I removed that disk from the RAID, recreated the partition, and ran some tests using nvme-cli. The disk looked healthy. I re-added the disk; the rebuild started and completed successfully. Then I wrote around 500 GB of data and it degraded again. At that point I took a break.
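The checks I’ve been running are along these lines (device names are examples; yours may differ):

```
cat /proc/mdstat                     # current array state
sudo mdadm --detail /dev/md0         # per-member status and event counts
sudo smartctl -a /dev/nvme1n1        # SMART health of the suspect disk
sudo dmesg | grep -iE 'nvme|md0'     # kernel messages around the drop-out
```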
There are two things I’m yet to try:
I’m frustrated and will appreciate any hints.
Started looking at the Gemini protocol over the weekend. (It’s like a newer version of Gopher.) Now I’m looking for a problem to fit the tool.
I started writing a science-fiction, choose-your-own-adventure short story to fit the platform, but that’ll take ages to finish.
I’m also eyeing a meshtastic client proxy. But you only get about 200 bytes per message so I’m not entirely sure it’s worth it.
The last thing that would be kind of cool is a ZIM tie-in: a canned Wikipedia that could be accessed via the Gemini protocol.
Got my HPE DL380 G9 networked and configured with hardware RAID 0 and Debian running under Proxmox for a test run (need more disks for RAID 5). Thing had an advanced iLO license intact from the previous owner.
Deployed a docker container of linkwarden to it to try out and it seems pretty nice.
I was also enjoying my stable homelab until… well, let’s just say I got cheap parts here, nice stuff there, and now I am building myself a new system. I started by stripping a case I got for 20 bucks and totally spray-painting it, got some nice black and white cables, and wanna display my NAS this time instead of hiding it in the cupboard. After that I will put in the parts I got, and then I need to migrate everything from the old NAS (well, hopefully I just put the drives in and it works). Soooo… Yeah 😀