Proxmox 9 released
(www.proxmox.com)
from beerclue@lemmy.world to selfhosted@lemmy.world on 06 Aug 00:48
https://lemmy.world/post/34018219
Proxmox 9 was released, based on Debian 13 (Trixie), with some interesting new features.
Here are the highlights: pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0
Upgrade from 8 to 9 readme: pve.proxmox.com/wiki/Upgrade_from_8_to_9
Known issues & breaking changes: pve.proxmox.com/wiki/Roadmap#9.0-known-issues
threaded - newest
The new mobile interface is lit 🔥. Finally usable
Fuck, I just left for a month away, and I hate doing major upgrades while remote.
Probably for the best. Upgrades on the first release haven't had a stellar record
Exactly. For example, I missed the note that updating TrueNAS to the latest version disables and hides all the virtual machines (theoretically they can get migrated to the new engine, but it gave me some weird error; luckily TrueNAS can be downgraded easily).
Now, 3 months after the first release of the update, those virtual machines aren't disabled and hidden anymore
core or scale? are they getting rid of core??
Scale. They got rid of the KVM emulator in the last release, and I was devastated to see all my VMs gone. The "migration" consists of you migrating the disk image to the new directory, then making a new VM… IF you knew that BEFORE the update and took note of all the settings, because the old VM menu is gone!
But it's also clear that Core is on life support
poop.
well my TrueNAS is a VM on Proxmox, i assume I'll figure something out when it is time lol
As of the last update released on August 1st, the "old" VMs are now visible again. The latest Electric Eel chain also merged all Core features into Scale, so the jump should not be as drastic any longer. I've always lived on Scale, but I assume you could try backing up your config and spinning up a new Scale VM and restoring the backup to it. No matter how you dice it though, it will be spicy!
Stick with 8 then, until we know it's stable.
Not sure I want to check how far behind I am. How rough are these upgrades? I've got most things under Terraform and Ansible but am still procrastinating under the fear of losing a weekend rejiggering things.
I'd also like to know.
I built a new machine several months back with PVE and got the hang of it, but it's been "set it and forget it" since then due to everything running smoothly. Now I don't remember half the things I learned and don't want to get in over my head running into issues during a major upgrade. I definitely do want the ability to expand my ZFS pool, so I will need to bite the bullet eventually.
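For what it's worth, recent OpenZFS (2.3+, which newer Proxmox releases ship) supports growing a raidz vdev one disk at a time. A sketch only; the pool name, vdev name, and disk path below are placeholders for your own setup:

```shell
# Check the current layout first (raidz expansion needs OpenZFS 2.3+)
zpool status tank

# Attach one new disk to an existing raidz vdev; 'tank' and 'raidz1-0'
# are placeholders for your actual pool and vdev names
zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK

# The expansion runs in the background; watch its progress here
zpool status tank
```

Note the expansion reflows existing data onto the new layout in the background, but existing blocks keep their old parity ratio until rewritten.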
It will vary but for me it was smooth
Previous 3 major release upgrades I've done were smooth, ymmv
I just did three nodes this evening from 8.4.1 to 9, no issues other than a bit of farting around with my sources.list files.
Not noticing anything significant, but I haven't tried the mobile interface yet.
I just did one of my two nodes. Easy upgrade, looks good so far.
This is awesome, I am going to immediately get a test cluster set up when I get to work. Snapshots with FC support was the only major thing (apart from Veeam support) holding us back from switching to Proxmox. The HA improvements also sound nice!
Testing in production? Brave move mate. :)
A job for the weekend, I guess. Just done all the prerequisites and only have a warning for DKMS.
As a person who just installed proxmox for the first time a couple of weeks ago, does this allow me to fix some of my mistakes and convert VMs to LXCs?
i don't think so
You could just start over if you don't have much invested into your current setup.
I'm in too deep. I'm trying this script. Fingers crossed
Depending on the services, you should be able to make a backup and restore without needing to delete the real version until you're sure everything is working
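A rough sketch of that backup-then-restore flow using Proxmox's own CLI tools (the VM ID, storage names, and archive path below are placeholders; the web UI can do the same thing):

```shell
# Back up VM 100 to a storage named 'backups'; snapshot mode keeps it running
vzdump 100 --storage backups --mode snapshot

# Restore the dump as a NEW VM ID (101), leaving the original VM untouched
# until you've verified the restored copy works
qmrestore /path/to/vzdump-qemu-100-archive.vma.zst 101 --storage local-lvm
```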
As someone who also started proxmox fairly recently, I found that the community has these really cool scripts that you can use to get started. Obviously you're running bash scripts on your main node for some, so there are risks involved with that, but in my experience it's been great.
For beginners here: do not run apt upgrade!! Read the documentation on how to upgrade properly.
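For reference, the documented path (per the Upgrade_from_8_to_9 wiki page linked in the post) looks roughly like the sketch below; the exact repository file names can differ if you've customized them:

```shell
# 1. Run the checker and fix anything it flags before proceeding
pve8to9 --full

# 2. Point APT at Debian Trixie / the PVE 9 repositories
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# 3. Do a full distribution upgrade -- NOT a plain 'apt upgrade'
apt update
apt dist-upgrade
```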
It's always good to read the docs, but I often skip them myself :)
They have this nifty tool called pve8to9 that you can run before upgrading, to check if everything is healthy. I have a 3-node cluster, so I usually migrate my VMs to a different node and do my maintenance then, with minimal risk.
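That migrate-then-maintain flow can be done from the CLI too; a sketch, with the VM ID and target node name as placeholders:

```shell
# Live-migrate VM 100 to node 'pve2' before doing maintenance on this node
qm migrate 100 pve2 --online
```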
pve8to9 --full
Yes!!
Edit2: the following is no longer true, so ignore it.
Why do you want this? There are very few valid use cases for it.
Edit: this is a serious question. Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev. You'll not only have old data distributed on the old vdev layout until you copy it back, but you'll also now have a mix of io requests for old and new vdev layouts, which will kill performance.
Not to mention that the metadata is now stored for the new layout, which means reads from the old layout will cause rw on both layouts. It's not actually something anyone should want, unless they are really, really stuck for expansion.
And we're talking about a hypervisor here, so performance is likely a factor.
Jim Salter did a couple writeups on this.
Yes it does. ZFS does a full resilver after the addition. Jim Salter's write-ups are from 4 years ago. Shit changes.
Edit: and even if it didn't… it's trivial to write a script that rewrites all the data to move it into the new structure. To say there are no valid cases, when even in 2021 there was an answer to the problem, is a bit crazy.
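A crude version of such a rebalance script, with the pool and dataset names ('tank/data') as placeholders: send/receive rewrites every block, so the copy lands on the expanded vdev layout. Verify the copy before destroying anything:

```shell
# Rewrite a dataset so its blocks are re-laid-out across the expanded vdev
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new

# After verifying the new copy, swap the datasets
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data
zfs destroy -r tank/data.old
```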
Whoah, I see this has indeed changed. Thanks.
Wait till you hear about ZFS anyraid, an upcoming feature to make ZFS more flexible with mixed-size drives.
Yay, it only took 2 hours and the help of an LLM, since the upgrade corrupted my LVM metadata! A little bit of post-upgrade cleanup and verifying everything works. Now I can go to sleep (it's 5am).
Wasn't that bad, but not exactly relaxing. And when my VMs threw a useless error ("can't start, manual fix needed") I might have slightly panicked…
Not something that sounds production ready lol
Started a system upgrade at 3am… you ok?
I'm always up late (it's 5:19a), though a good bit more than usual lately. But I did the upgrade because I was anxious, had nothing to do, and there were no users utilizing the machine.
Thanks for posting this and reminding me to never go back to Proxmox. My Proxmox server killed itself and all VMs twice before I moved on to Hyper-V.
Oof. I have my VMs getting backed up to another machine so theoretically (untested) I should be able to recover with less than a day of data loss (very minimal for this box). The annoying part would be getting it hooked up to a monitor and keyboard, since it's under an end-table in the living room.
This is the first issue in like… 15 months? Hopefully it stays rather uneventful.
Anyone got screenshots of the new mobile UI?
www.youtube.com/watch?v=yJsReZLcbHo
Looks neat!
.
I am telling myself that updating remotely is not a good idea
Keep on telling yourself that, but most of us aren't on a physical console anyways
My duplicate comments were caused by my slow home server. I really should upgrade my hardware
My work computer is Debian and I'm so looking forward to the upgrade. Just gotta contain myself for a few weeks until a 0.1-type update is released.
There is no need, I think. I did all 12 nodes of my cluster at home, plus all the work Proxmox hosts, with no issues.
It might be safer to wait; one of my IRL friends ran into an issue, and I saw some others post about it on the Proxmox forums:
TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!
I think I didn't run into that error because I'd kinda flattened my LVM, but if I hadn't customized my setup, maybe I would have run into that too.
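For anyone hitting that exact message: the usual LVM-side recovery for a failed thin-pool check (not Proxmox-specific; back up the volume group metadata first) is a thin-pool metadata repair, roughly:

```shell
# From a rescue/live environment, with the thin pool deactivated:
lvconvert --repair pve/data

# Then try activating the pool again
lvchange -ay pve/data
```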
It's in the release upgrade notes. There is one command to run if you are using LVM. All my stuff is ZFS or Ceph, so I never ran into it.
I took a look but I'm not seeing any command for LVM mentioned anywhere?
Sorry, it might be from running the pve8to9 program to verify system readiness.
Actually no, section 4.5.2 in the upgrade instructions talks about LVM adjustments needed.
Ah, okay that makes more sense.
This is going to affect many more people who didnāt read it, then.
Although, that seems to only affect guests and not hosts?
The host machine becomes unbootable IIRC, so I think it's something else?
My "servers" are headless, in the basement, so even if I'm home, it's still remote :D
IPMI + BMC are wonderful things.
I tell myself that every time, but I mean, I still end up doing it every time anyway lmao
edit: Just did it, it went well.