Proxmox 9 released (www.proxmox.com)
from beerclue@lemmy.world to selfhosted@lemmy.world on 06 Aug 00:48
https://lemmy.world/post/34018219

Proxmox 9 was released, based on Debian 13 (Trixie), with some interesting new features.

Here are the highlights: pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0

Upgrade from 8 to 9 readme: pve.proxmox.com/wiki/Upgrade_from_8_to_9

Known issues & breaking changes: pve.proxmox.com/wiki/Roadmap#9.0-known-issues

#selfhosted

kebab@endlesstalk.org on 06 Aug 01:00 next collapse

The new mobile interface is lit 🔥. Finally usable.

billygoat@catata.fish on 06 Aug 01:41 collapse

Fuck, I just left for a month away, and I hate doing major upgrades while remote.

ikidd@lemmy.world on 06 Aug 02:33 next collapse

Probably for the best. Upgrading right at the first release hasn't had a stellar record.

Moonrise2473@feddit.it on 06 Aug 05:29 collapse

Exactly. For example, I missed the note that updating TrueNAS to the latest version disables and hides all the virtual machines (theoretically they can get migrated to the new engine, but it gave me some weird error; luckily TrueNAS can be downgraded easily).

Now, 3 months after the first release of the update, those virtual machines aren't disabled and hidden anymore.

SidewaysHighways@lemmy.world on 06 Aug 06:17 collapse

Core or Scale? Are they getting rid of Core??

Moonrise2473@feddit.it on 06 Aug 06:30 collapse

Scale. They got rid of the KVM emulator in the last release, and I was devastated to see all my VMs gone. The "migration" consists of migrating the disk image to the new directory, then making a new VM… IF you knew that BEFORE the update and took note of all the settings, because the old VM menu is gone!

But it's also clear that Core is on life support.

SidewaysHighways@lemmy.world on 06 Aug 07:21 collapse

poop.

Well, my TrueNAS is a VM on Proxmox; I assume I'll figure something out when it's time lol

Zanathos@lemmy.world on 07 Aug 00:58 collapse

As of the last update released on August 1st, the "old" VMs are now visible again. The latest Electric Eel chain also merged all Core features into Scale, so the jump should not be as drastic any longer. I’ve always lived on Scale, but I assume you could try backing up your config and spinning up a new Scale VM and restoring the backup to it. No matter how you dice it though, it will be spicy!

lka1988@sh.itjust.works on 06 Aug 15:05 collapse

Stick with 8 then, until we know it’s stable.

etchinghillside@reddthat.com on 06 Aug 01:42 next collapse

Not sure I want to check how far behind I am. How rough are these upgrades? I've got most things under Terraform and Ansible but am still procrastinating under the fear of losing a weekend rejiggering things.

CmdrShepard49@sh.itjust.works on 06 Aug 01:53 next collapse

I’d also like to know.

I built a new machine several months back with PVE and got the hang of it, but it's been "set it and forget it" since then due to everything running smoothly. Now I don't remember half the things I learned and don't want to get in over my head running into issues during a major upgrade. I definitely do want the ability to expand my ZFS pool, so I will need to bite the bullet eventually.

possiblylinux127@lemmy.zip on 06 Aug 02:03 next collapse

It will vary but for me it was smooth

sandwichsaregood@lemmy.world on 06 Aug 03:34 next collapse

Previous 3 major release upgrades I’ve done were smooth, ymmv

phanto@lemmy.ca on 06 Aug 04:19 next collapse

I just did three nodes this evening from 8.4.1 to 9, no issues other than a bit of farting around with my sources.list files.
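
For anyone curious, the fiddling was roughly this kind of thing, assuming the classic one-line sources format rather than the new deb822 .sources files (paths may differ on your setup):

# switch the Debian and Proxmox repos from bookworm to trixie
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list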

Not noticing anything significant, but I haven’t tried the mobile interface yet.

SheeEttin@lemmy.zip on 06 Aug 04:32 collapse

I just did one of my two nodes. Easy upgrade, looks good so far.

coffeetastesbadlikecoffee@sh.itjust.works on 06 Aug 05:42 next collapse

This is awesome, I am going to immediately get a test cluster set up when I get to work. Snapshots with FC support were the only major thing (apart from Veeam support) holding us back from switching to Proxmox. The HA improvements also sound nice!

slazer2au@lemmy.world on 06 Aug 06:15 collapse

Testing in production? Brave move mate. :)

BlueEther@no.lastname.nz on 06 Aug 06:23 next collapse

A job for the weekend, I guess. Just done all the prerequisites and only have a warning for DKMS.

TheUnicornOfPerfidy@feddit.uk on 06 Aug 06:58 next collapse

As a person who just installed proxmox for the first time a couple of weeks ago, does this allow me to fix some of my mistakes and convert VMs to LXCs?

SidewaysHighways@lemmy.world on 06 Aug 07:20 next collapse

i don’t think so

CmdrShepard49@sh.itjust.works on 06 Aug 11:54 next collapse

You could just start over if you don't have much invested in your current setup.

TheUnicornOfPerfidy@feddit.uk on 06 Aug 19:30 collapse

I’m in too deep. I’m trying this script. Fingers crossed

JPAKx4@lemmy.blahaj.zone on 06 Aug 22:56 collapse

Depending on the services, you should be able to make a backup and restore without needing to delete the real version until you’re sure everything is working

JPAKx4@lemmy.blahaj.zone on 06 Aug 20:30 collapse

As someone who also started with Proxmox fairly recently, I found that the community has these really cool scripts that you can use to get started. Obviously you're running bash scripts on your main node for some of them, so there are risks involved with that, but in my experience it's been great.

littleomid@feddit.org on 06 Aug 07:24 next collapse

For beginners here: do not run apt upgrade!! Read the documentation on how to upgrade properly.
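
The documented flow is roughly this shape (the upgrade wiki is authoritative; check it for the exact repo changes):

pve8to9 --full     # readiness check; fix anything it flags first
# point the Debian and Proxmox repos at trixie, then:
apt update
apt dist-upgrade   # full distribution upgrade, NOT a plain 'apt upgrade'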

beerclue@lemmy.world on 06 Aug 07:42 collapse

It’s always good to read the docs, but I often skip them myself :)

They have this nifty tool called pve8to9 that you could run before upgrading, to check if everything is healthy.

I have a 3 node cluster, so I usually migrate my VMs to a different node and do my maintenance then, with minimal risks.
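
Migration is a one-liner per guest; something like this, where the IDs and node name are just placeholders:

qm migrate 101 pve2 --online     # live-migrate VM 101 to node pve2
pct migrate 201 pve2 --restart   # containers use restart-mode migration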

drkt@scribe.disroot.org on 06 Aug 13:48 collapse

pve8to9 --full

Damage@feddit.it on 06 Aug 09:29 next collapse

ZFS now supports adding new devices to existing RAIDZ pools with minimal downtime.

Yes!!
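
For context, RAIDZ expansion in OpenZFS 2.3 is a single attach per added disk; the pool, vdev, and device names here are just placeholders:

zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK   # widen the existing raidz1 vdev
zpool status tank                                        # reports expansion progress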

non_burglar@lemmy.world on 06 Aug 13:47 collapse

Edit2: the following is no longer true, so ignore it.

Why do you want this? There are very few valid use cases for it.

Edit: this is a serious question. Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev. You'll not only have old data distributed in the old vdev layout until you copy it back, but you'll also now have a mix of I/O requests for the old and new vdev layouts, which will kill performance.

Not to mention that the metadata is now stored for the new layout, which means reads from the old layout will cause reads and writes on both layouts. It's not actually something anyone should want, unless they are really, really stuck for expansion.

And we’re talking about a hypervisor here, so performance is likely a factor.

Jim Salter did a couple writeups on this.

Saik0Shinigami@lemmy.saik0.com on 06 Aug 14:17 collapse

Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev.

Yes it does. ZFS does a full resilver after the addition. Jim Salter’s write ups are from 4 years ago. Shit changes.

Edit: and even if it didn't… it's trivial to write a script that rewrites all the data to move it into the new structure. To say there are no valid cases, when even in 2021 there was an answer to the problem, is a bit crazy.
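
A naive sketch of that kind of rewrite, assuming bash, enough free space, no snapshots you care about, and nothing writing to the files while it runs:

# copy each file and move it back so its blocks get rewritten under the new layout
find /tank/data -type f -print0 | while IFS= read -r -d '' f; do
    cp -a -- "$f" "$f.tmp" && mv -- "$f.tmp" "$f"
done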

non_burglar@lemmy.world on 06 Aug 14:34 collapse

Whoah, I see this has indeed changed. Thanks.

adavis@lemmy.world on 07 Aug 10:03 collapse

Wait till you hear about ZFS AnyRaid, an upcoming feature to make ZFS more flexible with mixed-size drives.

wreckedcarzz@lemmy.world on 06 Aug 12:12 next collapse

Yay, it only took 2 hours and the help of an LLM, since the upgrade corrupted my LVM metadata! A little bit of post-upgrade cleanup and verifying everything works. Now I can go to sleep (it's 5am).

Wasn't that bad, but not exactly relaxing. And when my VMs threw a useless error ('can't start, need manual fix') I might have slightly panicked…

Appoxo@lemmy.dbzer0.com on 06 Aug 18:29 next collapse

Not something that sounds production ready lol

potpotato@lemmy.world on 07 Aug 11:13 next collapse

Started a system upgrade at 3am…you ok?

wreckedcarzz@lemmy.world on 07 Aug 12:19 collapse

I’m always up late (it’s 5:19a), though a good bit more than usual lately. But I did the upgrade because I was anxious, had nothing to do, and there were no users utilizing the machine.

nevetsg@aussie.zone on 07 Aug 12:34 collapse

Thanks for posting this and reminding me to never go back to Proxmox. My Proxmox server killed itself and all VMs twice before I moved on to Hyper-V.

wreckedcarzz@lemmy.world on 07 Aug 20:54 collapse

Oof. I have my VMs getting backed up to another machine so theoretically (untested) I should be able to recover with less than a day of data loss (very minimal for this box). The annoying part would be getting it hooked up to a monitor and keyboard, since it’s under an end-table in the living room.

This is the first issue in like… 15 months? Hopefully it stays rather uneventful.

Sunny@slrpnk.net on 06 Aug 21:47 next collapse

Anyone got screenshots of the new mobile UI?

possiblylinux127@lemmy.zip on 06 Aug 23:48 collapse

www.youtube.com/watch?v=yJsReZLcbHo

Sunny@slrpnk.net on 07 Aug 06:25 collapse

Looks neat!

[deleted] on 07 Aug 01:59 next collapse

.

mio@lemmy.mio19.uk on 07 Aug 01:59 next collapse

I am telling myself that updating remotely is not a good idea

Oisteink@feddit.nl on 07 Aug 09:59 collapse

Keep on telling yourself that, but most of us aren't at a physical console anyway.

mio@lemmy.mio19.uk on 07 Aug 10:19 collapse

My duplicate comments were caused by my slow home server. I really should upgrade my hardware

[deleted] on 07 Aug 02:01 next collapse

.

mio@lemmy.mio19.uk on 07 Aug 02:01 collapse

I am telling myself that updating remotely is not a good idea

bigkahuna1986@lemmy.ml on 07 Aug 02:18 next collapse

My work computer is Debian and I'm so looking forward to the upgrade. Just gotta contain myself for a few weeks until a .1-type update is released.

ssdfsdf3488sd@lemmy.world on 08 Aug 01:34 collapse

There is no need, I think. I did all 12 nodes of my cluster at home plus all the work Proxmox hosts with no issues.

ipkpjersi@lemmy.ml on 08 Aug 02:17 collapse

It might be safer to wait; one of my IRL friends ran into an issue, and I saw some others post about it on the Proxmox forums: TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!

I think I didn’t run into that error because I flattened my LVM kinda, but if I hadn’t customized my setup maybe I would have run into that too.

ssdfsdf3488sd@lemmy.world on 08 Aug 06:02 collapse

It's in the release upgrade notes. There is one command to run if you are doing LVM. All my stuff is ZFS or Ceph, so I never ran into it.

ipkpjersi@lemmy.ml on 08 Aug 11:05 collapse

I took a look but I’m not seeing any command for LVM mentioned anywhere?

ssdfsdf3488sd@lemmy.world on 08 Aug 13:39 next collapse

Sorry, it might be from running the pve8to9 program to verify system readiness.

ssdfsdf3488sd@lemmy.world on 08 Aug 13:41 collapse

Actually no, section 4.5.2 in the upgrade instructions talks about LVM adjustments needed.

ipkpjersi@lemmy.ml on 08 Aug 15:21 collapse

and the pve8to9 checklist script suggests to run this migration script if necessary

Ah, okay that makes more sense.

This is going to affect many more people who didn’t read it, then.

Although, that seems to only affect guests and not hosts?

The host machine becomes unbootable IIRC, so I think it’s something else?

beerclue@lemmy.world on 07 Aug 05:38 next collapse

My "servers" are headless, in the basement, so even if I’m home, it’s still remote :D

HiTekRedNek@lemmy.world on 07 Aug 10:20 collapse

IPMI + BMC are wonderful things.

ipkpjersi@lemmy.ml on 07 Aug 21:29 collapse

I tell myself that every time, but I mean, I still end up doing it every time anyway lmao

edit: Just did it, it went well.