How to Get Started Using Virtual Machine Manager in Linux (Posted in response to VirtualBox and VMware) (www.maketecheasier.com)
from possiblylinux127@lemmy.zip to linux@lemmy.ml on 27 Apr 2024 23:20
https://lemmy.zip/post/14416287

If you run Windows, make sure to install the virtio drivers:

docs.fedoraproject.org/…/creating-windows-virtual…

#linux


Frellwit@lemmy.world on 28 Apr 2024 01:36

Is there an equivalent or something similar to “Use Host I/O Cache” that VirtualBox has? Last time I tried virt-manager, the install time of the VM was incredibly slow because of the terrible write speed to my HDD. VirtualBox fixes that issue with the host I/O cache setting.

aodhsishaj@lemmy.world on 28 Apr 2024 02:01
d3Xt3r@lemmy.nz on 28 Apr 2024 10:24

Usually setting the cache mode to “none” gives the best performance, assuming you’re using the virtio interface instead of SATA/SCSI. Using SATA/SCSI is a common mistake newbies make when installing Windows, because virt-manager defaults to it, and it gives poor performance. The same goes for the network, btw: you’d want the virtio network interface instead of the emulated NIC. So before you install a Windows guest, make sure you change both of those interfaces.

After changing the hardware interfaces, you’ll also need to supply the [virtio drivers](github.com/virtio-win/…/README.md) to the Windows setup (via the virtio driver ISO) when prompted.

But if you’ve already installed Windows, you’ll need to install all the virtio drivers first and then update the interfaces after you’ve powered off the VM.

And in case you were wondering, this isn’t an issue with Linux guests, since virt-manager defaults to virtio hardware, and drivers aren’t an issue either.
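
For reference, this is roughly what those recommendations look like at the libvirt level. A sketch using the libvirt Python bindings; the VM name, sizes and disk path are made-up placeholders, and virt-manager exposes the same cache and model settings in its hardware details pane:

```python
import libvirt

# Hypothetical domain definition: a virtio disk with host caching off and a
# virtio NIC instead of an emulated SATA disk / e1000 card. Name, sizes and
# paths below are placeholders for illustration only.
domain_xml = """
<domain type='kvm'>
  <name>win-guest-example</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <!-- virtio disk with cache='none', usually the fastest combination -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/win-guest-example.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- virtio NIC instead of the emulated one -->
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <!-- for a Windows install you would also attach the virtio-win driver
         ISO as a cdrom so setup can load the storage driver -->
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(domain_xml)  # registers the VM; does not start it
print("Defined domain:", dom.name())
conn.close()
```

An existing VM’s disk and NIC can be switched the same way by editing its XML (virsh edit) while it is powered off, which matches the “install the drivers first, then change the interfaces” order described above.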

sorter_plainview@lemmy.today on 28 Apr 2024 08:12

What is the difference between Virtual Machine Manager and Proxmos?

boredsquirrel@slrpnk.net on 28 Apr 2024 08:47

Proxmox is an entire distro just for running virtual machines, with a web UI. Virt-manager is a program you install on a normal machine

sorter_plainview@lemmy.today on 28 Apr 2024 10:59

Aah… isn’t that what’s called a bare-metal OS?

thedeadwalking4242@lemmy.world on 28 Apr 2024 11:57

A bare metal OS is an OS running outside of a hypervisor. Virt-manager is a class 1 hypervisor that allows you to host guest operating systems (run VMs).

sorter_plainview@lemmy.today on 28 Apr 2024 12:14

Hey, sorry for the confusion. What I meant is: Proxmos is considered a bare metal hypervisor and virt-manager is a hypervisor inside an OS, right?

thedeadwalking4242@lemmy.world on 28 Apr 2024 12:18

Technically no, both use KVM virtualization, which is included in the Linux kernel, so both are “bare metal hypervisors”, otherwise known as class 1 hypervisors. Distinctions can be confusing 😂

sorter_plainview@lemmy.today on 28 Apr 2024 13:14

Oh dear… I really thought I understood what bare metal means… But it looks like this is beyond my tech comprehension.

boredsquirrel@slrpnk.net on 28 Apr 2024 15:24

Bare metal is “kernel running on hardware” I think. KVM is a kernel feature, so the virtualization is done in kernel space (?) and on the hardware.

sorter_plainview@lemmy.today on 28 Apr 2024 18:12

Well this can be a starting point of a rabbit hole. Time to spend hours reading stuff that I don’t really understand.

boredsquirrel@slrpnk.net on 28 Apr 2024 18:42

TL;DR: use what is in the kernel, without strange out-of-tree kernel modules like VirtualBox needs, and use KVM; i.e. on Fedora: virt-manager, qemu, qemu-kvm.

possiblylinux127@lemmy.zip on 28 Apr 2024 16:27

*Proxmox

Virt-manager is an application that connects to libvirtd in the back end. Think of it as a web browser or file manager for VMs.

Proxmox VE is an entire OS built for virtualization on dedicated servers. It also has support for clusters and live VM migration between hosts. It is in essence a server OS designed to run in a data center (or homelab) of some kind. It is sort of equivalent to vSphere, but they charge you per CPU socket for enterprise support and stability.
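
To make the “connects to libvirtd” part concrete, here is a minimal sketch using the libvirt Python bindings (assuming the usual qemu:///system connection and a user with libvirt access). Listing domains like this is essentially what virt-manager does behind its GUI:

```python
import libvirt

# virt-manager and virsh both talk to the same libvirtd daemon.
conn = libvirt.open("qemu:///system")
print("Hypervisor driver:", conn.getType())

# Enumerate every defined VM and report whether it is running.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "shut off"
    print(f"{dom.name():<24} {status}")

conn.close()
```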

sorter_plainview@lemmy.today on 28 Apr 2024 18:10

Well, this thread has clearly established that I neither have technical knowledge nor pay attention to spelling…

Jokes aside, this is a good explanation. I have seen admins using vSphere and it kind of makes sense. I’m just starting to scratch the surface of homelabbing, and have started out with a Raspberry Pi. My dream is a full-fledged, self-sustaining homelab.

possiblylinux127@lemmy.zip on 28 Apr 2024 18:15

If you ever want to build a Proxmox cluster, go for 3-5 identical machines. I have 3 totally different machines and it creates headaches.

DrWeevilJammer@lemmy.ml on 28 Apr 2024 20:28

What kind of headaches are you having? I’ve been running two completely different machines in a cluster, with a Pi as a QDevice to keep quorum, and it’s been incredibly stable for years.

possiblylinux127@lemmy.zip on 29 Apr 2024 01:02

One device decided to be finicky and the biggest storage array is all on one system.

It really sucks that you can’t do HA with Btrfs. It is more reliable than ZFS due to licensing.

sorter_plainview@lemmy.today on 29 Apr 2024 03:37

What’s the licensing part you mentioned? Can you elaborate a little?

possiblylinux127@lemmy.zip on 29 Apr 2024 05:40

OpenZFS is not GPL-compatible, so it can never be baked into the kernel in the same way Btrfs can. I’ve run into issues where I’ve needed to downgrade the kernel, but if I do, the system won’t boot.

Btrfs also doesn’t need any special software to work as it is completely native and baked in.

Kazumara@discuss.tchncs.de on 28 Apr 2024 20:49

They both use KVM in the end, so they are both Type 1 hypervisors.

Loading the KVM kernel module turns your kernel into the bare metal hypervisor.
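
If you want to verify that on your own machine: /dev/kvm only appears once the KVM modules are loaded. A small Python sketch (plain standard library, nothing specific to this thread) to check:

```python
import os

# /dev/kvm exists only when the kvm module (plus kvm_intel or kvm_amd)
# is loaded, i.e. when the kernel itself can act as the hypervisor.
if os.path.exists("/dev/kvm"):
    print("KVM is available: guests get hardware-assisted virtualization.")
else:
    print("No /dev/kvm: check that virtualization is enabled in the BIOS/UEFI "
          "and that the kvm modules are loaded.")

# The loaded modules are listed in /proc/modules.
with open("/proc/modules") as f:
    kvm_modules = [line.split()[0] for line in f if line.startswith("kvm")]
print("Loaded KVM modules:", ", ".join(kvm_modules) or "none")
```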

PlexSheep@infosec.pub on 28 Apr 2024 19:26

It’s really just Debian with more packages preinstalled, but yeah, the idea is that you have an OS that has the primary purpose of virtualizing other machines.

Kazumara@discuss.tchncs.de on 28 Apr 2024 20:39

It’s really just Debian with more packages preinstalled

And a custom kernel with ZFS support

PlexSheep@infosec.pub on 29 Apr 2024 08:26

Oh right, they ship a modified kernel; I didn’t think of that. I also didn’t know about the ZFS thing; my homelab uses btrfs.

Kazumara@discuss.tchncs.de on 29 Apr 2024 11:21

I’m also using btrfs, but I originally wanted ZFS before seeing that it was only available through FUSE on my distro.

That’s why I even noticed ZFS was one of the features of Proxmox :)

possiblylinux127@lemmy.zip on 28 Apr 2024 16:22

Apples and oranges really. The underlying tech is the same, but Proxmox is an entire platform.

ProtonBadger@lemmy.ca on 28 Apr 2024 18:26

(Posted in response to VirtualBox and VMware)

What? Is there some new controversy going on?

ikidd@lemmy.world on 28 Apr 2024 23:04

Oh, people recommend VirtualBox all the time and it’s awful.

ProtonBadger@lemmy.ca on 29 Apr 2024 00:35

Ah well, I’ve used VirtualBox, VMware and KVM, and I found them all useful for my purposes. VMware is very slick and has an edge on easy graphics acceleration for Windows guests, but since they’re now owned by Broadcom, that might become a problem.

I’m happy with VirtualBox on my desktop and KVM on a few servers. I don’t really care to take sides.

lord_ryvan@ttrpg.network on 25 May 2024 23:37

What’s awful about it? Genuinely asking

ikidd@lemmy.world on 26 May 2024 02:54

Performance compared to KVM is really poor. And it handles graphics poorly.

lord_ryvan@ttrpg.network on 26 May 2024 13:29

I’ve tried QEMU for this; I’ll try KVM one of these days.