Bytedance Proposes "Parker" For Linux: Multiple Kernels Running Simultaneously (www.phoronix.com)
from KarnaSubarna@lemmy.ml to linux@lemmy.ml on 23 Sep 17:28
https://lemmy.ml/post/36567119

#linux


atzanteol@sh.itjust.works on 23 Sep 18:27

I remember partitioned systems being a big thing in like the ’90s and ’00s, since those were the days you would pour $$$$ into large systems. But I thought the “cattle not pets” movement did away with that? Are we back to the days of “big iron”?

Lydia_K@lemmy.world on 23 Sep 19:23

What do you think all those cattle run on?

Just big-ass servers with tons of cores and RAM.

atzanteol@sh.itjust.works on 23 Sep 20:01

I figured it was cattle all the way down. Even if they’re big. Especially when you have thousands of them.

Though maybe these setups can be scripted/automated to be easy to replicate and reproduce?

Ithral@lemmy.blahaj.zone on 23 Sep 22:32

In essence, yes. For example, VMware ESXi hosts can be managed by a single image with customizations made at the cluster level. Give me PXE and I can provision you n hosts in about the same time as one host.

muzzle@lemmy.zip on 23 Sep 19:25

And the wheel of reincarnation forever keeps turning.

fruitycoder@sh.itjust.works on 25 Sep 19:06

Constant back and forth. Moving things closer together increases efficiency; moving them apart increases resiliency.

So we are constantly shuffling between the two for different workloads to optimize for the given thing.

That said, I see this as an extension of the cattle idea, making even the kernel a thing to be raised and culled on demand. This matters a lot more with heavy workloads like HPC and AI, where a process can be measured in days or weeks and stable uptime is paramount, versus the stateless work k8s was intended for (I say intended because you can k8s all the things now, but it needs extensions to handle the new lifecycles).

tla@lemmy.world on 23 Sep 21:04

How is this better than a hypervisor OS running multiple VMs?

avidamoeba@lemmy.ca on 23 Sep 21:08

I imagine there are some overhead savings but I don’t know what. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without special passthrough features?

deadcade@lemmy.deadca.de on 24 Sep 15:46

Saving on some overhead, because the hypervisor is skipped. Things like disk IO to physical disks can be more efficient using multikernel (with direct access to HW) than VMs (which have to virtualize at least some components of HW access).

With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.
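To put the described flow in order, here’s a purely illustrative Python sketch; every function below is a hypothetical placeholder for a step in the proposal, not an interface from the actual patch set:

```python
# Purely illustrative: each function is a hypothetical placeholder for a step
# described above, not a real interface from the multikernel/Parker patches.

def boot_spare_kernel(image: str, cpus: list[int], mem_gib: int) -> str:
    """Boot a second kernel on CPUs and memory reserved for it."""
    print(f"booting {image} on CPUs {cpus} with {mem_gib} GiB reserved")
    return "kernel-new"

def hand_over_processes(src: str, dst: str) -> None:
    """Migrate running processes and their resources to the new kernel."""
    print(f"handing processes over: {src} -> {dst}")

def retire_kernel(kernel: str) -> None:
    """Shut down the old kernel once it no longer owns any processes."""
    print(f"shutting down {kernel}")

# The zero-downtime update described above: boot new, hand over, retire old.
new_kernel = boot_spare_kernel("vmlinuz-new", cpus=[60, 61, 62, 63], mem_gib=16)
hand_over_processes("kernel-old", new_kernel)
retire_kernel("kernel-old")
```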

It will definitely take some time for any enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.

LeFantome@programming.dev on 24 Sep 21:11

There is no hypervisor. So, no hypervisor to update.

friend_of_satan@lemmy.world on 24 Sep 22:22

I recently heard this great phrase:

“A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”

This would be somewhere between those two, where each container could believe it has the OS to itself, but with different kernels.

fruitycoder@sh.itjust.works on 25 Sep 19:00

More transparent hardware sharing, and less overhead from not needing to virtualize hardware.

geneva_convenience@lemmy.ml on 23 Sep 22:23

Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?

Also dynamically allocating the RAM seems more efficient than having to assign each kernel a portion at boot.

trevor@lemmy.blahaj.zone on 24 Sep 09:57

If this works out, it’s likely something that container engines would take advantage of as well. It may take more resources to do (we’ll have to see), but adding kernel isolation would make for a much stronger sandbox. Containers are just a collection of other isolation tools like this anyway.

gVisor already exists for environments like this, where the extra security at the cost of some performance is welcome. But having support for passing processes an isolated, hardened kernel from the primary running Linux kernel would probably make a lot of that performance gap disappear.
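For a rough sense of what that looks like today, here’s a minimal sketch assuming Docker is installed with gVisor’s runsc runtime already registered (the image and command are just examples):

```python
import subprocess

# Run a throwaway container under gVisor's runsc runtime instead of the default
# runc. Inside the sandbox, `uname -a` reports gVisor's emulated kernel rather
# than the host kernel. Assumes Docker with runsc registered as a runtime.
subprocess.run(
    ["docker", "run", "--rm", "--runtime=runsc", "alpine", "uname", "-a"],
    check=True,
)
```

A multikernel-aware engine could presumably expose a similar per-container knob, except the sandbox would be backed by a real second kernel rather than a user-space one.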

I’m also thinking it could do wonders for compatibility, since you could bundle abandonware apps with an older kernel, or ship new apps that require features from the latest kernel to places that wouldn’t normally have those capabilities.

Xiisadaddy@lemmygrad.ml on 23 Sep 22:42

This seems to be a pretty niche use case brought about by changes in the hardware available for servers. Likely they have situations where their servers have copious amounts of RAM and CPU cores that the task being handled doesn’t need all of, or perhaps isn’t even able to make use of due to software constraints. So this is a way for them to run different tasks on the same hardware without having to worry about virtualization, effectively turning one bare-metal server into two bare-metal servers. They mention in their statement that, “The primary use case in mind for parker is on the machines with high core counts, where scalability concerns may arise.”

KarnaSubarna@lemmy.ml on 24 Sep 05:56

If you consider the core count in modern server grade CPUs, this makes sense.

LeFantome@programming.dev on 24 Sep 21:17

I run a Proxmox homelab. I just had to shut down everything it runs to upgrade Proxmox. If I could hot-reload the kernel, I would not have had to do that. Sounds pretty handy to me. But that may be the multikernel approach, not this partitioning.

Honestly, even on the desktop. On distros like Arch or Chimera Linux, the kernel is getting updated all the time. It would be great to avoid restarts there too.

TeddyKila@hexbear.net on 23 Sep 22:48

And they said k8s was overengineered!

Infrapink@thebrainbin.org on 24 Sep 01:05

They call it Parker because it's almost, but not quite, the right thing.

RiverRabbits@lemmy.blahaj.zone on 24 Sep 20:35

I know that Square you’re talking about!

[deleted] on 24 Sep 10:11

.

somerandomperson@lemmy.dbzer0.com on 24 Sep 15:14

GTFO, you’re the brainrot-AI-slop-hosting TikTok company.

JTskulk@lemmy.world on 24 Sep 22:04

Code is code. If it’s good Free code, I’ll use it. I also don’t like Microsoft and Facebook but I run their kernel code too.

somerandomperson@lemmy.dbzer0.com on 25 Sep 04:39

Why should I trust them with this multi-kernel thingy if they let the dumpster fire that is TikTok exist? And they’re probably trying to embrace-extend-extinguish Linux, just like Microsoft and Apple with their WSL and Containers.app respectively.

JTskulk@lemmy.world on 25 Sep 20:18

Because it’s Free and reviewed by kernel maintainers, what do you mean?

yogthos@lemmy.ml on 24 Sep 23:04

the only brainrot here is your own

IrritableOcelot@beehaw.org on 24 Sep 19:51

I mean isn’t this just Xen revisited? I don’t understand why this is necessary.

LeFantome@programming.dev on 24 Sep 21:09

Xen runs full virtual machines. You run full operating systems on simulated hardware. The real “host” operating system is the hypervisor (Xen). Inside a VM, you have the concept of one or more CPUs, but you do not know which actual CPU cores those map to. The load can be distributed to any of them by the real host.

In something like Docker, you run only a single host kernel. On top of that you run sandboxed environments that “think” they have the environment to themselves but are actually sharing that one host kernel. The single host kernel directly manages the real hardware, and processes can run on any of the CPUs it manages.

In both of the above, updating the host means shutting the system down.

With this new approach, you have multiple kernels, all running natively on real hardware. Any given CPU is being managed by only one of the kernels. No hypervisor.

HiddenLayer555@lemmy.ml on 24 Sep 21:16

If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel and servers that provide the same APIs for Linux apps? Maybe even seL4, which has its behaviour formally verified. That way the microkernel can spin up arbitrary instances of whatever services are needed most.

yogthos@lemmy.ml on 24 Sep 23:03

I always thought that Minix was a superior architecture to be honest.