LodeMike@lemmy.today
on 23 Jun 2024 21:41
I meant tech radar. Thanks
9point6@lemmy.world
on 23 Jun 2024 20:31
Haha okay
Edit: after a skim and a quick Google, this basically looks like a packaging up of existing modern processor features (sorta AVX/SVE with a load of speculative execution thrown on top)
Blackout@kbin.run
on 23 Jun 2024 20:36
(GIF: "Finnish startup at work" - https://c.tenor.com/V7fGn26bUKsAAAAd/tenor.gif)
Hmm, so sounds like they’re moving the kernel scheduler down to a hardware layer? Basically just better smp?
Chocrates@lemmy.world
on 23 Jun 2024 23:09
Processors have an execution pipeline, so a single instruction like mov breaks down into a number of steps the CPU takes to execute it. CPU designers already have some magic that lets them execute these out of order, as well as other stuff like pre-calculating what they think the next instruction will probably be.
It's been a decade since my CPU class so I am butchering that explanation, but I think that is what they are proposing to mess with.
LodeMike@lemmy.today
on 23 Jun 2024 23:30
That’s accurate.
It's done through multiple algorithms, but the general idea is to schedule calculations as soon as possible, accounting for data hazards to make sure everything stays equivalent to plain in-order execution. Different circuits can execute different things at the same time, and special hardware is needed to make the algorithms work.
There's also branch prediction, which is kind of the same thing, except the CPU needs a way to check whether the prediction was actually correct and to throw away the work if it wasn't.
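Here's a toy sketch in C of what the hazard tracking is about (my own illustration, nothing to do with Flow's design):

```c
#include <stdio.h>

int main(void) {
    int a = 1, b = 2, c = 3, d = 4;

    /* Dependency chain: each line reads the previous result
     * (read-after-write hazards), so these must effectively run in order. */
    int x = a + b;
    int y = x * c;   /* has to wait for x */
    int z = y - d;   /* has to wait for y */

    /* Independent work: no hazards against the chain above, so an
     * out-of-order core is free to run these alongside it, as long as the
     * final results stay equivalent to in-order execution. */
    int p = c * d;
    int q = a - b;

    printf("%d %d %d\n", z, p, q);
    return 0;
}
```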
Coasting0942@reddthat.com
on 23 Jun 2024 21:01
Others have already laughed at this idea, but on a similar topic:
I know we’ve basically disabled a lot of features that sped up the CPU but introduced security flaws. Is there a way to turn those features back on for an airgapped computer intentionally?
shundi82@sh.itjust.works
on 23 Jun 2024 21:24
Haven’t used it in years, but it might still work:
www.grc.com/inspectre.htm
The kernel option is mitigations=off, if you want to try adding it to your Grub command line. From the testing I've done, it provides no benefits whatsoever - no more frames in games, compilation runs no quicker, battery life on a laptop is no better.
wiki.archlinux.org/title/Improving_performance#Tu…
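For reference, roughly what that looks like on a typical GRUB setup (just a sketch - the contents of the existing command line and the regeneration command vary by distro):

```
# /etc/default/grub -- append mitigations=off to whatever is already in
# the default kernel command line ("quiet" here is only a placeholder):
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# Then regenerate the GRUB config:
sudo update-grub                              # Debian/Ubuntu and derivatives
sudo grub-mkconfig -o /boot/grub/grub.cfg     # Arch and most others
```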
geomela@lemmy.world
on 23 Jun 2024 21:16
Hur Hur Hur… PPU
Th4tGuyII@fedia.io
on 23 Jun 2024 21:41
The TL;DR for the article is that the headline isn't exactly true. At this moment in time their PPU can potentially double a CPU's performance - the 100x claim comes with the caveat of "further software optimisation".
Tbh, I'm sceptical of the caveat. It feels like me telling someone I can only draw a stickman right now, but I could paint the Mona Lisa with some training.
Of course that could happen, but it's not very likely to - so I'll believe it when I see it.
Having said that, they're not wrong about CPU bottlenecks and the slowed rate of CPU performance improvements - so a doubling of performance would be huge in the current market.
barsquid@lemmy.world
on 24 Jun 2024 00:01
Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.
barsquid@lemmy.world
on 24 Jun 2024 00:10
Just finished the article, it’s not for free at all. Chips need to be designed to use it. I’m skeptical again. There’s no point IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.
frezik@midwest.social
on 24 Jun 2024 12:20
Not every problem is amenable to GPUs. If it has a lot of branching, or needs to fetch back and forth from memory a lot, GPUs don’t help.
Now, does this thing have exactly the same limitations? I’m guessing yes, but it’s all too vague to know for sure. It sounds like they’re doing what superscalar CPUs have done for a while. On x86, that starts with the original Pentium from 1993, and Crays going back to the '60s. What are they doing to supercharge this idea?
Does this avoid some of the security problems that have popped up with superscalar archs? For example, some kernel code running at ring 0 is running alongside userspace code, and it all gets the same ring 0 level as a result.
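As a toy illustration of the kind of code I mean (my own example, not anything from the article): pointer chasing with data-dependent branches, which GPUs and wide vector units get very little traction on:

```c
#include <stddef.h>
#include <stdio.h>

struct node {
    int value;
    struct node *left;
    struct node *right;
};

/* Walks a binary search tree: every step depends on the pointer loaded in
 * the previous step and on a data-dependent branch, so there is little for
 * a GPU or a wide vector unit to chew on. */
static int contains(const struct node *n, int key) {
    while (n != NULL) {
        if (key == n->value)
            return 1;
        n = (key < n->value) ? n->left : n->right;
    }
    return 0;
}

int main(void) {
    struct node right = {30, NULL, NULL};
    struct node left  = {10, NULL, NULL};
    struct node root  = {20, &left, &right};
    printf("%d %d\n", contains(&root, 10), contains(&root, 15));
    return 0;
}
```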
Clusterfck@lemmy.sdf.org
on 24 Jun 2024 00:25
I get that we have to impress shareholders, but why can’t they just be honest and say it doubles CPU performance, with the chance of even further improvement from software optimization? Doubling the performance of the same hardware is still HUGE.
Feathercrown@lemmy.world
on 24 Jun 2024 23:18
I don’t know what “they” you’re talking about, but I think it’s clear I’m referring to the person responsible for writing the original title. Not OP and not the article author if the publisher is choosing the title.
I’m just glad there are companies that are trying to optimize current tech rather than just piling on new hardware every damn year with forced planned obsolescence.
Though the claim is absurd, I think double the performance is NEAT.
dustyData@lemmy.world
on 24 Jun 2024 11:50
This is new hardware piling. What they claim to do requires reworking manufacturing, is not retroactive with current designs, and demands more hardware components. It is basically a hardware thread scheduler. Cool idea, but it won’t save us from planned obsolescence, if anything it is more incentive for more waste.
MadMadBunny@lemmy.ca
on 25 Jun 2024 03:20
Ah, good ol’ magic wishful thinking…
xantoxis@lemmy.world
on 24 Jun 2024 01:56
This change is likened to expanding a CPU from a one-lane road to a multi-lane highway
This analogy just pegged the bullshit meter so hard I almost died of eyeroll.
rottingleaf@lemmy.zip
on 24 Jun 2024 11:57
Apparently the percentage of people on the management side of the industry who actually understand what they’re doing is now too low to filter out even bullshit like this.
AnarchistArtificer@slrpnk.net
on 24 Jun 2024 13:07
You’ve got to be careful with rolling your eyes, because the parallelism of the two eyes means that the eye roll can be twice as powerful ^1
(1) If measured against the silly baseline of a single eyeroll
Buffalox@lemmy.world
on 24 Jun 2024 09:00
Why is this bullshit upvoted?
Already in the first sentence, they walk the headline’s “without recoding” back to “with further optimization”.
Then there’s the explanation: “a companion chip that optimizes processing tasks in real-time”.
This has already been done at the compiler level, and internally in any modern CPU, for more than a decade.
It might be possible to some degree for some specific forms of code, like maybe Java. But generally for the CPU this is bullshit, and the headline is decidedly dishonest.
amanda@aggregatet.org
on 24 Jun 2024 11:01
Has anyone been able to find an actual description of what this does? I clicked two layers deep and neither explains the details. It does sound like they’re doing CPU scheduling in the hardware, which is cool and makes some sense, but the descriptions are too vague to explain what the hell this is except “more parallelism goes brrrr” and it’s not clear to me why current GPUs aren’t already that.
downhomechunk@midwest.social
on 24 Jun 2024 12:29
Overclockers:
“Give me some liquid nitrogen and I’ll make that 102x.”
over_clox@lemmy.world
on 24 Jun 2024 19:54
Meh, I just spit on it.
Kazumara@discuss.tchncs.de
on 24 Jun 2024 14:03
The techradar article is terrible, the techcrunch article is better, the Flow website has some detail.
But overall I have to say I don’t believe them. You can’t just make threads independent if they logically have dependencies. Or just remove cache coherency latency by removing caches.
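A trivial sketch of what I mean by a logical dependency (my own example, not theirs): a loop-carried dependency that no scheduler, hardware or software, can simply split across threads.

```c
#include <stddef.h>
#include <stdio.h>

/* Each iteration needs the previous iteration's result, so the adds form a
 * serial chain no matter how many cores or "PPUs" you throw at it
 * (short of algebraic scan/reassociation tricks). */
static double recurrence(const double *x, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc = 0.5 * acc + x[i];   /* loop-carried dependency on acc */
    return acc;
}

int main(void) {
    double xs[] = {1.0, 2.0, 3.0, 4.0};
    printf("%f\n", recurrence(xs, 4));
    return 0;
}
```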
bitfucker@programming.dev
on 24 Jun 2024 23:23
Can’t have cache latency if there is no cache!
StupidBrotherInLaw@lemmy.world
on 25 Jun 2024 02:11
So THIS is what the communists were talking about when they told me about the benefits of transitioning to a cacheless society!
_sideffect@lemmy.world
on 24 Jun 2024 15:46
You can download more ram too!
qevlarr@lemmy.world
on 24 Jun 2024 16:30
🚨 ⚠ 🚨 Hoax alert! 🚨 ⚠ 🚨
blahsay@lemmy.world
on 24 Jun 2024 23:43
10 tricks to speed up your CPU and trim belly fat. Electrical engineers hate them! Invest now! The startup is called ‘DefinitelyNotAScam’.
probableprotogen@lemmy.dbzer0.com
on 25 Jun 2024 00:48
Gee, it’s like all modern computers already have massively parallel processing devices built in.
tombruzzo@lemm.ee
on 25 Jun 2024 01:04
I don’t care. Intel promised 5nm, 10GHz single-core processors by this point and I still want it out of principle.
threaded - newest
Cybercriminals are creaming their jorts at the potential exploits this might open up.
Please, hackers wear cargo shorts and toe shoes sir
Oof. But yeah. Fair.
I want to go on record that sometimes I just wear sandals with socks.
Truly! The scum of the earth!
I highly doubt that unless they invented magic.
Edit: oh… They omitted the “up to” in the headline.
Added it
They… they did?
Not in the title
They didn’t write the title.
I don’t know what “they” you’re talking about, but I think it’s clear I’m referring to the person responsible for writing the original title. Not OP and not the article author if the publisher is choosing the title.
And I think it’s pretty clear I’m not. And it seems pretty clear the OP wasn’t either.
So… are you just stating random things for the fuck of it, or did you have an actual reason for bringing up a non-sequitur?
Was it though?
Startup discovers what a northbridge is