Mind-bending new programming language for GPUs just dropped... - Code Report (youtube.com)
from ruffsl@programming.dev to programming@programming.dev on 17 May 2024 20:53
https://programming.dev/post/14226562

#programming

threaded - newest

arthur@lemmy.zip on 17 May 2024 22:10 next collapse

Very interesting, and very early stage rn.

tonytins@pawb.social on 17 May 2024 23:46 next collapse

Yikes, those maxed-out CPU threads. Definitely needs some more polishing.

sus@programming.dev on 18 May 2024 18:42 collapse

what’s wrong with them? are you sure it’s not just set to use 100% of all cores, with the OS doing some shuffling on top?

eveninghere@beehaw.org on 18 May 2024 02:13 next collapse

Yet, it runs on massively parallel hardware like GPUs, with near-linear speedup

What a bold claim…

mrkeen@mastodon.social on 18 May 2024 05:52 collapse

@eveninghere @ruffsl that claim's correct. But so far it doesn't have great performance on a single core.

eveninghere@beehaw.org on 18 May 2024 10:33 next collapse

Sorry, how could it be correct? On that page there’s no explanation of what they’re measuring to begin with, and no mention of the benchmark setup either. There are problems that can never scale linearly, simply due to the reality of hardware.
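
As a back-of-the-envelope illustration (plain Python, with a made-up serial fraction, nothing from their benchmarks), Amdahl’s law alone caps the speedup long before “near-linear”:

```python
# Amdahl's law: speedup on `cores` cores when a fraction of the work stays serial.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with just 5% serial work, 1024 cores buy you ~19.6x, not 1024x.
for cores in (4, 64, 1024):
    print(cores, round(amdahl_speedup(0.05, cores), 1))
```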

sus@programming.dev on 18 May 2024 18:34 collapse

the “will linearly speed up anything [to the amount of parallel computation available]” claim is so stupid that I think it’s more likely they meant “only has a linear slowdown compared to a basic manual parallel implementation of the same algorithm”

superb@lemmy.blahaj.zone on 19 May 2024 00:08 next collapse

Good thing they don’t claim that. Read the README; they make very nuanced and reasonable claims about their very impressive language.

eveninghere@beehaw.org on 19 May 2024 10:39 collapse

Yeah, and still… the example code on GitHub is also bad. The arithmetic is so tiny that the parallel execution can end up slower than serial execution. It gives the impression that the language parallelizes everything it can, in which case execution could get stuck on parallel parts that aren’t worth parallelizing.
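
For example (an illustrative sketch in plain Python, not their syntax), a parallel map over arithmetic this small mostly measures the overhead of the parallelism itself:

```python
# Sketch: with per-item work this tiny, task spawning/scheduling overhead
# dwarfs the arithmetic, so the parallel version can easily lose to the loop.
from multiprocessing import Pool

def tiny_op(x: int) -> int:
    return x + 1                      # almost no work per task

def serial(xs):
    return [tiny_op(x) for x in xs]

def parallel(xs):
    with Pool() as pool:              # process startup + IPC costs
        return pool.map(tiny_op, xs)  # per-item overhead >> the add itself

if __name__ == "__main__":
    xs = list(range(100_000))
    assert serial(xs) == parallel(xs)  # same result; serial is typically faster here
```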

There’s a huge chunk of technical information missing for an expert to even imagine what’s going on. And too many comments here still praise the language without saying anything concrete. This makes me REALLY skeptical of this post.

Edit: there are many posts that make up BS for job interviews. I sure hope this is not one of those.

porgamrer@programming.dev on 18 May 2024 17:03 collapse

The GitHub blurb says the language is comparable to general-purpose languages like Python and Haskell.

Perhaps unintentionally, this seems to imply that the language can speed up literally any algorithm linearly with core count, which is impossible.

If it can automatically accelerate a program that has parallel data dependencies, that would also be a huge claim, but one that is at least theoretically possible.
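
Roughly (a sketch in plain Python, just to illustrate the distinction, not their language): an associative reduction with independent halves can be split across cores, while a chain where every step feeds the next cannot.

```python
# Sketch: the shape of the data dependencies decides what an automatic
# parallelizer could ever hope to speed up.

def sequential_chain(xs):
    # every step depends on the previous result: inherently serial
    acc = 0
    for x in xs:
        acc = (acc * 31 + x) % 1_000_000_007
    return acc

def tree_sum(xs):
    # independent halves of an associative reduction: a runtime could
    # evaluate the two recursive calls on different cores
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_sum(xs[:mid]) + tree_sum(xs[mid:])

xs = list(range(1, 1025))
assert tree_sum(xs) == sum(xs)
print(sequential_chain(xs), tree_sum(xs))
```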

superb@lemmy.blahaj.zone on 19 May 2024 00:09 collapse

If it can automatically accelerate a program that has parallel data dependencies, that would also be a huge claim, but one that is at least theoretically possible.

You nailed it! That’s exactly what this is! Read through their README and the attached paper. It’s very cool tech.

Daxtron2@startrek.website on 18 May 2024 05:35 next collapse

This could be game-changing for introducing shader programming to more developers, if it pans out.

wargreymon2023@sopuli.xyz on 18 May 2024 05:59 next collapse

Gotta read the paper; this is a game-changer.

eveninghere@beehaw.org on 18 May 2024 10:26 next collapse

Is this just PR? The link is promotional with no substance: it praises itself without any details on the benchmarking setup, and still I see some comments here being positive.

Kissaki@programming.dev on 19 May 2024 09:58 next collapse

I hope the demo starts soon…

![](https://programming.dev/pictrs/image/2b36b981-377b-4116-9fc5-b6d05924c9c8.png)

(What a bullshit correlation/equation to start with.)

Asudox@lemmy.world on 19 May 2024 10:54 next collapse

Funny how they benchmarked an ARM CPU and not an x64 one, as if ARM CPUs are now faster than x64 ones.

rutrum@lm.paradisus.day on 19 May 2024 11:33 collapse

Futhark is another language with the same goals, executed differently.