AI may soon make Nobel-level discovery, scientists predict (www.semafor.com)
from return2ozma@lemmy.world to technology@lemmy.world on 08 Oct 08:10
https://lemmy.world/post/37052186

#technology

threaded - newest

6nk06@sh.itjust.works on 08 Oct 08:16 next collapse

almost certain, eventually

Nice astrology bro.

brathoven@feddit.org on 08 Oct 08:19 collapse

Nobel prize in astrology, hallucinated.

Fyrnyx@kbin.melroy.org on 08 Oct 08:54 next collapse

Well, until AI finds a cure for cancer, solves climate issues and fixes the economy for everyone, it is still shit.

Xanthobilly@lemmy.world on 08 Oct 09:06 next collapse

Solves climate issues by turning itself off.

Fyrnyx@kbin.melroy.org on 08 Oct 09:34 collapse

After it does the other two things, yes.

kami@lemmy.dbzer0.com on 08 Oct 09:53 collapse

Sounds like a plot for a “Love, Death and Robots” short

CosmoNova@lemmy.world on 08 Oct 09:43 next collapse

Technically machines make most discoveries possible these days, but I have yet to see an electron microscope receive the prize. I don’t see how this is any different.

bufalo1973@piefed.social on 08 Oct 09:56 next collapse

May… or maybe not.

Buffalox@lemmy.world on 08 Oct 10:54 next collapse

“Eventually” is a cheap cop-out. I have no doubt AI will eventually surpass us; that’s simply the nature of technological development outpacing evolution. But we are not there yet.

baggachipz@sh.itjust.works on 08 Oct 10:55 next collapse

Any day now….

phdepressed@sh.itjust.works on 08 Oct 11:01 next collapse

Eventually we’ll make AGI instead of this LLM bullshit. Assuming we don’t destroy ourselves first.

Bronzebeard@lemmy.zip on 08 Oct 11:02 next collapse

We’ve been shoving large amounts of data into machine learning algorithms for ages now. Still need people to interpret the outputs and actually test that the results are accurate.

nyan@lemmy.cafe on 08 Oct 12:53 next collapse

I’m pretty sure that you can find one researcher, somewhere, who will agree with anything you say, including that the weather is being affected by a war between Martians and the people living inside the hollow earth. Especially if you’re offering a large bribe to said researcher to make a statement about something outside their field while they’re somewhat drunk, and then mutilating their remark out of context via the process fondly known as journalism.

In other words, “one researcher” predicting something is pretty much worthless.

SnoringEarthworm@sh.itjust.works on 08 Oct 14:21 next collapse

“It’s almost certain” that AI will reach that level eventually, one researcher told Nature.

Semafor doing so much work trying to launder this into a story: “one scientist” in the original article becomes multiple scientists in their headline.

This is the first of three waves of AI in science, says Sam Rodriques, chief executive of FutureHouse — a research lab in San Francisco, California, that debuted an LLM designed to do chemistry tasks earlier this year.

And the one “scientist” seems to have switched tracks from doing actual research to doing capitalism.

frezik@lemmy.blahaj.zone on 08 Oct 14:52 next collapse

This one probably will happen.

The reason is that there are certain fields where you have to sift through massive amounts of data to find the thing you’re looking for. This is an ideal task for machine learning. It’s not going to replace real scientists, and it sure as hell shouldn’t replace peer review. It’s a tool with a use.

As one example, the longest known black hole jet was recently discovered using ML techniques: caltech.edu/…/gargantuan-black-hole-jets-are-bigg…
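
For illustration, here’s a minimal sketch of what that kind of sifting can look like: a toy anomaly detector flagging a tiny fraction of a large synthetic catalog for human follow-up. The data, features, and the choice of IsolationForest are assumptions made for the example, not the actual pipeline behind the Caltech result.

```python
# Illustrative only: a toy "sift huge data for rare candidates" workflow,
# not the method used in the black-hole-jet discovery.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend survey catalog: many sources, a handful of numeric features each.
features = rng.normal(size=(200_000, 5))

# Flag the rarest ~0.1% of sources as candidates for human follow-up.
detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(features)      # -1 = outlier, 1 = inlier
candidates = np.flatnonzero(labels == -1)

print(f"{candidates.size} candidates out of {features.shape[0]} sources")
# A scientist still has to inspect each candidate and confirm it's real.
```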

fushuan@lemmy.blahaj.zone on 08 Oct 15:22 collapse

FYI, “AI” has been used in medical research for decades. GenAI is the one that’s wonky. I’d be surprised and sceptical of any researcher who would suggest genAI as the star tool when there are so many predictive ML models that already work so well…
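
As a rough illustration of the plain predictive ML being contrasted with generative AI here, a minimal sketch: a logistic-regression classifier on synthetic tabular data standing in for the kind of assay or clinical features such models are trained on. Everything in it (data, features, model choice) is a stand-in, not any specific published study.

```python
# Illustrative only: ordinary predictive ML on synthetic tabular data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 10))                 # hypothetical measurements
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5_000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Evaluate on held-out data; no text generation involved anywhere.
print("ROC AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```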