CosmoNova@lemmy.world
on 08 Oct 09:43
nextcollapse
Technically, machines make most discoveries possible these days, but I have yet to see an electron microscope receive the prize. I don’t see how this is any different.
bufalo1973@piefed.social
on 08 Oct 09:56
nextcollapse
“Eventually” is a cheap cop-out. I have no doubt AI will eventually surpass us; that’s simply the nature of technology developing faster than evolution. But we are not there yet.
baggachipz@sh.itjust.works
on 08 Oct 10:55
nextcollapse
Any day now…
phdepressed@sh.itjust.works
on 08 Oct 11:01
nextcollapse
Eventually we’ll make AGI instead of this LLM bullshit. Assuming we don’t destroy ourselves first.
Bronzebeard@lemmy.zip
on 08 Oct 11:02
nextcollapse
We’ve been shoving large amounts of data into machine learning algorithms for ages now. Still need people to interpret the outputs and actually test that the results are accurate.
I’m pretty sure that you can find one researcher, somewhere, who will agree with anything you say, including that the weather is being affected by a war between Martians and the people living inside the hollow earth. Especially if you’re offering a large bribe to said researcher to make a statement about something outside their field while they’re somewhat drunk, and then mutilating their remark out of context via the process fondly known as journalism.
In other words, “one researcher” predicting something is pretty much worthless.
SnoringEarthworm@sh.itjust.works
on 08 Oct 14:21
nextcollapse
“It’s almost certain” that AI will reach that level eventually, one researcher told Nature.
Semafor doing so much work trying to launder this into a story. “One scientist” in the original article becomes multiple scientists in their headline.
This is the first of three waves of AI in science, says Sam Rodriques, chief executive of FutureHouse — a research lab in San Francisco, California, that debuted an LLM designed to do chemistry tasks earlier this year.
And the one “scientist” seems to have switched tracks from doing actual research to doing capitalism.
frezik@lemmy.blahaj.zone
on 08 Oct 14:52
nextcollapse
This one probably will happen.
The reason is that there are certain fields where you have to sift through massive amounts of data to find the thing you’re looking for. This is an ideal task for machine learning. It’s not going to replace real scientists, and it sure as hell shouldn’t replace peer review. It’s a tool with a use.
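To make the point concrete, here is a toy sketch of the “sifting” workflow described above. Nothing in it comes from the thread: the synthetic data, the thresholds, and the choice of scikit-learn’s IsolationForest anomaly detector are all assumptions, and the final check of the flagged candidates would still be a human’s job.

```python
# Toy sketch: using an ML anomaly detector to sift a large dataset
# down to a handful of candidates for a human to verify.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 100,000 "ordinary" observations plus a handful of injected outliers
background = rng.normal(0.0, 1.0, size=(100_000, 4))
signals = rng.normal(6.0, 0.5, size=(10, 4))
data = np.vstack([background, signals])

# Fit an anomaly detector; contamination sets the expected outlier fraction
model = IsolationForest(contamination=1e-4, random_state=0).fit(data)
flags = model.predict(data)  # -1 = flagged as anomalous, 1 = normal
candidates = data[flags == -1]

print(f"screened {len(data)} rows down to {len(candidates)} candidates")
```

The model doesn’t decide anything final; it just shrinks a haystack of 100,010 rows to a short list that a person can actually inspect, which is the tool-with-a-use role described above.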
fushuan@lemmy.blahaj.zone
on 08 Oct 15:22
collapse
FYI, “AI” has been used in medical research for decades. GenAI is the one that’s wonky. I’d be surprised by, and sceptical of, any researcher who suggested genAI as the star tool when there are so many predictive ML models that already work so well…
Nice astrology bro.
Nobel prize in astrology, hallucinated.
Well, until AI finds a cure for cancer, solves climate issues and fixes the economy for everyone, it is still shit.
Solves climate issues by turning itself off.
After it does the other two things, yes.
Sounds like a plot for a “Love, Death and Robots” short
May… or maybe not.
As one example, the longest known black hole jet was recently discovered using ML techniques: caltech.edu/…/gargantuan-black-hole-jets-are-bigg…