galaxies_collide@lemmy.world
on 20 Sep 2023 15:52
nextcollapse
This is a massive leap from the Folding@home project. From 4 million to 71 million is insane!
JackGreenEarth@lemm.ee
on 20 Sep 2023 16:29
nextcollapse
Great, it’s done by Google. But good thing they’re identified, I guess.
frustratedphagocytosis@kbin.social
on 20 Sep 2023 18:44
nextcollapse
Back in the day, I'd be thrilled to read something like this, but now all I hear is 'look at how many new ways the Google overlord can fuck humans up with protein mutations to eliminate fragile meat-based enemies'
thefartographer@lemm.ee
on 21 Sep 2023 00:40
collapse
That’s ridiculous sci-fi fantasy
- cough -
Anyone else have a sudden urge to be more open with their location sharing?
PreviouslyAmused@lemmy.ml
on 20 Sep 2023 21:05
collapse
I want to believe this, but given how wonky AI bots have proven to be as of late, I can’t help but think that you can cut this number down by several million
repungnant_canary@lemmy.world
on 20 Sep 2023 21:16
collapse
In my field, where Google also “throws” their huge DL models at problems, the papers they publish tend to have very limited explanation of how and why the model works, and they don’t really provide comprehensive validation of the model. So I find it difficult to trust their findings here, not just based on LLMs but on their “scientific” models as well.