Bubble Trouble (www.wheresyoured.at)
from 1984@lemmy.today to technology@lemmy.world on 21 Jul 20:15
https://lemmy.today/post/34010403

This article describes what I've been thinking about for the last week: how will these billions in investment by big tech actually create something significantly better than what we already have today?

There are major issues ahead and I'm not sure they can be solved. Read the article.

#technology


Xaphanos@lemmy.world on 21 Jul 21:47

My company is in AI. One of our customers pays for systems capable of the heavy computational work needed to design drugs to treat Parkinson's. This is only newly possible with the latest technology.

MysteriousSophon21@lemmy.world on 22 Jul 04:13

This is actually one of the most promising applications - AI can screen millions of potential drug compounds and predict protein interactions in hours instead of months, which is why we're seeing breakthroughs in neurodegenerative disease research.
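
Loosely, the kind of screening loop that implies, sketched in Python. The compound list and the score_binding() stub are invented for illustration; a real pipeline would call a trained predictor (a docking surrogate, a graph network, etc.), not a random number:

```python
# Toy sketch of AI-assisted virtual screening: score a library of candidate
# compounds with a (hypothetical) model and keep the best-scoring hits.
import random

def score_binding(smiles: str) -> float:
    """Stand-in for a real model call: predicted binding score (higher = better)."""
    rng = random.Random(smiles)   # deterministic per compound, purely illustrative
    return rng.uniform(0.0, 1.0)

compound_library = [
    "CCO",                           # ethanol
    "CC(=O)OC1=CC=CC=C1C(=O)O",      # aspirin
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",  # caffeine
    # ... a real screen would cover millions of candidate SMILES strings
]

# Rank the library by predicted score and keep the top hits for lab validation.
ranked = sorted(compound_library, key=score_binding, reverse=True)
print(ranked[:10])
```

The point is the shape of the workflow: the model only narrows millions of candidates down to a short list, and the expensive wet-lab work happens on that list.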

mesamunefire@piefed.social on 21 Jul 22:31

Interesting: https://arxiv.org/pdf/2305.17493

""THE CURSE OF RECURSION:
TRAINING ON GENERATED DATA MAKES MODELS FORGET"

The referenced paper is a great read.
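
The paper's core claim is that models trained on the outputs of earlier models progressively lose the tails of the original distribution. A toy sketch of that dynamic (not the authors' actual setup) is a Gaussian repeatedly refit on its own samples:

```python
# Toy illustration of recursive training on generated data: fit a Gaussian,
# sample from the fit, refit on the samples, repeat.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=30)   # "real" data, generation 0

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: fitted mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation is trained only on samples drawn from the previous fit.
    data = rng.normal(loc=mu, scale=sigma, size=30)
```

With a fixed sample size per generation, the fitted spread is a random walk with a downward drift, so over enough generations it heads toward zero and the original distribution's tails disappear, which is the "forgetting" in the title in miniature.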

some_kind_of_guy@lemmy.world on 21 Jul 23:16

I wonder if AI applications other than just “be a generalist chat bot” would run into the same thing. I’m thinking about pharma, weather prediction, etc. They would still have to “understand” their English-language prompts, but LLMs can do that just fine today, and they could feed systems designed to iteratively solve problems in those areas. A model feeding into itself or other models doesn’t have to be a bad thing.
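
One way to picture that split, sketched below. The function names and the weather example are hypothetical stand-ins, not any real API: the LLM only turns English into structured parameters, and a conventional solver does the actual work, so nothing downstream trains on generated text:

```python
# Sketch of "LLM as front-end, purpose-built system as back-end".
from dataclasses import dataclass

@dataclass
class ForecastRequest:
    location: str
    horizon_hours: int

def llm_extract_parameters(prompt: str) -> ForecastRequest:
    """Stand-in for an LLM call that parses free text into structured fields."""
    # A real implementation would call a model; this stub hard-codes a result.
    return ForecastRequest(location="Oslo", horizon_hours=48)

def run_weather_model(req: ForecastRequest) -> str:
    """Stand-in for a numerical weather prediction run (not an LLM)."""
    return f"Forecast for {req.location}, next {req.horizon_hours}h: ..."

request = llm_extract_parameters("What's the weather in Oslo for the next two days?")
print(run_weather_model(request))
```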

homesweethomeMrL@lemmy.world on 22 Jul 01:23

Only in the sense that those “words” they know are pointers to likely connected words. If the concepts line up the same way, then theoretically it’s all good. But beyond FAQs and such I’m not seeing anything that would indicate it’s ready for anything more.
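
A crude way to see the “pointers to likely connected words” idea is a bigram toy: each word maps to the words that most often follow it, and generation just chains those pointers. Real LLMs are vastly more capable than this, but it gives the flavour of purely associative generation:

```python
# Toy bigram model: each word "points" to the words that most often follow it
# in the training text, and generation just follows those pointers.
from collections import defaultdict, Counter

corpus = "the model predicts the next word and the next word follows the model".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

word = "the"
output = [word]
for _ in range(6):
    if word not in following:
        break
    word = following[word].most_common(1)[0][0]   # most likely connected word
    output.append(word)

print(" ".join(output))
```

On a corpus this small it quickly falls into a loop, which is roughly the concern: association alone only carries you so far when the concepts don't line up.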