John Gruber (@gruber@mastodon.social) (mastodon.social)
from bestboyfriendintheworld@sh.itjust.works to apple_enthusiast@lemmy.world on 08 Mar 20:20
https://sh.itjust.works/post/34044236

#apple_enthusiast


paraphrand@lemmy.world on 08 Mar 20:41

I frankly think Anthropic and OpenAI will/would struggle to make a hallucination-free AI too. I don’t understand why Apple thinks they are going to be able to fix hallucinations.

bestboyfriendintheworld@sh.itjust.works on 08 Mar 20:45

I don’t even know if it’s theoretically possible to make a hallucination-free LLM. Hallucination is kind of its basic operating principle.

Oisteink@feddit.nl on 08 Mar 22:28

People are misled by the name. It’s not making stuff up; it’s just less accurate.

bestboyfriendintheworld@sh.itjust.works on 09 Mar 14:27

Less accurate as in misleading and outright false.

Oisteink@feddit.nl on 09 Mar 15:02

It always predicts the next word based on its tokenisation, its training data, and its context handling. So accuracy is all there is.
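To make that concrete, here is a minimal sketch of what “predicting the next word” means. The bigram table is a hypothetical stand-in for real training data (actual LLMs score subword tokens with a transformer), but the generation loop has the same shape: every step picks a statistically plausible continuation, and nothing in the objective checks whether the output is true.

```python
# Toy next-word prediction: a hypothetical bigram table stands in
# for a trained model. Real LLMs use transformers over subword
# tokens, but the sampling loop looks like this.
import random

# Hypothetical "training data": P(next word | current word).
BIGRAM = {
    "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.2, "ran": 0.8},
    "moon": {"sat": 0.1, "ran": 0.9},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def next_word(context: str) -> str:
    """Sample the next word from the learned distribution."""
    dist = BIGRAM[context]
    words = list(dist)
    return random.choices(words, weights=[dist[w] for w in words])[0]

# Generate until a terminal word; each step is only "statistically
# likely given context", never "factually checked".
word = "the"
sentence = [word]
while word not in ("down", "away"):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the moon ran away": fluent, not factual
```

Outputs like “the moon ran away” are exactly the point: the model’s only notion of a good answer is a high-probability word sequence, which is why “less accurate” and “making stuff up” are the same failure seen from different angles.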

some_guy@lemmy.sdf.org on 09 Mar 17:25

It’s a travesty. The whole LLM “AI” push is a fraud. There’s nothing approaching actual intelligence. It’s simply statistical word strings.