Maybe we shouldn’t be treating text generators as sources of truth.
Isn’t there some liability for someone who provides inaccurate voting information? Perhaps that could be used to influence Google et al. to stop providing AI summaries on their results pages.
roofuskit@lemmy.world
on 16 Sep 2024 15:37
Yes, there is. Even Xitter quickly changed their LLM to point to a government website whenever voting questions were asked so they have no liability.
Imgonnatrythis@sh.itjust.works
on 16 Sep 2024 16:35
Oh, this trend is getting old. Why are we acting like AI is normally right about everything, and why are we so excited when we find a loophole where it screws up? This is lowest-common-denominator news. I bet you could have an AI generate these articles about things AI is wrong about; that's how formulaic this has become. It's wrong about all kinds of stuff. You know that phrase about not believing everything you read on the internet? AI believes most of what it reads on the internet.
I could go on, but my glue pizza is ready to come out of the oven…
burgersc12@mander.xyz
on 16 Sep 2024 17:36
"AI models provide inaccurate information": that's all it is. The rest is true as well, but the biggest thing is that these models don't know the "right" answer; they just give you an answer, no matter how wrong it is.