Why we need an AI safety hotline.
(www.technologyreview.com)
from 101@feddit.org to technology@beehaw.org on 16 Sep 2024 09:12
https://feddit.org/post/2899130
“AI Safety” is a buzzword OpenAI invented to stifle competition.
If you legitimately believe this then you are a clown. Terminator came out in what year again? Lmaoooo
Edit with citation:
“As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture.”
technologyreview.com/…/our-fear-of-artificial-int… (MIT tech review)
Another article from before OpenAI was even a blip on the radar:
technologyreview.com/…/our-fear-of-artificial-int…
And another:
theconversation.com/stephen-hawking-warned-about-…
It even has its own Wikipedia article! …wikipedia.org/…/Existential_risk_from_artificial…
Terminator is a fun movie. It’s also completely fictional.
See rest of comment
It’s not a buzzword, and it is valuable, but as with many things, “safety” is being used as an excuse to push bad legislation (in this case, regulatory capture).
For examples of REAL AI safety, I would recommend looking at this YouTube channel.
Typical call to the AI safety hotline:
“Hello, my chatbot told me to put glue on my pizza, and I did it. How do I sue AI?!?!” —the only idiot who ever called this hotline