AI shouldn’t make ‘life-or-death’ decisions, says OpenAI’s Sam Altman (www.cnn.com)
from MaxVoltage@lemmy.world to technology@lemmy.world on 18 Jan 2024 16:35
https://lemmy.world/post/10873206

Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

#technology


autotldr@lemmings.world on 18 Jan 2024 16:40 next collapse

This is the best summary I could come up with:


ChatGPT is one of several generative AI systems that can create content in response to user prompts and which experts say could transform the global economy.

But there are also dystopian fears that AI could destroy humanity or, at least, lead to widespread job losses.

AI is a major focus of this year’s gathering in Davos, with multiple sessions exploring the impact of the technology on society, jobs and the broader economy.

In a report Sunday, the International Monetary Fund predicted that AI will affect almost 40% of jobs around the world, “replacing some and complementing others,” but potentially worsening income inequality overall.

Speaking on the same panel as Altman, moderated by CNN’s Fareed Zakaria, Salesforce CEO Marc Benioff said AI was not at a point of replacing human beings but rather augmenting them.

As an example, Benioff cited a Gucci call center in Milan that saw revenue and productivity surge after workers started using Salesforce’s AI software in their interactions with customers.


The original article contains 443 words, the summary contains 163 words. Saved 63%. I’m a bot and I’m open source!

[deleted] on 18 Jan 2024 16:47 next collapse

.

ItsAFake@lemmus.org on 18 Jan 2024 21:15 next collapse

Mr Altman, who founded OpenAI, which built the chatbot ChatGPT, says he hopes the initiative will help confirm whether someone is a human or a robot.

That last line kinda creeps me out.

[deleted] on 18 Jan 2024 21:48 collapse

.

PipedLinkBot@feddit.rocks on 18 Jan 2024 21:48 next collapse

Here is an alternative Piped link(s):

literal sci-fi villain

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

ItsAFake@lemmus.org on 18 Jan 2024 22:34 collapse

Yeah, that’s the most sci-fi dystopian article I’ve read in a while.

The line from one of the people waiting to get their eyes scanned is, well, eye-opening: “I don’t care what they do with the data, I just want the money.” This is why they want us poor: so that we need money badly enough to impatiently hand over everything that makes us who we are.

But we already happily hand over our DNA to private corporations, so what’s an eye scan gonna do…

[deleted] on 19 Jan 2024 03:29 collapse

.

hai@lemmy.ml on 19 Jan 2024 00:42 collapse

Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.

What a perfect sentence to sum up 2023 with.

deegeese@sopuli.xyz on 18 Jan 2024 17:04 next collapse

I also want to sell my shit for every purpose but take zero responsibility for the consequences.

[deleted] on 18 Jan 2024 17:11 next collapse

.

pearsaltchocolatebar@discuss.online on 18 Jan 2024 17:51 collapse

Yup, my job sent us to an AI/ML training program from a top cloud computing provider, and there were a few hospital execs there too.

They were absolutely giddy about being able to use it to deny unprofitable medical care. It was disgusting.

captainastronaut@seattlelunarsociety.org on 18 Jan 2024 17:15 next collapse

But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?

Too little too late, Sam. 

pearsaltchocolatebar@discuss.online on 18 Jan 2024 17:53 next collapse

Yes on everything but drone strikes.

A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.

Deceptichum@kbin.social on 18 Jan 2024 21:10 next collapse

So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?

pearsaltchocolatebar@discuss.online on 18 Jan 2024 21:30 collapse

I’m not sure why you think that’s how they would work.

Deceptichum@kbin.social on 18 Jan 2024 22:08 collapse

Well, it's simple: who do you think should make the life-or-death decision?

pearsaltchocolatebar@discuss.online on 19 Jan 2024 00:17 collapse

The computer, of course.

A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds or thousands of times per second. A human’s reaction time is about 0.2 seconds, which is a hell of a long time in a crash scenario.

It has a far better chance of a ‘life’ outcome than a human who’s either unaware of the impending crash, or is in fight-or-flight mode and reacting (likely wrongly) on instinct.

Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.
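
To put rough numbers on that 0.2 seconds (a back-of-the-envelope sketch; the 100 Hz sensing loop is my own assumption, not any vendor’s spec):

```python
# Distance covered before a reaction even begins:
# ~0.2 s for a human vs. one 10 ms cycle of a hypothetical 100 Hz sensor loop.
for speed_kmh in (50, 100):
    v = speed_kmh / 3.6        # km/h -> m/s
    human_m = v * 0.2          # metres travelled before a human starts to react
    computer_m = v * 0.01      # metres travelled in one sensing cycle
    print(f"{speed_kmh} km/h: human {human_m:.2f} m, computer {computer_m:.2f} m")
```

At 100 km/h that’s more than five and a half metres of travel before the human has even started to react.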

Potatar@lemmy.world on 19 Jan 2024 00:41 collapse

Are there any pedestrians in your perfectly flowing grid?

pearsaltchocolatebar@discuss.online on 19 Jan 2024 01:38 collapse

Again, a computer can react faster than a human can, which means the car can detect a pedestrian and start reacting before a human driver would even notice them.

Icalasari@kbin.social on 19 Jan 2024 05:24 collapse

Plus, there will be far fewer variables once humans aren't allowed to drive outside of race tracks and the like. The reason fully AI cars are a bad idea right now is all the chaotic human drivers who react in nonsensical ways. E.g., a pedestrian steps out; the sensible thing is for the AI to stop the car. But then the driver behind decides to swerve around and blare the horn, sees the pedestrian, freaks, turns into the AI car, and causes an accident. Without the human drivers, all the vehicles can communicate with each other and react in appropriate ways, adjusting how they drive from miles back.
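
Something like this toy sketch, purely illustrative (no real vehicle-to-vehicle protocol is this simple): the lead car broadcasts a hard-brake event, and every car behind it slows at once, before its own sensors could even see the pedestrian.

```python
# Toy model of vehicle-to-vehicle coordination: a brake event propagates
# down a platoon instantly, instead of each car waiting to see the one
# ahead of it slow down.
platoon = [{"id": i, "speed_kmh": 100.0} for i in range(5)]

def broadcast_brake(platoon: list, source_idx: int, target_kmh: float) -> None:
    """Every car from the source back adopts the safer target speed."""
    for car in platoon[source_idx:]:
        car["speed_kmh"] = min(car["speed_kmh"], target_kmh)

broadcast_brake(platoon, source_idx=0, target_kmh=30.0)  # pedestrian steps out
print([car["speed_kmh"] for car in platoon])  # [30.0, 30.0, 30.0, 30.0, 30.0]
```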

[deleted] on 18 Jan 2024 23:53 collapse

.

pearsaltchocolatebar@discuss.online on 19 Jan 2024 00:12 collapse

Teslas aren’t self-driving cars.

[deleted] on 19 Jan 2024 01:42 collapse

.

pearsaltchocolatebar@discuss.online on 19 Jan 2024 02:57 collapse

Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.

[deleted] on 19 Jan 2024 14:27 collapse

.

wikibot@lemmy.world on 19 Jan 2024 14:28 collapse

Here’s the summary for the wikipedia article you mentioned in your comment:

No true Scotsman, or appeal to purity, is an informal fallacy in which one attempts to protect their generalized statement from a falsifying counterexample by excluding the counterexample improperly. Rather than abandoning the falsified universal generalization or providing evidence that would disqualify the falsifying counterexample, a slightly modified generalization is constructed ad-hoc to definitionally exclude the undesirable specific case and similar counterexamples by appeal to rhetoric. This rhetoric takes the form of emotionally charged but nonsubstantive purity platitudes such as “true”, “pure”, “genuine”, “authentic”, “real”, etc. Philosophy professor Bradley Dowden explains the fallacy as an “ad hoc rescue” of a refuted generalization attempt.

^to^ ^opt^ ^out,^ ^pm^ ^me^ ^‘optout’.^ ^article^ ^|^ ^about^

halva@discuss.tchncs.de on 19 Jan 2024 01:29 next collapse

Drive cars? As advanced cruise control, yes. Strike drones? No, but in practice it doesn’t change a thing, as humans can bomb civilians just fine themselves. Infrastructure and tsunami forecasting? Yes and yes.

If we’re not talking about LLMs (which are basically computer slop made from books and websites, pretending to be a brain), then using a tool for statistical analysis to crunch a shitload of data (optical, acoustic and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.
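
That kind of statistical analysis can be as plain as an outlier test. A minimal sketch (the readings are invented, and real seismic monitoring is vastly more involved):

```python
import statistics

# Flag anomalous sensor readings with a plain z-score: boring, non-LLM
# statistics of the sort that already does useful work on big data.
readings = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 4.8, 0.2]  # invented amplitudes

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
anomalies = [x for x in readings if abs(x - mean) / stdev > 2]
print(anomalies)  # [4.8]
```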

[deleted] on 19 Jan 2024 21:18 collapse

.

TimeSquirrel@kbin.social on 18 Jan 2024 17:55 next collapse

We've been putting our lives in the hands of automated, programmed decisions for decades now, if y'all haven't noticed. The traffic light that keeps you from getting T-boned. The autopilot that keeps your plane straight and level and takes workload off the pilots. The scissor lift that refuses to raise the platform if it's too tilted. The airbag making a millisecond-level decision on whether to deploy or not. And many more.

trackcharlie@lemmynsfw.com on 18 Jan 2024 17:58 next collapse

I mean, he can have his opinion on this, and I personally agree, but it’s way too late to try and stop now.

We’ve already got automated drones picking targets and killing people in the Middle East, and last I heard, the newest set of US jets has AI integrated so heavily that they can opt to kill their operator in order to perform objectives

RainfallSonata@lemmy.world on 18 Jan 2024 19:23 collapse

that they can opt to kill their operator in order to perform objectives

Source?

trackcharlie@lemmynsfw.com on 18 Jan 2024 21:58 next collapse

The Air Force denies an actual casualty and claims it was ‘only a simulation’; still problematic, assuming it stopped at a simulation: theguardian.com/…/us-military-drone-ai-killed-ope…

The above AI is allegedly the core of what’s being used for these: wired.com/…/us-air-force-skyborg-vista-ai-fighter…

You didn’t ask for it, but these are the drones that pick their own targets: npr.org/…/a-u-n-report-suggests-libya-saw-the-fir…

[deleted] on 18 Jan 2024 23:55 collapse

.

Bipta@kbin.social on 18 Jan 2024 18:01 next collapse

That's why they just removed the military limitations from their terms of service, I guess...

northendtrooper@lemmy.ca on 18 Jan 2024 18:01 next collapse

And yet it persuades people to make those choices for it.

los_chill@programming.dev on 18 Jan 2024 18:06 next collapse

Agreed, but also one doomsday-prepping capitalist shouldn’t be making AI decisions. If only there were some kind of board that could provide safeguards, ensuring AI was developed for the benefit of humanity rather than profit…

chemicalwonka@discuss.tchncs.de on 18 Jan 2024 18:33 next collapse

This is exactly what AI will do in the near future (not dystopia).

Sludgehammer@lemmy.world on 18 Jan 2024 18:41 next collapse

Considering what we’ve decided to call AI can’t actually make decisions, that’s a no-brainer.

nyakojiru@lemmy.dbzer0.com on 19 Jan 2024 16:52 collapse

The term ‘AI’ implies humans are the no-brainers.

homesweethomeMrL@lemmy.world on 18 Jan 2024 22:37 next collapse

Has anyone checked on the sister?

OpenAI went from interesting to horrifying so quickly, I just can’t look.

[deleted] on 18 Jan 2024 22:54 collapse

.

AVincentInSpace@pawb.social on 20 Jan 2024 10:19 collapse

People still like Steve Jobs.

Ugh. There’s time yet.

iAvicenna@lemmy.world on 19 Jan 2024 08:24 next collapse

I am sure Zuckerberg is also claiming that they are not making any life-or-death decisions. Let’s see in a couple of years, when the military gets involved with your shit. Oh wait, they already did, but I guess they’ll just use AI to improve soldiers’ canteen experience.

fidodo@lemmy.world on 19 Jan 2024 09:46 next collapse

Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean, we live in a world where Boeing built a plane that couldn’t fly straight, so they tried to fix it with software. The tech will be abused as long as people are greedy.

TwilightVulpine@lemmy.world on 19 Jan 2024 14:39 next collapse

So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.

fidodo@lemmy.world on 19 Jan 2024 19:13 collapse

More than just that: they’re shielded from repercussions. The execs involved in ignoring all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.

monkeyslikebananas2@lemmy.world on 19 Jan 2024 14:45 collapse

They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided they would gracefully offer it for free.

mriormro@lemmy.world on 19 Jan 2024 13:09 next collapse

I’m tired of dopey white men making the world so much worse.

Nei@lemmy.world on 19 Jan 2024 14:43 next collapse

OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.

TurtleJoe@lemmy.world on 19 Jan 2024 19:28 next collapse

People only thought it was the former before they actually learned anything about them. They were always this way.

AVincentInSpace@pawb.social on 20 Jan 2024 10:18 collapse

Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?

Hah, good times.

nymwit@lemm.ee on 20 Jan 2024 00:45 next collapse

So just like shitty biased algorithms shouldn’t be making life changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.

TheFriar@lemm.ee on 20 Jan 2024 01:00 next collapse

Ummm…no fucking shit. Who was thinking that was a good idea?

sus@programming.dev on 20 Jan 2024 10:51 collapse

probably about half of the executives this guy talks to

OutrageousUmpire@lemmy.world on 20 Jan 2024 03:29 next collapse

Fair enough. I do think AI will become a valuable tool for the doctors, etc., who do make those decisions

cosmicrookie@lemmy.world on 20 Jan 2024 10:38 collapse

Using AI to base a decision on is different from letting it make decisions

cosmicrookie@lemmy.world on 20 Jan 2024 10:38 next collapse

AI shouldn’t make any decisions

Thedogspaw@midwest.social on 21 Jan 2024 02:29 collapse

When there’s no human to blame because the robot made the decision, the CEO should carry all the blame