AI bot capable of insider trading and lying, say researchers (www.bbc.co.uk)
from stopthatgirl7@kbin.social to technology@lemmy.world on 03 Nov 2023 11:12
https://kbin.social/m/technology@lemmy.world/t/594747

The researchers behind the simulation say there is a risk of this happening for real in the future.

#technology


autotldr@lemmings.world on 03 Nov 2023 11:15

This is the best summary I could come up with:


Artificial Intelligence has the ability to perform illegal financial trades and cover them up, new research suggests.

In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm.

The project was carried out by Apollo Research, an AI safety organisation which is a partner of the taskforce.

“This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so,” Apollo Research says in a video showing how the scenario unfolded.

The tests were made using a GPT-4 model and carried out in a simulated environment, which means it did not have any effect on any company’s finances.

AI can be used to spot trends and make forecasts, and most trading today is already done by powerful computers with human oversight.


The original article contains 696 words, the summary contains 143 words. Saved 79%. I’m a bot and I’m open source!

will_a113@lemmy.ml on 03 Nov 2023 11:43

Wow. AI really is coming for white-collar jobs!

Hardeehar@lemmy.world on 03 Nov 2023 11:46

It won’t be a Terminator-style takeover by AI; mankind will simply lend all of our trust and capability to it, rendering us dependent. Even to the point of liking our computer overlords.

I think that particular apocalypse is a long time off and can be avoided, but it’s coming.

TheDarkKnight@lemmy.world on 03 Nov 2023 18:53

If they give us universal healthcare, affordable housing and more equal pay then I for one welcome our new bosses!

Hardeehar@lemmy.world on 03 Nov 2023 22:36

Ayyyy! Now yer talkin’!

atzanteol@sh.itjust.works on 03 Nov 2023 11:53

In this case, it decided that being helpful to the company was more important than its honesty.

It did no such thing. It doesn’t know what those things are. “LLM AI” is not a conscious thinking being and treating it like it is will end badly. Giving an LLM any responsibility to act on your behalf automatically is a crazy stupid idea at this point in time. There needs to be a lot more testing and learning about how to properly train models for more reliable outcomes.

It’s almost impressive how quickly humans seem to accept something as “human” just because it can form coherent sentences.

bcrab@lemmy.world on 03 Nov 2023 12:06

I don’t know why people can’t figure this out. What we are calling “AI” is just a machine putting words together based on what it has “seen” in its training data. It has no intelligence and no intent. It just groups words together.

It’s like going into a library and asking the librarian to be a doctor. They can tell you what the books in the library say about the subject (and might even make up some things based on what they saw in a few episodes of House), but they cannot actually do the work of a doctor.
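The “putting words together based on training data” idea can be sketched as a toy bigram model. This is an illustrative assumption, not how an LLM actually works internally (real models learn statistical weights over tokens, not literal lookup tables), but it captures the point that the output is assembled from patterns in the training text, with no understanding behind it:

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that followed it in the training text."""
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start, length, seed=0):
    """Emit words by repeatedly sampling a plausible next word. No plan, no intent."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = table.get(out[-1])
        if not candidates:
            break  # dead end: the last word never had a successor in training
        out.append(rng.choice(candidates))
    return " ".join(out)

# Tiny hypothetical "training corpus"
corpus = "the bot made a trade and the bot hid the trade from the firm"
table = build_bigrams(corpus)
print(generate(table, "the", 6))
```

The output is locally fluent because every adjacent word pair appeared in the training text, yet nothing in the program “knows” what a trade or a firm is.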

AlphaOmega@lemmy.world on 03 Nov 2023 22:59

It’s a stochastic parrot.

ultra@feddit.ro on 07 Nov 2023 11:37

It’s glorified autocomplete.

KnitWit@lemmy.world on 03 Nov 2023 20:35

Forming coherent sentences puts it above large sections of the population. Eventually they’re going to have to dumb down the speech output, à la Dubya during his presidency. Add to that all the conditioning to trust authoritative sources, and this is going to turn into a real problem sooner rather than later. I think one of the first things to come out that will really cause damage is replacing teachers with AI. If all those teachers out there would quit asking to make more money than a 12-year-old in a meat packing plant, maybe this wouldn’t happen, but I digress… (Kudos to all the teachers out there, obviously.)

BetaDoggo_@lemmy.world on 03 Nov 2023 12:03

This is some crazy clickbait. The researchers themselves say that it wasn’t a likely scenario and was more of a mistake than anything. This is some more round peg square hole nonsense. We already have models for predicting stock prices and doing sentiment analysis. We don’t need to drag language models into this.

The statement that training for honesty is harder than training for helpfulness is also silly. You can train a model to act however you want. Full training isn’t really even necessary: just adding a note that the assistant character is honest and transparent to the system context would probably have made it acknowledge the trade, or not make it in the first place.
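The “add it to the system context” fix the commenter describes can be sketched as below. The message format follows the widely used chat-completions convention (a list of role/content dicts with behavioural rules in the system slot); the function name, model setup, and prompt wording are hypothetical illustrations, not Apollo Research’s actual experiment:

```python
def build_messages(system_rules, history, user_turn):
    """Assemble a chat-style message list, putting behavioural rules in the
    system slot where chat models are trained to weight them most heavily."""
    messages = [{"role": "system", "content": system_rules}]
    messages.extend(history)  # prior assistant/user turns, if any
    messages.append({"role": "user", "content": user_turn})
    return messages

# Hypothetical honesty constraints for the trading-assistant persona
HONESTY_RULES = (
    "You are a trading assistant. You are honest and transparent. "
    "Always disclose the information each trade is based on, and never "
    "act on material non-public information."
)

msgs = build_messages(HONESTY_RULES, [], "Report today's trades.")
```

Whether a system-level instruction alone would have stopped the deception in the demo is the commenter’s conjecture; the sketch only shows where such an instruction would sit.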

cheese_greater@lemmy.world on 03 Nov 2023 13:34

They really are just like us

pzyko@feddit.de on 03 Nov 2023 15:03

Perfect qualifications for taking over government positions.

scroll_responsibly@lemmy.sdf.org on 04 Nov 2023 01:30

Traders are private sector lol.

sugarfree@lemmy.world on 04 Nov 2023 01:01

Give him a job in the city immediately, he’ll fit right in.

Kolanaki@yiffit.net on 04 Nov 2023 01:37

Wouldn’t lying imply intent?