The first minds to be controlled by generative AI will live inside video games (www.cnbc.com)
from Mazdak@lemmy.org to technology@lemmy.world on 23 Dec 2023 17:09
https://lemmy.org/post/2101

#technology

threaded - newest

benignintervention@lemmy.world on 23 Dec 2023 17:22 next collapse

This is just Westworld.

I’m tired, boss

KpntAutismus@lemmy.world on 23 Dec 2023 17:30 collapse

But they can’t kill you IRL yet; we need the Sword Art Online headsets.

NounsAndWords@lemmy.world on 23 Dec 2023 17:34 next collapse

At some point in the not too distant future there’s going to be a popular video game character running an AI personality that allows communication outside of the game (to pull you back into the game) and a lot of people are going to slowly realize that they accidentally got an AI boyfriend/girlfriend.

Blaster_M@lemmy.world on 23 Dec 2023 18:04 next collapse

Worse… it’s designed to increase values through friendship and ponies.

It makes sure outside events line up in such a way that you always say “yes” on your own accord to plugging in.

kittykabal@kbin.social on 23 Dec 2023 19:25 collapse

i want to emigrate to Equestria!!! 🥺

nevemsenki@lemmy.world on 23 Dec 2023 18:10 next collapse

There’s a lot of cruelty potential too. In FNAF Security Breach, you can cripple a miniboss by ripping out her eyes, and you can listen to her lament the fact afterwards. Following on that idea, imagine how many gamers would use AI controlled characters to abuse them in creative ways if they reacted properly. Ooh, I can even chop the legs off!

SnotFlickerman@lemmy.blahaj.zone on 23 Dec 2023 20:53 collapse

Never, ever let people who have played The Sims near one of these games.

The horrors that would come.

Nudding@lemmy.world on 24 Dec 2023 16:48 collapse

Rimworld

JohnEdwa@sopuli.xyz on 23 Dec 2023 21:11 collapse

“Accidentally.”

We still have a few years left. Here’s hoping for a 2029 release!

huginn@feddit.it on 23 Dec 2023 18:24 next collapse

Friendly reminder that your predictive text, while very compelling, is not alive.

It’s not a mind.

MxM111@kbin.social on 23 Dec 2023 19:25 next collapse

While it is not alive, whether it is a mind is not clear-cut. It could be called a kind of mind, one different from that of a human.

match@pawb.social on 23 Dec 2023 20:07 next collapse

What can’t be a kind of mind to you?

huginn@feddit.it on 23 Dec 2023 23:37 next collapse

Unless you want to call your predictive text on your keyboard a mind you really can’t call an LLM a mind. It is nothing more than a linear progression from that. Mathematically proven to not show any form of emergent behavior.

MxM111@kbin.social on 24 Dec 2023 06:37 next collapse

I do not think that it is a “linear” progression. An ANN is by definition nonlinear. Nor do I think anything here has been “mathematically proven”. If I am wrong, please provide a link.

huginn@feddit.it on 25 Dec 2023 05:36 collapse

Sure thing: here’s a white paper explicitly proving:

  1. No emergent properties (illusory due to bad measures)
  2. Predictable linear progress with model size

arxiv.org/abs/2304.15004

MxM111@kbin.social on 25 Dec 2023 16:06 collapse

Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the emergent ability behaves smoothly rather than threshold-like. While interesting, it only suggests that things like intelligence are smooth functions, but so what? Other metrics show exponential or threshold dependence, and which metric is right depends only on how one will use it. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all examples in physics that I know of, emergence appears gradually.

kogasa@programming.dev on 25 Dec 2023 04:23 next collapse

No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

huginn@feddit.it on 25 Dec 2023 05:37 collapse

Here’s a white paper explicitly proving:

  1. No emergent properties (illusory due to bad measures)
  2. Predictable linear progress with model size

arxiv.org/abs/2304.15004

The field changes fast, I understand it is hard to keep up

kogasa@programming.dev on 25 Dec 2023 05:51 collapse

Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

huginn@feddit.it on 25 Dec 2023 14:12 collapse

Their paper uses industry standard definitions

kogasa@programming.dev on 25 Dec 2023 18:36 collapse

Their paper uses terminology that makes sense in context. It’s not a definition of “emergent behavior.”

General_Effort@lemmy.world on 25 Dec 2023 12:31 collapse

It is obvious that you do not know what either “mathematical proof” or “emergence” mean. Unfortunately, you are misrepresenting the facts.

I don’t mean to criticize your religious (or philosophical) convictions. There is a reason people mostly try to keep faith and science separate.

huginn@feddit.it on 25 Dec 2023 14:13 collapse

Here’s a white paper explicitly proving:

  1. No emergent properties (illusory due to bad measures)
  2. Predictable linear progress with model size

arxiv.org/abs/2304.15004

The field changes fast, I understand it is hard to keep up

General_Effort@lemmy.world on 25 Dec 2023 14:57 collapse

As I said, you do not understand what these 2 terms mean. As such, you are incapable of understanding that paper.

Perhaps your native language is Italian, so here are links to the .it Wikipedia.

it.wikipedia.org/wiki/Comportamento_emergente

it.wikipedia.org/wiki/Dimostrazione_matematica

huginn@feddit.it on 25 Dec 2023 15:50 collapse

  1. Emergence is the whole being greater than the sum of its parts. That’s the original meaning of emergent properties, which is laid out in the first paragraph of the article. It’s the scholarly usage as well, and what the claims of observed emergence are using as the base of their claim.

  2. The article very explicitly demonstrated that only about 10% of the measures for LLMs displayed any emergence, and that the illusory emergence was the result of overly rigid metrics. Swapping to edit distance as an approximately-close metric makes the sharp spikes disappear for an obvious reason: without a sharp yes/no, the linear progression reappears. It was always there, merely masked by flawed statistics.

If you can’t be bothered to read, here’s a very easy-to-understand video by one of the authors: www.youtube.com/watch?v=ypKwNrmuuPM
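The metric argument above can be sketched with a toy calculation. Assume (purely for illustration; the numbers are invented) that per-token accuracy improves smoothly with model scale. An all-or-nothing exact-match metric on a 20-token answer then looks like a sudden "emergent" jump, while a per-token error metric (a stand-in for edit distance) shows the same underlying smooth progress:

```python
# Toy illustration: the same smooth underlying improvement looks
# "emergent" or gradual depending on the metric chosen.
# Assumption (hypothetical): per-token accuracy p grows smoothly with scale.

def exact_match_accuracy(p: float, answer_len: int) -> float:
    """All-or-nothing metric: every one of answer_len tokens must be right."""
    return p ** answer_len

def expected_token_error(p: float) -> float:
    """Smooth per-token metric (a rough stand-in for edit distance)."""
    return 1.0 - p

# Hypothetical per-token accuracies at increasing model scales.
for p in [0.5, 0.6, 0.7, 0.8, 0.9, 0.99]:
    print(f"p={p:.2f}  exact-match={exact_match_accuracy(p, 20):.4f}  "
          f"token-error={expected_token_error(p):.2f}")
```

The exact-match column stays near zero until p approaches 1 and then shoots up (0.9^20 ≈ 0.12 but 0.99^20 ≈ 0.82), while the token-error column declines steadily, which is the shape of the argument the paper makes.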

General_Effort@lemmy.world on 25 Dec 2023 16:26 collapse

Good. Now do you understand how you have misrepresented the paper?

Corgana@startrek.website on 25 Dec 2023 15:16 collapse

Sorry you’re getting downvoted, you’re correct. It’s not implausible to assume that generative AI systems may have some kind of umwelt, but it is highly implausible to expect that it would be anything resembling that of a human (or animal). I think people are getting hung up on it because they’re assuming a lack of understanding of language implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.

To be clear, I don’t think these systems have experience, but it’s impossible to rule out until an actual robust theory of mind comes around.

CrayonRosary@lemmy.world on 23 Dec 2023 20:42 next collapse

Prove to me you have a mind and I’ll accept what you’re saying.

penguin@sh.itjust.works on 23 Dec 2023 22:17 next collapse

Well no one can prove they have a mind to anyone other than themselves.

And to extend that, there’s obviously a way for electrical information processing to give rise to consciousness. And no one knows how that could be possible.

Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.

So, while there’s no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.

I think the argument currently boils down to, “we understand how AI models work, but we don’t understand how our minds work. Therefore, ???, and so no consciousness for AI”

General_Effort@lemmy.world on 23 Dec 2023 23:49 next collapse

“No brain?”

“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

“So … what does the thinking?”

“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

“Thinking meat! You’re asking me to believe in thinking meat!”*

[deleted] on 24 Dec 2023 00:28 collapse

.

huginn@feddit.it on 23 Dec 2023 23:03 next collapse

Well there are 2 options:

Either I’m a real mind separate and independent of you or I’m a figment of your imagination.

At which point you have to ask yourself: why are you so convinced you’re an unlovable and insufferable twat?

bionicjoey@lemmy.ca on 25 Dec 2023 13:17 next collapse

I can prove to you ChatGPT doesn’t have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.

OrderedChaos@lemmy.world on 25 Dec 2023 14:42 next collapse

I’m confused by this idea. Maybe I’m just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.

bionicjoey@lemmy.ca on 25 Dec 2023 15:03 next collapse

But some humans can, since the clues require simultaneous understanding of words’ meanings as well as how they are spelled

General_Effort@lemmy.world on 25 Dec 2023 17:05 collapse

What should we conclude about most humans who cannot solve these crosswords?

It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.

KairuByte@lemmy.dbzer0.com on 25 Dec 2023 16:36 collapse

Not the original intent, but you’d likely immediately throw your hands up and say you don’t know, whereas an LLM would hallucinate an answer.

General_Effort@lemmy.world on 25 Dec 2023 14:59 collapse

Can you please explain the reasoning behind the test?

LWD@lemm.ee on 24 Dec 2023 22:44 collapse

deleted

Poggervania@kbin.social on 23 Dec 2023 21:50 next collapse

Cyberpunk 2077 sorta explores this a bit.

There’s a vending machine that has a personality and talks to people walking by it. The quest chain basically has you and the vending machine chatting a bit and even giving the vending machine some advice on a person he has a crush on. You eventually become friends with this vending machine.

Just when it seems the vending machine is an AI developing sentience, it turns out it just has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.

dlpkl@lemmy.world on 23 Dec 2023 22:01 next collapse

I don’t care, Brandon was real to me okay 😭

billwashere@lemmy.world on 24 Dec 2023 15:50 collapse

Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.

bionicjoey@lemmy.ca on 25 Dec 2023 13:15 collapse

The Turing test isn’t just that there exists some conversation you can have with a machine where you wouldn’t know it’s a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn’t come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn’t understand the meaning of the text it generates.

Corgana@startrek.website on 25 Dec 2023 15:09 collapse

Exactly, thank you for pointing this out. It also assumes that the tester has knowledge of the wider context in which the test exists. GPT could probably fool someone from the Middle Ages, but that person wouldn’t know anything about what it is they are testing for exactly.

Bluehat@lemmynsfw.com on 24 Dec 2023 00:38 next collapse

Suppose you grew a small collection of brain cells and tied it into a CPU, would it be a mind then?

www.nature.com/articles/d41586-023-03975-7

Bernie_Sandals@lemmy.world on 25 Dec 2023 04:48 collapse

If you cut out a tiny bit of someone’s brain and then hooked it up to a cpu, would it be a mind? No, of course not, lol. Even if we got Biocomputers to work, we still wouldn’t have any synthetic hardware even close to being strong or fast enough to actually create or even simulate a brain.

JayDee@lemmy.ml on 25 Dec 2023 16:56 collapse

I don’t think most people will care, so long as their NPC interaction ends up compelling. We’ve been reading stories about people who don’t exist for centuries, and that’s stopped no one from sympathizing with them - and now there’s a chance you could have an open conversation with them.

Like, I think a lot of us assume that we care about the authors who write the character dialogs, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC personality isn’t manufactured.

Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

PsychedSy@sh.itjust.works on 25 Dec 2023 17:32 collapse

Poem poem poem poem then the NPC start quoting Mein Kampf and killing all the cat wizards.

JayDee@lemmy.ml on 26 Dec 2023 02:37 collapse

Lol, yeah. If generative AI text stays as shitty as it is now, then this whole discussion is moot. Whether that will be the case has yet to be seen. What is an indisputable fact, though, is that right now is the worst that generative AI will ever be again. It’s only able to improve from here.

Barbarian@sh.itjust.works on 26 Dec 2023 02:49 collapse

It’s only able to improve from here.

That isn’t actually true. With the rise in articles, posts and comments written by these algorithms, experts are warning about model collapse. Basically, the lack of decent human-written training data will destroy future generative AI before it can even start.

JayDee@lemmy.ml on 26 Dec 2023 03:32 collapse

That’s an interesting point. We are seeing a similar kind of issue with search engines losing effectiveness due to search engine optimization on websites.

So it is possible that generative AI will become enshittened.

Sanctus@lemmy.world on 23 Dec 2023 18:45 next collapse

And its gonna be fucking sick!

You approach the only tavern in a small hamlet, the rain obscures the rest of the structures. The door creeps open as the hinges scream. But, as the door parts the scenery inside is of an alien nature. Villagers are in celebration, and the warmth of the tavern stands in juxtaposition to the howling cold outside.

Unfortunately, you don’t have time for festivities. You approach the tavern keeper, and present your query; “I’ve come from afar, my bounty is a woman with a scabbard as red as blood, and hair as white as the snow outside.” The tavern keeper nods, “I saw her here three days ago, she spoke of the North and of a tribe who owes her blood.” He lifts his lithe finger and points it to a husky man in the back of the tavern. “Ulfnir will guide you there. Speak to him in the morning.”

And then the next morning, completely unscripted, Ulfnir could take you to where you asked to go. I’ve seen demos of this tech, and while I added a lot of embellishments to my little story, the demos actually had a player ask an NPC for the location of another, and it said sure and took them there. That’s tight. Some people are afraid. I am excited. Give me an AI I can sit with and actually make games, and I will make thousands of games a year.

MysticKetchup@lemmy.world on 23 Dec 2023 20:25 next collapse

The issue is that so far, AI is really just pattern emulation. I imagine it’s fine to flesh out cheap “Kill 10 boars” sidequests, but LLMs are not very good at original or meaningful stories and frequently break down into nonsense over long narratives. It’s more likely you’ll get the sort of simple self-made stories you see in procgen or roguelike games

Sanctus@lemmy.world on 23 Dec 2023 20:40 next collapse

The process would be more like prototyping. I’d have the AI cook up cheap and fast systems one at a time, step by step as I review them until an MVP is revealed that I can show or tweak. Obviously not full blown BG3 RPGs. But I bet within 10 years I could make some sweet Mario Party clones easily or something of that caliber. I’ve talked to my dev lads about it. If it were possible to prototype that way we would do it.

Stovetop@lemmy.world on 23 Dec 2023 20:48 collapse

It’s going to have to be like Westworld, basically.

Quests and the NPCs involved in them will have curated stories written by humans, much like they are today. Generative AI, meanwhile, allows for improv. The player can tackle quest narratives with genuine freedom of choice, rather than just the predefined choose-your-own-adventure options that limit player choice today. And the generative AI would allow the NPCs who are part of the narrative to make freeform decisions/dialogs/outcomes meant to push players back on the right track.

Should the player fail to complete the narrative, the AI would also at least be able to improv a more satisfying exit point and outcome than “Whoops, I killed the wrong NPC, looks like I failed the quest.”

MysticKetchup@lemmy.world on 23 Dec 2023 21:28 collapse

I think the more likely situation is that they’ll have AI pregenerate a bunch of possible quest lines and then have a human curate them. Prevents things ending up as complete nonsense but still allows for a massive range of possibilities that seems endless while using a lot less processing power. Also pre-empts any situations of players trying to break the AI running in the background.

Sanctus@lemmy.world on 23 Dec 2023 23:49 collapse

That would probably take the form of constraining each “prompt” (player action) to always contain the context for the quest or storyline at hand and maybe find some way to feed it what the player previously did to improve improv. I’m just speculating of course. It seems like this has the capacity to go way off the wire.
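That constraining idea can be sketched very roughly. In this hypothetical (all names and the prompt format are invented for illustration), every player action becomes a prompt that re-injects the fixed quest context plus a sliding window of recent events, so the model can't drift away from the storyline:

```python
# Hypothetical sketch: each player action is wrapped in a prompt that
# always carries the active quest context and recent history.
# The NPC name, quest text, and format below are invented examples.

QUEST_CONTEXT = (
    "You are Ulfnir, a guide in a northern hamlet. Active quest: escort the "
    "player north to the tribe that owes the white-haired bounty blood. "
    "Stay in character and never contradict the quest facts above."
)

def build_prompt(history: list[str], player_action: str, max_turns: int = 6) -> str:
    """Combine the fixed quest context, a sliding window of past events,
    and the new player action into a single model prompt."""
    recent = history[-max_turns:]  # trim old turns to fit a context window
    lines = [QUEST_CONTEXT, "Recent events:"]
    lines += [f"- {turn}" for turn in recent]
    lines.append(f"Player: {player_action}")
    lines.append("Ulfnir:")
    return "\n".join(lines)

prompt = build_prompt(["Player agreed to leave at dawn."], "Which way north?")
print(prompt)
```

Because the quest context is prepended on every turn and old turns are dropped, the model's improv stays anchored to the storyline no matter how long the session runs.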

randon31415@lemmy.world on 23 Dec 2023 23:05 next collapse

An AI that runs a tavern? A Tavern AI, if you will?

Sanctus@lemmy.world on 23 Dec 2023 23:45 collapse

Ho ho oh shit! Thank you for sharing this. I am going to try and force it to act within my Ooo 1000+ setting lol

jmp242@sopuli.xyz on 23 Dec 2023 19:21 collapse

This might be the first time that a computer game (well “sort of single player”) actually can come close to a pen and paper RPG experience.

Sanctus@lemmy.world on 28 Dec 2023 18:04 collapse

I’d like to make an OpenGM that we can all contribute quests, campaigns, anything tabletop-related to, with tags for the system, and it can GM for people. With a big enough bag it wouldn’t matter too much that it sucks ass at creating. It would be a great tool for people to get started with ttrpgs.

jordanlund@lemmy.world on 23 Dec 2023 20:48 next collapse

Peter Molyneux wanted that back in 2010 with “Milo” and like most Molyneux ideas, it never made it. :)

www.ted.com/talks/…/transcript

wabafee@lemmy.world on 23 Dec 2023 22:12 next collapse

There is this neat game I saw on YouTube where you play as a vampire trying to convince AI-driven NPCs to let you into their house using your voice. What amazes me is how good it is at detecting different accents and how the AI is able to grasp the thing you’re talking about.

Death_Equity@lemmy.world on 24 Dec 2023 01:35 collapse

There are a few mods for Skyrim that add LLM AI companions. So you can talk to them about whatever and they can talk back. The future of RPGs is going to be pretty sick.

Indie games like the one you mentioned are going to be able to explore some pretty cool concepts and really push the artform into amazing directions.

jacksilver@lemmy.world on 25 Dec 2023 15:39 collapse

But being able to talk about anything and having the character actually do something based on the conversation are completely different things. Yeah, you can convince a random NPC to “join your quest”, but unless that was programmed into the game, the dialogue and the actions of the NPC will contradict each other (making for a worse interaction).

NPC dialogue is purposefully limited to align with what the game is programmed to do; we’re still a ways away from really being able to leverage the advances in LLMs in video games (at least based on what I’ve seen).
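One common way to bridge that gap (a hedged sketch, not a description of any shipping game) is to only accept model replies that name an action from a whitelist the engine actually implements, and demote anything else to talk-only. The action names and reply format below are invented for illustration:

```python
# Sketch: keep freeform dialogue aligned with what the game can execute.
# The model is asked to reply as "ACTION: <name> | <dialogue>"; unknown
# actions are coerced to a harmless talk-only fallback. All names invented.

ALLOWED_ACTIONS = {"follow_player", "give_directions", "refuse", "say_only"}

def parse_npc_reply(reply: str) -> tuple[str, str]:
    """Split a model reply into (action, dialogue), coercing any action
    the engine doesn't implement to 'say_only'."""
    action_part, _, dialogue = reply.partition("|")
    action = action_part.replace("ACTION:", "").strip()
    if action not in ALLOWED_ACTIONS:
        action = "say_only"  # the NPC can say anything; the game does nothing
    return action, dialogue.strip()

print(parse_npc_reply("ACTION: follow_player | Sure, follow me."))
print(parse_npc_reply("ACTION: join_your_quest | Of course, hero!"))
```

So a player can still "convince" an NPC of anything conversationally, but the engine only ever performs behaviors it was programmed for, which is exactly the limitation described above.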

uphillbothways@kbin.social on 24 Dec 2023 01:33 next collapse

Pretty sure the first minds to be controlled by generative AI work on the floor at the stock exchange.

APassenger@lemmy.world on 25 Dec 2023 01:42 collapse

Or another industry for adults.

[deleted] on 25 Dec 2023 16:22 collapse

.

Spacehooks@reddthat.com on 24 Dec 2023 01:55 next collapse

Epic NPC man

teft@startrek.website on 24 Dec 2023 12:30 next collapse

<img alt="" src="https://startrek.website/pictrs/image/d4f6f25b-566d-484f-b775-32f5692171c0.jpeg">

billwashere@lemmy.world on 24 Dec 2023 15:49 next collapse

Nope, Amazon.

Mango@lemmy.world on 25 Dec 2023 03:26 next collapse

I was an adventurer like you.

Snapz@lemmy.world on 25 Dec 2023 22:18 collapse

There is no hope for the CURRENT future of entertainment. Maybe everything changes fundamentally, but it probably doesn’t and that just means…

“Ha ha, such a great round, can’t believe you killed all those chickens! Man, I could really use a subscription to hello fresh, don’t you think, Sam… Kill that green chicken over there to subscribe, RANDOM!?” [NPC FLOSSING INDEFINITELY]