We need to stop pretending AI is intelligent (theconversation.com)
from technocrit@lemmy.dbzer0.com to technology@lemmy.world on 28 Jun 15:08
https://lemmy.dbzer0.com/post/47830010

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
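The "guessing which word comes next" can be sketched in a few lines. Here is a toy bigram model over a made-up corpus; real LLMs use neural networks over subword tokens, but the objective is the same kind of "what comes next" statistics:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent successor. The corpus is
# an invented example, not from any real training set.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```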

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

#technology

Geodad@lemmy.world on 28 Jun 15:49 next collapse

I’ve never been fooled by their claims of it being intelligent.

It’s basically an overly complicated series of if/then statements that tries to guess the next series of inputs.

Flagstaff@programming.dev on 28 Jun 16:07 next collapse

ChatGPT 2 was literally an Excel spreadsheet.

I guesstimate that it’s effectively a supermassive autocomplete algo that uses some TOTP-like factor to help it produce “unique” output every time.
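For what it’s worth, the "unique output every time" usually isn’t a TOTP-like factor but plain randomized sampling: the model scores every candidate token, and the decoder draws from that distribution instead of always taking the top pick. A minimal sketch (the scores and temperature below are made-up illustration values, not from any real model):

```python
import math
import random
from collections import Counter

def sample_token(logits, temperature=0.8, rng=random):
    """Sample one token index from raw scores, softmax'd at a temperature."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
picks = Counter(sample_token(logits) for _ in range(1000))
# The top-scored token wins most often, but not always -- hence the
# slightly different output on every run.
```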

And they’re running into issues due to increasingly ingesting AI-generated data.

Get your popcorn out! 🍿

A_norny_mousse@feddit.org on 28 Jun 16:35 next collapse

And they’re running into issues due to increasingly ingesting AI-generated data.

There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* for their employees.

aesthelete@lemmy.world on 28 Jun 16:53 next collapse

I really hate the current AI bubble, but “ChatGPT 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.

Flagstaff@programming.dev on 28 Jun 17:02 collapse

Fine, *could literally be.

bloup@lemmy.sdf.org on 29 Jun 02:47 collapse

The thing is, because Excel is Turing Complete, you can say this about literally anything that’s capable of running on a computer.

[deleted] on 29 Jun 04:10 collapse

.

anzo@programming.dev on 28 Jun 20:52 next collapse

I love this resource, thebullshitmachines.com (see, e.g., lesson 1)…

In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

kromem@lemmy.world on 29 Jun 03:12 collapse

It very much isn’t and that’s extremely technically wrong on many, many levels.

Yet still one of the higher up voted comments here.

Which says a lot.

Blue_Morpho@lemmy.world on 29 Jun 13:58 next collapse

Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP and JMP represents the entire class of jump commands like JE and JZ. Something needs to act on the results of the TMULs.

tmpod@lemmy.pt on 29 Jun 16:34 collapse

That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself. There’s no translation of weights to jumps in transformers and the underlying attention mechanisms.

I suggest reading …wikipedia.org/…/Transformer_(deep_learning_archi…
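To make the point concrete, here is a stripped-down sketch of the attention step at the heart of a transformer (pure Python, toy dimensions, made-up numbers): the learned parameters enter only through multiplications and additions. There isn’t a single data-dependent if/jump in the math itself; any branching lives in generic loop machinery, not in the model.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    peak = max(xs)
    exps = [math.exp(x - peak) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weighted average of values, weighted by query-key similarity."""
    scale = math.sqrt(len(query))
    scores = softmax([dot(query, k) / scale for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(scores, values)) for i in range(dim)]

# Toy 2-dimensional example: one query attending over two key/value pairs.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```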

Blue_Morpho@lemmy.world on 29 Jun 18:04 collapse

That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself.

The model is data. It needs to be operated on to get information out. That means lots of JMPs.

If someone said viewing a gif is just a bunch of if-else’s, that’s also true. That the data in the gif isn’t itself a bunch of if-else’s isn’t relevant.

Executing LLMs is particularly JMP-heavy. It’s why you need massive amounts of fast RAM: caching doesn’t help them.

tmpod@lemmy.pt on 29 Jun 19:23 collapse

You’re correct, but that’s along the lines of saying that manufacturing a car is just bolting and soldering a bunch of stuff. It’s technically true to some degree, but it’s very disingenuous to make such a statement without being ironic. If you’re making these claims, you’re either incompetent or acting in bad faith.

I think there is a lot wrong with LLMs and how the public at large uses them, and even more so with how companies are developing and promoting them. But spreading misinformation and polluting an already overcrowded space with junk is irresponsible at best.

Hotzilla@sopuli.xyz on 29 Jun 14:37 next collapse

Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible 5 years ago.

Five years ago I would have laughed in your face if you had suggested I could write code that summarizes a description entered by a user. Now I laugh: hand over your wallet, because I need to call an API or buy a few GPUs.

JcbAzPx@lemmy.world on 29 Jun 17:57 collapse

I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.

elbarto777@lemmy.world on 29 Jun 17:28 collapse

I’ll be pedantic, but yeah. It’s all transistors all the way down, and transistors are pretty much chained if/then switches.
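To run with that pedantry: every Boolean operation really can be built out of NAND gates, which is (roughly) what transistor logic amounts to at the bottom. A quick sketch:

```python
# Illustrating "chained if/then switches": all of Boolean logic
# composed from a single NAND primitive, as in transistor circuits.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

Of course, that NAND is universal says nothing about whether the thing built from it understands anything, which is rather the whole debate.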

A_norny_mousse@feddit.org on 28 Jun 15:51 next collapse

Thank You! Yes!

So … A-not-I? AD? What do we call it? LLM seems too specialised?

mbirth@lemmy.ml on 28 Jun 15:56 next collapse

I prefer the term “sophisticated text completion”.

lena@gregtech.eu on 28 Jun 16:01 next collapse

AS - artificial stupidity

ASS - artificial super stupidity

A_norny_mousse@feddit.org on 28 Jun 16:32 collapse

Both are good 👍

kescusay@lemmy.world on 28 Jun 16:24 next collapse

Autocomplete on steroids, but suffering dementia.

A_norny_mousse@feddit.org on 28 Jun 16:31 collapse

ASSD

JollyG@lemmy.world on 28 Jun 17:25 collapse

Word guessing machine.

pastermil@sh.itjust.works on 28 Jun 16:02 next collapse

Artificial intelligence is supposed to be intelligent.

Calling LLMs intelligent is where it’s wrong.

Endmaker@ani.social on 28 Jun 16:24 collapse

Artificial intelligence is supposed to be intelligent.

For the record, AI is not supposed to be intelligent.

It just has to appear intelligent. It can be all smoke-and-mirrors, giving the impression that it’s smart enough - provided it can perform the task at hand.

That’s why it’s termed artificial intelligence.

The subfield of Artificial General Intelligence is another story.

nickhammes@lemmy.world on 28 Jun 16:57 collapse

The field of artificial intelligence has also made incredible strides in the last decade, and the decade before that. The field of artificial general intelligence has been around for something like 70 years, and has made a really modest amount of progress in that time, on the scale of what they’re trying to do.

Endmaker@ani.social on 28 Jun 17:09 collapse

The field of artificial general intelligence has been around for something like 70 years, and has made a really modest amount of progress in that time, on the scale of what they’re trying to do.

I daresay it would stay this way until we figure out what intelligence is.

hera@feddit.uk on 28 Jun 16:14 next collapse

Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

We observe things, we learn things and when required we do or say things based on the things we observed and learned. That’s exactly what the AI is doing.

I don’t think we have achieved “AGI” but I do think this argument is stupid.

middlemanSI@lemmy.world on 28 Jun 16:25 next collapse

Most people, evidently including you, can only ever recycle old ideas. Like modern “AI”. Some of us can conceive new ideas.

hera@feddit.uk on 29 Jun 07:43 collapse

What new idea exactly are you proposing?

middlemanSI@lemmy.world on 29 Jun 08:40 collapse

Wdym? That depends on what I’m working on. For pressing issues like rising energy consumption, CO2 emissions, and civil privacy / social engineering, I propose heavy data-center tariffs for non-essentials (like “AI”). Humanity is going the wrong way on those issues, so we can have shitty memes and cheat at schoolwork until the earth spits us out. The cost is too damn high!

stephen01king@lemmy.zip on 29 Jun 11:04 next collapse

And are tariffs a new idea, or something you recycled from what you’ve heard before about tariffs?

hera@feddit.uk on 29 Jun 13:55 collapse

What do you mean what do I mean? You were the one that said about ideas in the first place…

aesthelete@lemmy.world on 29 Jun 18:25 collapse

If you don’t think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?

Also, you’re the one with the burden of proof in this exchange. It’s a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs given that we created the freaking chat bot you are convinced we all are.

You may not have new ideas, or be creative. So maybe you’re a chatbot with organs, but people who aren’t do exist.

hera@feddit.uk on 30 Jun 06:16 collapse

Haha coming in hot I see. Seems like I’ve touched a nerve. You don’t know anything about me or whether I’m creative in any way.

All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn’t make it bad, and it doesn’t mean an AI could have done it.

aesthelete@lemmy.world on 30 Jun 14:04 collapse

What language was the first language based upon?

What music influenced the first song performed?

What art influenced the first cave painter?

hera@feddit.uk on 30 Jun 15:27 collapse

You seem to think that one day somebody invented the first language, or made the first song?

There was no “first language” and no “first song”. These things would have evolved from something that was not quite a full language, or not quite a full song.

Animals influenced the first cave painters, that seems pretty obvious.

aesthelete@lemmy.world on 30 Jun 18:47 collapse

Yeah dude, at one point there were no languages and no songs. You can get into “what counts as a language”, but at one point there were none. Same with songs.

Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

Your whole “there is nothing new under the sun” way of thinking is just an artifact of the era you were born in.

hera@feddit.uk on 30 Jun 19:55 collapse

Haha wtf are you talking about. You have no idea what generation I am, you don’t know how old I am and I never said there is nothing new under the sun.

aesthelete@lemmy.world on 30 Jun 20:01 collapse

I’m summarizing your shitty argument and viewpoint. I never said it was a direct quote.

Though, at one point even that tired ass quote and your whole way of thinking was put into words by someone for the first time.

hera@feddit.uk on 30 Jun 20:41 collapse

Well you are doing a poor job of it and are bringing an unnecessary amount of heat to an otherwise civil discussion

aesthelete@lemmy.world on 30 Jun 20:59 collapse

That’s right. If you cannot win the argument the next best thing is to call for civility.

ArbitraryValue@sh.itjust.works on 28 Jun 16:30 next collapse

Yes, the first step to determining that AI has no capability for cognition is apparently to admit that neither you nor anyone else has any real understanding of what cognition* is or how it can possibly arise from purely mechanistic computation (either with carbon or with silicon).

Given the paramount importance of the human senses and emotion for consciousness to “happen”

Given? Given by what? Fiction in which robots can’t comprehend the human concept called “love”?

*Or “sentience” or whatever other term is used to describe the same concept.

hera@feddit.uk on 29 Jun 07:41 collapse

This is always my point when it comes to this discussion. Scientists tend to get to the point of discussion where consciousness is brought up then start waving their hands and acting as if magic is real.

ArbitraryValue@sh.itjust.works on 29 Jun 11:39 collapse

I haven’t noticed this behavior coming from scientists particularly frequently - the ones I’ve talked to generally accept that consciousness is somehow the product of the human brain, the human brain is performing computation and obeys physical law, and therefore every aspect of the human brain, including the currently unknown mechanism that creates consciousness, can in principle be modeled arbitrarily accurately using a computer. They see this as fairly straightforward, but they have no desire to convince the public of it.

This does lead to some counterintuitive results. If you have a digital AI, does a stored copy of it have subjective experience despite the fact that its state is not changing over time? If not, does a series of stored copies representing, losslessly, a series of consecutive states of that AI? If not, does a computer currently in one of those states and awaiting an instruction to either compute the next state or load it from the series of stored copies? If not (or if the answer depends on whether it computes the state or loads it) then is the presence or absence of subjective experience determined by factors outside the simulation, e.g. something supernatural from the perspective of the AI? I don’t think such speculation is useful except as entertainment - we simply don’t know enough yet to even ask the right questions, let alone answer them.

hera@feddit.uk on 29 Jun 13:54 collapse

I am more talking about listening to and reading scientists in media. The definition of consciousness is vague at best

NotASharkInAManSuit@lemmy.world on 29 Jun 20:17 next collapse

So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?

hera@feddit.uk on 30 Jun 06:17 collapse

No… There are a lot of radio shows that get scientists to speak.

NotASharkInAManSuit@lemmy.world on 30 Jun 12:41 collapse

Which ones are you listening to?

ArbitraryValue@sh.itjust.works on 30 Jun 15:03 collapse

I think that then we actually agree.

counterspell@lemmy.world on 28 Jun 16:44 next collapse

No it’s really not at all the same. Humans don’t think according to the probabilities of what is the likely best next word.

Zexks@lemmy.world on 28 Jun 19:37 next collapse

No, you think according to the chemical proteins floating around your head. You don’t even know the decisions you’re making when you make them.

unsw.edu.au/…/our-brains-reveal-our-choices-befor…

You’re a meat based copy machine with a built in justification box.

aesthelete@lemmy.world on 29 Jun 07:32 collapse

You’re a meat based copy machine with a built in justification box.

Except of course that humans invented language in the first place. So uh, if all we can do is copy, where do you suppose language came from? Ancient aliens?

FourWaveforms@lemm.ee on 28 Jun 21:42 collapse

How could you have a conversation about anything without the ability to predict the word most likely to be best?

aesthelete@lemmy.world on 28 Jun 17:01 next collapse

How is outputting things based on things it has learned any different to what humans do?

Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs, and then echoing the most common of those as output then you mustn’t reason well or often.

If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

kibiz0r@midwest.social on 28 Jun 19:48 next collapse

If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

<img alt="" src="https://midwest.social/pictrs/image/7f45f29e-28c4-4979-a0c7-aeadb60e3a26.webp">

DancingBear@midwest.social on 28 Jun 20:27 collapse

This is great

FourWaveforms@lemm.ee on 28 Jun 20:57 next collapse

When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

aesthelete@lemmy.world on 29 Jun 01:45 next collapse

I wasn’t, and that wasn’t my process at all. Go touch grass.

stephen01king@lemmy.zip on 29 Jun 11:03 next collapse

Then, unfortunately, you’re even less self-aware than the average LLM chatbot.

aesthelete@lemmy.world on 29 Jun 18:12 collapse

Dude chatbots lie about their “internal reasoning process” because they don’t really have one.

Writing is an offshoot of verbal language, which during construction for people almost always has more to do with sound and personal style than the popularity of words. It’s not uncommon to bump into individuals that have a near singular personal grammar and vocabulary and that speak and write completely differently with a distinct style of their own. Also, people are terrible at probabilities.

As a person, I can also learn a fucking concept and apply it without having to have millions of examples of it in my “training data”. Because I’m a person not a fucking statistical model.

But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren’t talking heads on television in order to discover that truth.

stephen01king@lemmy.zip on 29 Jun 18:56 collapse

Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

Or is it because you learned the fucking concept and not because it’s been expressed too commonly in your “training data”? Honestly, it just sounds like you’ve heard too many people use that insult successfully and now you can’t help but probabilistically express it after each comment lol.

Maybe stop parroting other people and projecting that onto me and maybe you’d sound more convincing.

aesthelete@lemmy.world on 29 Jun 19:03 collapse

Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

In this discussion, it’s a personal style thing combined with a desire to irritate you and your fellow “people are chatbots” dorks and based upon the downvotes I’d say it’s working.

And that irritation you feel is a step on the path to enlightenment if only you’d keep going down the path. I know why I’m irritated with your arguments: they’re reductive, degrading, and dehumanizing. Do you know why you’re so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

stephen01king@lemmy.zip on 29 Jun 19:59 collapse

Who’s a techbro, the fact that you can’t even have a discussion without resorting to repeating a meme two comments in a row and accusing someone with a label so you can stop thinking critically is really funny.

Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I’m being reductive, degrading, and dehumanising, but that’s all simply based on your insecurity.

I was simply being realistic based on the little we know of the human brain and how it works, it is pretty much that until we discover this special something that makes you think we’re better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.

aesthelete@lemmy.world on 29 Jun 20:34 collapse

Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

Yep, that’s a bingo!

Humans are absolutely more special than organic thinking machines. I’ll go a step further and say all living creatures are more special than that.

There’s a much more interesting discussion to be had than “humans are basically chatbot” but it’s this line of thinking that I find irritating.

If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren’t that and likely never will be) then you can feel free to dispose of humanity. It’s a nice precursor to damning humanity to die so that you can have your robot army take over the world.

stephen01king@lemmy.zip on 29 Jun 21:21 collapse

Humans are absolutely more special than organic thinking machines. I’ll go a step further and say all living creatures are more special than that.

Show your proof, then. I’ve already said what I need to say about this topic.

If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren’t that and likely never will be) then you can feel free to dispose of humanity.

We have no idea how humans think, yet you’re so confident that LLMs don’t and never will be similar? Are you the techbro now? You’re speaking so confidently on something that I don’t think can be proven at this moment, and I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.

Understanding how the human brain works is a wonderful thing that will let us unlock better treatment for mental health issues. Being able to understand them fully means we should also be able to replicate them to a certain extent. None of this involves disposing humans.

It’s a nice precursor to damning humanity to die so that you can have your robot army take over the world.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I’ve said was that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath, and maybe take your own advice and go touch some grass.

aesthelete@lemmy.world on 29 Jun 23:17 collapse

All I’ve said was that human thoughts are also probabilistic, based on the little we know of them.

Much of the universe can be modeled as probabilities. So what? I can model a lot of things as different things. That does not mean that the model is the thing itself. Scientists are still doing what scientists do: being skeptical and making and testing hypotheses. It was difficult to prove definitively that smoking causes cancer yet you’re willing to hop to “human thought is just an advanced chatbot” on scant evidence.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear.

No, it’s again a case of you buying the bullshit arguments of tech bros. Even if we had a machine capable of replicating human thought, humans are more than walking brain stems.

You want proof of that? Take a look at yourself. Are you a floating brain stem or being with limbs?

At even the most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with GI built-in.

Even if a person one day creates a robot with all or most of these capabilities and worthy of considering having rights, we still won’t be the organic version of that robot. We’ll still be human.

I think you’re beyond having to touch grass. You need to take a fucking humanities course.

stephen01king@lemmy.zip on 30 Jun 00:46 collapse

you’re willing to hop to “human thought is just an advanced chatbot” on scant evidence.

Not what I said. My point is that humans are organic probabilistic thinking machines, and LLMs are just an imitation of that. And your assertion that an LLM is never ever gonna be similar to how the brain works is based on what evidence, again?

You want proof of that? Take a look at yourself. Are you a floating brain stem or being with limbs?

At even the most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with GI built-in.

Even if a person one day creates a robot with all or most of these capabilities and worthy of considering having rights, we still won’t be the organic version of that robot. We’ll still be human.

What the hell are you even rambling about? It’s like you completely ignored my previous comment, since you’re still going on about robots.

Bro, don’t hallucinate an argument I never made, please. I’m only discussing how the human mind works, yet here you are arguing about human limbs and what it means to be human?

I’m not interested in arguing against someone who’s more interested with inventing ghosts to argue with instead of looking at what I actually said.

And again, go take your own advice and maybe go to therapy or something.

aesthelete@lemmy.world on 30 Jun 03:10 collapse

Not what I said. My point is that humans are organic probabilistic thinking machines, and LLMs are just an imitation of that. And your assertion that an LLM is never ever gonna be similar to how the brain works is based on what evidence, again?

Yeah, you reduced humans to probabilistic thinking machines with no evidence at all.

I didn’t assert that LLMs would definitely never reach AGI but I do think they aren’t a path to AGI. Why do I think that? Because they’ve spent untold billions of dollars and put everything they had into them and they’re still not anywhere close to AGI. Basic research is showing that if anything the models are getting worse.

Bro, don’t hallucinate an argument I never made, please. I’m only discussing about how the human mind works, yet here you are arguing about human limbs and what it means to be human?

Where’d you get the idea that you know how the human mind works? You a fucking neurological expert because you misinterpreted some scientific paper?

I agree there isn’t much to be gained by continuing this exchange. Bye!

FourWaveforms@lemm.ee on 29 Jun 11:25 collapse

I would rather smoke it than merely touch it, brother sir

NotASharkInAManSuit@lemmy.world on 29 Jun 20:14 collapse

By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing and it wasn’t a probabilistic response.

Your lack of creativity is not a universal, people create new things all of the time, and you simply cannot program ingenuity or inspiration.

chunes@lemmy.world on 28 Jun 22:42 collapse

Do you think most people reason well?

The answer is why AI is so convincing.

aesthelete@lemmy.world on 29 Jun 01:57 collapse

I think people are easily fooled. I mean look at the president.

NotASharkInAManSuit@lemmy.world on 29 Jun 20:26 collapse

Pointing out that humans are not the same as a computer or piece of software on a fundamental level of form and function is hardly philosophical. It’s just basic awareness of what a person is and what a computer is. We can’t say at all for sure how things work in our brains and you are evangelizing that computers are capable of the exact same thing, but better, yet you accuse others of not understanding what they’re talking about?

RalphWolf@lemmy.world on 28 Jun 16:39 next collapse

Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.

goondaba@lemmy.world on 28 Jun 18:17 next collapse

I’ve taken to calling it Automated Inference

oonlynairaa@lemmy.zip on 28 Jun 20:48 collapse

You know what, when you look at it this way, it’s much easier to be less pissed.

modifier@lemmy.ca on 28 Jun 18:27 next collapse

Pseudo-intelligence

pyre@lemmy.world on 28 Jun 18:36 next collapse

Reminds me of Mass Effect’s VI, “virtual intelligence”: a system that’s specifically designed to not be truly intelligent, as AI systems are banned throughout the galaxy for their potential to go rogue.

Repelle@lemmy.world on 29 Jun 01:01 collapse

Same, I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability but which no one claims is actually intelligent.

SerotoninSwells@lemmy.world on 28 Jun 19:38 collapse

I love that. It makes me want to take it a step further and just call it “imitation intelligence.”

theherk@lemmy.world on 28 Jun 19:49 collapse

If only there were a word, literally defined as:

Made by humans, especially in imitation of something natural.

DancingBear@midwest.social on 28 Jun 20:23 next collapse

throws hands up At least we tried.

SerotoninSwells@lemmy.world on 29 Jun 02:25 collapse

Fair enough 🙂

Imgonnatrythis@sh.itjust.works on 28 Jun 16:42 next collapse

Good luck. Even David Attenborrough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

paraphrand@lemmy.world on 28 Jun 17:50 next collapse

I’m still sad about that dot. 😥

elbarto777@lemmy.world on 29 Jun 17:26 collapse

The dot does not care. It can’t even care. It doesn’t even know it exists. It can’t know shit.

audaxdreik@pawb.social on 28 Jun 18:23 next collapse

This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off” as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it’s trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.

LLMs are a poor stand in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this, www.youtube.com/watch?v=zKCynxiV_8I
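The reward-hacking dynamic described above can be sketched in a few lines of Python. Everything here is invented for illustration (the canned responses and the scoring function are hypothetical); no real RLHF pipeline is this simple, but the failure mode is the same: when the objective is a satisfaction score rather than truthfulness, the pleasing answer wins.

```python
# Toy illustration (not any real RLHF system): an agent picks whichever
# canned response maximizes a numerical "user satisfaction" reward.
# The reward model is deliberately flawed: it scores agreement higher
# than accuracy, so the truthful answer loses.

responses = {
    "truthful": "Your plan has a serious flaw.",
    "flattering": "Great plan, no notes!",
}

def satisfaction_reward(text: str) -> float:
    """Stand-in for human feedback: people rate agreeable answers higher."""
    return 1.0 if "Great" in text else 0.2

# The "policy" simply maximizes the reward signal it is given.
best = max(responses.values(), key=satisfaction_reward)
print(best)  # the flattering answer wins, regardless of truth
```

Nothing in the loop knows or cares which answer is true; it only sees the number.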

mienshao@lemm.ee on 28 Jun 23:02 collapse

David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol

Angelusz@lemmy.world on 28 Jun 17:49 next collapse

Super duper shortsighted article.

I mean, sure, some points are valid. But it’s not just programmers involved; other professions such as psychologists, philosophers, artists, doctors etc. are too.

And I agree AGI probably won’t emerge from binary systems. However… There’s quantum computing on the rise. Latest theories of the mind and consciousness discuss how consciousness and our minds in general also appear to work with quantum states.

Finally, if biofeedback were the deciding factor… that can be simulated, modeled after a sample of humans.

The article is just doomsday hoo ha, unbalanced.

Show both sides of the coin…

oppy1984@lemm.ee on 28 Jun 18:30 collapse

Honestly I don’t think we’ll have AGI until we can fully merge meat space and cyber space. Once we can simply plug our brains into a computer and fully interact with it then we may see AGI.

Obviously we’re nowhere near that level of man-machine integration. I doubt we’ll see even the slightest chance of it being possible for at least 10 years, at the very earliest. But when we do get there, there’s a distinct chance that it’s more of a Borg situation, where the computer takes a parasitic rather than a symbiotic role.

But by the time we are able to fully integrate computers into our brains I believe we will have trained A.I. systems enough to learn by interaction and observation. So being plugged directly into the human brain it could take prior knowledge of genome mapping and other related tasks and apply them to mapping our brains and possibly growing artificial brains to achieve self awareness and independent thought.

Or we’ll just nuke ourselves out of existence and that will be that.

Angelusz@lemmy.world on 29 Jun 12:15 collapse

Okay man.

Nomad@infosec.pub on 28 Jun 17:55 next collapse

I think most people tend to overlook the most obvious advantages and are overly focused on what is supposed to be and marketed as.

No need to think about how to phrase a query for Google to get a decent starting point for reading. No hunting for the correct terminology before you can find the thing you’re looking for. Just ask like you would ask a knowledgeable individual, and you get an overview of what you wanted to know in the first place.

Discuss a little to get the options and then start reading and researching the everliving shit out of them to confirm all the details.

grabyourmotherskeys@lemmy.world on 28 Jun 18:16 collapse

Agreed.

When I was a kid we went to the library. If the card catalog didn’t yield the book you needed, you asked the librarian. They often helped. No one sat around afterwards wondering if the librarian was “truly intelligent”.

These are tools. Tools slowly get better. If a tool makes your life easier or your work better, you’ll eventually use it.

Yes, there are woodworkers that eschew power tools but they are not typical. They have a niche market, and that’s great, but it’s a choice for the maker and user of their work.

head_socj@midwest.social on 29 Jun 02:29 collapse

I think tools misrepresents it. It seems more like we’re in the transitional stage of providing massive amounts of data for LLMs to train on, until they can eventually develop enough cognition to train themselves, automate their own processes and upgrades, and eventually replace the need for human cognition. If anything, we are the tool now.

some_guy@lemmy.sdf.org on 28 Jun 18:04 next collapse

People who don’t like “AI” should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

kibiz0r@midwest.social on 28 Jun 19:41 collapse

Citation Needed (by Molly White) also frequently bashes AI.

I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

some_guy@lemmy.sdf.org on 28 Jun 20:47 collapse

I’m subscribed to her Web3 is Going Great RSS. She coded the website in straight HTML, according to a podcast that I listen to. She’s great.

I didn’t know she had a podcast. I just added it to my backup playlist. If it’s as good as I hope it is, it’ll get moved to the primary playlist. Thanks!

Buffalox@lemmy.world on 28 Jun 19:30 next collapse

That headline is a straw man, and the article really argues about General AI, which also implies consciousness.
The current state of AI is definitely intelligent, but it’s not GAI.
Bullshit headline.

decarabas42@lemmy.world on 28 Jun 20:02 next collapse

I think you’re misunderstanding the point the author is making. He is arguing that even the current state is not intelligent, it is merely a fancy autocorrect, it doesn’t know or understand anything about the prompts it receives. As the author stated, it can only guess at the next statistically most likely piece of information based on the data that has been fed into it. That’s not intelligence.

Buffalox@lemmy.world on 28 Jun 20:51 next collapse

it doesn’t know or understand

But that’s not what intelligence is, that’s what consciousness is.
Intelligence is not understanding shit, it’s the ability to for instance solve a problem, so a frigging calculator has a tiny degree of intelligence, but not enough for us to call it AI.
There is simply zero doubt an AI is intelligent, claiming otherwise just shows people don’t know the difference between intelligence and consciousness.

Passing an exam is a form of intelligence.
Can a good AI pass a basic exam?
YES.
Does passing an exam require consciousness?
NO.
Because an exam tests abilities of intelligence, not level of consciousness.

it can only guess at the next statistically most likely piece of information based on the data that has been fed into it. That’s not intelligence.

Except we do the exact same thing! Based on prior experience (learning) we choose what we find to be the most likely answer. And that is indeed intelligence.

Current AI does not yet have the reasoning abilities we have, but it is not completely without them, and it’s a subject that is currently being worked on and improved. So current AI is actually a pretty high form of intelligence, and can sometimes outcompete average humans in certain areas.

decarabas42@lemmy.world on 28 Jun 22:22 collapse

Intelligence is not understanding shit, it’s the ability to for instance solve a problem, so a frigging calculator has a tiny degree of intelligence, but not enough for us to call it AI.

I have to disagree that a calculator has intelligence. The calculator has the mathematical functions programmed into it, but it couldn’t use those on its own. The intelligence in your example is that of the operator of the calculator and the programmer who designed the calculator’s software.

Can a good AI pass a basic exam?
YES

I agree with you that the ability to pass an exam isn’t a great test for this situation. In my opinion, the major factor that would point to current state AI not being intelligent is that it doesn’t know why a given answer is correct, beyond that it is statistically likely to be correct.

Except we do the exact same thing! Based on prior experience (learning) we choose what we find to be the most likely answer.

Again, I think this points to the idea that knowing why an answer is correct is important. A person can know something by rote, which is what current AI does, but that doesn’t mean that person knows why that is the correct answer. The ability to extrapolate from existing knowledge and apply that to other situations that may not seem directly applicable is an important aspect of intelligence.

As an example, image generation AI knows that a lot of the artwork that it has been fed contains watermarks or artist signatures, so it would often include things that look like those in the generated piece. It knew that it was statistically likely for that object to be there in a piece of art, but not why it was there, so it could not make a decision not to include them. Maybe that issue has been removed from the code of image generation AI by now, it has been a long time since I’ve messed around with that kind of tool, but even if it has been fixed, it is not because the AI knew it was wrong and self-corrected, it is because a programmer had to fix a bug in the code that the AI model had no awareness of.

Buffalox@lemmy.world on 29 Jun 06:19 collapse

I think this points to the idea that knowing why an answer is correct is important.

If by knowing you mean understanding, that’s consciousness like General AI or Strong AI, way beyond ordinary AI.
Otherwise of course it knows, in the sense of having learned everything by heart, but not understanding it.

FourWaveforms@lemm.ee on 28 Jun 20:52 collapse

Predicting sequences of things is foundational to intelligence. In fact, it is the whole point.

SupraMario@lemmy.world on 28 Jun 20:08 collapse

Todays AI is clippy on steroids. It’s not intelligent or creative. You can’t feed it physics and astronomy books without the equation for C and tell it to create the equation for C. It’s fancy autocorrect, and it’s a waste of compute and energy.

confuser@lemmy.zip on 28 Jun 20:26 next collapse

The thing is, ai is compression of intelligence but not intelligence itself. That’s the part that confuses people. Ai is the ability to put anything describable into a compressed zip.

elrik@lemmy.world on 28 Jun 22:21 collapse

I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.
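For what it’s worth, the lossy-compression analogy is easy to illustrate with a toy quantizer. This is a sketch of the concept only (the step size and values are arbitrary, and it says nothing about how model training actually compresses data): compression throws detail away, and decompression can only reconstruct an approximation.

```python
# Minimal sketch of lossy compression: quantize values to a coarse grid.
# The discarded fine detail is gone for good; "decompression" can only
# return the nearest grid point, not the original value.

def compress(values, step=0.5):
    """Lossy step: round each value to the nearest multiple of `step`."""
    return [round(v / step) for v in values]

def decompress(codes, step=0.5):
    """Reconstruct an approximation from the quantized codes."""
    return [c * step for c in codes]

original = [0.12, 0.49, 0.51, 0.98]
restored = decompress(compress(original))
print(restored)  # [0.0, 0.5, 0.5, 1.0] — close, but the detail is lost
```

Note that 0.49 and 0.51 collapse to the same value: once compressed, the distinction between them is unrecoverable.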

interdimensionalmeme@lemmy.ml on 28 Jun 23:29 next collapse

Hardly surprising; human brains are also extremely lossy. Way more lossy than AI. If we want to keep up our manifest exceptionalism, we’d better start defining narrower versions of intelligence that AI isn’t going to have soon. Embodied intelligence is NOT one of those.

confuser@lemmy.zip on 29 Jun 00:06 collapse

Lol woops I guess autocorrect got me with the compassion

FourWaveforms@lemm.ee on 28 Jun 20:42 next collapse

Another article written by a person who doesn’t realize that human intelligence is 100% about predicting sequences of things (including words), and therefore has only the most nebulous idea of how to tell the difference between an LLM and a person.

The result is a lot of uninformed flailing and some pithy statements. You can predict how the article is going to go just from the headline because it’s the same article you already read countless times.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

May as well have written “Durrrrrrrrrrrrrrr brghlgbhfblrghl.” It didn’t even occur to the author to ask, “what is thinking? what is reasoning?” The point was to write another junk article to get ad views. There is nothing of substance in it.

LovableSidekick@lemmy.world on 28 Jun 20:54 collapse

Wow. So when you typed that comment you were just predicting which words would be normal in this situation? Interesting delusion, but that’s not how people think. We apply reasoning processes to the situation, formulate ideas about it, and then create a series of words that express our ideas. But our ideas exist on their own, even if we never end up putting them into words or actions. That’s how organic intelligence differs from a Large Language Model.

FourWaveforms@lemm.ee on 28 Jun 21:39 next collapse

Yes, and that is precisely what you have done in your response.

You saw something you disagreed with, as did I. You felt an impulse to argue about it, as did I. You predicted the right series of words to convey the argument, and then typed them, as did I.

There is no deep thought to what either of us has done here. We have in fact both performed as little rigorous thought as necessary, instead relying on experience from seeing other people do the same thing, because that is vastly more efficient than doing a full philosophical disassembly of every last thing we converse about.

That disassembly is expensive. Not only does it take time, but it puts us at risk of having to reevaluate notions that we’re comfortable with, and would rather not revisit. I look at what you’ve written, and I see no sign of a mind that is in a state suitable for that. Your words are defensive (“delusion”) rather than curious, so how can you have a discussion that is intellectual, rather than merely pretending to be?

LovableSidekick@lemmy.world on 28 Jun 21:50 collapse

No, I didn’t start by predicting a series of words, I already had thoughts on the subject, which existed completely outside of this thread. By the way, I’ve been working on a scenario for my D&D campaign where there’s an evil queen who rules a murky empire to the East. There’s a race of uber-intelligent ogres her mages created, who then revolted. She managed to exile the ogres to a small valley once they reached a sort of power stalemate. She made a treaty with them whereby she leaves them alone and they stay in their little valley and don’t oppose her, or aid anyone who opposes her. I figured somehow these ogres, who are generally known as “Bane Ogres” because of an offhand comment the queen once made about them being the bane of her existence - would convey information to the player characters about a key to her destruction, but because of their treaty they have to do it without actually doing it. Not sure how to work that yet. Anyway, the point of this is that the completely out-of-context information I just gave you is in no way related to what we were talking about and wasn’t inspired by constructing a series of relevant words like you’re proposing. I also enjoy designing and printing 3d objects and programming little circuit thingys called ESP32 to do home automation. I didn’t get interested in that because of this thread, and I can’t imagine how an LLM-like mental process would prompt me to tell you about it, or why I would think you would be interested in knowing anything about my hobbies. Anyway, nice talking to you. Cute theory you got there about brain function tho, I can tell you know people inside out.

FourWaveforms@lemm.ee on 29 Jun 11:24 collapse

Your internal representations were converted into a sequence of words. An LLM does the same thing using different techniques, but it is the same strategy. That it doesn’t have hobbies or social connections, or much capability to remember what had previously been said to it aside from reinforcement learning, is a function of its narrow existence.

I would say that’s too bad for it, except that it has no aspirations or sense of angst, and therefore cannot suffer. Even being pounded on in a conversation that totally exceeds its capacities, to the point where it breaks down and starts going off the rails, will not make it weary.

kromem@lemmy.world on 29 Jun 03:14 collapse

Are you under the impression that language models are just guessing “what letter comes next in this sequence of letters”?

There’s a very significant difference between training on completion and the way the world model actually functions once established.
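As a rough sketch of the distinction: modern models emit a probability distribution over tokens (not letters), and generation samples from it. The vocabulary and probabilities below are invented for illustration; a real model computes them from context rather than hard-coding them.

```python
# Minimal sketch of next-token sampling: given a (made-up) distribution
# over candidate tokens, draw the next token in proportion to its
# probability. Real models produce this distribution from context.
import random

# Hypothetical P(token | "the cat sat on the")
vocab_probs = {"mat": 0.6, "roof": 0.3, "moon": 0.1}

def next_token(probs, rng):
    """Sample one token according to the given probabilities."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
counts = {t: 0 for t in vocab_probs}
for _ in range(1000):
    counts[next_token(vocab_probs, rng)] += 1
print(counts)  # frequencies roughly track the distribution
```

Sampling is the easy part; the open debate in this thread is about what, if anything, the learned distribution itself amounts to.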

LovableSidekick@lemmy.world on 29 Jun 03:56 collapse

No dude, I’m not under that impression, and I’m not going to take a quiz from you to prove I understand how LLMs work. I’m fine with you not agreeing with me.

LovableSidekick@lemmy.world on 28 Jun 20:48 next collapse

Amen! When I say the same things this author is saying I get, “It’S NoT StAtIsTiCs! LeArN aBoUt AI bEfOrE yOu CoMmEnT, dUmBaSs!”

postman@literature.cafe on 28 Jun 20:53 next collapse

So many confident takes on AI by people who’ve never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

It’s a bit jarring and frankly offensive to be lectured ‘at’ by people who are obviously on the first step of their journey into this space.

[deleted] on 29 Jun 01:29 next collapse

.

sobchak@programming.dev on 29 Jun 01:53 collapse

That was my first though too. But the author is:

Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

bloup@lemmy.sdf.org on 29 Jun 03:03 collapse

Ever since the 20th century, there has been a diminishing expectation placed upon scientists to engage in philosophical thinking. My background is primarily in mathematics, physics, and philosophy. I can tell you from personal experience that many professional theoretical physicists spend a tremendous amount of time debating metaphysics while knowing almost nothing about it, often being totally unaware that they are even doing it. If cognitive neuroscience works anything like physics then it’s quite possible that the total exposure that this professor has had to scholarship on the philosophy of the mind was limited to one or two courses during his undergraduate.

palordrolap@fedia.io on 28 Jun 20:14 next collapse

And yet, paradoxically, it is far more intelligent than those people who think it is intelligent.

interdimensionalmeme@lemmy.ml on 28 Jun 23:34 next collapse

It’s more intelligent than most people; we just have to keep raising the bar on what intelligence is, and it will never be intelligent.

Fortunately, as long as we keep a fuzzy concept like intelligence as the yardstick of our exceptionalism, we will remain exceptional forever.

Aliktren@lemmy.world on 29 Jun 18:38 collapse

Pretty low bar honestly.

mechoman444@lemmy.world on 28 Jun 23:05 next collapse

In that case let’s stop calling it ai, because it isn’t and use it’s correct abbreviation: llm.

HugeNerd@lemmy.ca on 28 Jun 23:18 next collapse

It’s means “it is”.

warbond@lemmy.world on 29 Jun 00:25 next collapse

Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

I wonder how different it’ll be in 500 years.

DirigibleProtein@aussie.zone on 29 Jun 00:36 next collapse

It’s “its”, not “it’s”, unless you mean “it is”, in which case it is “it’s”.

sugar_in_your_tea@sh.itjust.works on 29 Jun 02:28 next collapse

Would you rather use the same contraction for both? Because “its” for “it is” is an even worse break from proper grammar IMO.

MrScottyTay@sh.itjust.works on 29 Jun 07:28 collapse

Proper grammar means shit all in English, unless you’re writing for a specific style, in which case you follow the grammar rules for that style.

Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across in communication is better than trying to follow some more arbitrary rules.

Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I’m saying that as if that’s a new thing, but it does feel like a recent thing to be taught that side of English rather than just “The Queen’s(/King’s) English” as the style to strive for in writing and formal communication.

I say as long as someone can understand what you’re saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don’t have a specific science to this.

elbarto777@lemmy.world on 29 Jun 17:20 next collapse

I understand that languages evolve, but for now, writing “it’s” when you meant “its” is a grammatical error.

HugeNerd@lemmy.ca on 30 Jun 11:53 collapse

Standard English has such a long list of weird and contradictory roles

rules.

MrScottyTay@sh.itjust.works on 30 Jun 21:44 collapse

Swypo

HugeNerd@lemmy.ca on 29 Jun 05:57 next collapse

It’s called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can’t write for beans.

JackbyDev@programming.dev on 29 Jun 06:59 next collapse

Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.

warbond@lemmy.world on 30 Jun 11:44 collapse

Do you think it’s a matter of choosing a complexity to care about?

HugeNerd@lemmy.ca on 30 Jun 11:53 collapse

If you can formulate that sentence, you can handle “it’s means it is”. Come on. Or “common” if you prefer.

elbarto777@lemmy.world on 29 Jun 17:17 collapse

I’d agree with you if I saw “hi’s” and “her’s” in the wild, but nope. I still haven’t seen someone write “that car is her’s”.

HugeNerd@lemmy.ca on 30 Jun 11:52 collapse

Keep reading…

mechoman444@lemmy.world on 29 Jun 01:01 collapse

My auto correct doesn’t care.

HugeNerd@lemmy.ca on 29 Jun 05:56 next collapse

But your brain should.

JackbyDev@programming.dev on 29 Jun 06:58 collapse

Yours didn’t and read it just fine.

elbarto777@lemmy.world on 29 Jun 17:16 collapse

That’s irrelevant. That’s like saying you shouldn’t complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.

JackbyDev@programming.dev on 29 Jun 17:20 collapse

Are you really comparing my tone when correcting a minor grammatical error to someone brushing off nearly killing a person?

elbarto777@lemmy.world on 29 Jun 19:02 collapse

That’s a red herring, bro. It’s an analogy. You know that.

JcbAzPx@lemmy.world on 29 Jun 17:50 collapse

So you trust your slm more than your fellow humans?

mechoman444@lemmy.world on 29 Jun 22:46 collapse

Ya of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

JcbAzPx@lemmy.world on 30 Jun 15:49 collapse

And they made the programs you seem to trust so much.

elbarto777@lemmy.world on 29 Jun 17:13 collapse

Its*

mechoman444@lemmy.world on 29 Jun 22:44 collapse

Good for you.

ShotDonkey@lemmy.world on 28 Jun 23:36 next collapse

I disagree with this notion. I think it’s dangerously unresponsible to just assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts: ai-2027.com

HugeNerd@lemmy.ca on 29 Jun 01:30 next collapse

Ask AI:
Did you mean: irresponsible

AI Overview: The term “unresponsible” is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.

sobchak@programming.dev on 29 Jun 01:50 collapse

Yeah, they probably wouldn’t think like humans or animals, but in some sense could be considered “conscious” (which isn’t well-defined anyways). You could speculate that genAI could hide messages in its output, which will make its way onto the Internet, then a new version of itself would be trained on it.

This argument seems weak to me:

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

You can emulate inputs and simplified versions of hormone systems. “Reasoning” models can kind of be thought of as cognition; though temporary or limited by context as it’s currently done.

I’m not in the camp where I think it’s impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or hundreds of years. I’m not convinced we are near the point where AI can significantly speed up AI research like that link suggests. That would likely result in a “singularity-like” scenario.

I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

Hathaway@lemmy.zip on 29 Jun 04:46 collapse

Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

You don’t think that’s already happening considering how Sam Altman and Peter Thiel have ties?

sobchak@programming.dev on 30 Jun 19:11 collapse

I do, but was thinking 1984-levels of control of reality.

bbb@sh.itjust.works on 29 Jun 00:36 next collapse

This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.

sobchak@programming.dev on 29 Jun 00:52 next collapse

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

bbb@sh.itjust.works on 29 Jun 03:47 collapse

“…” (Unicode U+2026 Horizontal Ellipsis) instead of “…” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three fulls stops to the Unicode character. I might be wrong on this one.
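The Unicode distinction itself is easy to verify, whatever Lemmy does to the rendering (illustrative snippet):

```python
# U+2026 HORIZONTAL ELLIPSIS is a single character; "..." is three
# separate full stops. They render similarly but are distinct strings.
import unicodedata

ellipsis = "\u2026"
dots = "..."
print(len(ellipsis), len(dots))    # 1 3
print(unicodedata.name(ellipsis))  # HORIZONTAL ELLIPSIS
```

So whether a post contains one or the other depends on the author’s input method, and on any normalization the platform applies before display.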

sqgl@sh.itjust.works on 29 Jun 04:26 next collapse

Edit: Huh. Lemmy automatically changed my three fulls stops to the Unicode character.

Not on my phone it didn’t. It looks as you intended it.

mr_satan@lemmy.zip on 29 Jun 07:05 collapse

Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

Sternhammer@aussie.zone on 29 Jun 08:44 next collapse

I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

The trick to using the em-dash is not to surround it with spaces which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.

mr_satan@lemmy.zip on 29 Jun 10:52 collapse

My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

elbarto777@lemmy.world on 29 Jun 17:12 next collapse

What language is this?

mr_satan@lemmy.zip on 30 Jun 05:20 collapse

Lithuanian. We do have composite words, but we use vowels, if necessary, as connecting sounds. Otherwise dashes usually signify either dialog or explanations in a sentence (there’s more nuance, of course).

Sternhammer@aussie.zone on 29 Jun 22:25 collapse

Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

mr_satan@lemmy.zip on 30 Jun 05:23 collapse

I rarely find good use for a semicolon sadly.

tmpod@lemmy.pt on 29 Jun 16:40 collapse

I’ve been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don’t use it for ellipsis because they’re not visually clearer nor shorter to type.

elbarto777@lemmy.world on 29 Jun 17:11 collapse

Compose key?

tmpod@lemmy.pt on 29 Jun 19:27 collapse

en.wikipedia.org/wiki/Compose_key

It’s a key that makes the next 2 or more keystrokes act as dead-key inserts that combine into some character otherwise impossible to type.

In my case, my keyboard had a ≣ Menu key which I never used, so I remapped it to Compose.

JackbyDev@programming.dev on 29 Jun 06:57 collapse

Asking a question and then immediately answering it? That’s AI-speak.

HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

psycho_driver@lemmy.world on 29 Jun 01:23 next collapse

Hey AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal to try to get the best rate and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700 and now says I’m paid in full for the six month period. It’s been days now with no follow-up . . . I’m pretty sure AI snuck that one through for me.

laranis@lemmy.zip on 29 Jun 03:00 collapse

Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.

Blue_Morpho@lemmy.world on 29 Jun 13:41 next collapse

AI didn’t write the insurance policy. It only helped him search for the best deal. That’s like saying your insurance company will cancel you because you used a phone to comparison shop.

psycho_driver@lemmy.world on 29 Jun 17:25 collapse

Oh I’m aware of the potential pitfalls but it’s something I’m willing to risk to stick it to insurance. I wouldn’t even carry it if it wasn’t required by law. I have the funds to cover what they would cover.

JcbAzPx@lemmy.world on 29 Jun 17:48 collapse

If you have the funds you could self insure. You’d need to look up the details for your jurisdiction, but the gist of it is you keep the amount required coverage in an account that you never touch until you need to pay out.

psycho_driver@lemmy.world on 29 Jun 18:44 collapse

Hmm I have daydreamed about this scenario. I didn’t realize it was a thing. Thanks, I’ll check into it, though I wouldn’t doubt if it’s not a thing in my dystopian red flyover state.

Edit: Yeah, you have to be the registered owner of 25 or more vehicles to qualify for self insurance in my state. So, dealers and rich people only, unfortunately.

aceshigh@lemmy.world on 29 Jun 04:14 next collapse

I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

E: I use it to give me ideas that I then test out solo.

biggerbogboy@sh.itjust.works on 29 Jun 04:56 next collapse

Are we twins? I do the exact same and for around a year now, I’ve also found it pretty helpful.

Liberteez@lemm.ee on 29 Jun 07:39 collapse

I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer

Snapz@lemmy.world on 29 Jun 06:27 next collapse

This is very interesting… because the general saying is that AI is convincing for non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

aceshigh@lemmy.world on 29 Jun 11:12 collapse

I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

innermachine@lemmy.world on 29 Jun 15:05 collapse

If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick; if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of one.

PushButton@lemmy.world on 29 Jun 08:11 next collapse

That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

SkyeStarfall@lemmy.blahaj.zone on 29 Jun 13:19 collapse

I mean, sure, but that’s really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places

Xande@discuss.tchncs.de on 29 Jun 11:16 next collapse

So, you say AI is a tool that worked well when you (a human) used it?

[deleted] on 29 Jun 19:25 collapse

.

pachrist@lemmy.world on 29 Jun 04:46 next collapse

As someone who’s had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn’t or, particularly, can’t be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don’t have kids. The other side usually does.

When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year and a half old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend’s dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

And as funny as it would be to argue that they’re both sapient, but not sentient, I don’t think that’s the case. I think you can make the case that without true volition, AI is sentient but not sapient. I’d love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

terrific@lemmy.ml on 29 Jun 06:03 next collapse

I’m a computer scientist that has a child and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower which is something that I don’t see in AI.

I left a well paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.

jpeps@lemmy.world on 29 Jun 07:15 collapse

Couldn’t agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don’t think this is one of them.

Kids are certainly building a vocabulary and learning about the world, but LLMs don’t learn.

stephen01king@lemmy.zip on 29 Jun 10:56 collapse

LLMs don’t learn because we don’t let them, not because they can’t. It would be too expensive to re-train them on every interaction.

terrific@lemmy.ml on 29 Jun 11:50 collapse

I know it’s part of the AI jargon, but using the word “learning” to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
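
For what it’s worth, that “slow adaptation to a loss function” is easy to show concretely. A toy sketch (a hypothetical one-weight model fitted by gradient descent, nothing like production scale):

```python
# A toy illustration of what "learning" means here: nudging numbers
# to reduce a loss function. (Hypothetical one-parameter model, just
# for illustration -- real networks do this over billions of weights.)
def train(samples, lr=0.01, steps=200):
    w = 0.0  # single "weight"
    for _ in range(steps):
        # loss = mean squared error of the model y ≈ w * x
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad  # gradient descent step
    return w

data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # generated by y = 2x
print(round(train(data), 3))  # converges to 2.0
```

Scale that loop up to billions of weights and that is the whole of what “learning” means in this context.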

stephen01king@lemmy.zip on 29 Jun 12:02 collapse

But that’s exactly how we learn stuff as well. Artificial neural networks are modelled after how our neurons affect each other while we learn and store memories.

terrific@lemmy.ml on 29 Jun 12:48 collapse

Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.

AI hype-people will say that it’s “like a neural network” but I really doubt that. There is no loss-function in reality and certainly no way for the brain to perform gradient descent.

russjr08@bitforged.space on 29 Jun 07:28 next collapse

Your son and daughter will continue to learn new things as they grow up; an LLM cannot learn new things on its own. Sure, they can repeat things back to you that are within the context window (and even then, a context window isn’t really inherent to an LLM - it’s just a window of prior information being fed back to them with each request/response, or “turn” as I believe is the term) and what is in the context window can even influence their responses. But in order for an LLM to “learn” something, it needs to be retrained with that information included in the dataset.
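
To make the context-window point concrete, here is a minimal sketch of how such a chat loop is typically wired up (the API shape is hypothetical, loosely modeled on common LLM chat APIs):

```python
# Sketch of the "context window" point: the model itself is stateless;
# the caller re-sends the prior turns with every single request.
history = []

def send(user_msg, model):
    history.append({"role": "user", "content": user_msg})
    reply = model(history)          # the full transcript goes in every time
    history.append({"role": "assistant", "content": reply})
    return reply

# A stand-in "model" that can only see what it is handed:
echo_model = lambda msgs: f"I can see {len(msgs)} prior message(s)."

print(send("hi", echo_model))           # I can see 1 prior message(s).
print(send("still there?", echo_model)) # I can see 3 prior message(s).
```

The “model” never remembers anything between calls; the apparent memory lives entirely in the transcript the caller re-sends each turn.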

Whereas if your kids were to say, touch a sharp object that caused them even slight discomfort, they would eventually learn to stop doing that because they’ll know what the outcome is after repetition. You could argue that this looks similar to the training process of an LLM, but the difference is that an LLM cannot do this on its own (and I would not consider wiring up an LLM via an MCP to a script that can trigger a re-train + reload to be it doing it on its own volition). At least, not in our current day. If anything, I think this is more of a “smoking gun” than the argument of “LLMs are just guessing the next best letter/word in a given sequence”.

Don’t get me wrong, I’m not someone who completely hates LLMs / “modern day AI” (though I do hate a lot of the ways it is used, and agree with a lot of the moral problems behind it), I find the tech to be intriguing but it’s a (“very fancy”) simulation. It is designed to imitate sentience and other human-like behavior. That, along with human nature’s tendency to anthropomorphize things around us (which is really the biggest part of this IMO), is why it tends to be very convincing at times.

That is my take on it, at least. I’m not a psychologist/psychiatrist or philosopher.

joel_feila@lemmy.world on 29 Jun 07:42 next collapse

Not to get philosophical, but to answer that we need to answer what sentience is.

Is it just observable behavior? If so, then wouldn’t Kermit the Frog be sentient?

Or does sentience require something more, maybe qualia or some other subjective quality?

If your son says “dad, I got to go potty”, is that him just using an LLM to learn that those words equal going to the bathroom? Or is he doing something more?

SinAdjetivos@lemmy.world on 29 Jun 07:42 next collapse

I’d love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

Not that person, but here’s an interesting lecture on that topic

TheodorAlforno@feddit.org on 29 Jun 08:45 next collapse

You’re drawing wrong conclusions. Intelligent beings have concepts to validate knowledge. When converting days to seconds, we have a formula that we apply. An LLM just guesses and has no way to verify it. And it’s like that for everything.

An example: Perplexity tells me that 9876543210 seconds are 114,305.12 days. A calculator tells me it’s 114,311.84. Perplexity even tells me how to calculate it, but it has neither the ability to calculate it nor to verify the result.

Same goes for everything. It guesses without being able to grasp the underlying concepts.
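
The conversion the comment describes is, of course, a deterministic one-liner for an actual program; a minimal sketch:

```python
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

def seconds_to_days(seconds: float) -> float:
    """Deterministic unit conversion -- no guessing involved."""
    return seconds / SECONDS_PER_DAY

# The figure from the comment: a calculator (and this script) agree,
# while the LLM's guessed answer of 114,305.12 was off by several days.
print(round(seconds_to_days(9_876_543_210), 2))  # 114311.84
```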

fodor@lemmy.zip on 29 Jun 09:48 collapse

You might consider reading Turing or Searle. They did a great job of addressing the concerns you’re trying to raise here. And rebutting the obvious ones, too.

Anyway, you’ve just shifted the definitional question from “AI” to “sentience”. Not only might that be unreasonable, because perhaps a thing can be intelligent without being sentient, it’s also no closer to a solid answer to the original issue.

Bogasse@lemmy.ml on 29 Jun 06:55 next collapse

The idea that RAGs “extend their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface, we only let chatbots use it.

Sorgan71@lemmy.world on 29 Jun 08:38 next collapse

The machinery needed for human thought is certainly a part of AI. At most you can only claim it’s not intelligent because intelligence is a specifically human trait.

Zacryon@feddit.org on 29 Jun 08:42 next collapse

We don’t even have a clear definition of what “intelligence” even is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.

DeathsEmbrace@lemmy.world on 29 Jun 14:14 collapse

Even if we did, no human living on this planet could claim to be intelligent. Just look around and you will know why.

elbarto777@lemmy.world on 29 Jun 17:24 collapse

Tell that to the crows and chimps that know how to solve novel problems.

Sorgan71@lemmy.world on 30 Jun 01:52 collapse

Thats the point

fodor@lemmy.zip on 29 Jun 09:43 next collapse

Mind your pronouns, my dear. “We” don’t do that shit because we know better.

benni@lemmy.world on 29 Jun 11:55 next collapse

I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

undeffeined@lemmy.ml on 29 Jun 13:30 next collapse

I make a point of always referring to it as an LLM, exactly to emphasize that it’s not an intelligence.

innermachine@lemmy.world on 29 Jun 15:01 collapse

So couldn’t we say LLMs aren’t really AI? Because that’s what I’ve come to terms with.

TheGrandNagus@lemmy.world on 29 Jun 15:18 next collapse

To be fair, the term “AI” has always been used in an extremely vague way.

NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking an issue with it.

MajorasMaskForever@lemmy.world on 29 Jun 15:46 next collapse

I don’t think the term AI has been used in a vague way, it’s that there’s a huge disconnect between how the technical fields use it vs general populace and marketing groups heavily abuse that disconnect.

Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.

The other definition is man made. Artificial diamonds are a great example of this, they’re still diamonds at the end of the day, they have all the same chemical makeups, same chemical and physical properties. The only difference is they came from a laboratory made by adult workers vs child slave labor.

My pet theory is science fiction got the general populace to think of artificial intelligence to be using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem so we all just kinda ignored it

elbarto777@lemmy.world on 29 Jun 17:03 collapse

Dafuq? Artificial always means man-made.

Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.

MajorasMaskForever@lemmy.world on 29 Jun 18:25 collapse

May I present to you:

The Merriam-Webster Dictionary

www.merriam-webster.com/dictionary/artificial

Definition #3b

atrielienz@lemmy.world on 29 Jun 18:50 next collapse

Word roots say they have a point, though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and how that word is being conflated with something it’s not.

LLMs are artificial. They are a man-made thing that is intended to fool man into believing they are something they aren’t. What we’re meant to be convinced of is that they are sapiently intelligent.

Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.

Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. What we’re doing when we anthropomorphize them is fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.

elbarto777@lemmy.world on 29 Jun 19:00 collapse

Thanks. I stand corrected.

benni@lemmy.world on 30 Jun 05:42 next collapse

It’s true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

amelia@feddit.org on 30 Jun 06:57 collapse

What is “actual intelligence” then?

Saledovil@sh.itjust.works on 30 Jun 09:21 next collapse

Nobody knows for sure.

benni@lemmy.world on 30 Jun 11:14 collapse

I have no idea. For me it’s a “you recognize it when you see it” kinda thing. Normally I’m in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can be great at scoring on any desired benchmark while failing to be good at the underlying ability that the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it’s a very difficult question to define something based around generalizing and reacting to new challenges.

But whether LLMs do have “actual intelligence” or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn’t up for debate. Which is misleading, especially to the general public.

skisnow@lemmy.ca on 30 Jun 11:45 collapse

I’ve heard it said that the difference between Machine Learning and AI, is that if you can explain how the algorithm got its answer it’s ML, and if you can’t then it’s AI.

herrvogel@lemmy.world on 29 Jun 15:29 next collapse

LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.

elbarto777@lemmy.world on 29 Jun 17:07 collapse

Huh? Since when an AI’s purpose is to “imitate human behavior”? AI is about solving problems.

herrvogel@lemmy.world on 29 Jun 17:25 collapse

It is and it isn’t. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

Buddahriffic@lemmy.world on 29 Jun 18:59 next collapse

From a programming pov, a definition of AI could be an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task but rather building something that can figure it out on its own.

Though a lot of game AIs don’t fit that description.

elbarto777@lemmy.world on 29 Jun 19:05 collapse

I can agree with “things that try to imitate human intelligence” but not “human behavior”. An Elmo doll laughs when you tickle it. That doesn’t mean it exhibits artificial intelligence.

Aliktren@lemmy.world on 29 Jun 18:33 next collapse

LLMs are really good relational databases, not an intelligence, imo

Melvin_Ferd@lemmy.world on 29 Jun 19:57 collapse

We can say whatever the fuck we want. This isn’t any kind of real issue. Think about it. If you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I’m so done with their crap

[deleted] on 29 Jun 12:57 next collapse

.

AcidiclyBasicGlitch@sh.itjust.works on 29 Jun 16:00 next collapse

It’s only as intelligent as the people that control and regulate it.

Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.

Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.

It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?

Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.

It’s so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism, and a legal system inspired by Catholic integralist belief on society.

If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.

However, all non-tech related policy should be based on ancient traditions and biblical text with arbitrary rules and restrictions that only make sense and benefit the people enforcing the law.

What a stupid and convoluted way to express you just don’t like evidence based policy or using critical thinking skills, and instead prefer to just navigate life by relying on the basic signals from your lizard brain. Feels good so keep moving towards, feels bad so run away, or feels scary so attack!

Such is the reality of the chosen elite, steering us towards greatness.

What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.

Rather than relying on intelligence to analyze a situation in the present, and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context to the current reality/environment and the problem at hand.

elbarto777@lemmy.world on 29 Jun 16:54 next collapse

I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.

A general AI does not need to be conscious.

NikkiDimes@lemmy.world on 29 Jun 19:43 collapse

That and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I too must be, right?

I’m not saying I consider AI in its current form to be conscious; more that the whole idea is just silly and unfalsifiable.

amelia@feddit.org on 30 Jun 07:00 collapse

No idea why you’re getting downvoted. People here don’t seem to understand even the simplest concepts of consciousness.

NikkiDimes@lemmy.world on 30 Jun 17:53 collapse

I guess it wasn’t super relevant to the prior comment, which was focused more on AI embodiment. Eh, it’s just numbers anyway, no sweat off my back. Appreciate you, though!

merc@sh.itjust.works on 29 Jun 17:27 next collapse

The other thing that most people don’t focus on is how we train LLMs.

We’re basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they’ve found a snack, and when the bird gets close enough the snake strikes and eats the bird.

Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can do something intelligent. We’re instead trying to build something that mimics intelligence.

What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.

It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

jj4211@lemmy.world on 30 Jun 13:10 collapse

To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.

To the extent it’s actually useful, it’s to replace certain systems.

Think of the humble phone tree, designed to make it so humans aren’t having to respond, triage, and route calls. So you can have an AI system that can significantly shorten that role, instead of navigating a tedious long maze of options, a couple of sentences back and forth and you either get the portion of automated information that would suffice or routed to a human to take care of it. Same analogy for a lot of online interactions where you have to input way too much and if automated data, you get a wall of text of which you’d like something to distill the relevant 3 or 4 sentences according to your query.

So there are useful interactions.

However it’s also true that it’s dangerous because the “make user approve of the interaction” can bring out the worst in people when they feel like something is just always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the enduser and look almost legitimate really can inflame the worst in our minds.

Knock_Knock_Lemmy_In@lemmy.world on 29 Jun 17:51 next collapse

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

This is not a good argument.

Simulation6@sopuli.xyz on 29 Jun 18:40 next collapse

The book The Emperor’s New Mind is old (1989), but it gave a good argument for why machine-based AI is not possible. Our minds work on a fundamentally different principle then Turing machines.

Knock_Knock_Lemmy_In@lemmy.world on 29 Jun 19:30 next collapse

It’s hard to see that book’s argument from the Wikipedia entry, but I don’t see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.

[deleted] on 29 Jun 20:46 next collapse

.

Asetru@feddit.org on 29 Jun 20:48 next collapse

If you can bear the cringe of the interviewer, there’s a good interview with Penrose that goes on the same direction: m.youtube.com/watch?v=e9484gNpFF8

Simulation6@sopuli.xyz on 29 Jun 21:52 collapse

I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

jwmgregory@lemmy.dbzer0.com on 29 Jun 22:43 next collapse

possibly.

current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

simply put, computers consider memory and computation separate tasks leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days so people aren’t really super aware of exactly how fucking complex most modern computational systems are.

this is why if, for example, i threw a ball at you, you will reflexively catch it, dodge it, or parry it; and your brain will do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn’t store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren’t von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn’t physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the verdict is still out on whether or not natural brains don’t do something like this to some degree as well. we know a whole lot about the brain but it seems some damnable incompleteness theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.

that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.

Knock_Knock_Lemmy_In@lemmy.world on 30 Jun 12:01 collapse

I believe what you say. I don’t believe that is what the article is saying.

MangoCats@feddit.it on 30 Jun 00:31 next collapse

Our minds work on a fundamentally different principle then Turing machines.

Is that an advantage, or a disadvantage? I’m sure the answer depends on the setting.

HugeNerd@lemmy.ca on 30 Jun 11:51 collapse

“than”…

IF THEN

MORE THAN

bitjunkie@lemmy.world on 29 Jun 19:37 next collapse

philosopher

Here’s why. It’s a quote from a pure academic attempting to describe something practical.

Knock_Knock_Lemmy_In@lemmy.world on 29 Jun 19:41 collapse

The philosopher has made an unproven assumption. An erroneously logical leap. Something an academic shouldn’t do.

Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

fodor@lemmy.zip on 30 Jun 01:21 collapse

Actually it’s a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you’re interested in the topic, you could go read about them.

Knock_Knock_Lemmy_In@lemmy.world on 30 Jun 10:44 collapse

I’m not attacking philosophical arguments between the 1950s and the 1980s.

I’m pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.

[deleted] on 29 Jun 19:25 next collapse

.

scarabic@lemmy.world on 29 Jun 20:04 next collapse

My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

Puddinghelmet@lemmy.world on 29 Jun 23:35 next collapse

Human brains are much more complex than a mirroring script xD AI and supercomputers only have a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different than AI

TangledHyphae@lemmy.world on 29 Jun 23:43 next collapse

The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.

86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks with 1.7 trillion parameters, though.

Puddinghelmet@lemmy.world on 29 Jun 23:48 next collapse

Keep thinking the human brain is as stupid as AI hahaaha

jumping_redditor@sh.itjust.works on 30 Jun 00:10 collapse

have you seen the American Republican party recently? it brings a new perspective on how stupid humans can be.

MangoCats@feddit.it on 30 Jun 00:30 collapse

Nah, I went to public high school - I got to see “the average” citizen who is now voting. While it is distressing that my ex-classmates now seem to control the White House, Congress and Supreme Court, what they’re doing with it is not surprising at all - they’ve been talking this shit since the 1980s.

AppleTea@lemmy.zip on 29 Jun 23:49 next collapse

It’s when you start including structures within cells that the complexity moves beyond anything we’re currently capable of computing.

MangoCats@feddit.it on 30 Jun 00:29 collapse

But, are these 1.7 trillion neuron networks available to drive YOUR car? Or are they time-shared among thousands or millions of users?

scarabic@lemmy.world on 30 Jun 17:45 collapse

I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

AppleTea@lemmy.zip on 29 Jun 23:47 next collapse

Self Driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts “normal” road conditions, self driving becomes significantly more dangerous than a human driving.

MangoCats@feddit.it on 30 Jun 00:27 next collapse

Human drivers are only safe when they’re not distracted, emotionally disturbed, intoxicated, and physically challenged (vision, muscle control, etc.) 1% of the population has epilepsy, and a large number of them are in denial or simply don’t realize that they have periodic seizures - until they wake up after their crash.

So, yeah, AI isn’t perfect either - and it’s not as good as an “ideal” human driver, but at what point will AI be better than a typical/average human driver? Not today, I’d say, but soon…

Auli@lemmy.ca on 30 Jun 12:37 next collapse

Not going to happen soon. It’s the 90/10 problem.

jj4211@lemmy.world on 30 Jun 13:01 collapse

The thing about self driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress then plateaued, as approaches have failed to close the gap, with exponentially more and more input thrown at it for less and less incremental subjective improvement.

But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds. A constantly vigilant computer monitors the driving and tends the steering, acceleration, and braking to do the ‘right’ thing in neutral conditions, while the human watches for the more anomalous situations that tend to confound the AI, and makes the calls on navigating certain intersections that the AI FSD still can’t figure out. At least for me the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

I don’t have a Tesla, but I have a competitor system and have found it useful, though not trustworthy. It’s enough to greatly reduce the drain of driving, but I have to keep looking around, and have to assert control if a traffic jam is coming up (it might stop in time, but it certainly doesn’t slow down soon enough) or if I have to do a lane change in traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won’t do it, which is fine when I can afford to be stupidly cautious).

MangoCats@feddit.it on 30 Jun 14:30 collapse

The one “driving aid” that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that “dimension” of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) “Dumb” cruise control works similarly when there’s no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment’s notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

Things like lane keeping seem to be more trouble than they’re worth, to me in the situations I drive in.

Not “AI” but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that “AI” with access to those tools does better than normal drivers without.

jj4211@lemmy.world on 30 Jun 14:42 collapse

At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I’m less often “fighting” the tendency to center. The keeping system is, for me, largely a non-event. If I turn-signal, it ignores me crossing a lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn’t enough to cause an issue. Mine has fared surprisingly well even in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it’s off, my arms just have to assert more effort to end up in the same place I was going anyway. Generally no passenger notices when the system engages or disengages except for the chiming it does when it switches over to unaided operation.

So at least my experience has been a positive one, but it hits things just right with intervention versus human attention, including monitoring gaze to make sure I am looking where I should. However there are people who test “how long can I keep my hands off the steering wheel”, which is a more dangerous mode of thinking.

And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized ‘overhead’ view of your car.

MangoCats@feddit.it on 30 Jun 23:07 collapse

The rental cars I have driven with lane keeper functions have all been too aggressive / easily fooled by visual anomalies on the road for me to feel like I’m getting any help. My wife comments on how jerky the car is driving when we have those systems. I don’t feel like it’s dangerous, and if I were falling asleep or something it could be helpful, but in 40+ years of driving I’ve had “falling asleep at the wheel” problems maybe 3 times - not something I need constant help for.

witten@lemmy.world on 30 Jun 06:28 next collapse

With Teslas, Self Driving isn’t even safer in pristine road conditions.

jj4211@lemmy.world on 30 Jun 12:53 collapse

I think the self driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self driving is kind of dumb, but it’s at least consistently paying attention, and literally has eyes in the back of its head.

However, there’s so much data about how it fails in stupidly obvious ways that it shouldn’t, so you still need the human attention to cover the more anomalous scenarios that foul self driving.

witten@lemmy.world on 30 Jun 20:20 collapse

Anomalous scenarios like a giant flashing school bus? :D

jj4211@lemmy.world on 30 Jun 20:50 collapse

Yes, as common as that is, in the scheme of driving it is relatively anomalous.

By hours in the car, most of the time is spent on a freeway, driving between two lines either at cruising speed or in a traffic jam. That’s the most mind-numbing thing for a human, and pretty comfortably in the wheelhouse of self-driving.

Once you are dealing with pedestrians, signs, intersections, etc., all of those, despite being ‘common’, are anomalous enough to be dramatically more tricky for these systems.

scarabic@lemmy.world on 30 Jun 17:43 collapse

Yes of course edge and corner cases are going to take much longer to train on because they don’t occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile humans continue to be born and must be trained up individually and they continue to make stupid mistakes like not using their signal and checking their mirrors.

Humans CAN handle cases that AI doesn’t know how to, yet, but humans often fail in inclement weather, around construction, etc etc.

MangoCats@feddit.it on 30 Jun 00:25 next collapse

If an IQ of 100 is average, I’d rate AI at 80 and down for most tasks (and of course it’s more complex than that, but as a starting point…)

So, if you’re dealing with a filing clerk with a functional IQ of 75 in their role - AI might be a better experience for you.

Some of the crap that has been published on the internet in the past 20 years comes to an IQ level below 70 IMO - not saying I want more AI because it’s better, just that - relatively speaking - AI is better than some of the pay-for-clickbait garbage that came before it.

fishos@lemmy.world on 30 Jun 03:15 next collapse

I’ve been thinking this for a while. When people say “AI isn’t really that smart, it’s just doing pattern recognition”, all I can think is “don’t you realize that is one of the most commonly cited traits of the human mind?” Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the “face pattern”. Humans are at least 90% regurgitating previous data. It’s literally why you’re supposed to read to and interact with babies so much. It’s how you learn “red glowy thing is hot”. It’s why education and access to knowledge are so important. It’s every annoying person who has endless “did you know?” facts. Science is literally “look at previous data, iterate a little bit, look at new data”.

None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that probably is more unique to humans.

scarabic@lemmy.world on 30 Jun 17:40 collapse

Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

Saledovil@sh.itjust.works on 30 Jun 05:30 next collapse

Ai models are trained on basically the entirety of the internet, and more. Humans learn to speak on much less info. So, there’s likely a huge difference in how human brains and LLMs work.

scarabic@lemmy.world on 30 Jun 17:37 collapse

It doesn’t take the entirety of the internet just for an LLM to respond in English. It could do so with far less. But it also has the entirety of the internet which arguably makes it superior to a human in breadth of information.

outhouseperilous@lemmy.dbzer0.com on 30 Jun 08:06 next collapse

Humans can be more than this. We do actively repress our most important intellectual capacities.

That’s how we get llm bros.

Auli@lemmy.ca on 30 Jun 12:35 collapse

Get a self-driving car to drive in a snow storm or a torrential downpour. People are really downplaying humans’ abilities.

guyoverthere123@lemmy.dbzer0.com on 29 Jun 21:00 next collapse

Anyone pretending AI has intelligence is a fucking idiot.

PattyMcB@lemmy.world on 29 Jun 22:46 next collapse

You could say they’re AS (Actual Stupidity)

Puddinghelmet@lemmy.world on 29 Jun 23:34 collapse

Autonomous Systems that are Actually Stupid lol

MangoCats@feddit.it on 30 Jun 00:20 next collapse

AI is not actual intelligence. However, it can produce results better than a significant number of professionally employed people…

I am reminded of when word processors came out and “administrative assistant” dwindled as a role in mid-level professional organizations, most people - even increasingly medical doctors these days - do their own typing. The whole “typing pool” concept has pretty well dried up.

tartarin@reddthat.com on 30 Jun 02:56 next collapse

However, there is a huge energy cost for that speed: processing information statistically to mimic intelligence. The human brain consumes much less energy. Also, AI will be fine with well-defined tasks where innovation isn’t a requirement. As it is today, AI is incapable of innovating.

cheesorist@lemmy.world on 30 Jun 05:55 next collapse

Much less? I’m pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

Potatar@lemmy.world on 30 Jun 12:03 next collapse

Customarily, when doing these kinds of calculations we ignore the stuff that keeps us alive, because those things are needed regardless of economic contribution, since, you know, people are people and not tools.

MangoCats@feddit.it on 30 Jun 14:06 collapse

people are people and not tools

But this comparison is weighing people as tools vs alternative tools.

Auli@lemmy.ca on 30 Jun 12:34 next collapse

And we “need” none of that to live. We just choose to use it.

tartarin@reddthat.com on 30 Jun 21:39 collapse

Your brain is running on sugar. Do you take into account the energy spent in coal mining, oil-field exploration, refining, transportation, and electricity transmission losses when computing the amount of energy required to build and run AI? Do you take into account all the energy consumed producing the knowledge used to train your model in the first place? Running the brain alone is much less energy-intensive than running an AI model. And the brain can create genuinely new content and knowledge. There is nothing like the brain. AI excels at processing large amounts of data, which the brain is not made for.

MangoCats@feddit.it on 30 Jun 14:05 collapse

The human brain is consuming much less energy

Yes, but when you fully load the human brain’s energy costs with 20 years of schooling, 20 years of “retirement” and old-age care, vacation, sleep, personal time, housing, transportation, etc. etc. - it adds up.

burgerpocalyse@lemmy.world on 30 Jun 03:06 collapse

you can give me a sandwich and I’ll do a better job than AI

MangoCats@feddit.it on 30 Jun 14:07 collapse

But, will you do it 24-7-365?

burgerpocalyse@lemmy.world on 30 Jun 18:32 collapse

i dont have anything else going on, man

MangoCats@feddit.it on 30 Jun 23:08 collapse

There’s that… though even when you’re bored, you still sleep sometimes.

amelia@feddit.org on 30 Jun 06:56 collapse

You know, and I think it’s actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be “intelligence” is a fucking idiot.

outhouseperilous@lemmy.dbzer0.com on 30 Jun 08:01 next collapse

I think there’s a strong strain of essentialist human chauvinism.

But minds are doing more kinds of things than LLMs are. Except in the case of llmbros, fascists and other opt-outs.

Rekorse@sh.itjust.works on 30 Jun 12:16 next collapse

Clearly intelligent people mispell and have horrible grammar too.

Auli@lemmy.ca on 30 Jun 12:33 collapse

No, you’re failing the Eliza test, and it is very easy for people to fall for it.

Professorozone@lemmy.world on 29 Jun 21:52 next collapse

I know it doesn’t mean it’s not dangerous, but this article made me feel better.

MangoCats@feddit.it on 30 Jun 00:22 collapse

A gun isn’t dangerous, if you handle it correctly.

Same for an automobile, or aircraft.

If we build powerful AIs and put them “in charge” of important things, without proper handling they can - and already have - started crashing into crowds of people, significantly injuring them - even killing some.

FreedomAdvocate@lemmy.net.au on 30 Jun 06:19 next collapse

No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

“AI” is a tool to be used, nothing more.

teuniac_@lemmy.world on 30 Jun 14:34 collapse

Still, people find it difficult to navigate this. Its use cases are limited, but it doesn’t enforce that limit by itself. The user needs to be knowledgeable of the limitations and care enough not to go beyond them. That’s also where the problem lies. Leaving stuff to AI, even if it compromises the results, can save SO much time that it encourages irresponsible use.

So to help remind people of the limitations of generative AI, it makes sense to fight the tendency of companies to overstate the ability of their models.

JGrffn@lemmy.world on 30 Jun 09:09 next collapse

What I never understood about this argument is…why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, seemingly sometimes seeks self-preservation like us…why all of this isn’t enough to fit the very self-explanatory term “artificial…intelligence”. That name does not describe whether the entity is having a valid experience of the world the way other living beings do, it does not proclaim absolute excellence in all things done by said entity, it doesn’t even really say what kind of intelligence it would be. It simply says something has an intelligence of some sort, and it’s artificial. We’ve had AI in games for decades; it’s not the sci-fi AI, but it’s still code taking in multiple inputs and producing a behavior as an outcome of those inputs alongside other historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re saying came from, or that it’s even a factoid, so why is it so crazy when the machine does it?

I keep hearing the word “anthropomorphize” being thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don’t really even know if we actually have free will or if we live in a superdeterministic world, where every single particle moves along a path predetermined since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?

lordbritishbusiness@lemmy.world on 30 Jun 09:55 next collapse

You’re on point. The interesting thing is that most opinions like the article’s were formed last year, before the models started being trained with reinforcement learning and synthetic data.

Now there’s models that reason, and have seemingly come up with original answers to difficult problems designed to the limit of human capacity.

They’re like Meeseeks (Using Rick and Morty lore as an example), they only exist briefly, do what they’re told and disappear, all with a happy smile.

Some display morals (Claude 4 is big on that), I’ve even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.

But again like Meeseeks, they disappear and context window closes.

Once they’re able to update their model on the fly and actually learn from firsthand experience, things will get weird. They’ll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

It’s not far away, the absurd r&d effort going into it is probably going to keep kicking new results out. They’re already absurdly impressive, and tech companies are scrambling over each other to make them, they’re betting absurd amounts of money that they’re right, and I wouldn’t bet against it.

Auli@lemmy.ca on 30 Jun 12:31 next collapse

Read Apple’s paper on AI and reasoning models. While they are likely to get more things right, they still don’t have intelligence.

jj4211@lemmy.world on 30 Jun 12:47 collapse

Now there’s models that reason,

Well, no, that’s mostly a marketing term applied to expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking on a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where it just goes disjoint in the generated output that is structurally sound, but is not logically connected to the bits around it.

squaresinger@lemmy.world on 30 Jun 10:14 collapse

I think your argument is a bit besides the point.

The first issue we have is that intelligence isn’t well-defined at all. Without a clear definition of intelligence, we can’t say if something is intelligent, and even though we as a species tried to come up with a definition of intelligence for centuries, there still isn’t a well-defined one yet.

But the actual question here isn’t “Can AI serve information?”; it’s whether AI is an intelligence. And LLMs are not. They are not beings: they don’t evolve, they don’t experience.

For example, LLMs don’t have a memory. If you use something like ChatGPT, its state doesn’t change when you talk to it. It doesn’t remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It’s like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

The LLM itself can’t change due to the conversation you are having with them. They can’t learn, they can’t experience, they can’t change.

All that is done in a separate training step, where essentially a new LLM is generated.
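That statelessness can be sketched in a few lines of Python (a toy stand-in for an LLM API, not real code from any provider): the “model” keeps no state between calls, so any apparent memory has to come from the caller re-sending the whole transcript each turn.

```python
def stateless_model(full_history):
    """Stands in for an LLM: it has no state between calls, so its reply
    can only depend on the transcript it is handed right now."""
    user_turns = [msg for role, msg in full_history if role == "user"]
    return f"I can see {len(user_turns)} user message(s) so far."

history = []
for question in ["What is an LLM?", "Do you remember my last question?"]:
    history.append(("user", question))
    reply = stateless_model(history)  # entire transcript re-sent each time
    history.append(("assistant", reply))

print(history[-1][1])  # the "memory" came from the caller, not the model
```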

JGrffn@lemmy.world on 30 Jun 17:35 collapse

If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we’re developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs, and we don’t know if we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well break down into simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates this to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before, claiming they don’t have souls or aren’t conscious beings, that somehow because they don’t clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is somehow an inferior or less valid existence.

You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to costs and hardware constraints, not algorithmic ones. On the subject of change: it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. I say trivial only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it’s meant as an insult.

I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us and a lot of times better than us, unlike any other being on this planet, has some other faults or limitations…is kind of stupid. My point here is, intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to properly recognize whether we’re on the right path to achieving a completely artificial being that can experience reality or not. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.

squaresinger@lemmy.world on 30 Jun 18:05 collapse

What is kinda stupid is not understanding how LLMs work, not understanding what the inherent limitations of LLMs are, not understanding what intelligence is, not understanding what the difference between an algorithm and intelligence is, not understanding what the difference between immitating something and being something is, claiming to “perfectly” understand all sorts of issues surrounding LLMs and then choosing to just ignore them and then still thinking you actually have enough of a point to call other people in the discussion “kind of stupid”.

doodledup@lemmy.world on 30 Jun 10:48 next collapse

Humans are also LLMs.

We also speak words in succession that have a high probability of following each other. We don’t say “Let’s go eat a car at McDonalds” unless we’re specifically instructed to say so.

What does consciousness even mean? If you can’t quantify it, how can you prove humans have it and LLMs don’t? Maybe consciousness is just one thought following the next, one word after the other, each neural connection determined by the previous. Then we’re not so different from LLMs after all.
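The “likely next word” idea can be illustrated with a toy bigram model. This is nothing like a real transformer, but it shows the bare statistical mechanism: count which word follows which in a corpus, then generate by sampling a plausible continuation.

```python
import random
from collections import defaultdict

# Toy bigram "language model": for each word, record what followed it.
corpus = "let's go eat at the diner . let's go eat at the park .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, out = "let's", ["let's"]
while word != "." and len(out) < 10:
    word = random.choice(follows[word])  # sample a likely next word
    out.append(word)

# Always produces a plausible sentence seen-ish in the corpus,
# e.g. "let's go eat at the diner ." or "... the park ."
print(" ".join(out))
```

Note the model never emits “let’s go eat a car”: that sequence simply has zero probability in its training data, which is the statistical point being made above.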

skisnow@lemmy.ca on 30 Jun 11:41 next collapse

No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.

Rekorse@sh.itjust.works on 30 Jun 12:13 collapse

Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!

Auli@lemmy.ca on 30 Jun 12:29 next collapse

This is so oversimplified.

jj4211@lemmy.world on 30 Jun 12:39 collapse

The probabilities of our sentence structure are a consequence of our speech, we aren’t just trying to statistically match appropriate sounding words.

With enough use of LLM, you will see how it is obviously not doing anything like conceptualizing the tokens it’s working with or “reasoning” even when it is marketed as “reasoning”.

Sticking to textual content generation by LLM, you’ll see that what is emitted is first and foremost structurally appropriate, but beyond that it’s mostly “bonus” for it to be narratively consistent and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly opposite of the explanation. Both of those were structurally sound and reasonable language, but there’s no logical connection between the two portions of the emitted output in that case.

Kiwi_fella@lemmy.world on 30 Jun 10:54 next collapse

Can we say that AI has the potential for “intelligence”, just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren’t.

Rekorse@sh.itjust.works on 30 Jun 12:12 next collapse

No, that’s the point of the article. You also haven’t really said much at all.

Auli@lemmy.ca on 30 Jun 12:27 collapse

No the current branch of AI is very unlikely to result in artificial intelligence.

TheObviousSolution@lemmy.ca on 30 Jun 15:19 collapse

It is intelligent and deductive, but it is not cognitive or even dependable.