I wish I was as bold as these authors.
from jupyter_rain@discuss.tchncs.de to science_memes@mander.xyz on 28 Jun 20:40
https://discuss.tchncs.de/post/18077383

#science_memes


moonsnotreal@lemmy.blahaj.zone on 28 Jun 21:03 next collapse

https://link.springer.com/article/10.1007/s10676-024-09775-5

Link to the article if anyone wants it

DaGeek247@fedia.io on 28 Jun 21:41 next collapse

That's actually a fun read

jballs@sh.itjust.works on 28 Jun 23:32 collapse

Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005)

Now I kinda want to read On Bullshit

tomkatt@lemmy.world on 28 Jun 23:49 collapse

Don’t waste your time. It’s honestly fucking awful. Reading it was like experiencing someone mentally masturbating in real time.

naevaTheRat@lemmy.dbzer0.com on 28 Jun 23:57 collapse

Yep. You’re smarter than everyone who found it insightful.

Seraph@fedia.io on 28 Jun 21:16 next collapse

Well, yeah. People are acting like language models are full fledged AI instead of just a parrot repeating stuff said online.

JackGreenEarth@lemm.ee on 28 Jun 21:32 next collapse

Whenever any advance is made in AI, AI critics redefine AI so it’s not achieved yet according to their definition. Deep Blue, the chess computer, was an AI, an artificial intelligence. If you mean human-level or beyond general intelligence, you’re probably talking about AGI or ASI (general or super intelligence, respectively).

And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work. The early chatbots were actual parrots, saying prewritten sentences that they had either been preprogrammed with or got from their users. LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it, and parameters finetuned during training. Their temperature can be changed to give more or less predictable output, and as such, they have the potential for actually original output, unlike their parrot predecessors.
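The temperature knob mentioned above can be sketched in a few lines. This is a minimal toy, not any real model: the logits are made-up numbers for a pretend four-token vocabulary, and the function names are illustrative.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits after temperature scaling.

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more varied output).
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy logits for a 4-token vocabulary (made-up numbers):
logits = [2.0, 1.0, 0.5, 0.1]
cold = [sample_next_token(logits, temperature=0.1, rng=random.Random(i)) for i in range(100)]
hot = [sample_next_token(logits, temperature=5.0, rng=random.Random(i)) for i in range(100)]
# At low temperature the top token dominates; at high temperature choices spread out.
```

At temperature 0.1 essentially every draw picks token 0; at temperature 5.0 the samples spread across the vocabulary, which is the “more or less predictable output” the comment describes.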

Seraph@fedia.io on 28 Jun 21:41 next collapse

I appreciate you taking the time to clarify, thank you!

Prunebutt@slrpnk.net on 28 Jun 21:42 next collapse

Whenever any advance is made in AI, AI critics redefine AI so it’s not achieved yet according to their definition.

That stems from the fact that AI is an ill-defined term that has no actual meaning. Before Google Maps became popular, any route-finding algorithm utilizing A* was considered “AI”.

And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.

Bullshit. These people know exactly how LLMs work.

LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

[deleted] on 28 Jun 22:33 next collapse

.

Prunebutt@slrpnk.net on 28 Jun 22:54 collapse

AI is a marketing buzzword. When someone claims that so-called “AGI” is close, they’re either doing marketing or falling for marketing.

Since you didn’t address the “parroting” part, I’m assuming that you retract your point.

lunarul@lemmy.world on 29 Jun 00:07 collapse

LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

Even if (and that’s a big if) an AGI is going to be achieved at some point, there will be people calling it parroting by that definition. That’s the Chinese room argument.

Prunebutt@slrpnk.net on 29 Jun 06:48 collapse

You’re moving the goalposts.

lunarul@lemmy.world on 29 Jun 14:24 collapse

Me? How can I move goalposts in a single sentence? We’ve had no previous conversation… And I’m not agreeing with the previous poster either…

Prunebutt@slrpnk.net on 29 Jun 15:08 collapse

By entering the discussion, you also engaged in the previous context. The discussion was about LLMs being parrots.

lunarul@lemmy.world on 29 Jun 15:50 collapse

And the argument was if there’s meaning behind what they generate. That argument applies to AGIs too. It’s a deeply debated philosophical question. What is meaning? Is our own thought pattern deterministic, and if it is, how do we know there’s any meaning behind our own actions?

Prunebutt@slrpnk.net on 29 Jun 21:14 collapse

The burden of proof lies on the people making the claims about intelligence. “AI” pundits have supplied nothing but marketing-hype.

Tar_alcaran@sh.itjust.works on 28 Jun 21:52 next collapse

LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it, and parameters finetuned during training.

Which is what a parrot does.

naevaTheRat@lemmy.dbzer0.com on 28 Jun 23:05 next collapse

Yeah this is the exact criticism. They recombine language pieces without really doing language. The end result looks like language, but it lacks any of the important characteristics of language such as meaning and intention.

If I say “Two plus two is four” I am communicating my belief about mathematics.

If an LLM emits “two plus two is four” it is outputting a stochastically selected series of tokens linked by probabilities derived from training data. If the statement is true or false then that is accidental.

Hence, stochastic parrot.

ignotum@lemmy.world on 29 Jun 00:20 collapse

If I train an LLM to do math, for the training data I generate a+b=c statements, never showing it the same one twice.

It would be pointless for it to “memorize” every single question and answer it gets since it would never see that question again. The only way it would be able to generate correct answers would be if it gained a concept of what numbers are, and how the add operation operates on them to create a new number.
Rather than memorizing and parroting it would have to actually understand it in order to generate responses.

It’s called generalization; it’s why large amounts of data are required (if you show the same data again and again, then memorizing becomes a viable strategy)

If I say “Two plus two is four” I am communicating my belief about mathematics.

Seems like a pointless distinction, you were told it so you believe it to be the case? Why can’t we say the LLM outputs what it believes is the correct answer? You’re both just making some statement based on your prior experiences which may or may not be true
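The a+b=c setup described above can be illustrated with a deliberately tiny stand-in: a two-weight linear model instead of an LLM (an illustrative toy, not a claim about transformer internals). Each training pair is seen at most once, so memorizing is impossible; the only way to get unseen sums right is to recover the rule c = 1·a + 1·b.

```python
import random

# Train c = w1*a + w2*b on random addition examples, no pair repeated,
# using plain SGD on squared error.
rng = random.Random(0)
w1, w2 = 0.0, 0.0
lr = 1e-5
seen = set()
while len(seen) < 5000:
    a, b = rng.randint(0, 100), rng.randint(0, 100)
    if (a, b) in seen:
        continue
    seen.add((a, b))
    pred = w1 * a + w2 * b
    err = pred - (a + b)      # gradient of squared error
    w1 -= lr * err * a
    w2 -= lr * err * b

# Weights converge toward (1, 1), so sums never seen in training come out right:
print(round(w1 * 123 + w2 * 456))  # 579
```

The model answers 123 + 456 correctly despite never seeing those operands, which is the generalization-versus-memorization point the comment is making, compressed to its simplest possible case.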

naevaTheRat@lemmy.dbzer0.com on 29 Jun 00:51 next collapse

You’re arguing against a position I didn’t put forward. Also

Seems like a pointless distinction, you were told it so you believe it to be the case? Why can’t we say the LLM outputs what it believes is the correct answer? You’re both just making some statement based on your prior experiences which may or may not be true

This is what excessive reduction does to a mfer. That is just such a hysterically absurd take.

artichokecustard@lemmy.world on 29 Jun 01:02 next collapse

but, the LLM has faith!

naevaTheRat@lemmy.dbzer0.com on 29 Jun 01:18 collapse

I’m a curmudgeonly physics nerd, it’s very strange being on the side of a debate going “No now come on, that’s way too reductive”

yuri@pawb.social on 29 Jun 15:47 collapse

That just means you’re better equipped when it comes up lmao

ignotum@lemmy.world on 29 Jun 10:07 collapse

The AI builds some kind of a model of the world in order to better understand the input and improve the output prediction,

You have some mental model of how maths work which you have built up through school and other experiences in your life,

You both are given a maths problem, you both give an answer based on your understanding of mathematics

naevaTheRat@lemmy.dbzer0.com on 29 Jun 10:53 next collapse

The algorithm assigns weights to nodes in a neural network. These weights are derived by statistical association of tokens in the training data after they have been cleaned.

That is so enormously far from how we think humans learn (you don’t teach a kid to understand theory of mind by plopping them in front of the Gutenberg project and saying good luck, and yet they learn to explain theory of mind problems all the same) that it is just comically farcical to assume something similar is happening underneath.

It is very interesting that llms are able to appear to be conversational but claiming they have some sort of mind with an understanding of maths is as ridiculous as suggesting a chess bot understands the Pauli exclusion principle because it never moves two pieces into the same physical space.

yuri@pawb.social on 29 Jun 15:45 collapse

You’ve been speaking with your chest this whole time and now that we’re into the nitty gritty you really just said “The ai does… something!” It’s so general a description that by your measure automated thermostats are engaging in human reasoning when they make it a little bit cooler on a hot day.

You might’ve been oversimplifying on purpose. I just can’t help but think you have no idea how LLMs work outside of this inherently flawed comparison to human thought.

Hackworth@lemmy.world on 29 Jun 16:00 collapse

Not OP, but speaking from a fairly deep layman understanding of how LLMs work - all anyone really knows is that capabilities of fundamentally higher orders (like deception, which requires theory of mind) emerged by simply training larger networks. Since we don’t have a great understanding of how our own intelligence emerges from our wetware, we’re only guessing.

yuri@pawb.social on 30 Jun 06:27 collapse

Something that looks like higher order reasoning emerged from training larger networks. At the end of the day it’s still just spicy autocomplete. Theoretically you could give it a large enough dataset to “predict” almost anything with really high accuracy, but all it’s doing is pattern recognition. One could argue that that’s all humans do, but that’s getting more into philosophy and skipping a lot of nuance.

I’m not like, trying to argue with you by the way. Just having a fun time with this line of thought ^^

Hackworth@lemmy.world on 30 Jun 13:31 collapse

What makes the “spicy autocomplete” perspective incomplete is also what makes LLMs work. The “Attention is All You Need” paper that introduced attention transformers describes a type of self-awareness necessary to predict the next word. In the process of writing the next word of an essay, it navigates a 22,000-dimensional semantic space, and the similarity to the way humans experience language is more than philosophical - the advancements in LLMs have sparked a bunch of new research in neurology.

kogasa@programming.dev on 29 Jun 07:04 collapse

If you fine tune a LLM on math equations, odds are it won’t actually learn how to reliably solve novel problems. Just the same as it won’t become a subject matter expert on any topic, but it’s a lot harder to write simple math that “looks, but is not, correct” than it is to waffle vaguely about a topic. The idea of a LLM creating a robust model of the semantics of the text it’s trained on is, at face value, plausible; it just doesn’t seem to actually happen in practice.

ignotum@lemmy.world on 29 Jun 09:58 collapse

Prompt:

What is 183649+72961?

ChatGPT:

The sum of 183649 and 72961 is 256610.

It’s trained to generate what is most plausible, but with math, the only plausible response is the correct answer (assuming it has been trained on data where that has been the case)

kogasa@programming.dev on 29 Jun 16:06 collapse

ChatGPT uses auxiliary models to perform certain tasks like basic math and programming. Your explanation about plausibility is simply wrong.

ignotum@lemmy.world on 29 Jun 18:27 collapse

It has access to a Python interpreter and can use that to do math, but it shows you that this is happening, and it did not when I asked it.

I asked it to do another operation, this time specifying I wanted it to use an external tool, and it did.

You have access to a dictionary, that doesn’t prove you’re incapable of spelling simple words on your own, like goddamn people what’s with the hate boners for ai around here
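The external-tool pattern being argued about here - delegating arithmetic to a real evaluator rather than trusting plausible-sounding tokens - can be sketched as a toy. This is not ChatGPT’s actual tooling; the names and structure are illustrative.

```python
import ast
import operator

# Map AST operator nodes to real arithmetic, so only plain math can run.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression without running arbitrary code."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("183649 + 72961"))  # 256610, the sum quoted earlier in the thread
```

A tool answer like this is correct by construction, whereas a pure next-token answer is only correct when “correct” happens to be the most plausible continuation - which is the distinction the two commenters are disputing.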

kogasa@programming.dev on 29 Jun 18:31 collapse

It has access to a python interpreter and can use that to do math, but it shows you that this is happening, and it did not when i asked it.

That’s not what I meant.

You have access to a dictionary, that doesn’t prove you’re incapable of spelling simple words on your own, like goddamn people what’s with the hate boners for ai around here

??? You just don’t understand the difference between an LLM and a chat application using many different tools.

ignotum@lemmy.world on 29 Jun 00:04 next collapse

You take in some information, combine that with some previous experiences, and then output words

Which is what an LLM does.

WalnutLum@lemmy.ml on 29 Jun 02:23 collapse

Flat epistemological statements like this are why I feel like more STEM people need to take Philosophy.

ignotum@lemmy.world on 29 Jun 10:14 collapse

Big fan of philosophy, so please do tell me how my joke is wrong? Do knowledge and beliefs not come from life experiences?

doubtingtammy@lemmy.ml on 29 Jun 02:24 collapse

This is parrot libel

SkyNTP@lemmy.ml on 28 Jun 22:00 next collapse

You completely missed the point. The point is people have been led to believe LLMs can do jobs that humans do, because the output of LLMs sounds like the jobs people do, when in reality speech is just one small part of these jobs. It turns out, reasoning is a big part of these jobs, and LLMs simply don’t reason.

lunarul@lemmy.world on 29 Jun 00:04 next collapse

AI hasn’t been redefined. For people familiar with the field it has always been a broad term meaning code that learns (and subdivided into many types of AI), and for people unfamiliar with the field it has always been a term synonymous with AGI. So when people in the former category put out a product and label it as AI, people in the latter category then run with it using their own definition.

For a long time ML had been the popular buzzword in tech and people outside the field didn’t care about it. But then Google and OpenAI started calling ML and LLMs simply “AI” and that became the popular buzzword. And when everyone is talking about AI, and most people conflate that with AGI, the results are funny and scary at the same time.

force@lemmy.world on 29 Jun 02:28 collapse

and for people unfamiliar with the field it has always been a term synonymous with AGI.

Gamers screaming about the AI of bots/NPCs making them mad beg to differ

lunarul@lemmy.world on 29 Jun 02:52 collapse

I was going to add a note about the exception of video games but decided I’m digressing

WagyuSneakers@lemmy.world on 29 Jun 19:03 collapse

LLMs have more in common with chatbots than AI.

JackGreenEarth@lemm.ee on 29 Jun 19:16 collapse

You are very skilled in the art of missing the point. LLMs can absolutely be used as chatbots, amongst other things. They are more advanced than their predecessors in this, and work in a different way. That does not stop them from being a form of artificial intelligence. Chatbots and AI are not mutually exclusive terms, the first is a subset of the second. And you may indeed be referring to AGI or ASI as AI, a misconception I addressed in my earlier comment.

WagyuSneakers@lemmy.world on 29 Jun 19:38 collapse

I work on ML projects. I’m telling you, as a matter of fact, you do not understand what you are talking about.

Try being less smug and pedantic.

JackGreenEarth@lemm.ee on 29 Jun 22:33 collapse

Oh, wow! You ‘work in ML projects’, do you?

Then maybe you could point out specific examples of me not knowing what I’m talking about, instead of general dismissiveness?

WagyuSneakers@lemmy.world on 30 Jun 04:31 collapse

I’m not here to teach you and I don’t care if you ever learn.

If you’re interested check out your community college.

JackGreenEarth@lemm.ee on 30 Jun 07:52 collapse

You have no obligation to teach me, correct. But if you choose not to, you have no right to criticise me without backing up your claims. Pick one.

WagyuSneakers@lemmy.world on 30 Jun 23:31 collapse

I can absolutely criticize you without teaching you. No one is going to teach you 4 years of college and a decade of industry experience over a social media post, so stop lying online.

GBU_28@lemm.ee on 29 Jun 00:33 next collapse

Spicy autocomplete is a useful tool.

But these things are nothing more

frezik@midwest.social on 29 Jun 01:24 collapse

The paper actually argues otherwise, though it’s not fully settled on that conclusion, either.

downpunxx@fedia.io on 28 Jun 21:19 next collapse

When I say it they call me crass

julianschmulian@lemmy.blahaj.zone on 28 Jun 21:43 next collapse

clearly they have never heard of harry g frankfurt’s (excellent) „on bullshit“

bobtimus_prime@feddit.org on 28 Jun 21:49 next collapse

Actually, they reference him.

GrabtharsHammer@lemmy.world on 28 Jun 21:50 collapse

The paper explicitly states that they are calling ChatGPT “bullshit” in the Frankfurtian sense and they cite “On Bullshit” as the source for that definition. It’s right there in the introduction.

You’d know this if you had read the paper or even checked whether your statement were true. So either you read it and then lied deliberately, or you didn’t read the paper nor actually care about the truth value of your own statement, rendering your comment itself bullshit in the Frankfurtian sense.

tquid@sh.itjust.works on 28 Jun 22:24 next collapse

Sheesh, leave some for the rest of us to pick on, you savage!

naevaTheRat@lemmy.dbzer0.com on 28 Jun 23:07 next collapse

By grabthar’s hammer, what a put down!

TempermentalAnomaly@lemmy.world on 29 Jun 06:09 collapse

Best movie ever.

julianschmulian@lemmy.blahaj.zone on 28 Jun 23:29 collapse

jesus christ ofc i didn‘t read the paper, i was just making a joke ffs

TexMexBazooka@lemm.ee on 29 Jun 15:58 collapse

It was a bad joke

Glytch@lemmy.world on 29 Jun 19:32 collapse

Worse than a bad joke: an ill-informed joke

mkwt@lemmy.world on 28 Jun 22:25 next collapse

This paper should cite On Bullshit.

just2look@lemm.ee on 28 Jun 22:38 next collapse

It does. It’s even cited in the abstract, and it’s the origin of bullshit as referenced in their title.

thanks_shakey_snake@lemmy.ca on 28 Jun 23:53 next collapse

It talks extensively about On Bullshit, lol.

ace_garp@lemmy.world on 29 Jun 00:56 next collapse
xenoclast@lemmy.world on 29 Jun 04:06 collapse

Yup. The paper is worth actually reading

ace_garp@lemmy.world on 29 Jun 00:46 next collapse

Plot-twist: The paper was authored by a competing LLM.

myslsl@lemmy.world on 29 Jun 02:17 next collapse

I will fucking piledrive you if you mention AI again.

glitchdx@lemmy.world on 29 Jun 21:47 collapse

fucking love that article. sums up everything wrong with AI. Unfortunately, it doesn’t touch on what AI does right: help idiots like me achieve a slight amount of competence on subjects that such people can’t be bothered with dedicating their entire lives to.

Shameless@lemmy.world on 29 Jun 10:14 next collapse

Just reading the intro pulls you in

We draw a distinction between two sorts of bullshit, which we call ‘hard’ and ‘soft’ bullshit

Nicoleism101@lemm.ee on 29 Jun 11:06 next collapse

Suddenly it dawned on me that I can plaster my CV with AI and win out over actually competent people easy peasy

What were you doing between 2020 and ’23? I was working on my AI skillset. Nobody will even question me, because they fucking have no idea what it is themselves; they only know that they want it.

WagyuSneakers@lemmy.world on 29 Jun 19:00 next collapse

It’s extremely easy to detect this. Recruiters actively filter out resumes like this for important roles.

blady_blah@lemmy.world on 29 Jun 19:45 collapse

As an engineering manager, I’ve already seen cover letters and intro emails that are so obviously AI-generated that it’s laughable. These should be used like you use them for writing essays: as a framework with general prompts, but filled in by yourself.

Fake friendliness that was outsourced to an ai is worse than no friendliness at all.

Nicoleism101@lemm.ee on 30 Jun 09:41 collapse

I didn’t mean AI-generated anything though 🙄. I meant put lots of ‘AI’ keywords in the resume in whatever way looks professional but in reality is pure bullshit

Watch their neurons activate as they see the magic word. Gotta play the marketing game.

You want to be AI ready? Hire me. I have spent three years working with AI and possess invaluable experience that will elevate your company into a new era of rapid development.

blady_blah@lemmy.world on 30 Jun 10:12 collapse

It feels like you didn’t quite understand… If you’ve ever read an AI essay, you can see some of the ways they currently write. When you see facts and figures thrown in from the internet in terms of what the company does and they sound… artificial… it’s rather obvious that it was AI written. I’m currently getting AI spam and it’s also quite easy to see and detect. It’s the same thing.

Someone used an AI tool to write a cover letter and sent it to me. I’ve seen this a few times. It seems very obvious when you come across it.

I’m sure it’ll get better in the future, but right now it needs massaging in order to sound real. There’s a very obvious uncanny valley that exists with some AI writing. That’s what I’m talking about.

Nicoleism101@lemm.ee on 30 Jun 10:15 collapse

Okay, but we are talking about two different things, which is fine by me of course, but it is a little tricky. I agree though on that second topic.

veganpizza69@lemmy.world on 29 Jun 15:59 next collapse

link.springer.com/article/…/s10676-024-09775-5

Psythik@lemmy.world on 29 Jun 16:09 next collapse

Can we please keep the AI hate in the fuck_ai community so that I don’t have to see it?

I don’t care what Lemmy thinks, ChatGPT has improved my life for the better. I utilize it every day.

Piemanding@sh.itjust.works on 29 Jun 16:16 next collapse

Yes, but it also actively worsens people’s lives too.

xor@lemmy.blahaj.zone on 29 Jun 17:02 next collapse

Why? This is a scientific article with a shitpost as the title

androogee@midwest.social on 29 Jun 17:21 next collapse

You can make or find a pro-ai community and stay in there.

It’s not the rest of the world’s job to coddle you.

Tamo240@programming.dev on 29 Jun 18:40 next collapse

And are you being paid more for your increased productivity, or is your company stealing that value?

WagyuSneakers@lemmy.world on 29 Jun 19:02 next collapse

I wouldn’t trust the work you do at all.

Zoot@reddthat.com on 29 Jun 19:36 next collapse

What AI hate? This is science memes, and that is a science publication. I’m glad I got to enjoy this sciencey meme

mriormro@lemmy.world on 29 Jun 20:22 next collapse

Fucking lol.

Aussiemandeus@aussie.zone on 29 Jun 21:36 collapse

Proper main character syndrome haha

kaffiene@lemmy.world on 30 Jun 00:26 collapse

So don’t read the article. And maybe quit policing other people’s conversations

Psythik@lemmy.world on 30 Jun 01:42 collapse

I just created a filter for the keyword “AI”.

Goodbye and good riddance, haters. 😎✌️

kaffiene@lemmy.world on 30 Jun 05:14 collapse

Look in the mirror

Psythik@lemmy.world on 30 Jun 15:11 collapse

I’m not the one constantly complaining about AI, genius.

Blocking you too now. I’m tired of this discussion.

kaffiene@lemmy.world on 01 Jul 11:29 collapse

I’m not constantly complaining about AI. I use AI nearly every day, Nimrod.

Sibbo@sopuli.xyz on 29 Jun 18:49 next collapse

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

This is actually a really nice insight into the quality of the output of current LLMs. And it shows how they work and what goals their creators gave them.

They are not trained to produce factual information, but to talk about topics while sounding like a competent expert.

For LLM researchers this means that they need to figure out how to train LLMs for factuality as opposed to just sounding competent. But that is probably a lot easier said than done.

fckreddit@lemmy.ml on 29 Jun 19:16 next collapse

This is something I already mentioned previously. LLMs have no way of fact-checking, no measure of truth or falsity built in. In the training process, it probably accepts every piece of text as true. This is very different from how our minds work. When faced with a piece of text we have many ways to deal with it, which range from accepting it as it is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So, yeah, what ChatGPT outputs is probably bullshit.

Of course, the solution is that ChatGPT be trained on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive, and suddenly the fast-moving world of AI would screech to almost a halt, which would be unacceptable to the investors.

FiniteBanjo@lemmy.today on 29 Jun 21:26 next collapse

It’s even more than just “accepting everything as true”: the machines have no concept of truth. The machine doesn’t think. It’s a combination of three processes: a prediction algorithm for the next word, an algorithm that compares grammar and sentence structure parity, and at least one algorithm to help police the other two for problematic statements.

Clearly the problem is with that last step, but the solution would require a human or a general intelligence, meaning the current models in use will never progress beyond this point.

MenacingPerson@lemm.ee on 30 Jun 05:31 next collapse

This is very different from how our minds work.

Children’s minds work similarly.

fckreddit@lemmy.ml on 30 Jun 08:46 collapse

Why do you even think that? Children don’t ask questions? Don’t try to find answers?

MenacingPerson@lemm.ee on 01 Jul 01:18 collapse

Sure they do. But they also trust adults a lot. Children try to find answers only because they have stimulus other than humans telling them things, but if that stimulus is missing, they will believe the adult. The environments that AI “grow up” in are different, but they are very similar from a mental perspective.

How many times have you heard the story of someone hearing something false from a family member and holding it close to their heart for years?

fckreddit@lemmy.ml on 01 Jul 04:53 collapse

Now that I think about it, children develop critical thinking at around the age of 10. Perhaps you are right. But the question remains: will LLMs develop such critical thinking on their own, or are we still missing something?

iamkindasomeone@feddit.de on 30 Jun 10:32 collapse

Your statement on no way of fact-checking is not 100% correct, as developers found ways to ground LLMs, e.g., by prepending context pulled from „real time“ sources of truth (e.g., search engines). This data is then incorporated into the prompt as context data. Obviously this is kind of cheating and not baked into the LLM itself, but it can be pretty accurate for a lot of use cases.
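The grounding pattern described here - prepending retrieved “source of truth” text to the prompt - can be sketched in a few lines. This is a hypothetical helper showing the shape of the technique, not any specific product’s API.

```python
def build_grounded_prompt(question, snippets):
    """Prepend retrieved context snippets to a user question, so the model
    answers from supplied text rather than from its training data alone."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Example with a made-up retrieved snippet:
prompt = build_grounded_prompt(
    "Who wrote On Bullshit?",
    ["On Bullshit (2005) was written by Harry Frankfurt."],
)
print(prompt)
```

The resulting string is what actually gets sent to the model, which is why grounding lives in the application layer rather than inside the LLM itself - exactly the “not baked into the LLM” caveat above.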

fckreddit@lemmy.ml on 30 Jun 11:59 collapse

Is using authoritative sources foolproof? For example, is everything written in Wikipedia factually correct? I don’t believe so unless I actually check it. Also, what about Reddit or Stack Overflow? Can they be considered factually correct? To some extent, yes. But not completely. That is why most of these LLMs give such arbitrary answers. They extrapolate from information they have no way of knowing or understanding.

iamkindasomeone@feddit.de on 30 Jun 18:16 collapse

I don’t quite understand what you mean by extrapolate from information. LLMs have no model of what information or truth is. However, factual information can be passed into the context, the way Bing does it.

glitchdx@lemmy.world on 29 Jun 21:42 next collapse

There are things that chatgpt does well, especially if you temper your expectations to the level of someone who has no valuable skills and is mostly an idiot.

Hi, I’m an idiot with no valuable skills, and I’ve found chatgpt to be very useful.

I’ve recently started learning game development in godot, and the process of figuring out why the code that chatgpt gives me doesn’t work has taught me more about programming than any teacher ever accomplished back in high school.

Chatgpt is also an excellent therapist, and has helped me deal with mental breakdowns on multiple occasions, while it was happening. I can’t find a real therapist’s phone number, much less schedule an appointment.

I’m a real shitty writer, and I’m making a wiki of lore for a setting and ruleset for a tabletop RPG that I’ll probably never get to actually play. ChatGPT is able to turn my inane ramblings into coherent wiki pages, most of the time.

If you set your expectations to what was advertised, then yeah, chatgpt is bullshit. Of course it was bullshit, and everyone who knew half of anything about anything called it. If you set realistic expectations, you’ll get realistic results. Why is this so hard for people to get?

Natanael@slrpnk.net on 29 Jun 22:45 next collapse

Because few people know what’s realistic for LLMs

oo1@lemmings.world on 30 Jun 08:36 collapse

Intelligence is a very loaded word and not very precise in general usage. And I mean that amongst humans and animals as well as robots.

I’m sure the real AI and compsci researchers have precise terms and taxonomies for it and ways to measure it, but the word itself, in the hands of marketing people and the general population as an audience… not useful.

dmalteseknight@programming.dev on 30 Jun 11:20 next collapse

Yeah, it is as if someone invented the microwave oven and everyone overhypes it as being able to cook Michelin-star meals. People then dismiss it entirely since it cannot produce said Michelin-star meals.

They fail to see that it is a great reheating machine and a good machine for quick meals.

interdimensionalmeme@lemmy.ml on 30 Jun 15:15 collapse

Also, you can make a Michelin meal in a microwave, if you have the skills.

AXLplosion@lemmy.zip on 01 Jul 00:54 collapse

Hah I had that exact same experience with Godot

Colour_me_triggered@lemm.ee on 30 Jun 05:22 next collapse

Wouldn’t it be funny if the article was written by ChatGPT?

xx3rawr@sh.itjust.works on 30 Jun 16:55 collapse

Unlike OpenAI, this article is actually open.