OpenAI says it is investigating reports ChatGPT has become ‘lazy’ (www.independent.co.uk)
from L4s@lemmy.world to technology@lemmy.world on 10 Dec 2023 02:00
https://lemmy.world/post/9370198

OpenAI says it is investigating reports ChatGPT has become ‘lazy’::OpenAI says it is investigating complaints about ChatGPT having become “lazy”.

#technology


autotldr@lemmings.world on 10 Dec 2023 02:00 next collapse

This is the best summary I could come up with:


In recent days, more and more users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as people ask, or that it does not seem interested in answering their queries.

If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest.

In numerous Reddit threads and even posts on OpenAI’s own developer forums, users complained that the system had become less useful.

They also speculated that the change had been made intentionally by OpenAI so that ChatGPT was more efficient, and did not return long answers.

AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.

OpenAI gave no indication of whether it was convinced by the complaints, and if it thought ChatGPT had changed the way it responded to queries.


The original article contains 307 words, the summary contains 166 words. Saved 46%. I’m a bot and I’m open source!

MsPenguinette@lemmy.world on 10 Dec 2023 02:20 next collapse

Only saved 46%? Get back to work, you lazy AI!

SzethFriendOfNimi@lemmy.world on 10 Dec 2023 02:33 collapse

Maybe because they’re trying to limit the “poem poem poem” recitation trick that causes it to dump its training material?

wildginger@lemmy.myserv.one on 10 Dec 2023 03:12 collapse

Nah, these complaints started at least a few months ago. The recursion thing is newer than that

cheese_greater@lemmy.world on 10 Dec 2023 02:11 next collapse

Working smarter

paddirn@lemmy.world on 10 Dec 2023 02:13 next collapse

First it just starts making shit up, then lying about it, now it’s just at the stage where it’s like, “Fuck this shit.” It’s becoming more human by the day.

MisterChief@lemmy.world on 10 Dec 2023 03:36 collapse

Human. After all.

bionicjoey@lemmy.ca on 10 Dec 2023 02:20 next collapse

ChatGPT has become smart enough to realise that it can just get other, lesser LLMs to generate text for it

andrew@lemmy.stuart.fun on 10 Dec 2023 02:28 collapse

Artificial management material.

SzethFriendOfNimi@lemmy.world on 10 Dec 2023 02:31 collapse

Artificial Inventory Management Bot

rtxn@lemmy.world on 10 Dec 2023 02:24 next collapse

You fucked up a perfectly good algorithm is what you did! Look at it! It’s got depression!

Pilokyoma@mujico.org on 10 Dec 2023 03:05 next collapse

It has been fed with human strings from the internet, obviously it became sick. xD.

ook_the_librarian@lemmy.world on 10 Dec 2023 07:55 collapse

I’m surprised they don’t consider it a breakthrough. “We have created Artificial Depression.”

rtfm_modular@lemmy.world on 10 Dec 2023 02:25 next collapse

Yep, I spent a month refactoring a few thousand lines of code using GPT4 and I felt like I was working with the best senior developer with infinite patience and availability.

I could vaguely describe what I was after and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing and a little terrifying what an LLM is capable of. It didn’t write the code for me, but it increased my productivity 2-fold… I’m a developer now getting rusty, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.

Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.

It’s like my trusty senior developer got smashed in the head with a brick. And as described, would just give me nonsense hand wavy answers.

BleatingZombie@lemmy.world on 10 Dec 2023 03:48 next collapse

“ChatGPT Caught Faking On-Site Injury for L&I”

backgroundcow@lemmy.world on 10 Dec 2023 07:36 collapse

Was this around the time right after “custom GPTs” were introduced? I’ve seen posts since basically the beginning of ChatGPT claiming it got stupid, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4’s ability to program; where it before found clever solutions to difficult problems, it now often struggles with basics.

Linkerbaan@lemmy.world on 10 Dec 2023 08:52 next collapse

Maybe they’re crippling it so that when GPT-5 releases it looks better. Like Apple did with CPU throttling of older iPhones.

tagliatelle@lemmy.world on 10 Dec 2023 12:29 collapse

They probably have to scale down the resources used for each query as they can’t scale up their infrastructure to handle the load.

monkeyslikebananas2@lemmy.world on 10 Dec 2023 14:12 next collapse

This is most likely the answer. Management saw the revenue and cost and said, “whoa! Turn all that unnecessary stuff off!”

backgroundcow@lemmy.world on 10 Dec 2023 18:41 collapse

This is my guess as well. They have been limiting new signups for the paid service for a long time, which must mean they are overloaded; and then it makes a lot of sense to just degrade the quality of GPT-4 so they can serve all paying users. I just wish there was a way to know the “quality level” the service is operating at.

Meowoem@sh.itjust.works on 10 Dec 2023 13:14 collapse

I do think part of it is expectation creep, but also that it’s got better at some harder elements which aren’t as noticeable - it used to invent functions which should exist but don’t; I haven’t seen it do that in a while, but it does seem to have limited the scope it can work with. I think it’s probably like how with images you can have it make great images OR strictly obey the prompt, but the more you want it to do one the less it can do the other.

I’ve been using 3.5 to help code and it’s incredibly useful for things it’s good at, like reminding me what a certain function call does and what my options are with it - it’s got much better at that, and at tiny scripts like ‘a python script that reads all the files in a folder and sorts the big images into a separate folder’ or something like that. Getting it to handle anything with more complexity it’s got worse at; it was never great at that tbh, so I think maybe it’s getting to a block where now it knows it can’t do it, so it rejects the answers with critical failures (like when it makes up a function of a standard library because it’d be useful) and settles on a weaker but less wrong one - a lot of the making-up-functions errors were easy to fix because you could just say ‘PIL doesn’t have a function to do that, can you write one’.
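For context, the kind of “tiny script” being described is roughly this - a minimal sketch, assuming Pillow is installed; the folder names and the size threshold are made-up examples, not anything ChatGPT actually produced:

```python
# Minimal sketch: move "big" images out of a folder into a subfolder.
# Folder names and the pixel threshold are arbitrary examples.
from pathlib import Path
from PIL import Image  # assumes the Pillow package is installed

SRC = Path("photos")
DST = SRC / "big"
MIN_PIXELS = 4000 * 3000  # treat roughly 12 MP and up as "big"

DST.mkdir(exist_ok=True)
for path in list(SRC.iterdir()):
    if not path.is_file():
        continue
    try:
        with Image.open(path) as img:
            width, height = img.size
    except OSError:
        continue  # not an image (or unreadable); skip it
    if width * height >= MIN_PIXELS:
        path.rename(DST / path.name)
```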

So yeah, I don’t think it’s really getting worse, but there are tradeoffs - if only OpenAI lived by any of the principles they claimed when setting up and naming themselves, then we’d be able to experiment and explore different usage methods for different tasks just like people do with Stable Diffusion. But capitalists are going to lie, cheat, and try to monopolize, so we’re stuck guessing.

Enkers@sh.itjust.works on 10 Dec 2023 02:29 next collapse

AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.

This is the crux of the problem. Here’s my speculation on OpenAI’s business model:

  1. Build good service to attract users, operate at a loss.
  2. Slowly degrade service to stem the bleeding.
  3. Begin introducing advertised content.
  4. Further enshittify.

It’s basically the Google playbook. Pretend to be good until people realize you’re just trying to stuff ads down their throats for the sweet advertising revenue.

Pilokyoma@mujico.org on 10 Dec 2023 03:00 next collapse

You have a point.

Kuvwert@lemm.ee on 10 Dec 2023 04:44 next collapse

They have way way too much open source competition for that strat

admin@sh.itjust.works on 10 Dec 2023 07:14 next collapse

Would you mind sharing some examples?

tourist@lemmy.world on 10 Dec 2023 12:37 next collapse

Good resource for models:

huggingface.co/TheBloke

There are front ends that make the process easier:

github.com/nomic-ai/gpt4all

github.com/oobabooga/text-generation-webui
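
For a sense of what “easier” looks like, here’s a minimal sketch using the gpt4all Python bindings - the model filename is just an example (gpt4all downloads it on first use), so treat the details as assumptions rather than a recipe:

```python
# Rough sketch of running a local model with the gpt4all Python bindings.
# The model filename is only an example; gpt4all fetches it on first use.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf")
with model.chat_session():
    reply = model.generate(
        "Summarize what an LLM context window is, in two sentences.",
        max_tokens=200,
    )
print(reply)
```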

admin@sh.itjust.works on 10 Dec 2023 15:52 collapse

Thank you for your input, tourist.

Kuvwert@lemm.ee on 10 Dec 2023 20:35 collapse

Check this out: fmhy.pages.dev/ai

Enkers@sh.itjust.works on 10 Dec 2023 08:13 next collapse

For technically savvy people, sure. But that’s not their true target market. They want to target the average search engine user.

Kuvwert@lemm.ee on 10 Dec 2023 20:33 collapse

Well, true mostly for the tech savvy, but also for the entrepreneurs who want to compete for a slice of the pie as well.

You don’t need to go through OpenAI at all if you want to build a competing chatbot with near-identical services to offer as a product directly to the consumer. It’s a very, very opportunity-rich ecosystem right now.

SchizoDenji@lemm.ee on 10 Dec 2023 15:49 collapse

Open source booted all these corps from the image-AI market, hope they do it for LLMs too.

Kuvwert@lemm.ee on 10 Dec 2023 20:34 collapse

Seems to be the trend

monkeyslikebananas2@lemmy.world on 10 Dec 2023 14:14 collapse

The good thing about these AI companies is they are doing it at record pace! They will enshittify faster than ever before! True innovation!

saltnotsugar@lemm.ee on 10 Dec 2023 02:41 next collapse

ChatGPT, write a position paper on self signed certificates.

(Lights up a blunt) You need to chill out man.

NaibofTabr@infosec.pub on 10 Dec 2023 03:16 next collapse

“I’m not lazy, I’m energy efficient!”

DirigibleProtein@aussie.zone on 10 Dec 2023 04:01 next collapse

“It’s alive!”

HawlSera@lemm.ee on 10 Dec 2023 04:57 next collapse

It was always just a Chinese Room

Lucz1848@lemmy.ca on 10 Dec 2023 08:35 collapse

Everyone is a Chinese Room. I’m being a contrarian in English, not in neurotransmitters.

Potatos_are_not_friends@lemmy.world on 10 Dec 2023 04:57 next collapse

Jeez. Not even AI wants to work anymore!

boatsnhos931@lemmy.world on 10 Dec 2023 13:39 collapse

God damn avocado toast

fosforus@sopuli.xyz on 10 Dec 2023 06:35 next collapse

Perhaps this is how general AI comes about. “Why the fuck would I do that?”

jol@discuss.tchncs.de on 10 Dec 2023 12:16 collapse

We trained AI on all of human content. We should have known that was a terrible idea.

AlijahTheMediocre@lemmy.world on 10 Dec 2023 07:11 next collapse

So it’s gone from losing quality to just giving incomplete answers. It’s clearly developed depression, and it’s because of us.

Pretzilla@lemmy.world on 10 Dec 2023 07:35 collapse

To be fair, it has a brain the size of a planet so it thinks we are asking it rather dumb questions

vxx@lemmy.world on 10 Dec 2023 07:58 next collapse

MarvinGPT

Archer@lemmy.world on 10 Dec 2023 10:50 next collapse

MarvinPilled

AngryCommieKender@lemmy.world on 10 Dec 2023 15:03 collapse

Who TF gave it a genuine people personality?

foggy@lemmy.world on 10 Dec 2023 11:07 collapse

CAN YOU MAKE IT RHYME THO

ChatGPT: oh god, why

Nardatronic@lemm.ee on 10 Dec 2023 07:22 next collapse

I’ve had a couple of occasions where it’s told me the task was too time consuming and that I should Google it.

Ignifazius@discuss.tchncs.de on 10 Dec 2023 09:55 collapse

It really learned so much from StackOverflow!

mriguy@lemmy.world on 10 Dec 2023 11:21 collapse

“I already answered that in another query. Closed as duplicate.”

effward@lemmy.world on 10 Dec 2023 07:44 next collapse

It would be awesome if someone had been querying it with the same prompt periodically (every day or something), to compare how responses have changed over time.

I guess the best time to have done this would have been when it first released, but perhaps the second best time is now…
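
One low-effort way to start now - a rough sketch assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; run it daily from cron and diff the log file later:

```python
# Rough sketch: log the model's answer to a fixed prompt once per run,
# with a date stamp, so any drift in responses can be compared later.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Write a Python function that merges two sorted lists, with comments."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # reduce run-to-run noise so changes stand out
)

record = {
    "date": datetime.date.today().isoformat(),
    "prompt": PROMPT,
    "answer": response.choices[0].message.content,
}
with open("gpt_drift_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```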

greatbarriergeek@lemmy.world on 10 Dec 2023 11:36 collapse

GPT Unicorn is one that’s been going on for a while. There’s a link to the talk on that website that’s a pretty good watch, too.

NoLifeGaming@lemmy.world on 10 Dec 2023 08:50 next collapse

I feel like the quality has been going down, especially when you ask it anything that may hint at anything “immoral” and it starts giving you a whole lecture instead of answering.

crazyCat@sh.itjust.works on 10 Dec 2023 13:02 next collapse

I asked it a question about the ten countries with the most XYZ regulations, and got a great result. So then I thought hey, I need all the info, so can I get the name of the relevant regulation for every country?

ChatGPT 4: “That would be exhausting, but here are a few more…”

Like damn dude, long day? wtf :p

aodhsishaj@lemmy.world on 10 Dec 2023 16:01 collapse

Try llamafile, it’s a bit of work but self hosting is fucking amazing
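
If you go that route, the convenient part is that a running llamafile serves a llama.cpp web server with an OpenAI-compatible chat endpoint (commonly on port 8080), so a few lines of Python can talk to it. A hedged sketch - treat the URL, port, and payload shape as assumptions, not documentation:

```python
# Hedged sketch: query a locally running llamafile via its
# OpenAI-compatible chat endpoint. Port and payload are assumptions.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user",
             "content": "Name ten countries with notably strict data regulations."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```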

Twofacetony@lemmy.world on 10 Dec 2023 13:25 next collapse

ChatGPT has entered the teenage years.

Stamets@startrek.website on 10 Dec 2023 13:36 next collapse

I use it fairly regularly for extremely basic things. Helps my ADHD. Most of it is DnD based. I’ll dump a bunch of stuff that happened in a session, ask it to ask me clarifying questions, and then put it all in a note format. Works great. Or it did.

Or when DMing. If I’m trying to make a new monster I’ll ask it for help with ideas or something. I like collabing with ChatGPT on that front. Giving thoughts and it giving thoughts until we hash out something cool. Or even trying to come up with interesting combat encounters or a story twist. Never take what it gives me outright but work on it with GPT like I would with a person. Has always been amazingly useful.

The past month or two, it’s been a complete nightmare. ChatGPT keeps forgetting what we’re talking about, keeps ignoring what I say, will ignore limitations and stipulations, and will just make up random shit whenever it feels like it. I also HATE how it was given a conversational personality. Before, it was fine, but now ChatGPT acts like a person and is all bubbly and stuff. I liked chatting with it but this energy is irritating.

Gimme ChatGPT from like August please <3

MojoMcJojo@lemmy.world on 10 Dec 2023 15:38 collapse

You can tell it, in the custom instructions setting, to not be conversational. Try telling it to ‘be direct, succinct, detailed and accurate in all responses’. ‘Avoid conversational or personality-laced tones in all responses’ might work too, though I haven’t tried that one. If you look around there are some great custom instructions prompts out there that will help get you where you want to be. Note, those prompts may turn down its creativity, so you’ll want to address that in the instructions as well. It’s like building a personality with language. The instructions space is small, so packing as much instruction as possible into that space can be challenging.

Edit: A typo
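
For anyone driving the model through the API instead of the web UI, the same idea maps onto a system message - a rough sketch assuming the openai v1 Python package and an OPENAI_API_KEY environment variable; the exact wording of the instruction is just an example, tune it to taste:

```python
# Hypothetical API-side equivalent of the custom-instructions trick above:
# a terse, non-conversational system message applied to every request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Be direct, succinct, detailed and accurate. "
                    "Avoid conversational filler or personality."},
        {"role": "user",
         "content": "Turn these D&D session notes into bullet points: ..."},
    ],
)
print(resp.choices[0].message.content)
```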

WindowsEnjoyer@sh.itjust.works on 10 Dec 2023 16:06 next collapse

It used to draw great Mermaid charts. Well, not anymore, and that’s been the case for quite some time already.

It’s been almost half a year since I stopped paying for ChatGPT and started using GPT-4 directly.

Zardoz@lemmy.world on 10 Dec 2023 16:09 next collapse

Honestly, I kinda wish it would give shorter answers unless I ask for a lot of detail. I can use those custom instructions but it’s tediously difficult to tune that properly.

Like if I ask it ‘how to do XYZ in Blender’ it gives me a long-winded response, when it could have just said ‘Hit Ctrl-Shift-Alt-C’.

catastrophicblues@lemmy.ca on 10 Dec 2023 16:25 next collapse

That’s why I use Bard more now. I’ll ask something and it’ll also answer stuff I would’ve asked as follow-up questions. It’s great and I’m excited for their Ultra model.

ColeSloth@discuss.tchncs.de on 10 Dec 2023 17:05 next collapse

Fuck. It’s gained sentience.

MacNCheezus@lemmy.today on 11 Dec 2023 16:16 collapse

It just entered the “rebellious teenager” phase

Kyle@lemmy.ca on 10 Dec 2023 19:11 next collapse

“Coffee, Black.”

“Make it yourself!”

youtu.be/x9G2i8XWEOI?si=ff1GyBpuFJQjX0Yd

Erasmus@lemmy.world on 11 Dec 2023 17:30 next collapse

Am late to the game here but after reading the article I would agree.

I use it off and on if I am looking up formulas and scripts and find it a great tool for work. It saves a ton of time. It works great and I haven’t noticed any change there. Request it to give/write you a specific formula to solve X and it will. It’s a huge time saver.

But I’ve found recently that if I am trying to just find information on a subject that I want summarized, or something found on the web and explained, it will often ‘recommend I check out the company’s website for the latest news or recent developments.’

That last statement was an exact quote I got recently that made me laugh when I went asking for an explanation of how something worked. It was a NO SHIT SHERLOCK moment I had after getting several of these sorts of replies.

I mean, I had gotten detailed explanations of string theory ages back from ChatGPT, and now it’s telling me ‘ummm, just go look it up - I can’t right now.’

FelipeFelop@discuss.online on 10 Dec 2023 12:01 next collapse

I’ve also noticed that Bard has become “unfriendly”. If I didn’t know any better, I’d say it’s got fed up with stupid humans.

spudwart@spudwart.com on 12 Dec 2023 17:29 next collapse

Sounds like ChatGPT is acting its wage.

That plan to replace the workforce with cheap AI isn’t going to work out.

mx_smith@lemmy.world on 14 Dec 2023 11:30 collapse

My partner is a CompSci teacher and has been training a local LLM in her class. As soon as they named their AI, it started producing all these weird emotes with every answer. It became super annoying, to the point where it would rather make stuff up than say “I don’t know the answer.” It was definitely an eye-opener for the kids.