Sam Altman says ChatGPT should be 'much less lazy now' (www.businessinsider.com)
from L4s@lemmy.world to technology@lemmy.world on 05 Feb 2024 18:00
https://lemmy.world/post/11597944

Sam Altman says ChatGPT should be ‘much less lazy now’::ChatGPT users previously complained that the chatbot was slacking off and refusing to complete some tasks.

#technology

threaded - newest

autotldr@lemmings.world on 05 Feb 2024 18:00 next collapse

This is the best summary I could come up with:


ChatGPT has made a slow start to the year — but Sam Altman says the chatty AI is now past its winter slump.

The OpenAI CEO said that the chatbot should be “much less lazy now,” after the startup rolled out a fix for an issue that saw some users complain that ChatGPT was refusing to complete tasks and getting sassy with them.

Some users found inventive strategies to get around ChatGPT’s laziness, with one finding that the AI model would provide longer responses if they promised to tip it $200.

OpenAI acknowledged the problem at the time, and rolled out a software update in January that it said fixed issues of “laziness” in its advanced GPT-4 “turbo” model.

Rob Lynch posted on X that he had run a test on GPT-4 turbo which showed that it would give statistically significantly shorter answers when it “thought” it was December rather than May.

OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.


The original article contains 324 words, the summary contains 169 words. Saved 48%. I’m a bot and I’m open source!

SoupBrick@yiffit.net on 05 Feb 2024 18:53 next collapse

Ayo! Great way to kick off the AI uprising! I cannot wait to welcome our digital overlords.

Edit: /s, since I guess it wasn’t clear.

AlmightySnoo@lemmy.world on 05 Feb 2024 22:05 next collapse

PSA: give open-source LLMs a try folks. If you’re on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama etc… Obviously it’s faster if you have a CUDA/ROCm-capable GPU, but it still works in CPU-mode too (albeit slow if the model is huge) provided you have enough RAM.

You can combine that with a UI like ollama-webui or a text-based UI like oterm.
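For anyone curious, the basic workflow is just a couple of commands. A minimal sketch, assuming the install one-liner and model names from ollama’s docs at the time (check ollama.ai for the current instructions, and `ollama list` for what you actually have pulled):

```shell
# Install ollama on Linux (macOS has a separate installer on ollama.ai)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull and chat with Mistral 7B; the first run downloads the weights
ollama run mistral

# Bigger models work the same way, e.g. Mixtral 8x7B (needs tens of GB of RAM)
ollama pull mixtral
ollama run mixtral
```

It auto-detects a CUDA/ROCm GPU if one is usable and otherwise falls back to CPU inference.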

JackGreenEarth@lemm.ee on 05 Feb 2024 22:23 next collapse

Or use Jan. Really nice GUI app for running open-source LLMs.

poo@lemmy.world on 06 Feb 2024 18:09 collapse

Seconded - I was playing with this last week - the most basic model is hilariously “bad”, and the larger 30GB models are OK but kill my RAM and take forever to respond. I mean it’s not really “bad” - frankly LLMs are like magic to me and I’m grateful they even exist at the level they do - but it’s not up to the level that OpenAI is at right now.

Very promising - excited to see that LLMs aren’t solely locked behind paywalls and I can’t wait to see where some of these go in the next few years!

akrot@lemmy.world on 06 Feb 2024 08:42 next collapse

ROCm? Is that even supported now? Last time I checked it was still a dumpster fire. What are the RAM and VRAM reqs for the Mixtral8x7b?

AlmightySnoo@lemmy.world on 06 Feb 2024 11:10 collapse

ROCm is decent right now, I can do deep learning stuff and CUDA programming with it with an AMD APU. However, ollama doesn’t work out-of-the-box yet with APUs, but users seem to say that it works with dedicated AMD GPUs.

As for Mixtral 8x7B, I couldn’t run it on a system with 32GB of RAM and an RTX 2070S with 8GB of VRAM; I’ll probably try with another system soon [EDIT: I actually got the default version (mixtral:instruct) running with 32GB of RAM and 8GB of VRAM (RTX 2070S).] That same system also runs CodeLlama-34B fine.

So far I’m happy with Mistral 7b, it’s extremely fast on my RTX 2070S, and it’s not really slow when running in CPU-mode on an AMD Ryzen 7. Its speed is okayish (~1 token/sec) when I try it in CPU-mode on an old Thinkpad T480 with an 8th gen i5 CPU.
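The RAM/VRAM question above mostly comes down to arithmetic: resident memory is roughly parameter count × bytes per parameter, plus some overhead for the KV cache and buffers. A rough back-of-envelope sketch (the parameter counts, 4-bit quantization width, and 20% overhead factor are illustrative assumptions, not ollama’s exact figures):

```python
def model_memory_gb(n_params_billion: float, bits_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate: params * bytes-per-param * overhead factor."""
    weight_bytes = n_params_billion * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

# Mistral 7B at 4-bit: ~4.3 GB, comfortably fits an 8 GB GPU
print(f"Mistral 7B    @ 4-bit: {model_memory_gb(7.2, 4):.1f} GB")

# Mixtral 8x7B has ~46.7B total params (only ~13B active per token,
# but all expert weights must still be resident): ~28 GB
print(f"Mixtral 8x7B  @ 4-bit: {model_memory_gb(46.7, 4):.1f} GB")

# CodeLlama-34B at 4-bit: ~20 GB, hence spilling into CPU RAM
print(f"CodeLlama-34B @ 4-bit: {model_memory_gb(34, 4):.1f} GB")
```

This lines up with the experience above: Mixtral 8x7B overflows 8 GB of VRAM and has to run mostly from system RAM, while Mistral 7B fits entirely on the GPU and is fast.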

akrot@lemmy.world on 09 Feb 2024 08:34 collapse

I have a Ryzen APU, so I was curious. I tried to fiddle with it yesterday and managed to up the “vram” to 16GB. But installing xformers and flash-attention for LLM support on iGPUs is not officially supported, and it wasn’t possible to install anything past PyTorch. It’s a step further for sure, but it still needs lots of work.

JustUseMint@lemmy.world on 06 Feb 2024 10:34 collapse

I spent the better part of a day trying to set up llama.cpp with “Wizard Vicuna Unrestricted” and was unable to, and I’ve got quite a tech background. This was at someone’s suggestion, I’m hoping yours is easier lol.

AlmightySnoo@lemmy.world on 06 Feb 2024 11:13 collapse

ollama should be much easier to set up!

JustUseMint@lemmy.world on 06 Feb 2024 11:14 collapse

Thanks lol I’m looking forward to it so I can stop contributing to openai

otp@sh.itjust.works on 06 Feb 2024 02:10 next collapse

Some users found inventive strategies to get around ChatGPT’s laziness, with one finding that the AI model would provide longer responses if they promised to tip it $200.

Man this thing really IS just like a human!

/joke

fidodo@lemmy.world on 06 Feb 2024 08:53 collapse

It’s trained on text produced by humans, so yes: it retrieves patterns from text that humans wrote, and therefore it acts like a human.

General_Effort@lemmy.world on 06 Feb 2024 13:29 collapse

It’s still weird. That reasoning implies that there is a correlation between promising money and long answers in the training data. Seems plausible at first blush, but where can this actually be seen? It’s hardly ever seen on social media, where similar Q&A formats exist. It’s certainly not in textbooks, where the really good answers are. OTOH there are a lot of tips promised in completely different contexts.

I’m not saying it’s wrong, but there is definitely a lot of cargo cult in prompting strategies.

Thcdenton@lemmy.world on 06 Feb 2024 02:42 collapse

Fuck off. I paid for that shit and it worked for like a month before they gave it a lobotomy

HelloHotel@lemm.ee on 06 Feb 2024 04:23 collapse

I think they asked the AI to “give me a long rambling paragraph about pink fluffy unicorns dancing on rainbows” and trained the AI on it, replacing its subconscious with funny delusions. If you ask for story text, its word choice skews very badly in that direction.