ChatGPT is losing some of its hype, as traffic falls for the third month in a row (www.businessinsider.com)
from L4s@lemmy.world to technology@lemmy.world on 08 Sep 2023 14:00
https://lemmy.world/post/4675408

ChatGPT is losing some of its hype, as traffic falls for the third month in a row::August marked the third month in a row that the number of monthly visits to ChatGPT’s website worldwide was down, per data from Similarweb.

#technology

threaded - newest

autotldr@lemmings.world on 08 Sep 2023 14:00 next collapse

This is the best summary I could come up with:


ChatGPT took the world by storm when it was released last November, but it looks like it’s losing momentum.

“One theory about why ChatGPT’s web traffic dropped over the summer is that school was out, which would help explain why the traffic trend stabilized in August as schoolchildren in the US were back in class in greater numbers toward the end of the month,” David F. Carr, a senior insights manager at Similarweb, wrote in the report.

Before Meta’s Threads assumed the title in July, ChatGPT was the fastest-growing app ever when it reached 100 million users in two months.

Some of that hype was prompted by students, leading to professors finding ways to combat ChatGPT plagiarism, and one Princeton student launching GPTZero to detect if an essay was written by AI.

But it’s also being used in the workplace, with employees using ChatGPT to write code, do research, and improve time management.

In July, users of OpenAI’s latest model, GPT-4, started complaining that the chatbot’s performance had declined.


The original article contains 338 words, the summary contains 169 words. Saved 50%. I’m a bot and I’m open source!

SSUPII@sopuli.xyz on 08 Sep 2023 14:19 next collapse

I mean, yeah? You can’t rely on hype ever being present. Honestly, with the very light use I give it (asking a couple of questions a month at most), I feel like it has not changed at all. Just at the top I now have a locked button that says “GPT-4”.

ChrisLicht@lemm.ee on 08 Sep 2023 14:50 next collapse

FWIW, it’s become an inextricable part of my life. I use it for hours every day, for programming and Linux advice, spreadsheet help, foreign language practice, and random trivia.

Yesterday, I discovered that Snoop Dogg’s -izz speak from the aughts was actually derived from carny pig Latin.

demlet@lemmy.world on 08 Sep 2023 17:52 collapse

Yikes, I would be very scared to take anything ChatGPT says as accurate. Google keeps trying to get me to use theirs when I do searches, and I refuse.

Prandom_returns@lemm.ee on 08 Sep 2023 18:26 collapse

I think (hope) that person is being facetious.

I hope people are smart enough to understand that the statistical sentence generators don’t “know” anything.

penguin@sh.itjust.works on 08 Sep 2023 19:05 next collapse

It can generate simple stuff accurately quite often. You just have to keep in mind that it could be dead wrong and you have to test/verify what it says.

Sometimes I feel like a few lines of code should be doable in one line using a specific technique, so I ask it to do that and see what it does. I don’t just take what it says and use it; I look at how it tried to solve the problem and then check it, for example by looking up whether the method it used actually exists and reading the doc for that method.

Exact same as what I would do if I saw someone on stack overflow or reddit recommending something.
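To make that concrete, here’s a hypothetical version of that kind of exchange; none of this code is from the thread, it just sketches the workflow of asking for a one-liner and then verifying it yourself:

```python
# Hypothetical illustration only -- the "few lines" version you start with:
prices = [12.5, 3.0, 7.25]
totals = []
for price in prices:
    totals.append(round(price * 1.2, 2))

# The kind of one-liner the model might suggest. Before using it, verify the
# pieces yourself: check that round() and the list comprehension really do
# what it claims, exactly as you would with a Stack Overflow answer.
totals = [round(price * 1.2, 2) for price in prices]
print(totals)  # [15.0, 3.6, 8.7]
```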

demlet@lemmy.world on 08 Sep 2023 20:31 next collapse

You may be right now that I reread their comment.

ribboo@lemm.ee on 09 Sep 2023 09:18 collapse

It’s just very quick at doing simple things you already could do - or doing things that you’d need to think about for a couple of minutes.

I wouldn’t trust it to do things I couldn’t achieve. But for stuff I could, it’s often much quicker. And I’m well equipped to check what it’s doing myself.

“Statistical sentence generator” gets thrown around so much that, if anything, I doubt people actually understand what can be achieved through just that. It doesn’t matter if it doesn’t know anything. If it could generate sentences statistically with a 100% correct and proficient outcome, it’d always be correct regardless of its lack of knowledge.

We’re not at 100%. But we’re not at 10% either.

Prandom_returns@lemm.ee on 09 Sep 2023 10:13 collapse

A parrot can generate sentences with a 100% correct and proficient outcome, but it’s just using sounds its owner taught it.

Garbage in, garbage out.

Even the smartest, most educated people are never 100% sure of anything, just because there’s always nuances.

These engines are fed information that is written with 100% surety, completely devoid of nuance. These engines will not produce “answers to questions” that are correct, because “correct” is fluid.

ribboo@lemm.ee on 09 Sep 2023 11:48 collapse

Meh.

That’s a very fallibilistic viewpoint. There are lots of certainties that can be answered correctly.

Prandom_returns@lemm.ee on 09 Sep 2023 13:10 collapse

There are fields and fields in science that work on things that are “certainties”.

If you’re talking about simple stuff like “what is the first letter in the English alphabet”, then sure. But many people, even in this thread, say they use the engines for hours, to get answers, to be guided, and to discuss things.

It is a parrot on steroids, but even a parrot has knowledge. LLMs have 0% knowledge.

ribboo@lemm.ee on 09 Sep 2023 13:36 collapse

Well, we are back at my earlier point. There is no need for knowledge if the statistical models are good enough.

A weather forecast does not have any knowledge whatsoever. It has data and statistical models. No one goes around dismissing them for not having any knowledge. Sure, we can be open to the fact that the statistical models are not perfect. But the models have gotten so good that they are used in people’s everyday lives with a rather high degree of certainty; they are used for hurricane warnings and whatnot, saving tens of thousands of lives - if not more - yearly.

Your map app has no knowledge either. But it’s still amazing at knowing, with a high degree of certainty, how much time you’ll need to get from place A to place B and which route will be shortest, even taking live traffic into account. We could argue it’s just a parrot on steroids that has been fed billions of data points with some statistics on top, and say that it doesn’t know anything. But it’s such a useless point, because knowledge is not necessary if the data and statistical models are sound enough.

Prandom_returns@lemm.ee on 09 Sep 2023 14:13 collapse

It is exactly my point.

None of the “predictive” apps pretend to have knowledge, to give you answers, to “think”, to “hallucinate”, to “give you wrong answers”.

Everybody knows the weather app is “ballpark predictions”, even though it’s based on physical events that are measurable and extrapolatable.

Same with maps. People who follow maps 100% end up in lakes. The predictions the maps give are based on real-life measured data, valid for that particular window of time.

With LLMs, the input is language. The output is language. It wraps the generated text in pleasantries to imitate knowledge. Unless it’s fed 100% correct material (no such thing), the output is 100% bullshit that sounds about right; right enough to lure naive or less IT-literate people into feeling they’re getting “correct” information.

Statistical engine. No knowledge. Garbage input, garbage output. No sign of “intelligence” whatsoever.

“Asking” it questions is not caring about the “information” it returns.

ribboo@lemm.ee on 09 Sep 2023 14:47 collapse

So you can feed a weather model weather data, but you cannot feed a language model programming languages and get accurate predictions?

Basically no one is saying that “yeah I just go off the output, it’s perfect”. People use it to get a ballpark and then they work off that. Much like a meteorologist would do.

It’s not 100% or 0%. With imperfect data, we get imperfect responses. But that’s no different from a weather model. We can still get results that are 50% or 80% accurate with less than 100% correct information, given that a large enough share of the data is correct.

Prandom_returns@lemm.ee on 09 Sep 2023 16:40 collapse

Yeah, no difference between real-life physical measurements and calculations made from proven formulas, and random shit collected from random places on the internet (even, possibly, random LLM-generated sentences).

People do “just go off the output”. There are people like that in this very thread.

Statements like “no difference” are just idiotic.

ribboo@lemm.ee on 09 Sep 2023 19:18 collapse

Of course there is. But weather forecasting has also gotten ridiculously much more accurate with time. Better data, better models. We’ll get there with language models as well.

I’m not arguing that language models of today are amazingly accurate, I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

Prandom_returns@lemm.ee on 09 Sep 2023 19:27 collapse

I’m not arguing that language models of today are amazingly accurate, I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

A broken clock is accurate twice a day.

I’m arguing that they will never be accurate, because accuracy is not possible. I mean, look at Wikipedia. At least it’s written by people.

Full self driving next year, right?

moog@lemm.ee on 08 Sep 2023 15:40 collapse

you have to pay for GPT-4. it’s smarter and more capable, but slower than GPT-3.5

SSUPII@sopuli.xyz on 09 Sep 2023 10:40 collapse

I didn’t say I wanted it for free.

Cyberflunk@lemmy.world on 08 Sep 2023 14:21 next collapse

I switched from their $20 interface to straight API, it makes more sense for me. I wonder if other people are doing this and these metrics aren’t accounting for that.

eggymachus@sh.itjust.works on 08 Sep 2023 14:46 next collapse

Yeah, that’s what I did. With my very light usage the fixed-price subscription isn’t justifiable, but the api works nicely.
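For anyone curious, “straight API” use is only a few lines. Here’s a minimal sketch using the openai Python package as it worked around this time (the pre-1.0 ChatCompletion interface); the model name and prompt are just placeholders:

```python
# Minimal sketch of calling the API directly (openai package, pre-1.0 interface).
# Set OPENAI_API_KEY in your environment; model and prompt are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize what uniq -c does in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```

You pay per token instead of a flat monthly fee, which is why light usage comes out cheaper than the subscription.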

Chreutz@lemmy.world on 08 Sep 2023 15:41 collapse

Do you know of any interfaces like (or better than) ChatGPT that you can self-host, that use the OpenAI API?

Cyberflunk@lemmy.world on 09 Sep 2023 14:55 collapse

I absolutely love typingmind.com. I’ve invested a lot of personalization into its prompt library and characters.

Chreutz@lemmy.world on 09 Sep 2023 14:58 collapse

That you!

Edit: wow…

Thank you

Iwasondigg@lemmy.one on 08 Sep 2023 14:39 next collapse

Hasn’t the service also gotten worse? When it first came out, you kept hearing how it could pass the bar exam and medical license test. And now all you hear is that it can’t do basic high school homework without wrong answers. Maybe it was hype in the beginning and it never could do those things.

PlasmaDistortion@lemm.ee on 08 Sep 2023 14:48 next collapse

It is so neutered now that I rarely find it is worth the time to use it. Now it just gives non-answers that are rarely helpful or accurate.

Edit: To give a bit more context, I still use it several times a day just to be sure it is still a disappointment.

mo_ztt@lemmy.world on 08 Sep 2023 15:06 next collapse

It was always able to do some genuinely amazing things, but it was always limited when you took it beyond its wheelhouse. It helps to think of it not as an “AI” as people keep saying, but as a “text completer” with a huge amount of power within that domain.

Or, another way to think of it is as a super-powerful search engine. If the answers and knowledge you’re asking of it were fed into it as input data at some point in its training, it’ll probably be able to find it and reformat it back to you with a scary amount of smoothness and precision. If you’re asking it to figure out something new, it may be able to fake it in some short-term fashion or another based on what it’s seen, but not with any genuine understanding behind it. That’s just not what it does. I actually have a little private theory that if it was given something like the bar exam in scale and complexity, but one that was a genuinely novel invention that hadn’t been extensively discussed and represented in its input corpus, it would fail pretty badly. A lot of what humans can do that makes them capable is adapting to new domains – we can teach ourselves to play chess, or do math, or fly airplanes, or play Celeste. GPT is hugely impressive but it’s still only one domain.

I actually don’t believe that it’s gotten substantively less capable. I think there are little ticks up and down in its capability sometimes in particular areas, and people seize on those to conclude that it’s now becoming dumber, but in my experience, the raw API was always quite capable (more so than the somewhat nerfed chat interface), and it was always super-capable with some tasks and not at all capable with others. I think journalists are just now figuring that out, after having studied the issue in their professional capacity for the better part of a year, and reporting on it as if it’s a new thing.

SirGolan@lemmy.sdf.org on 08 Sep 2023 15:21 collapse

From what I’ve seen, here’s what happened. GPT-4 came out, and it can pass the bar exam and medical boards. Then more recently some studies came out: some of them from before GPT-4 was released that only just got published or picked up by the press, others that were poorly done or used GPT-3 (probably because GPT-4 is expensive), and the press doesn’t pick up on the difference. GPT-4 is really good and has lots of uses. GPT-3 has many uses as well but is definitely way more prone to hallucinating.

art@lemmy.world on 08 Sep 2023 14:44 next collapse

This is just balancing out. Anything that gets over-hyped will eventually drop in use. It’ll eventually be a boring yet useful tool just like spreadsheets, spellcheck, or email.

gamer@lemm.ee on 08 Sep 2023 14:44 next collapse

This guy looks like a cadaver.

rDrDr@lemmy.world on 08 Sep 2023 15:16 collapse

He looks like the Change My Mind meme guy.

mo_ztt@lemmy.world on 08 Sep 2023 14:51 next collapse

![image](https://lemmy.world/pictrs/image/864ed652-1bfe-4b58-a349-8308ec7d589f.png)

GregoryTheGreat@programming.dev on 08 Sep 2023 15:01 collapse

It took a while but yeah that seems about right. It takes a lot of guiding to have it produce something usable. I have to know a lot about what I want it to do. It can teach me things but the hallucinations are strong sometimes so you have to be careful.

Still it helps me out and I make a lot of progress because of it.

Prandom_returns@lemm.ee on 08 Sep 2023 18:24 next collapse

“I can suggest an equation that has been a while to get the money to buy a new one for you to be a part of the wave of the day I will be there for you”

There, my phone keyboard “hallucinated” this by suggesting the next word.

I understand that anthropomorphising is fun, but it gives the statistical engines more hype than they deserve.

chaircat@lemdro.id on 09 Sep 2023 03:49 collapse

Your phone keyboard statistical engine is not a very insightful comparison to the neural networks that power LLMs. They’re not the same technology at all and just share the barest minimum superficial similarities.

Prandom_returns@lemm.ee on 09 Sep 2023 10:24 collapse

Ah “neural networks” with no neurons?

I’m not comparing technologies, I’m saying those are not “hallucinations”, the engines don’t “think” and they don’t “get something wrong”.

The output is dependent on the input, statistically calculated and presented to the user.

A parrot is, in the most literal of ways, smarter than the “Artificial intelligence” sentence generators we have now.

Mdotaut801@lemmy.world on 09 Sep 2023 11:45 collapse

Why are you being downvoted? What you say is correct. It’s almost like a calculator. Garbage in = garbage out.

Womble@lemmy.world on 09 Sep 2023 12:45 collapse

Because they are being wilfully obtuse, suggesting that “neural network”, a term going back over half a century for a computational method, doesn’t apply to things without biological neurons, and doing the same thing by applying an overly narrow definition of hallucination when it has a clear meaning in this context of stating textually probable but incorrect statements.

Prandom_returns@lemm.ee on 09 Sep 2023 13:05 collapse

Aka “hype”

penguin@sh.itjust.works on 08 Sep 2023 19:01 collapse

I like it for certain techy things. I just used it to create a Linux one-liner command for counting the unique occurrences of a regex pattern. I often forget specific flags for Linux commands, like how uniq can perform counting.

And something like that is easy to test each piece of what it said and go from there.

As long as you treat it like a peer who prefaced the statement with “I might be wrong / if I recall correctly” it ends up being a pretty good aid.

Bishma@discuss.tchncs.de on 08 Sep 2023 15:00 next collapse

I’m starting to think I’m incompatible with GPT. Code that’s laughably wrong (like sticking in things that aren’t even in the language), DM advice that I could get walking down a greeting card aisle, and explanations that would get a Wikipedia editor sent to the firing squad.

orclev@lemmy.world on 08 Sep 2023 15:41 next collapse

Nah, that sounds about right. This is just the natural result of people actually trying to use GPT for all the things they were told it would be able to do, and now discovering that was in fact all bullshit. The LLMs are and always have been massively overhyped and oversold on what they can do. Sadly this won’t stop the corporate executives from trying to use them to replace workers, although when that effort eventually face-plants they’ll just quietly re-hire a bunch of people and find some middle manager to blame for their failure. This was and always will be merely a productivity tool to automate some repetitive work, but it still needs someone to review and clean up its output. It’s not “replace someone doing 40 hours of work a week”, it’s “allow someone to do what used to take 40 hours in 35 hours instead”.

Sadly the biggest impact this is going to have is on spammers and scammers, who can now automate generating their garbage, since it never mattered whether any of that crap was accurate, merely that it looks reasonable at a casual glance.

Bishma@discuss.tchncs.de on 08 Sep 2023 16:07 next collapse

Yeah. There are a lot of shitty marketing ideas that suddenly become profitable if you don’t have to pay people to generate the content they need. Honestly I’ve had a couple of those ideas over the years and I’m glad I’m no longer in a position to propose them to anyone.

dustyData@lemmy.world on 08 Sep 2023 18:58 next collapse

Unethical parlour tricks, and nothing more.

penguin@sh.itjust.works on 08 Sep 2023 19:08 next collapse

I agree. I think people are just missing the point. It’s really far from being able to replace a worker.

Its current capabilities at best can help that worker be slightly faster at certain things. It’s akin to a type of search engine.

superkret@feddit.de on 08 Sep 2023 20:04 collapse

It’s not “replace someone doing 40 hours of work a week”, it’s “allow someone to do what used to take 40 hours in 35 hours instead”.

That someone will then still have to work 40 hours for the same pay, but be more productive, so then 1 in 8 of these someones can be fired.

Womble@lemmy.world on 09 Sep 2023 12:48 collapse

The exact same statement applies to computers, mechanical looms, and the plough. That’s how technology works.

Sacha@lemmy.world on 08 Sep 2023 19:07 next collapse

When I first started messing with it, it was kind of neat and fun. I like making characters, so I was using it for story prompts and general outlines. Some were better than others, but it was neat for some inspiration and fleshing out. I never took its outputs 1:1.

But when I messed with it again recently, it was a lot worse. It ignored parts of my prompt. As an example, a prompt was about a romance story, but the story it gave me was about character A and their family. The love interest character was barely a footnote and could have been removed entirely without changing anything about the story outlines it was giving me.

I thought maybe it doesn’t like romance prompts, so I tried less specific and more broad prompts from there, and it was the same thing of just… not outputting what I was asking it to. It got worse and worse and sometimes wouldn’t output anything at all.

Steeve@lemmy.ca on 09 Sep 2023 02:32 collapse

Not sure what language you’re coding in, but I’ve found GPT-4 incredibly helpful for coding in C++ and Python.

o0joshua0o@lemmy.world on 08 Sep 2023 15:04 next collapse

They definitely nerfed it. We will probably end up in a situation where corporations and the rich have access to god-tier AI, and everyone else has access to mediocre, ad-supported AI.

demlet@lemmy.world on 08 Sep 2023 17:54 collapse

Just wait until we find out what the US military has.

The_Picard_Maneuver@startrek.website on 08 Sep 2023 15:05 next collapse

These types of articles bother me. Almost every game, movie, and product has an initial unsustainable level of hype, then comes back down from it.

But these articles inevitably try to frame it as if it’s an indication that something’s failing.

orclev@lemmy.world on 08 Sep 2023 16:08 collapse

That very much depends on if you believed all the hype or not. If you did, then yes, it’s failing, as ChatGPT was supposed to be the next big breakthrough that was going to automate everything ever, and any company that didn’t get in on that right now was going to be left in the dust by all their competitors. On the other hand, if you were an actual sane person (so you know, not a CTO/CEO), then this is very much a non-story as you always knew that all those outlandish claims were nonsense and that this was always going to be yet another niche piece of tech that’s useful in a few places in limited amounts.

oillut@lemm.ee on 08 Sep 2023 15:11 next collapse

Yeah, but they keep removing stuff and locking down how useful it can be. First they took GPT-4 and put it behind a paywall; now I have a limit on how much I can use it per day and sometimes have to switch between multiple accounts. Makes it a lot harder to work it into new projects knowing I might have to wait on GPT to get its shit together every other day.

some_guy@lemmy.sdf.org on 08 Sep 2023 16:09 next collapse

First time I’ve seen a pic of Sam Altman and of course he looks like a freak.

alienanimals@lemmy.world on 08 Sep 2023 16:18 collapse

Maybe if you did something meaningful with your life, you could make it into the news and we could all talk shit about how you look.

radau@lemmy.dbzer0.com on 09 Sep 2023 03:07 collapse

Wow he must’ve struck a nerve with you to get this reply

computerboss@sh.itjust.works on 08 Sep 2023 18:46 next collapse

No one seems to have thought about the fact that most schools have been out for those three months. Not sure exactly how much of the traffic is high schoolers and college students cheating, but that could account for at least some of the loss in traffic.

Edit: missed a word

Lantern@lemmy.world on 08 Sep 2023 19:39 collapse

Cheating isn’t necessarily the only use case for GPT, although it definitely does have an impact on the overall number of users.

Clbull@lemmy.world on 08 Sep 2023 19:55 next collapse

Why does the guy in the thumbnail look like Steven Crowder if you bought him on Wish?

gridleaf@lemmy.world on 08 Sep 2023 20:04 next collapse

School’s just starting. It won’t hit the peak of hype without some huge new features or improvements, but it’ll rise again.

eddanja@lemmy.world on 09 Sep 2023 02:40 next collapse

ChatGPT has gotten dumb. I used to have to code-check its answers every few responses. Now it’s every response. It wrote me an if/else statement the other day where if and else had the same outcome.
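The actual statement isn’t quoted, but the shape of the mistake would be something like this hypothetical snippet, where both branches do exactly the same thing:

```python
# Hypothetical reconstruction of the kind of redundant if/else described above.
# Both branches assign the same value, so the conditional is pointless.
user_count = 3
if user_count > 0:
    status = "active"
else:
    status = "active"  # identical outcome; the else branch adds nothing
print(status)
```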

zikk_transport2@lemmy.dbzer0.com on 09 Sep 2023 13:29 collapse

Strange that this isn’t the top comment in this thread.

The only reason I unsubscribed from ChatGPT is what you mentioned - it became dumber. Now I use the GPT-4 API via bettergpt.chat, and instead of paying 22€ a month I only end up at approx. 4-5€ a month, depending on my usage.

I would probably agree to pay 50€ a month if it didn’t keep regressing and actually got better over time, but that is not happening.

eddanja@lemmy.world on 11 Sep 2023 23:30 collapse

I had a thought about this. I wonder if they intentionally made it dumb so you would opt for the paid version.

zikk_transport2@lemmy.dbzer0.com on 12 Sep 2023 01:17 collapse

intentionally made it dumb

They are a business; their goal is money.

I am pretty sure they “optimized” it, so it’s cheaper to run and as a result - dumber AI.

stevedidWHAT@lemmy.world on 09 Sep 2023 07:09 next collapse

Aw yeah dude chatgpt sucks for sure all of you should stop using it immediately so all the rich people who wanna fuck with it and discover ways to make money can and you can continue to be a little bitch to the system.

Definitely don’t use the api and learn some Python so you can control all the settings including system level prompting and so on. Definitely not a fucking blast to play with and I’ve definitely been so annoyed and have hated it for months and months. 👌🏻
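Snark aside, the point about the API stands: you can set things the web UI hides, like the system-level prompt and the sampling parameters. Here’s a quick sketch with the same pre-1.0 openai package as in the earlier example (the prompt text and parameter values are arbitrary):

```python
# Sketch of API-level control: a system prompt plus sampling parameters.
# Same pre-1.0 openai package as the earlier example; values are arbitrary.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
        {"role": "user", "content": "What does a higher temperature setting change?"},
    ],
    temperature=0.2,  # lower values make the output more deterministic
    max_tokens=100,   # cap the length (and cost) of the reply
)
print(response["choices"][0]["message"]["content"])
```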

EnderMB@lemmy.world on 09 Sep 2023 10:04 next collapse

Source: I work in AI, sometimes on LLMs, mostly on the software engineering side rather than the science side.

I have a few theories on why ChatGPT was so successful, and why the hype is starting to crumble, but they all largely centre around a well-known problem that LLMs have had for years - they hallucinate, a lot.

When your product becomes popular, you deal with unique problems that don’t seem to scale without insane amounts of money, or a literal army of people to plug gaps where your model is saying things it shouldn’t - whether it’s accusing high-level politicians of crimes they didn’t commit, or telling people how to make chemical weapons in the style of your grandma. It’s an expensive loop in compositional models, so I’d hate to know how much work it took ChatGPT to get to its “best” version.

Over time, valuable data disappears, and your data naturally skews as it becomes incorrect, invalid, or pushed into being biased towards a given inaccuracy. Sometimes, you do everything right and you train on manual input that you’ve vetted as correct through expert analysis or user feedback - and it’s still wrong. IMO, ChatGPT was always going to struggle to keep the hype, and it will eventually be seen as what it has always been: a concept that shows the utility of LLMs as a commercial product.

Make no mistake, the likes of Google, Amazon, Apple, and Meta will probably plug the gap, and will either reach parity with GPT-4 or improve on it. However, the fundamental problem of hallucinations will not disappear, and we’ll continue to see neutered experiences that make for great tools, but that burn cash to provide those tools with minimal possibility of offending people/damaging the brand.

The main thing I hope to see from the rise and fall of ChatGPT is a rise in productivity tooling, but also people finally seeing those who hype these technologies for what they are - grifters.

T156@lemmy.world on 09 Sep 2023 10:28 collapse

There’s also the novelty factor. Like how DALL-E was all the rage not that long ago, people flocked to it because it was new and interesting.

But the novelty has rather worn off by now.

Theharpyeagle@lemmy.world on 09 Sep 2023 11:02 collapse

This is a big part for me. When ChatGPT first came on the scene, I was absolutely blown away by its natural language parsing capabilities, but it wasn’t long before I started to hit the boundaries of its abilities. I was disappointed by how unreliable it was with anything but the most simple queries. Now it just doesn’t do enough to really bother with.

Sygheil@lemmy.world on 09 Sep 2023 12:04 next collapse

ChatGPT: *declines in popularity *develops sentience *gets emotional *evolves into SkyNet

Widowmaker_Best_Girl@lemmy.world on 09 Sep 2023 16:17 next collapse

Yeah I’d love to continue using ChatGPT but I got warned for making it roleplay as Widowmaker and trying to fuck the bot.

They don’t want my money? Fine. I’ll give it to someone else who doesn’t have arbitrary morality rules on playing wall-ball with linear algebra.

Lucidlethargy@sh.itjust.works on 09 Sep 2023 17:37 next collapse

I made the mistake of asking chatgpt questions about securing my network setup. It confidently gave me a huge amount of misinformation that led to 8-10 hours of frustration and pointless troubleshooting.

Do NOT trust ChatGPT.

danielton@lemmy.frozeninferno.xyz on 09 Sep 2023 19:50 collapse

Came here to say this. ChatGPT is very good at coming up with convincing bullshit. Always do your own research.

_e____b@lemmy.world on 09 Sep 2023 18:27 collapse

I would like to see if the use of their API increased.