AI chatbots unable to accurately summarise news, BBC finds (www.bbc.com)
from misk@sopuli.xyz to technology@lemmy.world on 11 Feb 2025 16:20
https://sopuli.xyz/post/22513032

#technology

threaded - newest

db0@lemmy.dbzer0.com on 11 Feb 2025 16:36 next collapse

As always, never rely on LLMs for anything factual. They’re only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs)

Eheran@lemmy.world on 11 Feb 2025 17:34 next collapse

Nonsense, I use it a ton for science and engineering, it saves me SO much time!

Atherel@lemmy.dbzer0.com on 11 Feb 2025 20:41 collapse

Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.

Eheran@lemmy.world on 11 Feb 2025 21:36 collapse

How could I blindly trust anything in this context?

otp@sh.itjust.works on 12 Feb 12:19 next collapse

Y’know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

Do you just blindly believe whatever it tells you?

It’s not absolutely perfect, so it’s useless.

It’s all just garbage information!

This is terrible for jobs, society, and the environment!

Eheran@lemmy.world on 12 Feb 16:02 collapse

You know what… now that you say it, it really is just like the anti-Wikipedia stuff.

Nalivai@lemmy.world on 12 Feb 13:29 collapse

In which case you probably aren’t saving time. Checking bullshit usually takes longer and is harder than just researching it yourself. Or it should, if you’re doing due diligence

Womble@lemmy.world on 12 Feb 13:41 collapse

It’s nice that you inform people that they can’t tell whether something is saving them time, without knowing what their job is or how they’re using the tool.

WagyuSneakers@lemmy.world on 12 Feb 14:49 collapse

If they think AI is working for them, then you can tell. If you think AI is an effective tool for any profession, you are a clown. If my son’s preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.

AI “works” because you’re asking questions you don’t know the answers to, and it’s just putting words together so they make sense, without regard to accuracy. It’s a hard limit of “AI” that we’ve hit. It won’t get better in our lifetimes.

stephen01king@lemmy.zip on 13 Feb 13:12 collapse

Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.

WagyuSneakers@lemmy.world on 13 Feb 14:30 collapse

Lame platitude

1rre@discuss.tchncs.de on 11 Feb 2025 17:59 next collapse

The issue for RPGs is that LLMs have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later

Although, similar to how DeepSeek uses two stages (“how would you solve this problem”, then “solve this problem following this train of thought”), you could have an input of recent conversations plus a private/unseen “notebook” which is modified/appended to based on recent events - something like the sketch below. That would need a whole new model to be done properly, which likely wouldn’t be profitable short term, although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things
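A rough sketch of what I mean, in Python; `ask_llm` is a hypothetical stand-in for whatever chat-completion call you’d actually use, and the prompts are made up:

```python
# Two-pass loop: first update a hidden "notebook", then answer the player.
notebook = []       # unseen state that persists across the session
recent_turns = []   # rolling window of the visible conversation

def take_turn(player_input, ask_llm):
    recent_turns.append(player_input)
    context = "\n".join(recent_turns[-10:])

    # Pass 1: append anything that might matter later to the notebook.
    notebook.append(ask_llm(
        "Notebook so far:\n" + "\n".join(notebook) +
        "\n\nRecent events:\n" + context +
        "\n\nList any new facts, items, or NPC details worth remembering."
    ))

    # Pass 2: answer the player, with the notebook injected as context.
    return ask_llm(
        "Campaign notes (authoritative):\n" + "\n".join(notebook) +
        "\n\nConversation:\n" + context +
        "\n\nRespond as the game master."
    )
```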

db0@lemmy.dbzer0.com on 11 Feb 2025 18:08 collapse

The problem is that the “train of thought” is also hallucinated. It might make the model better with more compute, but it’s diminishing returns.

RPGs can use LLMs because they’re not critical. If the LLM spews out nonsense you don’t like, you just ask it to redo, because it’s all subjective.

kboy101222@sh.itjust.works on 11 Feb 2025 18:47 next collapse

I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn’t need that thing included

Sorry for being vague, I just didn’t want to post my home town on here

homesweethomeMrL@lemmy.world on 11 Feb 2025 22:12 collapse

You can say Space Needle. We get it.

kat@orbi.camp on 11 Feb 2025 19:17 collapse

Or at least as an assistant in a field you’re an expert in. Love using it for boilerplate at work (tech).

tal@lemmy.today on 11 Feb 2025 16:42 next collapse

They are, however, able to inaccurately summarize it in GLaDOS’s voice, which is a strong point in their favor.

JackGreenEarth@lemm.ee on 11 Feb 2025 16:47 next collapse

Surely you’d need TTS for that one, too? Which one do you use, is it open weights?

brucethemoose@lemmy.world on 11 Feb 2025 17:22 collapse

Zonos just came out, seems sick:

huggingface.co/Zyphra

There are also some “native” TTS LLMs like GLM 9B, which “capture” more information in the output than pure text input would.

ag10n@lemmy.world on 11 Feb 2025 19:04 collapse

A website with zero information, and barely anything on their huggingface page. What’s exciting about this?

Ahh, you should link to the model

www.zyphra.com/post/beta-release-of-zonos-v0-1

brucethemoose@lemmy.world on 11 Feb 2025 20:39 collapse

Whoops, yeah, should have linked the blog.

I didn’t want to link the individual models because I’m not sure whether the hybrid or the pure transformer version is better?

ag10n@lemmy.world on 12 Feb 2025 03:47 collapse

Looks pretty interesting, thanks for sharing it

JohnEdwa@sopuli.xyz on 12 Feb 12:17 collapse

Yeah, out of all the generative AI fields, voice generation at this point is like 95% there in its capability of producing convincing speech even with consumer level tech like ElevenLabs. That last 5% might not even be solvable currently, as it’s those moments it gets the feeling, intonation or pronunciation wrong when the only context you give it is a text input, which is why everything purely automated tends to fall apart quite fast.

Especially voice cloning - the DRG Cortana Mission Control mod is one of the examples I like to use.

small44@lemmy.world on 11 Feb 2025 17:05 next collapse

BBC finds lol. No, we already knew about that

ininewcrow@lemmy.ca on 11 Feb 2025 17:09 next collapse

The owners of LLMs don’t care about ‘accurate’ … they care about ‘fast’ and ‘summary’ … and especially ‘profit’ and ‘monetization’.

As long as it’s quick, delivers instant content and makes money for someone … no one cares about ‘accurate’

Eheran@lemmy.world on 11 Feb 2025 17:41 collapse

Especially after the open source release of DeepSeek… What…?

brucethemoose@lemmy.world on 11 Feb 2025 17:15 next collapse

What temperature and sampling settings? Which models?

I’ve noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with “affordable” flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
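To make that concrete, here’s roughly how I’d run a low-temperature summarization request against a local OpenAI-compatible server (llama.cpp, TabbyAPI, vLLM, etc.); the port and model name are placeholders for whatever you’re hosting:

```python
import requests

article_text = open("article.txt").read()

# Hypothetical local OpenAI-compatible endpoint; adjust URL/model to your setup.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local-32b-finetune",  # placeholder model name
        "messages": [
            {"role": "system", "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
        "temperature": 0.2,  # low temp: stick to likely tokens, less confabulation
        "max_tokens": 300,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```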

My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.

paraphrand@lemmy.world on 11 Feb 2025 17:41 next collapse

I don’t think giving the temperature knob to end users is the answer.

Turning it down for max correctness and low creativity won’t work in an intuitive way.

Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.

Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don’t understand why it left out these facts and invented a backstory for this small thing mentioned…”

Not everyone is an engineer. Temp is an obtuse thing.

But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.

I loathe how these things are advertised by Apple, Google and Microsoft.

Eheran@lemmy.world on 11 Feb 2025 17:45 next collapse

This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. User wants a summary? Obviously maximum factual. He wants gaming ideas? Etc.

brucethemoose@lemmy.world on 11 Feb 2025 18:09 collapse

For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to “categorize” text… which few have really worked on.

I don’t think the corporate APIs or UIs even do this. You are not wrong, but it’s just not done for some reason.

It could be that the trainers don’t realize it’s an issue. For instance, “0.5-0.7” is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.
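As a toy illustration of the “categorize, then set temperature” idea, with crude keyword matching standing in for an actual tiny classifier model (the presets are just my own guesses, not anyone’s recommendations):

```python
# Guess the task category, then pick sampling settings tuned for it.
PRESETS = {
    "summary":  {"temperature": 0.2},
    "code":     {"temperature": 0.3},
    "creative": {"temperature": 0.9},
    "default":  {"temperature": 0.6},  # middle of R1's suggested 0.5-0.7
}

def pick_preset(prompt: str) -> dict:
    p = prompt.lower()
    if any(w in p for w in ("summarize", "summarise", "tl;dr")):
        return PRESETS["summary"]
    if any(w in p for w in ("code", "function", "bug", "script")):
        return PRESETS["code"]
    if any(w in p for w in ("story", "brainstorm", "idea")):
        return PRESETS["creative"]
    return PRESETS["default"]

print(pick_preset("Summarize this article for me"))  # {'temperature': 0.2}
```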

brucethemoose@lemmy.world on 11 Feb 2025 17:57 collapse

  • Temperature isn’t even “creativity” per se, it’s more a band-aid to patch looping and dryness in long responses.

  • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, maybe dynamic temperature like Mirostat and such (see the sketch below). Ideally structured output, too. Unfortunately, corporate APIs usually don’t offer this.

  • It can be mitigated with finetuning against looping/repetition/slop, but most models do the opposite, massively overtuning on their own output, which “inbreeds” the model.

  • And yes, domain-specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such, each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but… most UIs don’t even do this for some reason?

What I am getting at is that this is not a problem companies seem interested in solving. They want to treat the users as idiots without the attention span to even categorize their question.
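For the curious, this is roughly what those first two bullets look like locally with llama-cpp-python; the model path is a placeholder, and I’m leaving DRY out since not every backend exposes it:

```python
from llama_cpp import Llama

llm = Llama(model_path="models/your-model.gguf", n_ctx=8192)  # placeholder path

out = llm.create_completion(
    "Summarize the following article: ...",
    temperature=0.3,    # low temp, since MinP below handles the quality gating
    min_p=0.05,         # drop tokens under 5% of the top token's probability
    top_p=1.0,          # disable nucleus sampling so MinP does the work
    repeat_penalty=1.1, # mild anti-looping, a crude stand-in for DRY
    max_tokens=300,
)
print(out["choices"][0]["text"])
```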

Eheran@lemmy.world on 11 Feb 2025 17:43 next collapse

Rare that people argue for LLMs like that here; usually it’s the same kind of “uga suga, AI bad, did not already solve world hunger”.

brucethemoose@lemmy.world on 11 Feb 2025 18:05 next collapse

Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.

In one fandom (the Avatar fandom), there used to be enthusiasm for a “community enhancement” of the original show since the official DVD/Blu-ray looks awful. Years later in a new thread, I don’t even mention the word “AI,” just the idea of restoration, and I got bombed and threadlocked for the mere tangential implication.

Nalivai@lemmy.world on 12 Feb 13:35 next collapse

What a nuanced representation of the position, I just feel trustworthiness oozing out of the screen.
In case you’re using a random word generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.

Eheran@lemmy.world on 12 Feb 19:29 collapse

So many arguments… Wow!

Nalivai@lemmy.world on 13 Feb 05:28 collapse

Ask a forest-burning machine to read the surrounding threads for you, then you will find the arguments you’re looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?

Eheran@lemmy.world on 13 Feb 08:42 collapse

“If you try hard you might find arguments for my side”

What kind of meta-argument is that supposed to be?

Nalivai@lemmy.world on 13 Feb 11:22 collapse

If you read what people write, you will understand what they’re trying to tell you. Shocking concept, I know. It’s much easier to imagine someone in your head, paint them as a soyjak and yourself as a chadjak, and epicly win an argument.

Eheran@lemmy.world on 13 Feb 14:43 collapse

Wrong thread?

heavydust@sh.itjust.works on 12 Feb 16:23 collapse

Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.

Eheran@lemmy.world on 12 Feb 19:30 collapse

So the ads are the problem? Do you have a link to such an ad?

heavydust@sh.itjust.works on 12 Feb 19:38 collapse

Not ads, whole governments talking about it and funding that crap like Altman/Musk in the USA or Macron in Europe.

1rre@discuss.tchncs.de on 11 Feb 2025 18:04 next collapse

I’ve found Gemini overwhelmingly terrible at pretty much everything; it responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask it and just latches on to keywords… It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something

brucethemoose@lemmy.world on 11 Feb 2025 18:12 next collapse

Gemini 1.5 used to be the best long context model around, by far.

Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.

Imgonnatrythis@sh.itjust.works on 11 Feb 2025 19:23 collapse

Bing/chatgpt is just as bad. It loves to tell you it’s doing something and then just ignores you completely.

jrs100000@lemmy.world on 12 Feb 2025 03:32 next collapse

They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn’t even note which versions of the other models were used.

MoonlightFox@lemmy.world on 13 Feb 02:22 collapse

I have been pretty impressed by Gemini 2.0 Flash.

It’s slightly worse than the very best on the benchmarks I have seen, but it’s pretty much instant and incredibly cheap. Maybe a loss leader?

Anyways, which model of the commercial ones do you consider to be good?

brucethemoose@lemmy.world on 13 Feb 02:27 collapse

benchmarks

Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.

Honestly I am getting incredible/creative responses from Deepseek R1; the hype is real, though it’s frequently overloaded. Tencent’s API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.

Qwen Max is… not bad? The reasoning models kinda spoiled me, but I think they have more reasoning releases coming.

MiniMax is ok for long context, but I still tend to lean on Gemini for this.

I dunno about Claude these days, as its just so expensive. I haven’t touched OpenAI in a long time.

Oh, and sometimes “weird” finetunes you can find on OpenRouter or whatever will serve niches much better than “big” API models.

EDIT:

Locally, I used to hop around, but now I pretty much always run a Qwen 32B finetune. Either coder, Arcee Distill, FuseAI, R1, EVA-Gutenberg, or Openbuddy, usually.

MoonlightFox@lemmy.world on 13 Feb 02:41 next collapse

So there aren’t any trustworthy benchmarks I can currently use to evaluate? That, in combination with my personal anecdotes, is how I have been evaluating them.

I was pretty impressed with Deepseek R1. I used their app, but not for anything sensitive.

I don’t like that OpenAI defaults to a model I can’t pick. I have to select it each time; even when I use a special URL it will change after the first request

I am having a hard time deciding which models to use, beyond a random mix of o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash

brucethemoose@lemmy.world on 13 Feb 04:37 collapse

Heh, only obscure ones that they can’t game, and only if they fit your use case. One example is the ones in EQ bench: eqbench.com

…And again, the best mix of models depends on your use case.

I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
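For a sense of how little code that takes, here’s a minimal sketch using the OpenAI Python client pointed at an OpenRouter-style OpenAI-compatible gateway; the model IDs are examples, not recommendations:

```python
import os
from openai import OpenAI

# One client, many models: any OpenAI-compatible gateway works the same way.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

for model in ("deepseek/deepseek-r1", "meta-llama/llama-3.3-70b-instruct"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize transformers in one line."}],
    )
    print(model, "->", reply.choices[0].message.content)
```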

Knock_Knock_Lemmy_In@lemmy.world on 13 Feb 09:52 collapse

What are the local use cases? I’m running on a 3060ti but output is always inferior to the free tier of the various providers.

Can I justify an upgrade to a 4090 (or more)?

chemical_cutthroat@lemmy.world on 11 Feb 2025 17:19 next collapse

Which is hilarious, because most of the shit out there today seems to be written by them.

Paradox@lemdro.id on 11 Feb 2025 17:45 next collapse

Funny, I find the BBC unable to accurately convey the news

addie@feddit.uk on 12 Feb 06:52 next collapse

Dunno why you’re being downvoted. If you want a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of “celebrity” frippery, then the BBC has got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being “left leaning” when it blatantly isn’t.

bilb@lem.monster on 13 Feb 06:49 collapse

Yeah, haha

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

Perplexity did fail to summarize the article, but it did correct it.

badbytes@lemmy.world on 11 Feb 2025 17:50 next collapse

Why, were they trained using MAINSTREAM NEWS? That could explain it.

mentalNothing@lemmy.world on 11 Feb 2025 17:57 next collapse

Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

rottingleaf@lemmy.world on 11 Feb 2025 18:06 next collapse

Yes, I think it would be naive to expect humans to design something capable of what humans are not.

maniclucky@lemmy.world on 11 Feb 2025 21:45 collapse

We do that all the time. It’s kind of humanity’s thing. I can’t run 60mph, but my car sure can.

rottingleaf@lemmy.world on 11 Feb 2025 23:03 collapse

Qualitatively.

maniclucky@lemmy.world on 11 Feb 2025 23:07 collapse

That response doesn’t make sense. Please clarify.

rottingleaf@lemmy.world on 12 Feb 07:52 collapse

A human can move, a car can move. A human can’t move with such speed; a car can. The former is a qualitative difference as I meant it, the latter quantitative.

Anyway, that’s how I used those words.

maniclucky@lemmy.world on 12 Feb 11:45 collapse

Ooooooh. Ok that makes sense.

With that said, you might look at researchers using AI to come up with new useful ways to fold proteins and biology in general. The roadblock, to my understanding (data science guy not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

For qualitative examples we always have hallucinations and that’s a poorly understood mechanism that may well be able to create actual creativity. But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on. Though now it leads to “nothing new under the sun” so I’ll stop rambling now.

rottingleaf@lemmy.world on 12 Feb 12:53 collapse

The roadblock, to my understanding (data science guy not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

Yes.

But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.

That’s fundamentally solvable.

I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.

What all these companies like DeepSeek and OpenAI have been doing lately, with their “chain-of-thought” models, is in my opinion what they should have been focused on: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on syllogisms? There seems to be something like a chicken-and-egg problem between logic and algebra; one seems necessary for the other in such a system, but they depend on each other (for a machine, that is; humans keep a few things constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.

maniclucky@lemmy.world on 12 Feb 13:47 collapse

I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.

Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.

While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output for a greater system, analogous to the Wernicke/Broca areas of the brain. It seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they’ve promised this one technique (more or less, I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).

Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.

rottingleaf@lemmy.world on 12 Feb 16:11 collapse

While not as academically cogent as your response

An elegant way to make someone feel ashamed for using many smart words, ha-ha.

I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).

The metaphor is correct. I think it’s some social mechanism making them choose a brute force solution first. Say, spending more resources to achieve the same result would usually be a downside, but if it’s a resource otherwise not in demand, one that only the stronger parties possess in sufficient amounts, like corporations and governments, then that may be an upside for someone, by changing the balance.

And LLMs appear good enough to make captcha-solving machines, proof image or video faking machines, fraudulent chatbot machines, or machines predicting someone’s (or some crowd’s) responses well enough to play them. So I’d say commercially they already are successful.

Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.

We-ell, it’s just hard to describe the idea without using that word, but I haven’t even finished my BS yet (lots of procrastinating, running away and long interruptions), and also the only bit of up to date knowledge I had was what DeepSeek prints when answering, so.

maniclucky@lemmy.world on 12 Feb 18:04 collapse

An elegant way to make someone feel ashamed for using many smart words, ha-ha.

Unintentional I assure you.

I think it’s some social mechanism making them choose a brute force solution first.

I feel like it’s simpler than that. Ye olde “when all you have is a hammer, everything’s a nail”. Or in this case, when you’ve built the most complex hammer in history, you want everything to be a nail.

So I’d say commercially they already are successful.

Definitely. I’ll never write another cover letter. In their use-case, they’re solid.

but I haven’t even finished my BS yet

Currently working on my masters after being in industry for a decade. The paper is nice, but actually applying the knowledge is poorly taught (IMHO, YMMV), and being willing to learn independently has served me better than my BS in EE.

[deleted] on 11 Feb 2025 19:19 next collapse

.

untorquer@lemmy.world on 11 Feb 2025 20:57 next collapse

Fuckin news!

homesweethomeMrL@lemmy.world on 11 Feb 2025 22:15 next collapse

Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

Introduced factual errors

Yeah that’s . . . that’s bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 Billion please.

MDCCCLV@lemmy.ca on 12 Feb 04:13 next collapse

Is it worse than the current system of editors making shitty clickbait titles?

homesweethomeMrL@lemmy.world on 12 Feb 15:10 collapse

Surprisingly, yes

SamboT@lemmy.world on 12 Feb 04:16 next collapse

Do you dislike ai?

fine_sandy_bottom@discuss.tchncs.de on 12 Feb 10:43 next collapse

I don’t necessarily dislike “AI” but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

WagyuSneakers@lemmy.world on 12 Feb 14:38 collapse

I work in tech and can confirm that the vast majority of engineers “dislike AI” and are disillusioned with AI tools. Even ones that work on AI/ML tools. It’s fewer and fewer people the higher up the pay scale you go.

There isn’t a single complex coding problem an AI can solve. If you don’t understand something and it helps you write it I’ll close the MR and delete your code since it’s worthless. You have to understand what you write. I do not care if it works. You have to understand every line.

“But I use it just fine and I’m an…”

Then you’re not an engineer and you shouldn’t have a job. You lack the intelligence, dedication and knowledge needed to be one. You are a detriment to your team and company.

Eheran@lemmy.world on 12 Feb 16:11 next collapse

“I can calculate powers with decimal values in the exponent and if you can not do that on paper but instead use these machines, your calculations are worthless and you are not an engineer”

You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.

WagyuSneakers@lemmy.world on 12 Feb 16:32 collapse

You can also just have an application designed to do that do it more accurately.

If you can’t do that you’re not an engineer. If you don’t recommend that you’re not an engineer.

5gruel@lemmy.world on 13 Feb 07:00 collapse

That’s some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.

I wholeheartedly dislike people who think they need to define some arcane rules for how a task is achieved instead of just looking at the output.

Accept that you probably already have merged code that was generated by AI and it’s totally fine as long as tests are passing and it fits the architecture.

WagyuSneakers@lemmy.world on 13 Feb 17:34 collapse

You’re supposed to gatekeep code. There is nothing wrong with gatekeeping things that aren’t hobbies.

If someone can’t explain every change they’re making and why they chose to do it that way they’re getting denied. The bar is low.

desktop_user@lemmy.blahaj.zone on 12 Feb 04:43 next collapse

Alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick and easy.

Nalivai@lemmy.world on 12 Feb 06:28 next collapse

It’s easy, it’s quick, and it’s free: pouring river water in your socks.
Fortunately, there are other possible criteria.

itslilith@lemmy.blahaj.zone on 12 Feb 11:36 collapse

Flip a coin every time you read an article to see whether your “quick and easy” summary has significant issues

Rivalarrival@lemmy.today on 12 Feb 08:02 next collapse

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than an “expert” in their field.

But I’d guess the AI is quite a bit better than, say, the average Republican.

balder1991@lemmy.world on 12 Feb 15:07 collapse

I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.

There’s no point in comparing this to a human, since this is usually something done for automation, that is, to work for a lot of people or a large quantity of articles. At best you can compare it to other automated summaries that existed before LLMs, which might not have all the info, but won’t make up random facts that aren’t in the article.

Rivalarrival@lemmy.today on 12 Feb 16:04 collapse

I’m more interested in the technology itself, rather than its current application.

I feel like I am watching a toddler taking her first steps, wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless. She can’t even participate in a marathon, let alone compete with actual athletes!

Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.

balder1991@lemmy.world on 12 Feb 16:15 collapse

It’s not that people simply decided to hate on AI; it was the sensationalist media hyping it up so much that it scared people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and they think they’re different because of it. The result couldn’t be different: when something gets too much exposure it also gets a lot of hate, especially when it is forced on people.

devfuuu@lemmy.world on 12 Feb 09:13 next collapse

I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

chud37@lemm.ee on 12 Feb 19:50 collapse

that’s the core problem though, isn’t it. They are just predictive text machines, not understanding what they are saying. Yet we are treating them as if they were some amazing solution to all our problems

homesweethomeMrL@lemmy.world on 12 Feb 19:59 collapse

Well, “we” aren’t, but there’s a hype machine in operation bigger than anything in history, because a few tech bros think they’re going to rule the world.

Etterra@discuss.online on 12 Feb 05:23 next collapse

You don’t say.

TroublesomeTalker@feddit.uk on 12 Feb 10:04 next collapse

But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

MoonlightFox@lemmy.world on 13 Feb 02:28 collapse

Why do you say that? I have had no reason to doubt their reporting

StarlightDust@lemmy.blahaj.zone on 13 Feb 02:36 next collapse

Look at their reporting on the Employment Tribunal for the nurse from Fife who was sacked for abusing a doctor. They refused to correctly gender the doctor in every article, to the point where the only pronoun used was the sacked transphobe referring to her as “him”. They also very much paint it as if it is Dr Upton on trial and not Ms Peggie.

TroublesomeTalker@feddit.uk on 13 Feb 06:20 collapse

It’s a “how the mighty have fallen” kind of thing. They are well into the click-bait farm mentality now - have been for a while.

It’s present on the news sites, but far worse on things where they know they steer opinion and discourse. They used to ensure political parties had coverage in line with their support, but for like 10 years prior to Brexit they gave Farage and his jackasses hugely disproportionate coverage - like 20X more than their base warranted. This was at a time when the SNP were doing very well yet were frequently shown less than the UK Independence Party. And I don’t recall a single instance of it being pointed out that 10 years of poor interactions with Europe may have been at least partially fuelled by Nidge being our MEP and never turning up. Hell, we had veto rights and he was on the fisheries commission. All that shit about fishermen was a problem he made.

Current reporting is heavily spun, and they definitely aren’t the worst in the world, but they are also definitely not the bastion of unbiased news I grew up with.

Until relatively recently you could see the deterioration by flipping to the world service, but that’s fallen into line now.

If you have the time to follow independent journalists, the problem becomes clearer; if not, look at output from parody news sites. It’s telling that Private Eye and Newsthump manage the criticism that the BBC can’t seem to get to

Go look at the bylinetimes.com front page, grab a random story and compare coverage with the BBC. One of these is crowd-funded reporters, and the other is a national news site with great funding and legal obligations to report in the public interest.

I don’t hate them, they just need to be better.

Phoenicianpirate@lemm.ee on 12 Feb 12:27 next collapse

I learned that AI chat bots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.

Redex68@lemmy.world on 12 Feb 14:08 next collapse

This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.

Llewellyn@lemm.ee on 13 Feb 01:53 next collapse

Extremely?

echodot@feddit.uk on 13 Feb 09:06 collapse

Could you tell me what you use it for because I legitimately don’t understand what I’m supposed to find helpful about the thing.

We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I’m heading IT, so I’m supposed to be able to come up with some kind of answer and yet I have nothing. Even putting aside the fact that it probably doesn’t work as advertised, I still can’t really think of a use for it.

The main problem is it won’t be able to operate our ancient and convoluted ticketing system, so it can’t actually help.

Everyone I’ve ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.

Knock_Knock_Lemmy_In@lemmy.world on 13 Feb 09:30 next collapse

Great for turning complex into simple.

Bad for turning simple into complex.

echodot@feddit.uk on 13 Feb 10:19 collapse

I think my largest gripe with it is it can’t actually do anything. It can just tell you about stuff.

I can ask it how to change the desktop background on my computer and it will 100% be able to tell me, but if you then prompt it to change the background itself, it won’t be able to. It has zero ability to interact with the computer; this is even the case with AI run locally.

It can’t move the mouse around; it can’t send keyboard commands.

WraithGear@lemmy.world on 13 Feb 12:47 collapse

Um… yeah? It’s not supposed to? Let’s ignore how dangerous and foolish it would be to allow LLMs admin control of a system; the thing that prevents it from doing that is, well, that the LLM has no mechanism to do it. The best it could do is ask you to open a command line and give you some code to put in. It’s kinda like asking Siri to preheat your oven: it doesn’t have access to your oven’s systems.

You COULD get a digital-only stove, and the LLM could be changed to give it the ability to reach outside itself, but it’s not there yet, and with how much Siri misinterprets things, there would be a lot more fires.
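That’s the whole trick behind “tool use”, by the way: the model only ever emits a request as text, and the host program decides whether to act on it. A rough sketch, with a made-up JSON shape and tool name just for illustration:

```python
import json

# The model can only *ask*; the host owns the whitelist and does the acting.
ALLOWED_TOOLS = {
    "set_wallpaper": lambda arg: print(f"(host) would set wallpaper to {arg}"),
}

def handle_model_output(model_text: str):
    try:
        req = json.loads(model_text)   # model's "request", e.g. {"tool": ..., "arg": ...}
    except json.JSONDecodeError:
        print(model_text)              # plain answer, nothing to execute
        return
    tool = ALLOWED_TOOLS.get(req.get("tool"))
    if tool is None:
        print(f"(host) refused: {req.get('tool')!r} is not whitelisted")
        return
    tool(req.get("arg"))

handle_model_output('{"tool": "set_wallpaper", "arg": "cat.png"}')  # acted on
handle_model_output('{"tool": "rm_rf", "arg": "/"}')                # refused
```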

echodot@feddit.uk on 14 Feb 07:19 collapse

It wouldn’t have administrative access. You don’t need admin access to use a computer system; you need admin access to configure stuff, and there’s no reason for the AI to have that.

Anyway if AI is going to be useful to businesses it needs to be able to interface with their legacy applications.

Phoenicianpirate@lemm.ee on 13 Feb 14:21 next collapse

I am a creative writer (as in, I write stories and stuff), or at least I used to be. Sometimes talking to ChatGPT about ideas for writing can be interesting, but other times it is kinda annoying, since I am more into fine-tuning an idea than having it inundate me with ideas I don’t find particularly interesting.

quokka1@mastodon.au on 19 Feb 01:55 collapse

@echodot @Redex68 off top of my head, script generation. making content more readable. dictating a brain dump while walking and having it spit out a cohesive summary.

it's all about the prompt you put in. shit in/shit out. And making sure you check/understand what it spits out. and that sometimes it's garbage.

Knock_Knock_Lemmy_In@lemmy.world on 13 Feb 09:18 collapse

Treat LLMs like a super knowledgeable, enthusiastic, arrogant, unimaginative intern.

milicent_bystandr@lemm.ee on 13 Feb 09:39 next collapse

Super knowledgeable but with patchy knowledge, so they’ll confidently say something that practically everyone else in the company knows is flat out wrong.

Phoenicianpirate@lemm.ee on 13 Feb 14:19 collapse

I noticed that. When I ask it about things that I am knowledgeable about, or simply wish to troubleshoot, I often find myself having to correct it. This does make me hesitant to follow the instructions it gives on something I DON’T know much about.

Knock_Knock_Lemmy_In@lemmy.world on 13 Feb 16:28 collapse

Oh yes. The LLM will lie to you, confidently.

Phoenicianpirate@lemm.ee on 13 Feb 17:45 collapse

Exactly. I think this is a good barometer of gauging whether or not you can trust it. Ask it about things you know you’re good at or knowledgeable about. If it is giving good information, the type you would give out, then it is probably OK. If it is bullshitting you or making you go ‘uhh, no, actually…’ then you need to do more old-school research.

Turbonics@lemmy.sdf.org on 12 Feb 12:41 next collapse

BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

Krelis_@lemmy.world on 13 Feb 07:08 collapse

Some examples of inaccuracies found by the BBC included:

Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

Turbonics@lemmy.sdf.org on 17 Feb 22:59 collapse

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

I did not even read that far, but wow, the BBC really went there openly.

Prandom_returns@lemm.ee on 12 Feb 12:59 next collapse

But every techbro on the planet told me it’s exactly what LLMs are good at. What the hell!? /s

lemm.ee/comment/18029491

heavydust@sh.itjust.works on 12 Feb 16:21 collapse

Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, “How can a black box that throws up random crap and runs on the computers of big companies out of the country change anything?” They don’t know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.

shrugs@lemmy.world on 12 Feb 21:08 collapse

the more you know what you are doing the less impressed you are by ai. calling people that trust ai idiots is not a good start to a conversation though

echodot@feddit.uk on 13 Feb 09:01 collapse

It’s not like they’re flat-earthers; they are not conspiracy theorists. They have been told by the media, businesses, and every goddamn YouTuber that AI is the future.

I don’t think they are idiots; I just think they are being lied to and are a bit gullible. But it’s not worth having the argument with them - AI is going to fail on its own, no matter what they think.

underwire212@lemm.ee on 12 Feb 13:06 next collapse

News station finds that AI is unable to perform the job of a news station

🤔

tacosplease@lemmy.world on 12 Feb 13:35 next collapse

Neither are my parents

NutWrench@lemmy.world on 12 Feb 13:52 next collapse

But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **

Teknikal@eviltoast.org on 12 Feb 14:10 next collapse

I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.

Flocklesscrow@lemm.ee on 12 Feb 17:08 next collapse

Now ask it whether Taiwan is a country.

qaz@lemmy.world on 12 Feb 19:54 collapse

That depends on whether you ask the online app (which will cut you off or give you a CCP-sanctioned answer) or run it locally (which seems to give a normal answer)

datalowe@lemmy.world on 13 Feb 06:39 collapse

Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them and then got relevant experts in the subject of the articles to rate the quality of answers? Could you tell us what percentage of the summaries that were found to introduce errors then? Literally 0?

Or do you mean that you tried having DeepSeek summarise a couple of articles, didn’t see anything obviously problematic, and figured it is doing fine? Replacing rigorous research and journalism by humans with a couple of quick AI prompts is the core of the issue the article is getting at. Because if so, please reconsider how you evaluate (or trust others’ evaluations of) information tools which might help, or help destroy, democracy.

Petter1@lemm.ee on 12 Feb 14:35 next collapse

ShockedPikachu.svg

buddascrayon@lemmy.world on 13 Feb 05:51 next collapse

That’s why I avoid them like the plague. I’ve even changed almost every platform I’m using to get away from the AI-pocalypse.

echodot@feddit.uk on 13 Feb 08:56 next collapse

I can’t stand the corporate double think.

Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it’s about, it’s still apparently going to replace humans. How do they come to that conclusion?

The world won’t be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.

Cue global economic collapse.

vxx@lemmy.world on 13 Feb 10:26 collapse

It’s a race, and bullshitting brings venture capital and therefore an advantage.

99.9% of AI companies will go belly up when investors start asking for results.

buddascrayon@lemmy.world on 14 Feb 02:30 collapse

Yeah seriously just look at Sam Bankman-Fried and that Theranos dipshit. Both bullshitted their way into millions. Only difference is that Altman and Musk’s bubbles haven’t popped yet.

Opisek@lemmy.world on 13 Feb 17:29 collapse

No better time to get into self hosting!

ehpolitical@lemmy.ca on 13 Feb 10:09 next collapse

I recently had one chatbot refuse to answer a couple of questions, and another delete my question after warning me that my question was verging on breaking its rules… never happened before, thought it was interesting.

Joelk111@lemmy.world on 13 Feb 15:09 collapse

I’m pretty sure that every user of Apple Intelligence could’ve told you that. If AI is good at anything, it isn’t things that require nuance and factual accuracy.