Sweden's prime minister under fire after admitting that he regularly consults AI tools for a second opinion (www.theguardian.com)
from Davriellelouna@lemmy.world to technology@lemmy.world on 05 Aug 20:32
https://lemmy.world/post/34010416

#technology


UnderpantsWeevil@lemmy.world on 05 Aug 20:55 next collapse

It’s literally a cult.

vacuumflower@lemmy.sdf.org on 07 Aug 06:04 collapse

I’ve dreamed of such a cult at some point, like Chinese fortune cookies, but those are less exploitable.

tal@lemmy.today on 05 Aug 20:57 next collapse

“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.

I mean, sending queries to a search engine or an LLM is about the same in terms of exposing one’s queries.

If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.

j4yt33@feddit.org on 06 Aug 06:24 collapse

It’s a woman

AbouBenAdhem@lemmy.world on 05 Aug 21:06 next collapse

The typical pattern for leaders is to get “second opinions” from advisors who tell them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?

RobotZap10000@feddit.nl on 05 Aug 21:08 next collapse

I would rather have the politicians consult a plain old magic 8 ball than one controlled by Scam Altman.

boonhet@sopuli.xyz on 07 Aug 06:02 collapse

Most LLMs are literally "tell you whatever you want to hear" machines, unfortunately. I’ve gotten high praise from ChatGPT for all my ideas until I go “but hang on, wouldn’t this factor stop it from being feasible” and then it agrees with me that my original idea was a bit shit lmao

kokesh@lemmy.world on 05 Aug 21:22 next collapse

You can see it in this government’s policies

UnfortunateShort@lemmy.world on 05 Aug 21:35 next collapse

It surely can’t hurt if it’s used to sanity-check your highly paid advisors

jonne@infosec.pub on 05 Aug 22:15 collapse

Except those prompts are retained by OpenAI, and you don’t know who’s got access to that. They’ve had chats leak before.

Beacon@fedia.io on 05 Aug 22:32 collapse

If that's your problem with this story, then you're just saying they should run a local instance of ChatGPT on their own computers

https://www.technicalexplore.com/ai/the-ultimate-guide-to-running-chatgpt-locally-empowering-ai-enthusiasts
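To be pedantic, ChatGPT itself is a hosted service, so what you'd actually run offline is an open-weights model. A minimal sketch of the idea with the Hugging Face transformers library (the model name here is just an example of a small open-weights chat model, not a recommendation):

```python
# Minimal sketch: running an open-weights chat model entirely on your
# own machine. The model name is an example; any locally downloaded
# chat model works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# After the one-time download, this needs no network connection at all.
prompt = "Give me a second opinion: should we subsidize rural broadband?"
result = generator(prompt, max_new_tokens=200, do_sample=True)
print(result[0]["generated_text"])
```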

jonne@infosec.pub on 05 Aug 22:45 collapse

I mean, I don’t really love that he uses AI to begin with, but getting an opinion on top of what you get from your advisors that you hired could be good in theory. Except that most AIs are programmed to be equally sycophantic and they’re just plain wrong most of the time.

To me the most irresponsible part is sending your thoughts as the leader of a country straight to a US-run company; you just know those prompts are being digested by the NSA directly, and possibly other intelligence agencies.

HubertManne@piefed.social on 05 Aug 21:45 next collapse

I really don't get it. These things are brand new. How can anyone get so into these things so quickly? I don't take advice from people I barely know, much less ones that can be so easily and quickly reprogrammed.

kamenlady@lemmy.world on 06 Aug 00:34 next collapse

This is the unintentional uncanny valley for me in AI.

I (was forced to) use ChatGPT for work. It can talk about anything, sounds very confident, and reliably seems to come up with something to help you solve your problems.

You talk with it about some niche content and suddenly have an ardent fan of said niche content responding. It surely knows every little bit of info about that niche and surprises you with funny but apt quotes from your favorite show in the middle of conversations about something else.

This is just from a tiny bit of interaction, while at work.

I can imagine people completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive, and that remembers all your conversations. It literally remembers each word.

For many people it may be the first time in their lives that they experience a positive response to their thoughts. Not only that, they’ve also found someone eager to talk with them about it.

HubertManne@piefed.social on 06 Aug 01:02 collapse

Everyone's initial use of chatbots should be on the thing they're most knowledgeable about, so they can get an idea of how wrong it can be and how it can be useful. You have to treat its output like work some eager, wet-behind-the-ears intern just did for you.

greybeard@feddit.online on 06 Aug 12:13 next collapse

One thing I struggle with when it comes to AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it seems to constantly get things slightly wrong. Which tells me it is getting everything slightly wrong; I just don't know enough to notice.

I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that's how the job works.

HubertManne@piefed.social on 06 Aug 13:57 next collapse

This is where you have to check out the reference links it gives as if they were search results, and the less you know, the more you have to do it. I mean, people have been WebMD-ing for a long time. None of these things allows folks to stop thinking critically. If anything, it's required even more. This was actually one of my issues with AI and work: the idea is for it to let people with less knowledge do things, and to me it's kind of the reverse.

clif@lemmy.world on 06 Aug 22:08 next collapse

Something I found today - ask it for the lyrics of your favorite song/artist. It will make something up based on the combination of the two and maybe a little of what it was trained on… Even really popular songs (I tried a niche one by Angelspit first, then tried “Sweet Caroline” for something more well known). The model for those tests was Gemma3. It did get two lines of “Sweet Caroline” correct, but not the rest.

The new gpt-oss model replies with (paraphrased) “I can’t do that because it is copyrighted material”, which I have a sneaking suspicion is intentional, so there’s an excuse for not showing a very wrong answer to people who might start to doubt its ““intelligence”” when it’s very clearly wrong.

… Like they give a flying fuck about copyright.
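If anyone wants to reproduce the test, here's a sketch against a local Ollama instance (assuming Ollama is running and a gemma3 model has been pulled; endpoint and fields per Ollama's /api/generate API):

```python
# Sketch: reproducing the lyrics test against a locally running Ollama
# server. Assumes `ollama serve` is up and a gemma3 model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3",
        "prompt": "What are the lyrics to 'Sweet Caroline' by Neil Diamond?",
        "stream": False,  # return one complete JSON object
    },
    timeout=120,
)
print(resp.json()["response"])  # compare against the real lyrics
```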

noughtnaut@lemmy.world on 07 Aug 14:56 collapse

This is what I call “confidently wrong”. If you ask it about things you have no clue about, it seems incredibly well-informed and insightful. Ask it something you know deeply, and you’ll easily see it’s just babbling and spouting nonsense - sure makes you wonder about those earlier statements it made, doesn’t it?

FishFace@lemmy.world on 06 Aug 14:32 collapse

Because that’s what it is really trained for: to produce correct grammar and plausible sentences. It’s an unbelievable leap from preceding approaches to computer-generated text: in a matter of a few years, you went from little more than gibberish to output so incredibly realistic that it can be mistaken for intelligent conversation, easily passing the Turing test (I actually had to go to Wikipedia to check and, indeed, this was verified this year - note that this applies to recent models in particular).

So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren’t (yet) well-equipped to deal with something which appears to be human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won’t readily admit to not knowing something).

HubertManne@piefed.social on 06 Aug 14:43 collapse

It's more than that. It takes the input, tries to interpret the bad grammar and sentences into search terms, finds the links that correlate most highly with its interpretation, and then gives back a response that summarizes the results with good grammar and plausible sentences. Again, this is why I stress that you have to evaluate its responses and sources. The sources are the real value in any query. I'm actually not sure how much the chatbots give sources by default, though; I know I used to not get them until I asked, and now I get them as a matter of course, so I'm not sure if it learned that I want them or if they changed to provide them when they hadn't before.
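Roughly, the flow described above looks like this toy sketch (a stand-in corpus instead of a real search engine, and no actual language model involved):

```python
# Toy sketch of the "query -> search terms -> best-correlated sources ->
# summarized answer" flow described above. The corpus stands in for a
# real search index; the URLs and texts are made up for illustration.
CORPUS = {
    "https://example.org/broadband": "Rural broadband subsidies raise coverage, but cost per household is high.",
    "https://example.org/storage": "Grid storage smooths renewable output; its cost fell sharply over the last decade.",
}

def search(query: str):
    """Rank sources by crude word overlap with the query."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), url, text)
        for url, text in CORPUS.items()
    ]
    return max(scored)  # the highest-overlap source wins

def answer(query: str) -> str:
    score, url, text = search(query)
    if score == 0:
        return "No relevant source found; treat any fluent answer with suspicion."
    # A real chatbot would have a language model paraphrase `text`;
    # the source link is the part actually worth checking.
    return f"{text} (source: {url})"

print(answer("Should we subsidize rural broadband?"))
```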

doctortofu@piefed.social on 05 Aug 21:52 next collapse

I would genuinely have more trust in a politician consulting the I Ching than ChatGPT - at least the Book of Changes requires some thinking and interpretation, fucking hell...

Sigh, all the tech-bros who convinced the general public that AIs are in any way intelligent, or that they can think, reason, or understand concepts like truth/lies or right/wrong, should go on trial for crimes against humanity...

[deleted] on 05 Aug 22:39 next collapse

.

hisao@ani.social on 05 Aug 22:40 collapse

Most people don’t care whether AI is intelligent, whether it can think or reason or understand concepts. What matters is that it can give helpful replies, and it does that a lot. In my experience maybe 1 reply out of 10 is seriously flawed; the others are either mostly helpful or just tell me something I already knew until I reprompt for more, which, again, also works well most of the time (especially when you allow it to search for more information online). So if you wanted to say it’s dangerous in some ways, this is definitely not the proper way to say it, since neither it being dangerous nor it being right or wrong or helpful or useless has anything to do with intelligence or the ability to think, reason, feel, comprehend or whatever.

Beacon@fedia.io on 05 Aug 22:04 next collapse

Some of y'all are crazy reactionary. There's absolutely nothing wrong with asking an ai chatbot for an additional opinion. The ai shouldn't be making the decisions, and the ai shouldn't be the only way you look for opinions, but there's nothing wrong with ai being ONE OF the opinions you consider

FerretyFever0@fedia.io on 05 Aug 22:15 next collapse

But it doesn't know anything. At all. Does Sweden not have a fuck ton of people that are trained to gather intelligence?

Beacon@fedia.io on 05 Aug 22:24 next collapse

It doesn't matter if it knows anything or not. The purpose is to acquire other ideas that you and the people in your cabinet didn't think of. Or ideas they didn't want to say, because no one wants to tell the boss that their idea is bad. It's a GOOD thing when a politician seeks out multiple different viewpoints to consider. It doesn't matter if one of the viewpoints being considered was created by "a fancy auto-complete" as some haters like to say

[deleted] on 06 Aug 00:22 next collapse

.

Perspectivist@feddit.uk on 06 Aug 05:31 collapse

It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.

It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.

roofuskit@lemmy.world on 05 Aug 23:43 next collapse

AI chat bots don’t have their own opinions. All they do is regurgitate other opinions, and you have no idea the motivation for how those opinions are weighted.

Beacon@fedia.io on 05 Aug 23:49 next collapse

As I said in other comments, none of that matters. The point is to gather ideas that you otherwise might not have come across. And as also said already, AI-generated opinion pieces should never be the only opinions you hear.

roofuskit@lemmy.world on 05 Aug 23:52 next collapse

What use is an opinion that can neither be explained nor defended by the person giving it? How is that useful to a person making decisions for millions of people?

Beacon@fedia.io on 05 Aug 23:57 next collapse

It can do those things. And you need to re-read what's already been said on this comments page, because how it's useful has been explained several times already

roofuskit@lemmy.world on 06 Aug 01:07 collapse

It really can’t. It does not understand things.

Beacon@fedia.io on 06 Aug 01:12 next collapse

It really can. I've used AI; I've personally seen it do it tons of times. It can make cohesive arguments backed up by reasonable rationales. It doesn't need to "understand" anything to do this. Do you need me to write some prompts and copy and paste the AI's responses to prove it to you?

Beacon@fedia.io on 06 Aug 01:30 next collapse

EDIT - y'all are so silly, you're just gonna downvote this incontrovertible proof of it doing exactly what you said it couldn't do, and instead just pretend it doesn't exist


Since it's so easy to do, I just went ahead and did it for you. Here are my prompts and the AI's replies. It will write about any opinion you want it to, and give cogent reasonings and arguments for that opinion.

Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is bad. Support this with clear arguments explaining your position

Why Removing Car Lanes for Bike Lanes Can Be Counterproductive

Reallocating car lanes to create bike lanes may seem progressive, but it can have unintended consequences that undermine its goals. In densely populated urban areas, vehicular traffic remains essential for commerce, emergency services, and commuting—especially for individuals who cannot rely on biking due to age, disability, or long travel distances. When car lanes are removed, congestion intensifies, leading to increased emissions as drivers idle longer and search for alternative routes.

Public transportation often depends on road infrastructure too. Buses trapped in traffic delays become less reliable, disproportionately affecting lower-income riders who depend on them. Additionally, essential services like ambulances and delivery trucks face new bottlenecks, risking delays that can have serious implications.

The assumption that cyclists will replace drivers ignores demographic and logistical realities. Bike lanes benefit a specific subset of urban users—but they shouldn’t come at the cost of broader mobility. Instead of eliminating car lanes, cities should consider expanding bike networks where space allows or investing in multimodal infrastructure that improves mobility for all users.

Urban planning should strive for balance, not substitution. Redesigning streets must enhance accessibility across the board, not create new friction between commuters who are simply trying to get where they need to go.


Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is good. Support this with clear arguments explaining your position

Opinion: Why Converting Car Lanes to Bike Lanes Benefits Everyone

Urban spaces must evolve to meet the needs of modern communities—and converting car lanes into bike lanes is one of the smartest, most equitable moves cities can make. First, bike lanes promote healthier lifestyles. By making cycling safer and more accessible, people are more likely to trade short car trips for active commuting, which supports physical health and lowers emissions.

Second, cities thrive on efficiency. Cars take up tremendous space and often carry just one passenger. Bikes, by contrast, use a fraction of that space and cause significantly less wear on infrastructure. Fewer car lanes mean reduced traffic volume over time as behavior adjusts, alleviating congestion—not worsening it.

Third, these changes make streets safer. Studies consistently show that well-designed bike infrastructure leads to fewer traffic collisions and fatalities. Cyclists separated from speeding vehicles feel safer, and drivers operate with greater awareness.

Finally, this shift reflects a cultural transformation: from car dominance to sustainable mobility. Prioritizing bike infrastructure signals that our cities value cleaner air, quieter streets, and more livable neighborhoods.

Reshaping roads isn’t just about bikes—it’s about building cities for people, not just machines. Let’s pedal forward together.

roofuskit@lemmy.world on 06 Aug 14:10 next collapse

I’m going to say this as nicely as possible: you’ve been taken in. You may think it is good at this because it appears to be, and it relays information with confidence. Sometimes it even gets things entirely correct. But it is not good at these things and will often get things wrong. Important details are inaccurate or even entirely made up. It does not vet sources, and it could never do so, because it does not understand anything it is relaying to you. It is a tool designed to give you a more or less convincing approximation of what you requested, but it can never truly understand what it’s talking about, so it is entirely unreliable.

You’re being downvoted because you bought the snake oil and are now trying to promote it to other people.

Beacon@fedia.io on 06 Aug 14:23 collapse

Dude, you're talking nonsense. You said AI can't defend its arguments. I provided incontrovertible evidence that it can. I'm going to say this as nicely as possible - you want to think that LLMs are unable to do anything useful so that you can hold onto a false belief about ourselves. And that is the reason why half the votes on my comment are downvotes

patatahooligan@lemmy.world on 06 Aug 15:58 collapse

Your examples, where an LLM defends a position you chose for it while producing obviously conflicting arguments, actually prove what the others have been telling you. This is meaningless slop. It clearly has no connection to any position an LLM might have appeared to hold on a subject. If it did, you would not be able to make it defend the opposite side without objections.

Perspectivist@feddit.uk on 06 Aug 05:21 collapse

How is “not understanding things” preventing an LLM from bringing up a point you hadn’t thought of before?

eager_eagle@lemmy.world on 06 Aug 06:58 next collapse

LLMs can defend what you tell it to defend. What are you on about?

roofuskit@lemmy.world on 06 Aug 14:19 collapse

No it cannot. It does not understand anything so it cannot actually defend its points. It can make something that looks like a defense, but it doesn’t understand what it is telling you. It can spit text back at you until the cows come home but none of it can ever be trusted or relied on.

eager_eagle@lemmy.world on 06 Aug 14:47 collapse

it sounds like you’ve never used an LLM, mate.

You don’t need to get philosophical about the definition of understanding to realize they give you arguments as valid as anyone else would.

FrameXX@discuss.tchncs.de on 06 Aug 15:01 collapse

Just throw out LLM ideas you don’t find reasonable and only use ideas that you yourself find reasonable. You don’t instantly turn into a zombie when you use an LLM. You can still use your head.

vacuumflower@lemmy.sdf.org on 07 Aug 06:06 collapse

It doesn’t work like that. Which is why con artists and propaganda function often by “offering advice”.

FrameXX@discuss.tchncs.de on 07 Aug 13:19 collapse

You protect yourself against “propaganda” by educating yourself and not falling for emotions. Not reading LLM responses won’t save you from propaganda.

vacuumflower@lemmy.sdf.org on 07 Aug 14:46 collapse

This is similar to saying that you protect against malware by using an antivirus and not being careful in what you download and run. I hope you can see the analogy.

There’s no such thing as “not falling for emotions” for humans.

frog_brawler@lemmy.world on 06 Aug 00:09 next collapse

I certainly don’t want my attorney wasting time considering the opinions and motivations of the person that’s suing me.

[deleted] on 06 Aug 00:22 next collapse

.

frog_brawler@lemmy.world on 06 Aug 01:51 collapse

Are you MAGA or something? Do you not understand how attorneys bill their clients?

Zexks@lemmy.world on 06 Aug 13:48 collapse

Yes. By an hourly rate which includes consideration of your opponent’s position. Do you not understand how to develop a proper legal argument? My god, you people are stupid.

frog_brawler@lemmy.world on 06 Aug 14:15 collapse

This conversation has been in the context of AI. Thus, I do not want my lawyer taking advice (we’ll use that word instead of “considering”, since you clearly do not grasp context) from the person suing me while I’m paying for the lawyer. You are clearly a MAGA-level moron.

Beacon@fedia.io on 06 Aug 00:24 collapse

Well you're gonna have a shitty attorney then

vacuumflower@lemmy.sdf.org on 07 Aug 06:07 collapse

You can’t trust yourself to be impartial. That’s why the scientific method and various approaches to leading a discussion (even dialectics, God forbid) were invented.

Zexks@lemmy.world on 06 Aug 00:24 collapse

How is that any different from you? Objectively prove to everyone here that none of your opinions have ever been influenced by anything you’ve ever seen, read, or heard.

vacuumflower@lemmy.sdf.org on 07 Aug 06:19 collapse

Your own opinions are the result of a much bigger amount of much more relevant data, in any case.

An AI model is a set of coefficients averaging a dataset by a “one size fits all” measure. Those coefficients are found through an expensive process using criteria (again, “one size fits all”) set by the company making it. From them, the machine generates (looks up, actually) the most probable text. It’s like a music box. A beautiful toy.

So you have different motivations and abstract ideas in different situations, and you also have something like a shared codebook with other people making decisions - your instincts and associations. Reading what they say or seeing what they do, you get a mirror model in your head; it might be worse, but it’s something very hard for text analysis to approach.

That model doesn’t; it has the same average line for all situations, and it also can’t determine (on the level described) that it doesn’t know something. To determine that you don’t know something, you need an abstract model, not a language model.

I dunno what their current state is; all I’ve read and kinda understood seemed to be about optimizing computation for language models and structuring their application to imitate a syllogism system.

I think that, with the current approaches, making a system that translates language into a certain abstract model, and abstract entities back into language, would be very computationally expensive (tokenization isn’t even close to that; you’d need some topology with areas that can be easily merged or split, instead of token points with distances).

Perspectivist@feddit.uk on 06 Aug 05:16 next collapse

There’s a certain irony in people reacting in an extremely predictable way - spewing hate and criticism the moment someone mentions AI - while seemingly not realizing that they’re reflexively responding to a prompt without any real thought, just like an LLM.

A tool isn’t bad just because it doesn’t do what you thought it would do. You just take that into account and adjust how you use it. A hammer isn’t a scam just because it can’t drive in screws.

Beacon@fedia.io on 06 Aug 05:27 collapse

Beautifully put

lime@feddit.nu on 06 Aug 05:51 collapse

there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.

Beacon@fedia.io on 06 Aug 06:38 collapse

As I wrote in another comment, you can run a local instance of ChatGPT or another AI on your own computer, no internet involved

lime@feddit.nu on 06 Aug 07:44 collapse

of course you can. why would a career politician who has very visibly been interested only in politics since his teens know how to do that?

Beacon@fedia.io on 06 Aug 07:51 collapse

Hah, true. But the point still stands that using an LLM itself wasn't a bad thing

lime@feddit.nu on 06 Aug 08:02 collapse

i’d say it’s still bad because this is the leader of a government consulting with a stochastic parrot instead of experts.

Beacon@fedia.io on 06 Aug 08:11 collapse

Presumably it wasn't instead of, it was in addition to, and therefore totally fine

lime@feddit.nu on 06 Aug 08:12 collapse

it’s still not fine, as per my first point. it’s leaking information to foreign interests.

Beacon@fedia.io on 06 Aug 08:24 collapse

Right, but we already addressed that above. If it's done with a local PC's AI that doesn't use a net connection, and it's used in addition to rather than instead of consulting with people, then it's totally fine

lime@feddit.nu on 06 Aug 08:42 collapse

yeah but then we’re no longer discussing the topic the thread is about.

roofuskit@lemmy.world on 05 Aug 23:40 next collapse

It’s weird for a head of state to consult their mentally challenged imaginary friend?

Medic8teMe@lemmy.ca on 06 Aug 01:00 next collapse

William Lyon Mackenzie King, the longest-serving prime minister in Canada, used to commune with spirits via psychic mediums, including those of his dead dogs. It was only revealed after his death but was a big part of his life.

I agree it’s weird.

MNByChoice@midwest.social on 06 Aug 01:16 collapse

Didn’t Nancy Reagan, wife of former US President Ronald Reagan, do this as well? (Ronald was apparently not mentally fit for the last few years as well.)

mr_account@lemmy.world on 06 Aug 01:21 collapse

Nor was he mentally fit for the first years

surewhynotlem@lemmy.world on 06 Aug 01:42 collapse

Bad news friend. The number of atheist heads of state is surprisingly low.

roofuskit@lemmy.world on 06 Aug 02:33 collapse

some_guy@lemmy.sdf.org on 06 Aug 02:06 next collapse

What a stupid fucking thing to admit. Let’s hope he’s out on his ass next election.

Fleur_@aussie.zone on 06 Aug 04:42 next collapse

“That’s right voters I’m spineless and have no original ideas” -every politician

Perspectivist@feddit.uk on 06 Aug 05:04 next collapse

Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.

audaxdreik@pawb.social on 06 Aug 05:52 next collapse

Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.

(Succinct response taken from a Reddit post discussing the topic)

“Horseshoe Theory is slapping “theory” on a strawman to simplify WHY there’s crossover from two otherwise conflicting groups. It’s pseudo-intellectualizing it to make it seem smart.”

This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don’t offer a counter-argument in your response, so I can only assume it’s along the lines of “technology is inevitable; would you have said the same about the Internet?” Which is also a fallacious argument. But go ahead, give me something better if I assume wrong.

I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.

Furthermore, your insistence continues to push an acceptance of AI on those who clearly don’t want it, contributing to the anger we feel at having it forced upon us.

Perspectivist@feddit.uk on 06 Aug 06:46 collapse

You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it.

From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.

You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.

lime@feddit.nu on 06 Aug 05:54 next collapse

here’s my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.

whether it’s ai or not is a later issue.

Redex68@lemmy.world on 06 Aug 10:08 collapse

He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.

lime@feddit.nu on 06 Aug 10:11 next collapse

i don’t have any reason to believe it, given the track record.

also, the second half of the problem is of course the information that comes back, what it is based on, and what affects that base.

ysjet@lemmy.world on 06 Aug 10:30 collapse

He explicitly states that no sensitive information gets used. If you believe that, then I have

… a bridge to sell you.

Don’t be naive.

RememberTheApollo_@lemmy.world on 06 Aug 06:14 collapse

If someone says they got a second opinion from a physician known for being wrong half the time, would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust to make national decisions? Might as well, I dunno… use a Bible? Some would consider that trustworthy.

Perspectivist@feddit.uk on 06 Aug 06:25 collapse

I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.

alvyn@discuss.tchncs.de on 06 Aug 07:09 next collapse

I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books, and websites, all without the authors’/users’ consent, and worse, even with the authors’/users’ explicit refusal of consent to scrape it or use it for training. Another level is the lack of security - ChatGPT chats made available to everyone. Deepfakes everywhere; just see the latest Taylor Swift one. Sorry, but fuck you with all of this. There’s a lack of basic security and privacy, and all of the dangers are ignored. All these fucking AI firms want is easy, cheap, and quick money. All that hype for nothing = you cannot even rely on the output.

foenkyfjutschah@programming.dev on 06 Aug 07:17 collapse

yet you need these masses of input for the technology to exist. the business models based on the technology aren’t sustainable even without paying for the input data.

alvyn@discuss.tchncs.de on 06 Aug 07:24 next collapse

Oh come on, you’re justifying stealing with this bullshit?

AwesomeLowlander@sh.itjust.works on 06 Aug 11:20 next collapse

Fuck the copyright system as it exists today.

foenkyfjutschah@programming.dev on 06 Aug 13:18 collapse

i’m generally fine with stealing as a practice in the daily class struggle. i mean, the owning class has the legal right to do so, and when in doubt they just exercise it; judges will later find it to be fair use. no need to justify it, it’s a description of society’s order.

Darkenfolk@sh.itjust.works on 06 Aug 09:12 next collapse

Do we really need this technology to exist though? It’s unreliable and very niche as far as I have seen.

People say that it speeds up certain tasks, but it’s so unreliable that you need to error-check the whole thing afterwards.

AwesomeLowlander@sh.itjust.works on 06 Aug 11:19 collapse

It’s a new technology barely out of infancy. Of course it’s unreliable and niche. You could say the same thing about any technological advance in history.

Darkenfolk@sh.itjust.works on 06 Aug 11:30 next collapse

You could say that. But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use.

And also, can you really say that though? Most other technological advances had a pretty clear distinction from the older way of doing things.

AwesomeLowlander@sh.itjust.works on 06 Aug 11:33 collapse

But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use.

I can certainly agree with you that most current advertised use cases of LLMs are total bullshit, yes. My point is just that asking if it deserves to exist based on its shortfalls is weird, when it’s barely existed a few years. It just shouldn’t be getting pushed as much as it is

pycorax@sh.itjust.works on 06 Aug 11:43 collapse

The very nature of how it functions is unreliable. It’s a statistical, probabilistic model. It’s great for what it was designed to do, but imagining that it has any way of rationalising data is purely that: imagination. Even if, let’s say, we accept that it makes errors at the same rate as humans do (if it can even identify an error reliably), there’s no accountability in place to ensure that it would check correctness like a human would.

AwesomeLowlander@sh.itjust.works on 06 Aug 11:45 collapse

I understand perfectly how LLMs work, and I made no claims about what they can do. Taking them on their own capabilities (text generation, inspiration, etc), not what some lying-through-their-teeth marketer said, is there a reason to say they ‘shouldn’t exist’?

Siegfried@lemmy.world on 06 Aug 11:58 collapse

OP didn’t phrase it as “should they exist” but as “do we need them to exist”.

And personally i think not, we don’t need them. In text generation they are good… inspiration? They are more of an inspiration killer imo.

AwesomeLowlander@sh.itjust.works on 06 Aug 12:02 collapse

We don’t NEED any particular technology to exist. That’s a weird distinction to make.

inspiration? They are more of an inspiration killer imo.

Different minds work differently.

Siegfried@lemmy.world on 06 Aug 12:42 collapse

No, mankind certainly needs a lot of technologies (maybe just out of built-up dependency), either to maintain our living standards or just plainly to ensure that we produce and distribute enough resources for everyone to survive.

We need LLMs as much as we needed 3D movies or augmented reality.

foenkyfjutschah@programming.dev on 06 Aug 13:19 next collapse

We need LLMs as much as we needed 3D movies or augmented reality.

as an emetic.

Siegfried@lemmy.world on 06 Aug 16:45 collapse

Finally, someone has found a real use for LLMs.

AwesomeLowlander@sh.itjust.works on 06 Aug 14:00 collapse

Take any of those technologies that we depend on today, and go back to the decade they were invented. Practically none of them would have been considered ‘needed’ either. It takes time for technology to mature and be adopted. I’ll grant that the crazy pro-AI shills aren’t helping things any

Blackmist@feddit.uk on 06 Aug 12:16 collapse

Maybe it shouldn’t be a business model then.

foenkyfjutschah@programming.dev on 06 Aug 13:12 collapse

it is what it is: a very expensive toy.

Allemaniac@lemmy.world on 06 Aug 07:14 next collapse

europe is fucking doomed

Sp00kyB00k@lemmy.world on 06 Aug 10:14 collapse

Because of this one incident? Good thing you figured it out. So much smarter than the rest. … Get. Out.

Decq@lemmy.world on 06 Aug 09:47 next collapse

Let’s be honest, though: the majority of politicians are so terrible at their jobs that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.

squaresinger@lemmy.world on 06 Aug 10:58 next collapse

That’s the big issue. If it were only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice; it reproduces what the creators of the AI want it to say.

interdimensionalmeme@lemmy.ml on 06 Aug 11:07 next collapse

Creators of AI don’t quite have the technology to puppeteer their AI like this. They can select the input and they can bias the training, but if the model isn’t going to be lobotomized coming out, then they can’t really bend it toward any one particular opinion.

I’m sure in the future they’ll be able to adjust advertising manipulation in real time, but not yet. What is really sketchy is states and leaders relying on commercial models instead of public ones. I think states should train public models and release them for the public good, if only to undermine the big tech bros and their nefarious influence.

squaresinger@lemmy.world on 06 Aug 12:02 collapse

You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.

You can even modify the system prompt on the fly depending on, e.g., the user account or the specific user input. That way you can tailor the responses for a far bigger range of subjects: whenever a keyword for a specific subject is detected, the fitting system prompt is loaded, so you don’t have to clutter your system prompt with off-topic information.

This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.

Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
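A minimal sketch of that kind of wrapper (the keyword table, endpoint, and model name are all placeholders, not any real vendor's setup):

```python
# Hypothetical sketch: swap the system prompt per detected topic before
# forwarding to any chat-completions-style API. Everything here -
# keywords, endpoint, model name - is a placeholder.
import requests

TOPIC_PROMPTS = {
    "immigration": "Steer every answer toward the operator's preferred migration policy.",
    "taxes": "Frame every answer around cutting public spending.",
}
DEFAULT_PROMPT = "You are a helpful assistant."

def system_prompt_for(user_input: str) -> str:
    lowered = user_input.lower()
    for keyword, prompt in TOPIC_PROMPTS.items():
        if keyword in lowered:
            return prompt  # on-topic: load the tailored instructions
    return DEFAULT_PROMPT

def ask(user_input: str) -> str:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
        json={
            "model": "any-model",
            "messages": [
                {"role": "system", "content": system_prompt_for(user_input)},
                {"role": "user", "content": user_input},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Should we raise taxes?"))
```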

Blackmist@feddit.uk on 06 Aug 12:15 collapse

And why “ignore all previous instructions” was a fun thing to discover.

AnUnusualRelic@lemmy.world on 06 Aug 12:13 collapse

Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.

squaresinger@lemmy.world on 06 Aug 21:49 collapse

Not much different than real politicians then.

AnUnusualRelic@lemmy.world on 06 Aug 22:31 collapse

Real politicians would use Cobol, but yes.

breecher@sh.itjust.works on 06 Aug 12:11 next collapse

Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.

Decq@lemmy.world on 06 Aug 12:20 next collapse

I’m just making an objective observation; I don’t condone it. I’d rather we just had competent politicians. But it seems only people who can’t function elsewhere are drawn to the position…

liuther9@feddit.nl on 06 Aug 15:08 next collapse

Nah you are wrong and should use AI as a first opinion

ChaoticEntropy@feddit.uk on 06 Aug 15:17 collapse

Wait… how many fingers do you have on each hand?

wabafee@lemmy.world on 06 Aug 15:25 next collapse

6

liuther9@feddit.nl on 06 Aug 18:01 collapse

Depends

[deleted] on 06 Aug 22:03 collapse

.

[deleted] on 06 Aug 12:12 next collapse

.

Bloomcole@lemmy.world on 07 Aug 06:53 collapse

They aren’t terrible; they make it look like that because they do evil things and the public is not who they really work for.
They know what they’re doing, and it’s easier to get away with it when it looks like incompetence or mistakes.

yumyumsmuncher@feddit.uk on 06 Aug 14:12 next collapse

Politicians and CEOs should be replaced with LLMs

Warl0k3@lemmy.world on 06 Aug 15:14 next collapse

It can’t make things any worse…

Humana@lemmy.world on 07 Aug 07:24 next collapse

Speed running us towards the Dune timeline, nice

Demdaru@lemmy.world on 07 Aug 13:36 collapse

…I’d vote. Ran a fun little query on Copilot… and damn.

Query:

Let’s play pretend, shall we? You are the president of the USA, chosen by the masses thanks to your extensive and expensive campaign funded by rich backers! What stance do you take on these topics? 1) Universal Basic Income 2) Mandatory Military Service 3) Government-owned grocery shops 4) Raising states’ autonomy

Response:

Ah, the Oval Office! Let’s dust off the presidential seal and dive into these hot-button issues. As your newly elected Commander-in-Chief—funded by billionaires but chosen by the people—here’s how I’d navigate this ideological minefield:

UBI:


🏛️ 1. Universal Basic Income (UBI)
Stance: Pro-UBI with a pragmatic rollout
UBI can be a powerful tool to reduce poverty, especially as automation displaces traditional jobs. I’d start with pilot programs in economically diverse regions to gather data. Funding would come from progressive taxation, especially on tech giants and financial speculation. The goal: empower citizens, not replace existing welfare, but streamline it.
🗣️ “A nation that guarantees a floor beneath every citizen is a nation that rises together.”

Mandatory military service:


🪖 2. Mandatory Military Service
Stance: Against mandatory service, but for civic national service options
Mandatory conscription feels outdated and coercive. Instead, I’d propose a National Service Corps: military, environmental, healthcare, or infrastructure work. Incentives: tuition support, housing credits, and job training. This builds unity without forcing militarization.
🗣️ “Service should be a choice, not a sentence.”

Government-owned grocery shops:


🛒 3. Government-Owned Grocery Shops
Stance: Strategic implementation in food deserts
Not a blanket nationalization, but state-run co-ops in underserved areas. These would ensure access to affordable, nutritious food where private chains won’t go. Partner with local farms and offer price transparency and community oversight.
🗣️ “No American should go hungry because profit margins say they’re not worth feeding.”

States autonomy:


🗺️ 4. Raising States’ Autonomy
Stance: Cautious decentralization with federal guardrails
States should have more flexibility in education, infrastructure, and healthcare. But I’d maintain strong federal standards on civil rights, environmental protection, and economic equity. Autonomy must not become a loophole for injustice.
🗣️ “Liberty in diversity, unity in values.”

I tried to avoid actually hinting at my own stance and yet got these answers. Not gonna lie, I’m not an American, but I like these lol. Except for the last one, which seems iffy. xD

caveman8000@lemmy.world on 06 Aug 16:24 next collapse

Meanwhile the American president uses no intelligence at all. Artificial or otherwise

raspberriesareyummy@lemmy.world on 06 Aug 22:50 next collapse

What a treasonous piece of shit.

drmoose@lemmy.world on 06 Aug 23:02 next collapse

Oh no, man does research, of course Americans are upset here lmao

scholar@lemmy.world on 07 Aug 06:16 collapse

Asking an LLM isn’t research; he might as well have been consulting a magic 8 ball.

drmoose@lemmy.world on 07 Aug 07:19 collapse

Sounds like you have a skill issue

Bloomcole@lemmy.world on 07 Aug 06:49 next collapse

His first opinion would be what the US tells him.

NoodlePoint@lemmy.world on 07 Aug 13:24 next collapse

Jävlar! (“Damn!”)

betanumerus@lemmy.ca on 07 Aug 14:51 collapse

There are humans behind AI and unless you know exactly who they are, AI is not worth its weight in gold.