Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills (gizmodo.com)
from abobla@lemm.ee to technology@lemmy.world on 13 Feb 17:46
https://lemm.ee/post/55445037

#technology

threaded - newest

Telorand@reddthat.com on 13 Feb 17:55 next collapse

Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.

gerbler@lemmy.world on 13 Feb 18:34 next collapse

Oh you can guarantee they won’t forget how to vote 😃

RobotToaster@mander.xyz on 13 Feb 18:45 collapse

Microsoft will just make a subscription AI for that, BaaS.

dbkblk@lemmy.world on 13 Feb 18:55 collapse

Which we will rebrand “Bullshit as a service”!

Sibbo@sopuli.xyz on 13 Feb 19:04 collapse

I thought that’s what it means?

dbkblk@lemmy.world on 13 Feb 19:40 collapse

No, he said Breath as a service, which is funny!

DarkCloud@lemmy.world on 13 Feb 18:04 next collapse

Quickly, ask AI how to improve or practice critical thinking skills!

ThePowerOfGeek@lemmy.world on 13 Feb 18:47 next collapse

ChatGPT et al.: “To improve your critical thinking skills you should rely completely on AI.”

VitoRobles@lemmy.today on 13 Feb 20:02 collapse

That sounds right. Lemme ask Gemini and DeepSink just in case.

Ste41th@lemmy.ml on 14 Feb 05:59 collapse

“Deepsink” lmao sounds like some sink cleaner brand

Petter1@lemm.ee on 15 Feb 00:33 collapse

Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:

1. Build a Foundation in Logic and Reasoning

• Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.

• Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.

2. Practice Socratic Questioning

• Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.

• Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.

3. Engage in Reflective Practice

• Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.

• Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.

4. Use Structured Frameworks

• Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.

• Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.

5. Read Widely and Critically

• Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.

• Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.

6. Participate in Discussions and Debates

• Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.

• Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.

7. Apply Critical Thinking to Real-World Problems

• Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.

• Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).

8. Develop a Habit of Continuous Learning

• Set aside regular “mental workout” time: Like scheduled exercise, devote time to tackling complex questions without distractions.

• Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.

By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.

Happy thinking!

Sibbo@sopuli.xyz on 13 Feb 18:13 next collapse

Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever the AI says.

UnderpantsWeevil@lemmy.world on 13 Feb 18:57 next collapse

This isn’t a profound extrapolation. It’s akin to saying “Kids who cheat on the exam do worse in practical skills tests than those that read the material and did the homework.” Or “kids who watch TV lack the reading skills of kids who read books”.

Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.

This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.

It’s just not something particular to AI. If you use any kind of 3rd-party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.

fushuan@lemm.ee on 14 Feb 08:15 collapse

All tools can be abused tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: the “copy the first answer and hope for the best” memes.

After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.

Good programmers do what I described, bad programmers copy and run without reading. It’s just like SO kids.

ODuffer@lemmy.world on 13 Feb 23:19 collapse

Seriously, ask AI about anything you have expert knowledge in. It’s laughable sometimes… However, you need to know the subject to know it’s wrong. At face value, if you have no expertise, it sounds entirely plausible, but the details can be shockingly incorrect. Do not trust it implicitly about anything.

kitnaht@lemmy.world on 13 Feb 18:17 next collapse

How many phone numbers do you know off of the top of your head?

In the 90s, my mother could rattle off 20 or more.

But they’re all in her phone now. Are luddites going to start abandoning phones because they’re losing the ability to remember phone numbers? No, of course not.

Either way, these fancy prediction engines have better critical thinking skills than most of the flesh and bone people I meet every day to begin with. The world might actually be smarter on average if they didn’t open their mouths.

ch00f@lemmy.world on 13 Feb 18:19 next collapse

Memorization is not the same thing as critical thinking.

A well designed test will freely give you an equation sheet or even allow a cheat sheet.

kitnaht@lemmy.world on 13 Feb 18:38 next collapse

You’re right it’s not the same thing as critical thinking, but it is a skill we’ve lost. How many skills have we lost throughout history due to machines and manufacturing?

This is the same tale over and over again - these people weren’t using critical thinking to begin with if they were trusting a prediction engine with their tasks.

homesweethomeMrL@lemmy.world on 13 Feb 19:22 collapse

I think “deliberately suppressed” is different from lost.

UnderpantsWeevil@lemmy.world on 13 Feb 19:07 next collapse

Memorization is not the same thing as critical thinking.

A library of internalized axioms is necessary for efficient critical thinking. You can’t just turn yourself into a Chinese Room of analysis.

A well designed test will freely give you an equation sheet or even allow a cheat sheet.

Certain questions are phrased to force the reader to pluck out and categorize bits of information, to implement complex iterations of simple formulae, and to perform long-form calculations accurately without regard to the formulae themselves.

But for elementary skills, you’re often challenging the individual to retain basic facts and figures. Internalizing your multiplication tables can serve as a heuristic that’s quicker than doing simple sums in your head. Knowing the basic physics formulae - your F = ma, ρ = m/V, f = v/λ, etc. - can give you a broader understanding of the physical world.

If all you know how to do is search for answers to basic questions, you’re slowing down your ability to process new information and recognize patterns or predictive signals in a timely manner.

ch00f@lemmy.world on 13 Feb 23:06 collapse

I agree with all of this. My comment is meant to refute the implication that not needing to memorize phone numbers is somehow analogous to critical thinking. And yes, internalized axioms are necessary, but largely the core element is memorizing how these axioms are used, not necessarily their rote text.

artificialfish@programming.dev on 13 Feb 19:19 collapse

When was the last time you did math without a calculator?

ch00f@lemmy.world on 13 Feb 22:15 collapse

Calculators also don’t think critically.

artificialfish@programming.dev on 13 Feb 22:18 collapse

Damn. I wonder where all the calculus identities and mathematical puzzle solving abilities in my head disappeared to then. Surely not into the void that is Wolfram Mathematica. Surely not…

BradleyUffner@lemmy.world on 13 Feb 18:27 next collapse

Mostly just this one:

0118 999 881 999 119 725 3

But even back when we only had land lines, I could barely remember my own phone number. I don’t think it’s a good measure.

Snapz@lemmy.world on 13 Feb 19:33 collapse

Something something… Only phone number I remember is your mother’s phone number (Implying that is for when I’m calling her to arrange a session of sexual intercourse, that she willingly and enthusiastically participates in).

_haha_oh_wow_@sh.itjust.works on 13 Feb 18:25 next collapse

Duh?

homesweethomeMrL@lemmy.world on 13 Feb 19:20 collapse

Buh?

mindlesscrollyparrot@discuss.tchncs.de on 13 Feb 18:33 next collapse

Well thank goodness that Microsoft isn’t pushing AI on us as hard as it can, via every channel that it can.

Zacryon@feddit.org on 13 Feb 18:40 next collapse

Well, at least they communicate such findings openly and don’t try to hide them. Unlike ExxonMobil, which saw global warming coming thanks to internal studies as early as the 1970s and tried to hide or dispute it, because it was bad for business.

UnderpantsWeevil@lemmy.world on 13 Feb 18:58 collapse

Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I’ve had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.

Mrkawfee@lemmy.world on 13 Feb 18:37 next collapse

Can confirm. I’ve stopped using my brain at work. Moreso.

ObviouslyNotBanana@lemmy.world on 13 Feb 18:38 next collapse

No way!

thefartographer@lemm.ee on 13 Feb 18:52 next collapse

https://i.makeagif.com/media/4-10-2022/62Ljnc.gif

sircac@lemmy.world on 13 Feb 18:55 next collapse

It was already soooooo dead out there that I doubt they accounted for this systematically in the study…

Alwaysnownevernotme@lemmy.world on 13 Feb 19:02 next collapse

Good thing most Americans already don’t possess those!

venusaur@lemmy.world on 13 Feb 19:08 next collapse

The same could be said about people who search for answers anywhere on the internet, or even the world, and don’t have some level of skepticism about their sources of information.

It’s more like, not having critical thinking skills perpetuates a lack of critical thinking skills.

Not_mikey@lemmy.dbzer0.com on 14 Feb 01:01 collapse

Yeah, if you repeated this test with the person having access to a stack exchange or not you’d see the same results. Not much difference between someone mindlessly copying an answer from stack overflow vs copying it from AI. Both lead to more homogeneous answers and lower critical thinking skills.

OhVenus_Baby@lemmy.ml on 14 Feb 02:11 next collapse

Copying isn’t the same as using your brain to form logical conclusions. Instead you’re taking someone else’s wild interpretation, research, or study and blindly copying it as fact. That lowers critical thinking because you’re not thinking at all. Bad information is always bad no matter how far it spreads. Incomplete info is no different.

venusaur@lemmy.world on 14 Feb 07:46 collapse

I’d agree that anybody who just takes the first answer offered them by any means as fact would have the same results as this study.

Snapz@lemmy.world on 13 Feb 19:29 next collapse

Corporations and politicians: "oh great news everyone… It worked. Time to kick off phase 2…"

  • Replace all the water trump wasted in California with brawndo
  • Sell mortgages for eggs, but call them patriot pods
  • Welcome to Costco, I love you
  • All medicine replaced with raw milk enemas
  • Handjobs at Starbucks
  • Ow my balls, Tuesdays this fall on CBS
  • Chocolate rations have gone up from 10 to 6
  • All government vehicles are cybertrucks
  • trump nft cartoons on all USD, incest legal, Ivanka new first lady.
  • Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
  • FDA doesn’t inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
abobla@lemm.ee on 13 Feb 19:50 next collapse

that “ow, my balls” reference caught me off-guard

Eheran@lemmy.world on 13 Feb 20:38 next collapse

I love how you mix in the Idiocracy quotes :D

singletona@lemmy.world on 13 Feb 20:47 next collapse

I hate how it just seems to slide in.

Snapz@lemmy.world on 13 Feb 21:51 collapse

A savvy consumer, glad you mentioned it. Felt better than hitting it on the nose.

whostosay@lemmy.world on 13 Feb 21:35 next collapse

Bullet point 3 was my single issue vote

LePoisson@lemmy.world on 13 Feb 22:41 next collapse

  • Handjobs at Starbucks

Well that’s just solid policy right there, cum on.

peoplebeproblems@midwest.social on 14 Feb 01:25 collapse

It would wake me up more than coffee that’s for sure

AtariDump@lemmy.world on 14 Feb 03:14 collapse

BaroqueInMind@lemmy.one on 13 Feb 19:31 next collapse

Unless you suffer from ADHD with object permanence issues, then in that case you can go fuck yourself.

jdeath@lemm.ee on 13 Feb 19:49 next collapse

i use my thinking skills to tell the LLM to quit fucking up and try again or I’m gonna fire his ass

werefreeatlast@lemmy.world on 13 Feb 20:39 collapse

Keep it on its toes… Ask ChatGPT, then copy-paste the answer and ask Perplexity why that’s wrong, and go back and forth… human, AI, human, AI… until you get a satisfactory answer.

jdeath@lemm.ee on 13 Feb 21:33 collapse

i like to say “are you sure you even understand this? do you know what you’re doing or do i need to spell it out for you?!”

Lexam@lemmy.world on 13 Feb 20:21 next collapse

Gemini told me critical thinking wasn’t important. So I guess that’s ok.

ThomasCrappersGhost@feddit.uk on 13 Feb 20:26 next collapse

No shit.

werefreeatlast@lemmy.world on 13 Feb 20:40 next collapse

That’s the same company that approved Clippy and the magic wizard.

CosmoNova@lemmy.world on 13 Feb 20:41 next collapse

I’m surprised they even published this finding given how hard they’re pushing AI.

OutlierBlue@lemmy.ca on 13 Feb 20:54 collapse

That’s because they’re bragging, not warning.

Sam_Bass@lemmy.world on 13 Feb 20:42 next collapse

Oddly enough that’s exactly what corporate wants. Mindless drones to do their bidding unquestioned

superglue@lemmy.dbzer0.com on 13 Feb 20:54 next collapse

Of course. Relying on a lighter kills your ability to start a fire without one. It’s nothing new.

lobut@lemmy.ca on 13 Feb 21:29 next collapse

Remember the saying:

Personal computers were “bicycles for the mind.”

I guess with AI and social media it’s more like melting your mind or something. I can’t find another analogy. “A baseball bat to the leg for the mind” doesn’t roll off the tongue.

I know Primeagen has turned off Copilot because he said the “copilot pause” is daunting and affects how he codes.

dragonfucker@lemmy.nz on 13 Feb 22:57 collapse

Cars for the mind.

Cars are killing people.

[deleted] on 13 Feb 22:22 next collapse

.

OsrsNeedsF2P@lemmy.ml on 13 Feb 22:22 next collapse

Really? I just asked ChatGPT and this is what it had to say:

This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.

ChapulinColorado@lemmy.world on 13 Feb 23:02 next collapse

Not sure if sarcasm…

OhVenus_Baby@lemmy.ml on 14 Feb 02:07 collapse

I agree with the output for legitimate reasons, but it’s not black-and-white wrong or right. I think it’s wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be had in what AI can do for us, both as a whole and on an individual basis.

Today I had it analyze 8 medical documents; I told it to provide analysis, cross-reference its output with scientific studies including sources, and other lengthy queries. These documents deal with bacterial colonies and multiple GI and bodily systems, at great length on a per-document basis. Some of the most advanced testing science offers.

It was able not only to provide me with accurate numbers that I fact-checked against my documents side by side, but also to explain methods to counter multi-faceted systemic issues, matching what multiple specialty doctors had said. Which is fairly impressive, given that seeing a doctor takes 3 to 9 months or longer, and they may or may not give a shit, overworked and understaffed, pick your reasoning.

I tried having it scan from multiple fresh blank chat tabs and even different computers to really test it out.

Overall, some of the numbers were off, say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it that it was incorrect and to reassess, giving it more time and insisting on accuracy, and supplied a bit more context about how to understand the tables. I mean broad context, such as “page 6 shows gene expression; use this as a reference to find all underlying issues”, since it isn’t a mind reader. It managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. For antibiotic-resistance gene analysis, it was able to find multiple approaches to therapies against antibiotic-resistant bacteria in a fraction of the time it would take a human to study them.

I would not bet my life solely on the responses, as it’s far from perfected, and as always, any info should be cross-referenced and fact-checked through various sources. But while those who speak such ill of its usage have valid points, I find the blanket dismissal unfounded. My 2 cents.

alteredracoon@lemm.ee on 14 Feb 06:27 collapse

Totally agree with you! I’m in a different field but I see it in the same light. Let it get you to 80-90% of whatever the task is and then refine from there. It saves you time that you can put into all the extra cool shit that grinding out that 90% would’ve eaten up. So many people assume you have to take it at 100% face value. Just take what it gives you as a jumping-off point.

OhVenus_Baby@lemmy.ml on 14 Feb 14:09 collapse

I think it’s specifically Lemmy, and the general anti-corpo mistrust, that drives the majority of the negativity towards AI. Everyone is cash/land-grabbing at anything that sticks, trying to shove their product down everyone’s throat.

People don’t like that behavior and thus shun it. Understandable. However, don’t let that guide your entire logical thinking; it seems to cloud most people entirely, to the point they can’t fathom an alternative perspective.

I think the vast majority of tools/software originate from a source of good but then get transformed into bad actors because of monetization. Eventually, though, and trends over time prove this, things become open source or free, and the real good period arrives after the refinement and profit period…

It’s very parasitic even, to some degree.
There is so much misinformation about emerging technologies, because info travels so fast unchecked, that there is tons of bullshit to sift through. I think smart contracts (removing multi-party input) and business antitrust issues can be alleviated in the future, but it will require correct implementation and understanding from both consumers and producers, which we are far from as of now. Topic for another time though.

Pacattack57@lemmy.world on 13 Feb 22:27 next collapse

Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.

“Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”

4am@lemm.ee on 13 Feb 22:35 collapse

That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome.

Dunning, meet Kruger

Womble@lemmy.world on 14 Feb 08:56 collapse

That snark doesn’t help anyone.

Imagine the AI were 100% perfect and gave the correct answer every time: people using it would have significantly reduced diversity of results, because they would always be using the same tool to get the same correct answer.

People using an AI getting a smaller diversity of results is neither good nor bad; it’s just the way things are, the same way people using the same pack of pens use a smaller variety of colours than those using whatever pens they have.
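
A toy sketch of the point, in Python (the answers and counts here are made up purely for illustration):

```python
import random

# Toy model: the same question answered by 100 people.
# With a shared tool, everyone relays the tool's single answer;
# without it, people produce their own varied answers.
POSSIBLE_ANSWERS = ["A", "B", "C", "D"]

def answers_with_shared_tool(n):
    tool_answer = "A"  # whatever the one shared tool outputs
    return [tool_answer for _ in range(n)]

def answers_without_tool(n):
    return [random.choice(POSSIBLE_ANSWERS) for _ in range(n)]

print(len(set(answers_with_shared_tool(100))))  # 1 distinct outcome
print(len(set(answers_without_tool(100))))      # almost always 4
```

Whether the tool’s one answer is right or wrong doesn’t change the collapse in diversity; that’s the whole point.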

4am@lemm.ee on 14 Feb 10:35 collapse

First off the AI isn’t correct 100% of the time, and it never will be.

Secondly, you as well are stating in so many more words that people stop thinking critically about its output. They accept it.

That is a lack of critical thinking on the part of the AI users, as well as yourself and the original poster.

Like, I don’t understand the argument you all are making here - am I going fucking crazy? “Bro it’s not that they don’t think critically it’s just that they accept whatever they’re given” which is the fucking definition of a lack of critical thinking.

Womble@lemmy.world on 15 Feb 11:42 collapse

Let me try with another example that can get round your blind AI hatred.

If people were using a calculator to calculate the value of an integral they would have significantly less diversity of results because they were all using the same tool. Less diversity of results has nothing to do with how good the tool is, it might be 100% right or 100% wrong but if everyone is using it then they will all get the same (or similar if it has a random element to it as LLMs do).

peoplebeproblems@midwest.social on 13 Feb 22:46 next collapse

You mean an AI that literally generates text by applying a mathematical function to input text doesn’t do reasoning for me? (/s)

I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”

zipzoopaboop@lemmynsfw.com on 13 Feb 22:59 next collapse

Critical thinking skills are what hold me back from relying on ai

Jeffool@lemmy.world on 13 Feb 23:50 next collapse

When it was new to me I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of input. “Give me a list of 3 X” led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.

I realized I was understanding it wrong: it was supposed to be understood not as a useful tool, but as something close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind that people say they use it when writing.

[deleted] on 14 Feb 00:21 collapse

.

FlyingSquid@lemmy.world on 14 Feb 08:12 collapse

Please show me the peer-reviewed scientific journal that requires a minimum number of words per article.

Seems like these journals don’t have a word count minimum: paperpile.com/blog/shortest-papers/

Womble@lemmy.world on 14 Feb 08:49 collapse

They in fact often have word and page limits, and most journal articles I’ve been a part of have had a period of cutting and trimming at the end in order to fit within those limits.

FlyingSquid@lemmy.world on 14 Feb 08:51 collapse

That makes sense considering a journal can only be so many pages long.

ctkatz@lemmy.ml on 14 Feb 00:44 next collapse

Never used it in any practical function. I tested it to see if it was realistic and I found it extremely wanting. As in, it sounded nothing like the prompts I gave it.

The absolutely galling and frightening part is that the tech companies think this is the next big innovation they should be pursuing and have given up on innovating anyplace else. It was obvious to me when I saw that they are all pushing AI shit on me in everything from keyboards to search results. I only use voice commands to do simple things, and it works just about half the time; AI is built on the back of that, which is why I really do not ever use voice commands for anything anymore.

FlyingSquid@lemmy.world on 14 Feb 08:54 collapse

I once asked ChatGPT who I was, and it hallucinated this weird thing about me being a motivational speaker for businesses. I have a very unusual name, and there is only one other person in the U.S. (now the only person in the U.S., since I just emigrated) with my name. And we don’t even have the same middle name. Neither of us is a motivational speaker or ever was.

Then I asked it again and it said it had no idea who I was. Which is kind of insulting to my namesake since he won an Emmy award. Sure, it was a technical Emmy, but that’s still insulting.

Edit: HAHAHAHA! I just asked it who I was again. It got my biography right… for when I was in my 20s and in college. It says I’m a college student. I’m 47. Also, I dropped out of college. I’m most amused that it called the woman I’ve been married to since the year 2000, when I was 23, my girlfriend. And yet it mentions a project I worked on in 2012.

ArchRecord@lemm.ee on 14 Feb 01:19 next collapse

The only beneficial use I’ve had for “AI” (LLMs) has just been rewriting text, whether that be to re-explain a topic based on a source, or, for instance, sort and shorten/condense a list.

Everything other than that has been completely incorrect, unreadably long, context-lacking slop.

SplashJackson@lemmy.ca on 14 Feb 01:19 next collapse

Weren’t these assholes just gung-ho about forcing their shitty “AI” chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.

msage@programming.dev on 14 Feb 11:28 collapse

Training those AIs was expensive. It swallowed very large sums of VC’s cash, and they will make it back.

Remember, their money is way more important than your life.

Blaster_M@lemmy.world on 14 Feb 01:53 next collapse

Garbage in, garbage out. Ingesting all that internet blather didn’t make the AI smarter by much, if anything.

ALoafOfBread@lemmy.ml on 14 Feb 01:53 next collapse

You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.

I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.

MojoMcJojo@lemmy.world on 14 Feb 02:13 next collapse

Like any tool, it’s only as good as the person wielding it.

FinalRemix@lemmy.world on 14 Feb 02:13 next collapse

I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.

Legit, being able to say “I want these questions. But… not these…” and get them back at a moment’s notice really does let me say “FUCK it. Pop quiz. Let’s go, class.” and be ready with brand-new questions on the board that I didn’t have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer’s block, with a “yeah, and—!” machine. If for no other reason than saying “uhh… no, not that, NAI…” and then correcting it my way.

DarthKaren@lemmy.world on 14 Feb 02:46 next collapse

I’ve spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.

AI/LLMs are great for bouncing ideas off of and for tweaking things. I gave it a prompt on what I was looking for (the guardian of dusk steps out and says: “The dawn brings the warmth of the sun, and awakens the world. So does your trial begin.” He is a druid and the party is a party of 5 level-1 players. Give me a stat block and XP amount for this situation.)

I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 levels as the player does specific things to gain leveling points for the item).

I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did, too.

SabinStargem@lemmings.world on 14 Feb 22:23 collapse

Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70b of DeepSeek, and it definitely doesn’t understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 being used to determine character class, with the classes falling into certain parts of the distribution. I did it this way since some classes are intended to be rarer than others (a sketch of what I mean is below).
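
For reference, the mechanic is just a banded d100 table; a rough Python sketch (the class names and bands here are made-up examples, not my actual ruleset):

```python
import random

# Banded 1d100 class table: each class owns a slice of 1-100,
# and rarer classes get narrower slices.
CLASS_BANDS = [
    (60, "Fighter"),   # 1-60: common
    (85, "Rogue"),     # 61-85: uncommon
    (97, "Cleric"),    # 86-97: rare
    (100, "Wizard"),   # 98-100: very rare
]

def roll_class():
    roll = random.randint(1, 100)  # the 1d100
    for upper_bound, name in CLASS_BANDS:
        if roll <= upper_bound:
            return name
    raise AssertionError("unreachable: bands cover 1-100")

print(roll_class())
```

The distilled model kept ignoring exactly these band boundaries.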

DarthKaren@lemmy.world on 14 Feb 23:27 collapse

I ran a campaign by myself with 2 of my characters and had DS act as DM. It seemed to handle it all perfectly fine. I tested it later by giving it scenarios: I asked it to roll the dice and show all its work, dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.

I then tested a few scenarios to check whether it would follow the rules as they are written in 5e. It got all of that correct as well. It did give me options for playing outside the rules (I asked it to roll damage for a barbarian casting fireball; it said barbs couldn’t, but gave me reasons that would allow exceptions).

What it ended up flubbing later was the proper initiative order. I had to remind it a couple of times that it had messed up. This only happened much later in the campaign, so I think I was approaching the limits of its memory window.

I tried the distilled model locally. It didn’t even realize I was asking it to DM. It just repeated the outline of the campaign.

SabinStargem@lemmings.world on 14 Feb 23:30 collapse

It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience, it is helpful. :)

DarthKaren@lemmy.world on 14 Feb 23:37 collapse

I’m anxious to see it as well. I would love to see something like this implemented in games, focused solely on whatever game it’s in. I imagine something like Skyrim but with an LLM on every character, or at least the main ones. I downloaded the mod that adds it to Skyrim, but I haven’t had the chance to play with it. It does require prompts for the NPC to let you know you’re talking to it; I’d love to see it happen naturally, even NPCs carrying out their own conversations with each other and not just with the PC.

I’ve also been watching the Vivaladirt people. We need a 4th-wall-breaking NPC in every game once we get an LLM like the above.

SabinStargem@lemmings.world on 15 Feb 00:54 collapse

Looking up Vivaladirt, I am guessing it is a group of Let’s Players who do a Mystery Science Theater 3000 take on their gameplay? If so, that would be neat.

DarthKaren@lemmy.world on 15 Feb 01:19 collapse

These guys. Greg the garlic farmer is their 4th wall breaking guy.

Bigfoot@lemmy.world on 15 Feb 00:09 collapse

I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.

But I agree that relying on it to think for you is not a good thing.

ColeSloth@discuss.tchncs.de on 14 Feb 01:58 next collapse

I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.

WrenFeathers@lemmy.world on 14 Feb 02:14 next collapse

Yup.

FlyingSquid@lemmy.world on 14 Feb 08:10 next collapse

AI makes it worse though. People will read a website they find on Google that someone wrote and say, “well that’s just what some guy thinks.” But when an AI says it, those same people think it’s authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it’s going to get far, far worse.

Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.

ColeSloth@discuss.tchncs.de on 14 Feb 10:19 collapse

Jokes on you. Volume is always off on my phone, so I read the ai.

Also, I don’t actually ever use the ai.

FlyingSquid@lemmy.world on 14 Feb 10:21 collapse

I am not worried about people here on Lemmy. I am worried about people who don’t know much about computers at all. i.e. the majority of the public. They think computers are magic. This will make it far worse.

Petter1@lemm.ee on 15 Feb 00:29 collapse

I don’t think those people will still be the majority in 20 years…

FlyingSquid@lemmy.world on 15 Feb 06:57 collapse

20 years? Do you know how much damage can be done in 20 years?

Petter1@lemm.ee on 15 Feb 07:09 collapse

20 years is not soo long…

FlyingSquid@lemmy.world on 15 Feb 07:21 collapse

How old are you that 20 years is not so long?

And also, why does it matter that it’s not so long? Have you even bothered noticing all the damage Trump has done in under a month?

His administration just fired a bunch of people responsible for keeping U.S. nuclear weapons secure without knowing what their jobs were.

Less than one month.

VitoRobles@lemmy.today on 14 Feb 14:29 next collapse

I know a guy who ONLY quotes and references YouTube videos.

Every topic, he answers with “Oh I saw this YouTube video…”

Phoenicianpirate@lemm.ee on 14 Feb 14:46 next collapse

To be fair, YouTube is a huge source of information now for a massive amount of people.

Spaniard@lemmy.world on 14 Feb 16:07 collapse

Should he say: “I saw this documentary” or “I read this article”?

interdimensionalmeme@lemmy.ml on 14 Feb 14:36 collapse

Everyone I’ve ever known to use a thesaurus has been eventually found out to be a mouth breathing moron.

ColeSloth@discuss.tchncs.de on 14 Feb 15:53 collapse

Umm…ok. Thanks for that relevant to the conversation bit of information.

pineapplelover@lemm.ee on 14 Feb 05:28 next collapse

Idk man. I just used it the other day to recall some regex syntax, and it was a bit helpful. However, if you ask it to generate the regex for you, it won’t do that successfully. It can, however, break down a regex and explain it to you.

Ofc you all can say “just read the damn manual”, and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.

Tangent5280@lemmy.world on 14 Feb 05:38 next collapse

Hey, just letting you know: getting the answers you want after getting a whole lot of answers you don’t want is pretty much how everyone learns.

Nalivai@lemmy.world on 14 Feb 08:10 collapse

People generally don’t learn from an unreliable teacher.

Womble@lemmy.world on 14 Feb 08:45 next collapse

Literally everyone learns from unreliable teachers; the question is just how reliable.

Nalivai@lemmy.world on 19 Feb 06:01 collapse

You are being unnecessarily pedantic. “A person can be wrong, therefore I will get my information from a random word generator” is exactly the attitude we need to avoid.
A teacher can be mistaken, yes. But when they start lying on purpose, they stop being a teacher. When they don’t know the difference between the truth and a lie, they never were one.

JackbyDev@programming.dev on 14 Feb 19:19 collapse

I’d rather learn from slightly unreliable teachers than teachers who belittle me for asking questions.

Nalivai@lemmy.world on 19 Feb 05:58 collapse

No, obviously not. You don’t actually learn if you get misinformation; it’s actually the opposite of learning.
But thankfully you don’t have to choose between those two options.

vrighter@discuss.tchncs.de on 14 Feb 06:21 next collapse

yes, exactly. You lose your critical thinking skills

pineapplelover@lemm.ee on 19 Feb 07:14 collapse

As I was learning regex, I wondered why * doesn’t act like a wildcard and why I had to use .* instead. That didn’t make me lose my critical thinking skills; that was wondering what was wrong with the way I was using the character.
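
For anyone else who hit the same wall: * is a quantifier (“zero or more of the preceding token”), not a shell-style wildcard, and . is the “any character” token, which is why you need .* instead. A quick Python illustration (the strings are made up):

```python
import re

text = "config.yaml"

# "con*" in a shell glob would match "config.yaml", but in regex
# "*" repeats the previous token: "co" plus zero or more "n"s.
print(re.findall(r"con*", text))   # ['con']

# ".*" is the regex spelling of the shell wildcard: "." matches
# any character, and "*" repeats it zero or more times.
print(re.findall(r"con.*", text))  # ['config.yaml']
```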

Minizarbi@jlai.lu on 14 Feb 09:09 next collapse

Xatolos@reddthat.com on 14 Feb 10:05 next collapse

researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.

It’s one thing to try to do and then ask for help (as you did), it’s another to just ask it to “do x” without thought or effort which is what the study is about.

Petter1@lemm.ee on 15 Feb 00:22 collapse

So the study just checks how many people have not yet learned how to properly use GenAI.

I think there exists a curve from not trusting, to overtrusting, then back to not blindly trusting outputs (because you suffered consequences from blindly trusting).

And there will always be people blindly trusting bullshit; we’ve had that for longer than GenAI. We have enough populists proving that you can tell many people just about anything and they’ll believe it.

foenkyfjutschah@programming.dev on 14 Feb 20:03 collapse

what’s regex got to do with critical thinking?

mervinp14@lemmy.world on 14 Feb 07:58 next collapse

Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance> 😀

FlyingSquid@lemmy.world on 14 Feb 08:07 next collapse

Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

And yet I doubt Copilot will be going anywhere.

interdimensionalmeme@lemmy.ml on 14 Feb 14:33 collapse

Yes, it’s an addiction; we’ve got to stop all these poor people being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

Just look what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

And those poor, uncritical and irresponsible farm hands and water carriers just gobble that up! We can’t have that!

Example

Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Mote—A Cautionary Tale

Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “mote” guarding an impenetrable ideological fortress.

Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

The Illusion of Open-Mindedness: The Mote and the Fortress

In medieval castles, a mote was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

His approach can be broken down into two key areas:

The Mote (The Appearance of Openness)

    Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).

    Acknowledges complexity and the difficulty of absolute truth.

    Concedes minor details, appearing intellectually humble.

    Uses Socratic questioning to entertain alternative viewpoints.

The Fortress (The Core That Remains Unmoved)

    Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.

    Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.

    Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).

    Rarely revises fundamental positions, even when new evidence is presented.

While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

Examples of Strategic Open-Mindedness

  1. Debating Sam Harris on Truth and Religion

In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

  2. The Slavoj Žižek Debate on Marxism

Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a clas

ameancow@lemmy.world on 14 Feb 15:57 next collapse

This was one of the posts of all time.

JackbyDev@programming.dev on 14 Feb 19:16 collapse

New copy pasta just dropped

bane_killgrind@slrpnk.net on 14 Feb 18:03 collapse

But Peterson is a fuckhead… So it’s accurate in this case. Afaik he does do the things it says.

interdimensionalmeme@lemmy.ml on 14 Feb 19:33 collapse

That’s the addiction talking. Use common sense! AI bad

bane_killgrind@slrpnk.net on 15 Feb 00:55 collapse

Oh you are actually trying to say that AI isn’t a stain on existence. Weird.

interdimensionalmeme@lemmy.ml on 15 Feb 01:56 collapse

I’m saying it is what it is.

Phoenicianpirate@lemm.ee on 14 Feb 14:40 next collapse

The one thing that I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for the sources is critical for double-checking.

Spaniard@lemmy.world on 14 Feb 15:38 next collapse

Microsoft’s LLM, whatever its name is, gives sources, or at least it did for me yesterday.

ameancow@lemmy.world on 14 Feb 15:56 next collapse

I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can’t even do basic math or summarize a PDF and the data contained within.

I was impressed with how good these things are at interpreting human fiction, jokes, writing and feelings. Which is really weird: in the context of our perceptions of what AI would be like, it’s the exact opposite. The first AIs aren’t emotionless robots; they’re whiny, inaccurate, delusional and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but it’s certainly not worth upending society over; it’s still just a huge novelty.

Phoenicianpirate@lemm.ee on 14 Feb 16:21 collapse

It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI, but he doesn’t understand the implications of what he wants to do. He sees Dave as a detriment to the mission, which can be better accomplished without him… not stopping to think about the implications of what he is doing.

ameancow@lemmy.world on 14 Feb 22:23 collapse

I mean, leave it to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communication satellite and wrote franchises that are still being lined up for major Hollywood releases half a century later.

JackbyDev@programming.dev on 14 Feb 19:14 collapse

I’ve found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn’t give me any working suggestions or correct information. Stuff I’ve asked about Docker was much better.

Vorticity@lemmy.world on 14 Feb 20:09 next collapse

The ability of AI to write things with lots of boilerplate, like Kubernetes manifests, is astounding. It gets me 90-95% of the way there and saves me about 50% of my development time. I still have to understand the result before deployment, because I’m not going to blindly deploy something that AI wrote, and it rarely works without modifications, but it definitely cuts my development time significantly.

Petter1@lemm.ee on 15 Feb 00:15 collapse

Well, it is obvious why, isn’t it!?

gramie@lemmy.ca on 14 Feb 16:18 next collapse

I was talking to someone who does software development, and he described his experiments with AI for coding.

He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

BigBenis@lemmy.world on 14 Feb 16:39 next collapse

I’m a senior software dev that uses AI to help me with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren’t included in those instructions. It used to be that I had to go hunt down any reference to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve the issue, but now I can ask AI and it generally gives me the answer I’m looking for.

If I’d had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental, but now I just need to know how to get X done, specifically with Y, this one time and probably never again.

Vorticity@lemmy.world on 14 Feb 20:06 collapse

100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don’t care to learn the material because I’m never going to touch it again. In my case, for Python or Golang, I’ll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works, because I’m never going to remember how to properly use a negative look-behind in regex when I need it again in five years.
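
(For the curious, the kind of thing I mean; a throwaway Python sketch with made-up strings:)

```python
import re

# Negative look-behind: "(?<!...)" asserts that the text just
# before the match position does NOT equal the given fixed-width
# pattern, without consuming any characters.
# Here: match "2024" only when it isn't preceded by "$".
pattern = re.compile(r"(?<!\$)\b2024\b")

print(pattern.findall("year 2024, price $2024"))  # ['2024']
```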

I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn’t, then going to read additional information on the topic.

JackbyDev@programming.dev on 14 Feb 19:12 next collapse

I feel you, but I’ve asked it why questions too.

foenkyfjutschah@programming.dev on 14 Feb 20:01 collapse

how does he know that the solution is elegant and appropriate?

gramie@lemmy.ca on 14 Feb 22:48 collapse

Because he has the knowledge and experience to completely understand the final product. It used an approach that he hadn’t thought of, that is better suited to the problem.

Petter1@lemm.ee on 15 Feb 00:13 collapse

Lol, how can he not learn from that??

technocrit@lemmy.dbzer0.com on 14 Feb 16:36 next collapse

Misleading headline: No such thing as “AI”. No such thing as people “relying” on it. No objective definition of “critical thinking skills”. Just a bunch of meaningless buzzwords.

MuadDoc@lemmy.world on 14 Feb 17:09 next collapse

Why do you think AI doesn’t exist? Or that there’s “no such thing as people ‘relying’ on it”? “AI” is commonly used to refer to LLMs right now. Within the context of a gizmodo article summarizing a study on the subject, “AI” does exist. A lack of precision doesn’t mean it’s not descriptive of a real thing.

Also, I don’t personally know anyone who “relies” on generative AI, but I don’t see why it couldn’t happen.

Vorticity@lemmy.world on 14 Feb 20:14 collapse

Do you want the entire article in the headline or something? Go read the article and the journal article that it cites. They expand upon all of those terms.

Also, I’m genuinely curious: what do you mean when you say that there is “no such thing as AI”?

RangerJosey@lemmy.ml on 14 Feb 19:44 next collapse

Well no shit Sherlock.

dill@lemmy.world on 14 Feb 20:20 next collapse

Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.

It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.

Korhaka@sopuli.xyz on 14 Feb 20:21 collapse

I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.

dill@lemmy.world on 14 Feb 20:26 collapse

Exactly. It’s great for that, as long as you know what you want it to say and can verify it.

The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.

It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.

Jakeroxs@sh.itjust.works on 14 Feb 20:34 collapse

As you mentioned tho, not really specific to LLMs at all

dill@lemmy.world on 14 Feb 20:40 collapse

Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.

If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.

Jakeroxs@sh.itjust.works on 14 Feb 21:46 collapse

It’s such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned over time how to properly search for information.

Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.

Fundamentally, I do not believe in arbitrarily deciding who can and can not have access to information though.

dill@lemmy.world on 16 Feb 03:23 collapse

I completely agree - I personally love that there’s so many Open Source AI tools out there.

The scary part (similar to what we experienced with DeepSeek’s web interface) is that it’s extremely easy for these corporations to manipulate or censor information.

I should have clarified my concern - I believe we need to revisit critical thinking as a society (whole other topic) and especially so when it comes to tools like this.

Ensuring everyone using it is aware of what it does, its flaws, how to process its output, and its potential for abuse. Similar to internet safety training for kids in the mid-2000s.

badbytes@lemmy.world on 14 Feb 20:44 next collapse

Linux study finds that relying on MS kills critical thinking skills. 😂

intensely_human@lemm.ee on 14 Feb 21:33 next collapse

Microsoft said it so I guess it must be true then 🤷‍♂️

sumguyonline@lemmy.world on 14 Feb 21:53 next collapse

Just try using AI for a complicated mechanical repair, for instance draining the radiator fluid in your specific model of car. Chances are Google’s AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you’re likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it’s up to you, the operator, to make sure you got the edges squared (so to speak).

Jarix@lemmy.world on 14 Feb 22:28 next collapse

Well, there are people who followed Apple Maps into lakes and other things, so the precedent is already there (I have no doubt it also existed before that).

You would need to heavily regulate it, and that’s not happening anytime soon, if ever.

Petter1@lemm.ee on 15 Feb 00:09 collapse

I think this is only an issue in the beginning. People will sooner or later realise that they can’t blindly trust an LLM’s output and will learn to write prompts that verify it (or, better said, prove that not enough relevant data was analysed and that the output is hallucination).

kratoz29@lemm.ee on 14 Feb 22:11 next collapse

Is that it?

One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened too in my teenage days, when all the info was scattered across many blogs and wikis…

As usual, it is not the AI tool that could fuck up our critical thinking, but ourselves.

LovableSidekick@lemmy.world on 14 Feb 22:54 next collapse

I love how they chose the term “hallucinate” instead of saying it fails or screws up.

Petter1@lemm.ee on 15 Feb 00:09 collapse

Because the term fits way better…

pulsewidth@lemmy.world on 15 Feb 07:38 collapse

A hallucination is a false perception of sensory experiences (sights, sounds, etc).

LLMs don’t have any senses; they have input, algorithms and output. They also have desired output and undesired output.

So, no, “hallucination” fits far worse than failure or error or bad output. However, assigning the term “hallucination” does serve the billionaires in marketing their LLMs as actually sentient.

Tehdastehdas@lemmy.world on 15 Feb 08:42 collapse

You might prefer confabulation, or bullshitting.

Petter1@lemm.ee on 15 Feb 00:12 collapse

I see it exactly the same way; I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

Society will handle it sooner or later

Mouette@jlai.lu on 14 Feb 22:13 next collapse

The definition of critical thinking is not relying on only one source. Up next: rain will make you wet. Stay tuned.

Hiro8811@lemmy.world on 14 Feb 22:48 next collapse

Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.

bromosapiens@lemm.ee on 14 Feb 23:35 next collapse

Gen Zs are TERRIBLE at searching things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

shortrounddev@lemmy.world on 15 Feb 04:31 collapse

To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I’ve started using Kagi and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.

LovableSidekick@lemmy.world on 14 Feb 22:50 next collapse

Their reasoning seems valid - common sense says the less you do something, the more your skill atrophies - but this study doesn’t seem to have measured people’s critical thinking skills; it measured how the subjects felt about their skills. People who feel like they’re good at a job might not feel as adequate when their job changes to evaluating someone else’s work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant: as your confidence in their work grows, you scrutinize it less. But that doesn’t mean you yourself become less skillful. The title saying use of AI “kills” critical thinking skills isn’t justified, and is very clickbaity IMO.

Dil@is.hardlywork.ing on 15 Feb 01:04 next collapse

I felt it happen in real time, every time. I still use it for questions, but I know I’m about to not be able to think critically for the rest of the day; it’s a last resort if I can’t find any info online or any response from Discords/forums.

It’s still useful for coding IMO. I still have to think critically; it just fills in some of the tedious stuff.
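
(For illustration, and not from the comment itself: the kind of tedious fill-in meant here is scaffolding like argument parsing and CSV loading, boilerplate you’ve typed a hundred times. A minimal hypothetical sketch:)

```python
# Hypothetical example of AI-fillable boilerplate: CLI args + CSV loading.
import argparse
import csv

def load_rows(path: str) -> list[dict]:
    # Read a CSV into a list of dicts, one per row.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Summarize a CSV file.")
    parser.add_argument("path", help="CSV file to read")
    args = parser.parse_args()
    rows = load_rows(args.path)
    print(f"{len(rows)} rows, columns: {', '.join(rows[0]) if rows else 'none'}")
```

The thinking part (what the data means, whether the output is right) stays with you; the AI just saves the typing.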

Dil@is.hardlywork.ing on 15 Feb 01:06 collapse

It was hella useful for research in college, and it made me think more because it kept giving me useful sources and telling me the context and where to find it. I still did the work, and it actually took longer because I wouldn’t commit to topics and kept adding more information. Just don’t have it spit out your essay (it sucks at that); have it spit out topics and info on those topics with sources, then use that to build your work.

Dil@is.hardlywork.ing on 15 Feb 01:07 collapse

Google used to be good, but this is far superior. I used Bing’s ChatGPT when I was in school; idk what’s good now (it only gave a paragraph max and included sources for each sentence).

RisingSwell@lemmy.dbzer0.com on 15 Feb 02:01 collapse

How did you manage to actually use Bing GPT? I’ve tried like 20 times and it’s wrong the majority of the time.

Dil@is.hardlywork.ing on 15 Feb 04:03 collapse

It worked well for school stuff. I always added “prioritize factual sources with .edu” or something like that. Specify that it is for a research paper and tell it to look for stuff the way you would.

RisingSwell@lemmy.dbzer0.com on 15 Feb 04:46 collapse

The only time I told it to be factual was when looking at 4K laptops: it gave me 5 laptops, 4 marked as 4K, and 0 of the 5 were actually 4K.

That was last year, though, so maybe it’s improved by now.

Dil@is.hardlywork.ing on 15 Feb 05:23 collapse

I wouldn’t use it on current info like that, only on scraped data. Using it for history classes, it’ll be useful; using it for sales right now, definitely not.

RisingSwell@lemmy.dbzer0.com on 15 Feb 07:02 collapse

I’ve also tried using it for old games, but at the time it said Wailord was the heaviest Pokémon (the blimp whale in fact does not weigh more than the skyscraper).

Dil@is.hardlywork.ing on 15 Feb 07:15 next collapse

Again, not a use case I’d use it for. It’s basically a better search engine that summarizes and skips through the ads and BS on the front page.

Dil@is.hardlywork.ing on 15 Feb 07:16 next collapse

If you needed to find a source for the heaviest Pokémon, say that and you have a better chance; otherwise you get random comments it’s scraped.

RisingSwell@lemmy.dbzer0.com on 15 Feb 08:00 collapse

I just asked it what the heaviest Pokémon was, and it said Wailord. I don’t care about what it uses as a source, as long as it’s right.

Dil@is.hardlywork.ing on 15 Feb 12:54 collapse

Prompt: “What is the heaviest pokemon according to the pokedex?”

Response: “The heaviest Pokémon according to the Pokédex is Celesteela, which weighs 2204.4 lbs (999.9 kg). It’s an Ultra Beast introduced in Pokémon Sun & Moon and resembles a massive rocket. Interestingly, the Pokédex caps weights at 999.9 kg, so Celesteela might not even be its true maximum weight!”
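
(If you wanted to script that comparison, here’s a minimal sketch - assuming the openai Python client, with an illustrative model name; nothing in it comes from the comments themselves:)

```python
# Minimal sketch of the point above: a vague prompt invites scraped-opinion
# answers, while "according to the pokedex" pins the model to a source.
# Assumes the openai Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "What is the heaviest pokemon?",                           # conversational
    "What is the heaviest pokemon according to the pokedex?",  # grounded
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", reply.choices[0].message.content)
```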

RisingSwell@lemmy.dbzer0.com on 16 Feb 00:03 collapse

[screenshot of the Bing chat response: https://lemmy.dbzer0.com/pictrs/image/501b81e9-40f8-405c-9c38-9f645ba7f713.webp]

It’s still wrong and it even has the information in its own chat to know it is wrong. It’s literally contradicting itself.

Dil@is.hardlywork.ing on 16 Feb 06:06 next collapse

AI does that; that doesn’t make it less useful for factual information lol. You yourself literally said that it’s a question with no answer, one that’s up for debate.

RisingSwell@lemmy.dbzer0.com on 16 Feb 07:24 collapse

It has an answer that isn’t up for debate. It’s Celesteela and Cosmoem. Both of them. They weigh the same.

Saying one weighs more is just wrong.

Dil@is.hardlywork.ing on 16 Feb 06:08 collapse

Use a tool wrong and it’s useless; use it correctly and save some time, or complain that it isn’t perfect and can’t do everything for you. Idc either way: I used it, it worked for me, I got good grades, graduated with my degree, and I still use AI when I need it from time to time. Never been an issue. If you expect it to be your guide to fiction, good luck.

Dil@is.hardlywork.ing on 15 Feb 07:16 next collapse

Honestly, I’ve had fairly good luck with AI; I’m not sure how y’all haven’t. It’s really not that bad. I typically gotta make it bad on purpose, for fun.

RisingSwell@lemmy.dbzer0.com on 15 Feb 08:05 collapse

I don’t put effort into a 30-line question with a ton of specific stuff. I just ask it a question.

What is the heaviest Pokemon?

That’s it. And then it goes and finds a Pokemon that isn’t the heaviest now, and at no point in the series was it ever the heaviest.

If I need multiple lines and clarification and stuff, that makes it worse than just finding it myself.

Btw, the heaviest Pokémon is a many-way tie, as the weights don’t go over 999.9 kg.
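
(If you actually wanted to settle it, you could skip the chatbot and check the data directly. A rough sketch, assuming PokéAPI’s /pokemon endpoint, which reports weight in hectograms:)

```python
# Rough sanity check against PokéAPI (pokeapi.co) instead of a chatbot.
# Assumption: the /pokemon endpoint lists every entry, and each detail
# record carries "weight" in hectograms. One request per Pokémon, so slow.
import requests

BASE = "https://pokeapi.co/api/v2/pokemon"

listing = requests.get(BASE, params={"limit": 2000}).json()["results"]

heaviest, max_hg = [], 0
for entry in listing:
    weight_hg = requests.get(entry["url"]).json()["weight"]
    if weight_hg > max_hg:
        heaviest, max_hg = [entry["name"]], weight_hg
    elif weight_hg == max_hg:
        heaviest.append(entry["name"])

# Expect a tie at the 999.9 kg cap, per the thread.
print(f"{', '.join(heaviest)} at {max_hg / 10} kg")
```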

Dil@is.hardlywork.ing on 15 Feb 12:52 next collapse

“What is the heaviest pokemon according to the pokedex?” Did you try that? It’s not time-consuming. Celesteela, according to ChatGPT; idk if that’s right, idk Pokémon.

RisingSwell@lemmy.dbzer0.com on 15 Feb 23:48 collapse

Because that’s double the sentence to type for a question. It’s on my search thing, which is meant to be for facts. I type the minimum sentence, and the normal search works perfectly fine, as it always has.

Celesteela is tied for first with Cosmoem, apparently. Searching for a list of the heaviest Pokémon (I typed “heaviest Pokemon list”) got Bing GPT to respond with a list of Pokémon that are not the heaviest. I was looking for the actual list on a site, which the top link was, but the AI ignored the top results of the search and spat out exclusively wrong answers.

Dil@is.hardlywork.ing on 15 Feb 12:52 next collapse

Asking it the way you asked opens the way for opinions from internet comments everywhere, and it’s not necessarily wrong, since it’d be subjective.

RisingSwell@lemmy.dbzer0.com on 15 Feb 23:53 collapse

Asking what the heaviest anything is isn’t subjective at all? Like, not even a tiny bit.

Dil@is.hardlywork.ing on 16 Feb 06:08 next collapse

When it comes to Pokémon it is; they can weigh in concepts.

Dil@is.hardlywork.ing on 16 Feb 06:10 collapse

AI’s cooked; it can’t even figure out what the heaviest Pokémon is. Is there even a reliable factual source on that on the internet? It’s not gonna tell you no or accept that there is no answer; it can’t think, it’ll give you an answer no matter what. That’s how AI hallucination works. Use the tool correctly for the correct things and it works fine; use it for pointless stuff and it’ll be pointless.

RisingSwell@lemmy.dbzer0.com on 16 Feb 07:29 collapse

The top link of the search was a Bulbapedia list of every Pokémon ordered by weight. It’s not like it couldn’t have gotten it. It’s a static list; the old answers won’t change.

Dil@is.hardlywork.ing on 15 Feb 12:53 next collapse

Like, the way you asked it is conversational, so it responded like any random person would, but if you ask it to base it on something real, it’ll check against that.

Dil@is.hardlywork.ing on 15 Feb 12:54 collapse

Response, again: “The heaviest Pokémon according to the Pokédex is Celesteela, which weighs 2204.4 lbs (999.9 kg). It’s an Ultra Beast introduced in Pokémon Sun & Moon and resembles a massive rocket. Interestingly, the Pokédex caps weights at 999.9 kg, so Celesteela might not even be its true maximum weight!”

Dil@is.hardlywork.ing on 15 Feb 07:18 collapse

When all the homework-answer sites put up paywalls, I’d get the steps to do the problem off ChatGPT. I’d try to find it on Google first, see the answer behind a paywall, then try ChatGPT.

arotrios@lemmy.world on 15 Feb 05:12 next collapse

Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

Joeyfingis@lemmy.world on 15 Feb 05:33 next collapse

Let me ask chatgpt what I think about this

underwire212@lemm.ee on 15 Feb 05:37 next collapse

It’s going to remove all individuality and turn us into a homogeneous, jelly-like society. We’ll all think exactly the same, since AI “smooths out” the edges of extreme thinking.

endeavor@sopuli.xyz on 15 Feb 07:11 next collapse

Copilot told me you’re wrong and that I can’t play with you anymore.

Melvin_Ferd@lemmy.world on 15 Feb 12:20 collapse

Vs textbooks? What’s the difference?

Squizzy@lemmy.world on 15 Feb 13:07 collapse

The variety of available textbooks, reviewed for use by educators, vs autocracy-loving tech bros pushing black-box solutions to the masses.

Just off the top of my head.

Melvin_Ferd@lemmy.world on 15 Feb 21:35 collapse

Tech Bros aren’t really reviewing it individually.

Squizzy@lemmy.world on 16 Feb 22:40 collapse

I know

Guidy@lemmy.world on 15 Feb 07:22 next collapse

I use it to write code for me sometimes, saving me from remembering the different syntax and syntactic sugar when I hop between languages. And I use it to answer questions about things I wonder about - it always provides references. So far it’s been quite useful. And for all that people bitch and piss and cry giant crocodile tears while gnashing their teeth - I quite enjoy Apple AI. Its summaries have been amazing and even scarily accurate. No, it doesn’t mean Siri’s good now, but the rest of it is pretty amazing.
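
(As a purely hypothetical illustration of the syntax-juggling meant here, none of it from the comment itself: the same small task reads differently in every language, and the comments mark where the sugar diverges:)

```python
# Hypothetical example: the kind of per-language sugar that's easy to forget.
scores = {"ada": 91, "bob": 74, "cleo": 88}

# Dict comprehension: one line in Python; roughly a Collectors.toMap stream
# in Java, or an Object.fromEntries/filter chain in JavaScript.
passing = {name: s for name, s in scores.items() if s >= 80}

# f-strings: Python's interpolation; Kotlin writes "$name", C# writes $"{name}".
for name, s in sorted(passing.items()):
    print(f"{name} passed with {s}")
```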

yournamehere@lemm.ee on 15 Feb 07:28 next collapse

so no real chinese LLMs…who would have thought…not the chinese apparently…but yet they think their “culture” of oppression and stone-like thinking will get them anywhere. the honey badger Xi calls himself an anti-intellectual. this is how i perceive most students from china i get to know. i pity the chinese kids for the regime they live in.

j4yt33@feddit.org on 15 Feb 18:34 next collapse

I’ve only used it to write cover letters for me. I tried to also use it to write some code, but it would just cycle through the same 5 wrong solutions it could think of, telling me “I’ve fixed the problem now”.

protonslive@lemm.ee on 18 Feb 15:21 collapse

I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!