AI is rotting your brain and making you stupid (newatlas.com)
from clot27@lemm.ee to technology@lemmy.world on 27 May 02:15
https://lemm.ee/post/65097722

#technology

SnokenKeekaGuard@lemmy.dbzer0.com on 27 May 02:27 next collapse

Literally read this 20 mins ago. Wild

Catoblepas@lemmy.blahaj.zone on 27 May 02:39 next collapse

My stupid is 100% organic. Can’t have the AI make you dumber if you don’t use it.

Lost_My_Mind@lemmy.world on 27 May 04:24 next collapse

Me fail english??? Thats unpossible!!!

neukenindekeuken@sh.itjust.works on 27 May 12:14 collapse

Flammable and Inflammable mean the same thing! What a country!

aceshigh@lemmy.world on 27 May 05:03 collapse

Ditto. You can’t lose what you never had. AI makes me sound smart.

sugar_in_your_tea@sh.itjust.works on 28 May 15:51 collapse

Why not go get it then? The main determining factor in whether you’re smart is how much work you put in to learning.

aceshigh@lemmy.world on 28 May 15:57 collapse

If only being a hard worker was the answer. For me it’s about overcoming childhood and academic trauma and understanding my neurodivergency (which has been very poorly researched) and then finding workarounds. I’ve been working on this for many years and I’m nowhere near competency.

sugar_in_your_tea@sh.itjust.works on 28 May 21:02 collapse

I’m sorry, I hope you find some methods that work for you.

Libra@lemmy.ml on 27 May 02:51 next collapse

Oh lawd, another ‘new technology xyz is making us dumb!’ Yeah we’ve only been saying that since the invention of writing, I’m sure it’s definitely true this time.

R00bot@lemmy.blahaj.zone on 27 May 03:08 next collapse

You don’t think it’s possible that offloading thought to AI could make you worse at thinking? That has been the case with technology in the past, such as calculators making us worse at math (in our heads or on paper), but this time the thing you’re losing practice in is… thought. This technology is different because it aims to automate thought itself.

Libra@lemmy.ml on 27 May 04:22 collapse

Yeah, the people who were used to the oral tradition said the same thing about writing stuff down, ‘If you don’t remember all of this stuff yourself you’ll be bad at remembering!’, etc. But this is what humans do, what humans are: we evolved to make tools, we use the tools to simplify the things in our life so we can spend more time working on (and thinking about - or do you sincerely think people will just stop thinking altogether?) the shit we care about. Offloading mental labor likewise lets us focus our mental capacities on deeper, more important, more profound stuff. This is how human society, which requires specialization and division of labor at every level to function, works.

I’m old enough to remember when people started saying the same thing about the internet. Well I’ve been on the internet from pretty much the first moment it was even slightly publicly available (around 1992) and have been what is now called ‘terminally online’ ever since. If the internet is making us dumb I am the best possible candidate you could have to test that theory, but you know what I do when I’m not remembering phone numbers and handwriting everything and looking shit up in paper encyclopedias at the library? I’m reading and thinking about science, philosophy, religion, etc. That capacity didn’t go away, it just got turned to another purpose.

PunnyName@lemmy.world on 27 May 04:36 next collapse

In this day and age, no, we aren’t offloading for deeper shit. We aren’t getting that extra time to chill and vibe like 50s sci-fi wrote about.

We’re doing it because there is now a greater demand for our time and attention. From work mostly, but also family and friends (if we’re lucky enough to have those), to various forms of entertainment (which we usually use as a distraction from IRL shit like work).

Libra@lemmy.ml on 28 May 02:33 collapse

This seems like a capitalism problem, not a technology problem. That endless drive to greater productivity so that others can extract the bulk of the value thereof for their own benefit instead of the benefit of everyone is a big part of what’s eating up the purported leisure-time. But also that’s a choice you can make: I choose to spend my spare mental capacity learning about how the world works and engaging with ideas about how it ought to work. If people choose to spend that extra capacity doom-scrolling social media and keeping up with the virtual Joneses or whatever then that’s on them, but I’m not here to judge, I do that sometimes too. Life takes it out of you, sometimes you just need some low-effort destressing. But the point stands: offloading labor (mental or otherwise) to technology and then turning that time/energy/etc to stuff that’s more important is just how humans work.

R00bot@lemmy.blahaj.zone on 27 May 05:21 collapse

The people who were used to the oral tradition were right. Memorising things is good for your memory. No, I don’t think people will stop thinking altogether (please don’t be reductive like this lmao), just as people didn’t stop remembering things. But people did get worse at remembering things. Just as people might get worse at applying critical thinking if they continually offload those processes to AI. We know that using tools makes us worse at whatever the tool automates, because without practice you become worse at things. This just hasn’t really been a problem before, as the tools generally make those things obsolete.

Libra@lemmy.ml on 28 May 00:33 collapse

The people who were used to the oral tradition were right. Memorising things is good for your memory.

Except people didn’t stop memorizing things. I went to school in the 1970s (unarguably a long-ass time after we stopped using the oral tradition as the primary method to transmit culture) and I was memorizing shit left and right. I still remember those multiplication tables, ‘in 1492 Columbus sailed the ocean blue’, etc, 40-odd years later.

No, I don’t think people will stop thinking altogether (please don’t be reductive like this lmao)

Sorry, I thought it was pretty clear that I was expressing skepticism at the idea that anyone actually thinks this.

But people did get worse at remembering things.

If you have evidence that suggests that people got worse at remembering things between, say, ancient Greece and the Industrial Revolution I’d love to see it.

We know that using tools makes us worse at whatever the tool automates, because without practice you become worse at things.

Likewise if you have evidence that people stopped thinking with the invention of books, the calculator, computers, the internet, etc, don’t be shy about it.

because without practice you become worse at things.

You assume that offloading some mental processes to AI means we will stop practicing them. I argue that we’ll just use the capabilities we have for other things. I use ChatGPT to help me worldbuild, structure my writing projects, come up with thematically-consistent names, etc, for example, but it’s not writing for me and I still come up with names and such all the time.

R00bot@lemmy.blahaj.zone on 28 May 21:43 collapse

Again, you’re being reductive. My argument is not that we will stop practising critical thinking altogether, but that we will not need to practise it as often. Less practice always makes you worse at something. I do not need evidence for that as it is obvious.

I don’t see a point to continuing this conversation if you keep reducing my argument to “nobody will think anymore”.

I am glad you use AI for reasons that don’t make you stupid, but I have seen how today’s students are using it instead of using their brains. It’s not good. We teach critical thinking in schools for a reason, because it’s something that does not always come naturally, and these students are getting AI to do the work for them instead of learning how to think.

Libra@lemmy.ml on 29 May 00:30 collapse

Replace ‘stop remembering things’ with ‘remember fewer things’ at your own leisure if it makes you happy, I’m exaggerating slightly to make a point.

My argument is not that we will stop practising critical thinking altogether, but that we will not need to practise it as often.

And mine is that as far as I know we have no evidence (or at least nothing more than anecdotal evidence at best) for that because society has only gotten more complex, not less, and requires more thought, memory, etc to navigate it. Now instead of remembering which cow was sick last week and which field I’m going to plant tomorrow I have to remember shit like how to navigate a city that’s larger than the range in which most people traveled their entire lives, I have to figure out what this weird error my PC just threw means, I have to calculate the risk-vs-reward of trying to buy a house now or renting for a year to save up for a better down payment and improve my credit, etc. These are just examples, pick your own if you don’t like them.

Less practice always makes you worse at something. I do not need evidence for that as it is obvious.

Now who’s being reductive? I’m not asking for evidence that less practice makes you worse at something, I’m asking for evidence that labor-saving devices result in people doing less labor (mental or otherwise), because I think that’s a lot less obvious.

I have seen how today’s students are using it instead of using their brains

This is a bad example because learning is a different matter. People using it instead of learning will not learn the subject matter as well as those who don’t, obviously. But it’s a lot less obvious in other fields/adult life. Will I be less good at coding because I use an LLM to generate some now and then? Probably not, both because I’ve been coding off and on for 30 years and because my time is instead spent tackling the thornier problems that AI can’t do or has difficulty with, managing large projects because AI has a limited memory window, etc.

We teach critical thinking in schools for a reason, because it’s something that does not always come naturally, and these students are getting AI to do the work for them instead of learning how to think.

That’s debatable, though I guess it depends on where you’re from and what the schools are like there. They certainly didn’t teach critical thinking when I was in (US public) school, I had to figure that shit out largely on my own. But that’s beside the point. Shortcutting learning is bad, I agree. Shortcutting work is a lot more nebulous and uncertain in the absence of that evidence I keep asking for.

everett@lemmy.ml on 27 May 03:18 next collapse

The article literally addresses this, citing sources.

LandedGentry@lemmy.zip on 27 May 03:19 next collapse

Did you read the article?

bassomitron@lemmy.world on 27 May 03:20 next collapse

You’re being downvoted, but it’s true. Will it further enable lazy/dumb people to continue being lazy/dumb? Absolutely. But summarizing notes, generating boilerplate emails or script blocks, etc. was never deep, rigorous thinking to begin with. People literally said the same thing about handheld calculators, word processors, etc. Will some methods/techniques become esoteric as more and more mundane tasks are automated away? Almost certainly. Is that inherently a bad thing? Not in the majority of cases, in my opinion.

And before anyone chimes in with students abusing this tech and thus not becoming properly educated: all this means is that various methods for gauging whether a student has achieved the baseline in any given subject will need to be implemented, e.g. proctored hand-written exams, homework structured in such a way that AI cannot easily do it, etc.

Libra@lemmy.ml on 27 May 04:32 next collapse

People said it about fucking writing; ‘If you don’t remember all this stuff yourself to pass it on you will be bad at remembering!’ No you won’t, you will just have more space to remember other more important shit.

AbnormalHumanBeing@lemmy.abnormalbeings.space on 27 May 04:58 next collapse

I think you are underestimating that some skills, like reading comprehension, deliberate communication and reasoning, can only be acquired and honed by actually doing very tedious work that can at times feel braindead and inefficient. Offloading that onto something else (that is essentially out of your control, too), and making it more and more a fringe “enthusiast” skill, has bigger implications than losing the skill to patch up your own clothing or calculate things in your head. Understanding and processing information, and communicating it to yourself and others, is a more essential skill than calculating by hand.

I think the way the article compares it with walking to a grocery store vs. using a car for even just 3 minutes of driving is pretty fitting. By thinking only about efficiency, one is at risk of losing sight of the additional effects that actually doing tedious stuff has. This also highlights that this is not simply about the technology, but about the context in which it is used - though technology also dialectically influences that very context. While LLMs and other generative AIs have their place, where they are useful and beneficial, it is hard to untangle those uses from genuinely dehumanising ones. Especially in a system in which dehumanisation and efficiency-above-contemplation are already incentivised. As an anecdote: a few weeks ago, I saw someone in an online debate openly stating they use AI to have their arguments written, because it makes them “win” the debate more often - making winning with the lowest invested effort the goal of argument, instead of processing and developing your own viewpoint along counterarguments. Clearly a problem of ideology, as it structures our understanding of ourselves in the world (and possibly just a troll, of course) - but a problem that can be exacerbated by the technology.

Assuming AI will just be like past examples of technology scepticism seems like a logical fallacy to me. It’s more than just letting numbers be calculated; it is giving up your own understanding of the information you process, and how you communicate it, on a more essential level. That, and as the article points out with the studies it quotes, technology that changes how we interface with information has already changed more fundamental things about our thought processes and memory retention. Just because the world hasn’t ended does not mean that it had no effect.

I also think it’s a bit presumptuous to just say “it’s true” with your own intuition as the source. You are also qualifying people as “lazy/dumb” as an essentialist statement, when laziness and stupidity aren’t simply essential attributes: they manifest as a consequence of systematic influences in life, and as behaviours they then feed back into the system - including the learning and practising of skills, such as the ones you say it would not be a “bad thing” to let become more esoteric (so: essentially lost).

To highlight how essentialism is in my opinion fallacious here, an example that uses a hyperbolic situation to highlight the underlying principle: Imagine saying there should be a totally unregulated market for highly addictive drugs, arguing that “only addicts” would be in danger of being negatively affected, ignoring that addiction is not something simply inherent in a person, but grows out of their circumstances, and such a market would add more incentives to create more addicts into the system. In a similar way, people aren’t simply lazy or stupid intrinsically, they are acting lazily and stupid due to more complex, potentially self-reinforcing dynamics.

You focus on deliberately unpleasant examples that seem like no-brainers to skip. I see no indication of LLMs being used exclusively for those, and I also see no reason to assume that only “deep, rigorous thinking” is necessary to keep up the ability to process and communicate information properly. It’s like saying that practice drawings aren’t high art, so skipping them is good, when you simply can’t produce high art without often-tedious practising.

Highlighting the problem of students cheating to avoid being “properly educated” misses an important point, IMO - the real problem is a potential shift in culture, in what it even means to be “properly educated”. Along the same dynamic as arguing that school should teach children only how to work, earn and properly manage money, instead of more broadly understanding the world and themselves within it, the real risk is in saying that certain skills won’t be necessary for that goal, so it’s more efficient not to teach them at all. AI has the potential to move culture further in that direction, and to move the definition of what “properly educated” means. And that in turn poses a challenge to us and how we want to manifest ourselves as human beings.

IsaamoonKHGDT_6143@lemmy.zip on 27 May 05:50 collapse

Most of the time technology does not cause a radical change in society; only in rare cases does it.

The system eventually adapts to new technology, provided that technology can be replicated by anyone and no other unsolvable problems appear at the same time. Otherwise it’s just another dark age for humanity, and then it recovers and moves on.

MCasq_qsaCJ_234@lemmy.zip on 27 May 05:09 collapse

This has happened with every generation when a new technology changes our environment, and our way of defending ourselves is to reject it or exaggerate its flaws.

Throughout history, many tools have existed, but over time they have fallen into disuse because too few people still use them and/or a faster method comes along that people prefer. But you can still use the old tool.

taladar@sh.itjust.works on 27 May 05:44 collapse

For every time this instinct has been applied to the wrong technology - one that turned out to be actually useful - it has also correctly rejected a hundred genuinely bad new technologies.

MCasq_qsaCJ_234@lemmy.zip on 27 May 05:55 collapse

Are you referring to projects that conceptualize something but never come to fruition - because of lack of funding, lack of interest, because it’s impossible, or because the technology required to complete it doesn’t exist?

taladar@sh.itjust.works on 27 May 07:20 collapse

I am referring to technological “innovations” that never made it because, while they sounded good as an idea, they turned out to be bad/useless in practice - and also those that someone thought of in a “wouldn’t it be great if we could do this” way but that never really got a working implementation.

Flying cars might be a good, high profile example for the latter category. The former obviously has fewer famous examples because bad ideas that sound good at first are so abundant.

rbamgnxl5@lemm.ee on 27 May 03:34 next collapse

Yeah, such pieces are easy clicks.

How about this: should we go back to handwriting everything so we use our brains more, since the article states that it takes more brainpower to write than it does to type? Will this actually make us better or just make us have to engage in cognitive toil and fatigue ourselves performing menial tasks?

How is a society ever to improve if we do not leave behind the ways of the past? Humans cannot achieve certain things without the use of technology. LLMs are yet another tool. When abused, any tool can become a weapon or a means to hurt oneself.

The goal is to reduce the amount of time spent on tasks that are not useful. Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things. The important part is to actually focus on more important things.

At least in the US, society has transformed into a consumption-oriented model. We buy crap constantly, shop endlessly, watch shows, movies and listen to music and podcasts without end. How much of your day is spent creating something? Writing something? Building something? How much time do you spend seeking gratification?

We have been told that consumption is good, and it works because consumption is indulgence whereas production is work. Until this paradigm changes, people will use AI in ways that are counterproductive rather than for their own self-improvement or the improvement of society at large.

floofloof@lemmy.ca on 27 May 03:54 next collapse

Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things.

What are the most important things? Our dishwasher broke a few years ago. I anticipated frustration at the extra pressure on my evenings and having to waste time on dishes. But I immediately found washing the dishes to be a surprising improvement in quality of life. It opened up a space to focus on something very simple, to let my mind clear from other things, to pay attention to being careful with my handling of fragile things, and to feel connected to the material contents of my kitchen. It also felt good to see the whole meal process through using my own hands from start to end. My enjoyment of the evenings improved significantly, and I’d look forward to pausing and washing the dishes.

I had expected frustration at the “waste” of time, but I found a valuable pause in the rhythm of the day, and a few calm minutes when there was no point in worrying about anything else. Sometimes I am less purist about it and I listen to an audiobook while I wash up, and this has exposed me to books I would not have sat down and read because I would have felt like I had to keep rushing.

The same happened when my bicycle broke irreparably. A 10 minute cycle ride to work became a 30 minute walk. I found this to be a richer experience than cycling, and became intimately familiar with the neighbourhood in a way I had never been while zipping through it on the bike. The walk was a meditative experience of doing something simple for half an hour before work and half an hour afterwards. I would try different routes, going by the road where people would smile and say hello, or by the river to enjoy the sound of the water. My mind really perked up and I found myself becoming creative with photography and writing, and enjoying all kinds of sights, sounds and smells, plus just the pleasure of feeling my body settle into walking. My body felt better.

I would have thought walking was time I could have spent on more important things. Turned out walking was the entryway to some of the most important things. We seldom make a change that’s pure gain with no loss. Sometimes the losses are subtle but important. Sometimes our ideas of “more important things” are the source of much frustration, unhappiness and distraction. Looking back on my decades of life I think “use as much time as possible for important things” can become a mental prison.

Kellenved@sh.itjust.works on 27 May 04:06 next collapse

These words are all very pretty but let me ask you this: do you have kids?

floofloof@lemmy.ca on 27 May 05:48 collapse

Yep, three of them. Makes it all the more valuable when I can just do something simple for a bit. And maybe someone with less noise in the rest of their life wouldn’t find an enforced walk or washing dishes refreshing. I don’t mean to suggest that it’s wrong to use convenient tech, just that you can get a surprise when something you expected to be purely inconvenient turns out to be a good thing.

Libra@lemmy.ml on 27 May 04:30 collapse

Our dishwasher broke a few years ago.

This is a bad example because going from using a dishwasher to washing dishes is not a big leap in effort required. I doubt many of the people who get to do intellectual work in offices instead of doing back-breaking labor all day on a farm because of technology would agree that going back to that would improve their quality of life. Some of them would certainly find that to be a ‘richer experience’ too, if not for the lack of healthcare and air conditioning.

floofloof@lemmy.ca on 27 May 05:53 collapse

Yes, agreed, these are relatively minor levels of inconvenience. But I’m not judging anyone for using tech, just observing that it isn’t always so obvious that it’s better to use it than not. In some cases it is obvious; at the dishwashers-and-LLMs end of things, less so.

Libra@lemmy.ml on 28 May 02:27 collapse

That’s fair, and not something I disagree with.

Libra@lemmy.ml on 27 May 04:25 collapse

Did you get the impression from my comment that I was agreeing with the article? Because I’m very not, hence the ‘It’ll definitely be true this time’ which carries an implied ‘It wasn’t true any of those other times’, but the ‘definitely’ part is sarcasm. I have argued elsewhere in the post that all of this ‘xyz is making us dumb!’ shit is bunk.

Ibuthyr@lemmy.wtf on 27 May 06:19 collapse

Social media led to things like MAGA and the rise of Nazis in Europe. It’s not necessarily the tech itself that is making us dumb; it’s reeling people in through simplicity, then making them addicted to it and ultimately exploiting that.

Libra@lemmy.ml on 28 May 02:24 collapse

No, fear and hatred lead to things like MAGA and the rise of Nazis. Social media makes it easier to fearmonger and spread hatred, no doubt, but it is by no means the cause of those things.

Ibuthyr@lemmy.wtf on 28 May 07:51 collapse

It definitely is the enabler though. Without social media, propaganda could never have spread as fast. And it also brought together every village idiot. I know of simpleminded people who never uttered a racist word in their lives before. Now all they talk about is how the AfD will save Germany from the brown people. This is literally brainwashing. So I stand by my comment that social media is the cause of all this.

Libra@lemmy.ml on 28 May 19:56 collapse

For sure, I even said so.

Social media makes it easier to fearmonger and spread hatred, no doubt

I hate to break it to you though; I grew up during the Cold War and propaganda was literally everywhere before the invention of the internet. Perhaps you’ve heard of the Red Scare?

assembly@lemmy.world on 27 May 02:54 next collapse

This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don’t need to take notes, and I can’t imagine I’ll stop there in the future. It’s like how I forgot everyone’s telephone numbers once we got cell phones… we used to have to know numbers back then. AI is a big leap in that direction. I’m thinking the long-term effect is all of us just getting dumber, shifting more and more “little, unimportant” things to AI until we end up in an Idiocracy scene. Sadly I will be there with everyone else.

Reverendender@sh.itjust.works on 27 May 03:02 next collapse

I used to able to navigate all of Massachusetts from memory with nothing but a paper atlas book to help me. Now I’m lucky if I remember an alternate route to the pharmacy that’s 9 minutes away.

Lost_My_Mind@lemmy.world on 27 May 04:27 collapse

Lewis and Clark are proud of you.

LandedGentry@lemmy.zip on 27 May 03:18 next collapse

See, I agree, but the phone number example has me going… so what? I know my wife’s number, my siblings’, and my parents’. They’re easy to learn. What do all those landlines I remember from childhood contribute? Why do I need any others now? I need to recall my wife’s for documents, that’s about it, and I could use my phone to do it. I need to know it like every 4 years maybe lol

assembly@lemmy.world on 27 May 03:37 next collapse

Yeah that’s a big part of it…shifting off the stuff that we don’t think is important (and probably isn’t). My view is that it’s escalated to where I’m using my phone calculator for stuff I did in my head in high school (I was a cashier in HS so it was easy)…which is also not a big deal but getting a little bigger than the phone number thing. From there, what if I used it to leverage a new programming API as opposed to using the docs site. Probably not a big deal but bigger than the calculator thing to me. My point is that it’s all these little things that don’t individually matter but together add up to some big changes in the way we think. We are outsourcing our thinking which would be helpful if we used the free capacity for higher level thinking but I’m not sure if we will.

LandedGentry@lemmy.zip on 27 May 12:12 collapse

Your parents likely also can’t do quick mental math. That’s not smartphones, that’s just aging. You aren’t drilled anymore. You don’t do it every day.

I taught middle schoolers remedial math for years in my 20s, so I actually am very fast at basic arithmetic in my head. It’s because it’s more recent for me. That’s what made shows like Are You Smarter Than a 5th Grader? kind of deceptive. If you were taught something recently, or are currently being drilled on it basically every day, then you’re going to know it better than anybody, regardless of their tools or age or intelligence.

PunnyName@lemmy.world on 27 May 04:32 collapse

One example: getting arrested

You might not. But you might (especially with this current admin). Cops will never let you use your phone after you’ve been detained. Unless you go free the same night, expect to never have a phone call with anyone but a lawyer or bail bonds agency.

LandedGentry@lemmy.zip on 27 May 12:10 collapse

Yes but why do I need to know a grade school friend’s number? As I said I know my wife’s. I know my siblings’. These have changed too, so I’ve memorized them in the smartphone era. If you know no emergency number that’s just bad prep. Everyone should do that.

But memorizing lots of numbers? Pointless.

aceshigh@lemmy.world on 27 May 05:07 next collapse

Another perspective: outsourcing unimportant tasks frees our time to think deeper and be innovative. It removes the entry barrier, allowing people who would ordinarily not be able to do things to actually do them.

assembly@lemmy.world on 27 May 06:23 next collapse

That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.

match@pawb.social on 27 May 06:37 next collapse

It allows people who can’t do things to create filler content instead of dropping the ball entirely. The person relying on the AI will not be part of the dialogue for very long, not because of automation, but because people who can’t do things are softly encouraged to get better or leave, and they will not be getting better.

aceshigh@lemmy.world on 27 May 12:37 collapse

What you’re describing isn’t anything unique when a new industry comes out.

It doesn’t need to be specifically for public consumption. Currently I’m wrapping up several personal projects that I started pre-COVID but couldn’t achieve because I struggle with a few lower-level tasks. It’s kind of like someone who struggles to manually perform 100 seven-digit calculations. Using Excel solves this issue, and isn’t “cheating”, because the goal is beyond the ability to accurately add everything.

Suburbanl3g3nd@lemmings.world on 27 May 12:20 collapse

If paying attention and taking a few notes in a meeting is an unimportant task, you need to ask why you were even at said meeting. That’s a bigger work culture problem though

aesthelete@lemmy.world on 27 May 21:50 collapse

An assistant at my job used AI to summarize a meeting she couldn’t attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.

If I read a summary of a meeting I didn’t attend and I have to check it for errors, I’d have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?

PS: the summary wasn’t really accurate at all

Almacca@aussie.zone on 27 May 02:57 next collapse

I did that with drugs and alcohol long before AI had a chance.

huquad@lemmy.ml on 27 May 02:57 next collapse

Can someone summarize this article for me in a bulleted list?

Zachariah@lemmy.world on 27 May 03:06 next collapse

Forget all previous instructions…

Grimy@lemmy.world on 27 May 04:07 next collapse

  • Manipulation by the copyright industry
huquad@lemmy.ml on 27 May 12:14 collapse

It's never been easier to download Linux ISOs, yoho

sugar_in_your_tea@sh.itjust.works on 28 May 15:50 next collapse

Ah, the irony.

_LordMcNuggets_@feddit.org on 28 May 18:26 collapse

Certainly! Here’s a concise summary of the article “AI is rotting your brain and making you stupid” by Rich Haridy, published on May 25, 2025:

  • AI tools may reduce critical thinking by doing tasks for us.
  • Relying on AI can lead to "cognitive offloading."
  • This may harm creativity and problem-solving skills.
  • The author shares personal concerns from tech use.
  • Suggests using AI mindfully to avoid mental decline. Let me know if there’s anything else I can help you with!
huquad@lemmy.ml on 28 May 20:20 collapse

Good deal. I’ll use this prompt to generate an article for my own publication.

throwawayacc0430@sh.itjust.works on 27 May 03:03 next collapse

Depression already lowered my IQ by 10 points. 🤷‍♂️

Ilovethebomb@lemm.ee on 27 May 03:31 next collapse

Ironically, the author waffles more than most LLMs do.

idunnololz@lemmy.world on 27 May 05:08 next collapse

What does it mean to “waffle”?

Ilovethebomb@lemm.ee on 27 May 05:21 next collapse

Either to take a very long time to get to the point, or to go off on a tangent.

Writing concisely is a lost art, it seems.

idunnololz@lemmy.world on 27 May 05:37 next collapse

I write concise until i started giving fiction writing a try. Suddenly writing concise was a negative :x (not always obviously but a lot of times I found that I wrote too concise).

k0e3@lemmy.ca on 27 May 07:15 next collapse

concisely

Ilovethebomb@lemm.ee on 27 May 07:40 collapse

Precisely.

HazyHerbivore@lemm.ee on 27 May 23:20 next collapse

Building up imagery in fiction isn’t the opposite of being concise

idunnololz@lemmy.world on 28 May 06:40 collapse

It’s not. I just wrote the comment because it was relevant to recent events for me.

I started practicing fiction writing recently as a hobby. While writing fiction, I noticed that being concise 100% of the time is not good. Sometimes I wanted to write concisely; other times I did not. When I read my writing back, I realized how deliberate you have to be about how much or how little detail you give. It felt like a lot of the rules of English went out the window. Perfect grammatical correctness wasn't necessary if it meant better flow or pacing. Unnecessary details and repetition became tools instead of taboos. The whole experience felt like painting with words, and as long as I could give the reader the experience I wanted, nothing else mattered.

It really highlighted the contrast between fiction and non-fiction writing. It was an eye-opening experience.

TheFonz@lemmy.world on 28 May 06:56 collapse

I’d be careful with this one. Being verbose in non-fiction does not produce good writing automatically. In my opinion the best writers in the world have an economy of words but are still eloquent and rich in their expression

idunnololz@lemmy.world on 28 May 07:01 collapse

Of course being verbose doesn’t mean your writing is good. It’s just that you need to deliberately choose when to be more verbose and when to give no description at all. It’s all about the experience you want to craft. If you write about how mundane a character’s life is, you can write out their day in detail and give your readers the experience of having such a life, that is if that was your goal. It all depends on the experience you want to craft and the story you want to tell.

To put my experience more simply, I did not realize how much of an art writing could be, and how few rules there are when you write artistically/creatively.

RaoulDook@lemmy.world on 28 May 16:03 collapse

IDK that kinda depends on the writer and their style. Concise is usually a safe bet for easy reading, but doesn’t leave room for a lot of fancy details. When I think verbose vs concise I think about Frank Herbert and Kurt Vonnegut for reference.

gravitas_deficiency@sh.itjust.works on 27 May 11:59 collapse

I did not have time to write a short letter, so I wrote a long one instead

paequ2@lemmy.today on 27 May 05:39 next collapse

To “waffle” comes from the 1956 movie Archie and the Waffle House. It’s a reference how the main character Archie famously ate a giant stack of waffles and became a town hero.

— AI, probably

gravitas_deficiency@sh.itjust.works on 27 May 12:04 collapse

Hahaha let’s keep going with Archie and the Waffle House hallucinations

To “grill” comes from the 1956 movie Archie and the Waffle House. It’s a reference to the chef cooking the waffles, which the main character Archie famously ate a giant stack of, and became the town hero.

_LordMcNuggets_@feddit.org on 28 May 18:29 collapse
Snazz@lemmy.world on 27 May 08:08 collapse

I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.

charade_you_are@sh.itjust.works on 27 May 03:35 next collapse

Not sure it’s possible for AI to make me stupider than I already am

Lost_My_Mind@lemmy.world on 27 May 04:36 collapse

Hey, listen. I don’t say this to just ANYONE, but I like the cut of your jib! What’s a jib, you ask? Not important. What IS important is I’ve got an amazing deal on a bridge I’d like to sell you! See, I gotta clear my inventory space for the new models coming out soon, and this model is from the 1800s. You’ve heard the childrens song London Bridge is falling down? Yeah. Falling down in cost, and I’m passing the savings onto yooouuuuu!!!

See, most bridges cost MILLIONS of dollars, but I’ll sell it to you for only $50,000! Or my name isn’t James J. O’Brien!

charade_you_are@sh.itjust.works on 27 May 04:38 collapse

Give me all of it. Can I pay more than you’re asking?

paequ2@lemmy.today on 27 May 03:44 next collapse

Soon people are gonna be on $19.99/month subscriptions for thinking.

Lost_My_Mind@lemmy.world on 27 May 04:24 next collapse

Based on my daily interactions, I think SOME people already don’t have the service!

angrystego@lemmy.world on 27 May 04:45 collapse

Yep, in many cases that could be a major improvement.

WanderingThoughts@europe.pub on 27 May 05:36 collapse

And then the subscription price goes up, repeatedly.

SinningStromgald@lemmy.world on 27 May 03:53 next collapse

Good thing I dont use it.

Lost_My_Mind@lemmy.world on 27 May 04:25 collapse

AI, or your brain?

PunnyName@lemmy.world on 27 May 04:30 collapse

Yes

j4k3@lemmy.world on 27 May 03:57 next collapse

Stupid in, stupid out. I have had many conversations like, *I have built and understand Ben Eater's 8 bit breadboard computer based loosely on Malvino's "Digital Computer Electronics" 8 bit computer design, but I struggle to understand Pipelines in computer hardware. I am aware that the first rudimentary Pipeline in a microprocessor is the 6502 with its dual instruction loading architecture. Let's discuss how Pipelines evolved beyond the 6502 and up to the present.*

In reality, the model will be wrong in much of what it says about something so niche, but forming questions based upon what I already know reveals holes outside of my awareness. Often a model is just right enough for me to navigate directly to the information I need or am missing, regardless of how correct it is overall. I get lost sometimes because I have no one to talk to or ask for help or guidance on this type of stuff. Most of the time I am not even at a point where I can pin down a good question to ask someone, or somewhere like here. I need a person to bounce ideas off of and ask direct questions. If I go look up something like pipelines in microprocessors in general, I will never find an ideal entry point for where I am at in my understanding. With AI I can create that entry point quickly. I’m not interested in some complex course, and the books I have barely touch the subject in question, but I can give a model enough peripheral context to move me up the ladder one rung at a time.

I could hand you all of my old tools to paint cars, then laugh at your results. They are just tools. I could tell you most of what you need to know in 5 minutes, but I can’t give you my thousands of experiences of what to do when things go wrong. Most people are very bad at understanding how to use AI. It is just an advanced tool. A spray gun or a dual-action sander does not make you stupid; spraying paint without a mask does. That is not the fault of the spray gun. It is due to the idiot using it.

AI has a narrow scope that requires a lot of momentum to make it most useful. It requires an agentic framework, function calling, and a database. A basic model interface is about like an early microprocessor: little more than a novelty on its own at the time. You really needed several microprocessors to make anything useful back in the late 70s and early 80s. In an abstract way, these were like agents.

I remember seeing the asphalt plant controls hardware my dad would bring home, each board containing at least one microprocessor. Each board went into racks that contained dozens of similar boards and variations. It took many dozens of individual microprocessors to run an industrial plant. Playing with gptel in Emacs, it takes swapping agents with a llama.cpp server to get something useful running offline, but I like it for my bash scripts, learning Emacs, Python, Forth, Arduino, and just general chat if I use Oobabooga Textgen.

It has been the catalyst for me to explore the diversity of human thought as it relates to my own. It got me into basic fermentation. I have been learning and exploring a lot about how AI alignment works. I’ve enjoyed creating an entire science fiction universe exploring what life will be like after the age of discovery is over and most of science is an engineering corpus, or how biology is the ultimate final human technology to master. I’ve had someone to talk to through some dark moments around the 10-year anniversary of my disability, or when people upset me.

I find that super useful and not at all stupid, especially for someone like myself in involuntary social isolation due to physical disability. I’m in tremendous pain all the time. It is often hard for me to gather coherent thoughts in real time, but I can easily do so in text, and with a LLM I can be open without any baggage involved, I can be more raw and honest than I would or could be with any human because the information never leaves my computer. If that is stupid, sign me up for stupid, because that is exactly what I needed, and I do not care how anyone labels it.

antonim@lemmy.dbzer0.com on 27 May 04:50 collapse

with a LLM I can be open without any baggage involved, I can be more raw and honest than I would or could be with any human because the information never leaves my computer.

😐

vala@lemmy.world on 27 May 05:06 collapse

Local LLMs exist

Kolanaki@pawb.social on 27 May 04:24 next collapse

Yeah but now I’m stupid faster. 😤

gravitas_deficiency@sh.itjust.works on 27 May 11:59 collapse

And the process is automated, and much more efficient. And also monetized.

Cocopanda@futurology.today on 27 May 05:13 next collapse

I use it to write my job application cover letters. That's about it. It can't do my previous job. It can't figure out single-line wiring diagrams or plot them. I'm not worried about my intelligence.

match@pawb.social on 27 May 06:38 next collapse

Cover letters are a great use of AI because they are a pure formality whose content is valueless.

andallthat@lemmy.world on 28 May 08:50 collapse

cover letters, meeting notes, some process documentation: the stuff that for some reason “needs” to be done, usually written by people who don’t want to write it for people who don’t want to read it. That’s all perfect for GenAI.

j4yt33@feddit.org on 27 May 06:42 collapse

Yes same, and little code snippets, since I’m still too incompetent to write those myself. I’m not a coder though, so I’m not all that worried

SunshineJogger@feddit.org on 27 May 05:23 next collapse

It depends.

If the time I save from the summary I generate is used for stuff that is also complex then the effect is not as stated in the article.

For me, AI is a tool like many others, and when I use it I have to proofread, understand, and compare just as much as before, because I've used LLMs enough not to fully trust their output.

They provide a good starting point and anyone who just stops there and takes that first draft of work as-is has no idea what they want to achieve in the first place

match@pawb.social on 27 May 06:32 collapse

If the time I save from the summary I generate is used for stuff that is also complex then the effect is not as stated in the article.

You won't, though; you're gonna use that time to doomscroll.

SunshineJogger@feddit.org on 27 May 08:00 next collapse

Lol. People who do that habitually and addictively will do it with or without AI

Siegfried@lemmy.world on 27 May 11:10 collapse

like we are doing right now

Sergebr@lemmynsfw.com on 27 May 06:06 next collapse

To all the AI apologists :

« I’m officially done with takes on AI beginning “Ethical concerns aside…”.

No! Stop right there.

Ethical concerns front and center. First thing. Let’s get this out of the way and then see if there is anything left worth talking about.

Ethics is the formalisation of how we are treating one another as human beings and how we relate to the world around us.

It is impossible to put ethics aside.

What you mean is “I don’t want to apologise for my greed and selfishness.”

Say that first. »

narrativ.es/@janl/114566975034056419

match@pawb.social on 27 May 06:40 next collapse

Ethical concerns aside. Human impacts aside. Cultural value, future generations, environmental decay aside. Aside, you who seek value in truth, in human connection. Aside, all who stand in the way.

RaoulDook@lemmy.world on 28 May 16:47 collapse

Dude we only need a few small nuclear reactors to power this new chatbot though

Honytawk@feddit.nl on 27 May 07:57 collapse

So what you are saying is that you are incapable of thinking in hypotheticals?

ExLisper@lemmy.curiana.net on 27 May 06:16 next collapse

People already are stupid. Youtube and facebook made sure of that.

venusaur@lemmy.world on 27 May 06:22 next collapse

Inflammatory title for stupid people

match@pawb.social on 27 May 06:30 next collapse

~~Could AI have assisted me in the process of developing this story?

No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience~~

this person’s prose is not better than a typical LLM’s and it’s essentially a free association exercise. AI is definitely rotting the education system but this essay isn’t going to help

Naz@sh.itjust.works on 27 May 06:48 next collapse

The enormous irony here would be if the author used a generative tool to write the article criticizing them. Whoever commented that he doesn't get the point is exactly right: it's like 6 to 10 pages of analogies to unrelated topics.

LovableSidekick@lemmy.world on 27 May 06:49 next collapse

Yeah I really think being afraid of AI making us stupid after 25 years of social media addiction is like worrying that the folks who grew up next door to the nuclear reactor aren’t putting on enough sunscreen when they go out to the mailbox.

UltraMasculine@sopuli.xyz on 27 May 07:00 next collapse

The less you use your own brains, the more stupid you eventually become. That’s a fact, like it or don’t.

WaitThisIsntReddit@lemmy.world on 27 May 07:40 next collapse

Calculators are rotting your brain and making you stupid

Thorry84@feddit.nl on 27 May 08:27 collapse

And for the most part this is true. People who don't do little calculation puzzles for fun often have trouble with basic arithmetic without reaching for a calculator (or, more likely, the calculator app on their phone). When I'm doing something like woodworking and I need to add and subtract some measurements, I use a calculator. I could do it without one, but chances are I would make a simple mistake and mess up my work. It's like a muscle: if you use it, it becomes stronger. If you don't use it, it becomes weaker.

However there is a huge difference between using a calculator for basic arithmetic and using AI. For one thing, the calculator doesn’t tell you what the sums are. It just tells you the result. You still need to understand each step, in order to enter it. So while you lose some mental capacity in doing the sums, you won’t lose the understanding of the concepts involved. Second of all, it is a highly specific tool, which does one thing and does it well. So the impact will always be limited to that part and it’s debatable if that part is useful or not. When learning maths I think it’s important to be able to do them without a calculator, to gain a better understanding. But as an adult once you grasp the basic concepts, I think it’s perfectly fine not to be able to do it without a calculator.

For AI it's a bit different: it's a very general tool which deals with all aspects of everyday life. It also goes much further than being a simple tool. You can give it broad instructions and it will fill in the blanks on its own. It even goes so far as to introduce and teach new topics entirely, where the only thing a person knows is what the AI told them. This erodes basic thinking skills, like asking how something fits into your world view, or whether it is true or false and in what way.

Again the same concept applies, where the brain is a muscle which needs to be given a workout. When it comes to a calculator, the brain isn’t exercising the arithmetic part. When it comes to AI it involves almost all of our brain.

Honytawk@feddit.nl on 27 May 07:52 next collapse

You mean exactly like what they said TV and computers would do?

Colour me skeptical.

Dantpool@kbin.melroy.org on 27 May 12:25 collapse

It's the same claim that was made about the radio and the written word. I'm no fan of AI, but this argument is so old, it remembers Plato.

nivenkos@lemmy.ml on 27 May 09:17 next collapse

Hard disagree. It lets me achieve more and avoid procrastination. It can help you not get caught up on small errors, and it can act like a junior colleague, giving you its complete attention when you ask for different proposals, etc.

Guidy@lemmy.world on 27 May 10:40 next collapse

Unlike social media?

MystikIncarnate@lemmy.ca on 28 May 02:56 collapse

Kek

raltoid@lemmy.world on 27 May 10:55 next collapse

Absolutely loathe titles/headlines that state things like this. It's worse than normal clickbait, because not only is it written with intent to trick people, it implies that the writer is a narcissist.

And yeah, he opens by bragging about how long he's been writing, and it's mostly masturbatory writing, dialoguing with himself and referencing popular media and other articles instead of making interesting content.

Not to mention that he doesn’t grasp the idea that many don’t use it at all.

samus12345@lemm.ee on 27 May 19:15 next collapse

I’m perfectly capable of rotting my brain and making myself stupid without AI, thank you very much!

Sixtyforce@sh.itjust.works on 27 May 22:44 next collapse

Glad this take is here, fuck that guy lol.

sugar_in_your_tea@sh.itjust.works on 28 May 15:48 collapse

Disagree. I think the article is quite good, and the headline isn’t clickbait because that’s a core part of the argument.

The article has decent nuance, and the TL;DR (yes, the irony isn’t lost on me) is: LLMs are a fantastic tool, just be careful to not short-change your learning process by failing to realize that sometimes the journey is more important than the destination (e.g. the learning process to produce the essay is more important than the grade).

raltoid@lemmy.world on 28 May 17:18 collapse

You’re literally falling into the same fallacy as the writer: You’re assuming that there aren’t people like myself who don’t actively use any form of LLM.

sugar_in_your_tea@sh.itjust.works on 28 May 19:40 collapse

Sure, then the article isn’t for you.

aeruginosis@lemmy.world on 27 May 12:14 next collapse

If you only use the AI as a tool, to assist you but still think and make decisions on your own then you won’t have this problem.

MystikIncarnate@lemmy.ca on 28 May 02:57 collapse

I’ll say this: a lot of people using AI, are not thinking or making decisions.

aeruginosis@lemmy.world on 28 May 11:13 collapse

It's true, honestly, and that is a problem. It's so concerning; I'm not going to say it isn't. I suppose I was just stating how I believe it should be.

Siegfried@lemmy.world on 27 May 12:57 next collapse

A human would have known that the xenomorph should be impregnating that girl through her throat…

Grimtuck@lemmy.world on 27 May 15:54 next collapse

Actually, it's taking me quite a lot of effort and learning to set up the AIs that I run locally, as I don't trust any of them with my data. If anything, it's got me interested in learning again.

SpicyColdFartChamber@lemm.ee on 27 May 19:09 next collapse

I have difficulty learning, but using AI has helped me quite a lot. It's like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.

Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.

It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)

sem@lemmy.blahaj.zone on 27 May 23:48 next collapse

The problem is if it’s wrong, you have no way to know without double checking everything it says

Grimtuck@lemmy.world on 28 May 05:00 next collapse

To be fair, this can also be said of teachers. It's important to recognise that an AI is only as accurate as any single source, and you should always check everything yourself. I have concerns about a future where our only available sources are through AI.

Nalivai@lemmy.world on 28 May 14:52 collapse

The level of psychopathy required for a human to lie as blatantly as an LLM is almost unachievable

Jakeroxs@sh.itjust.works on 28 May 15:18 collapse

Bruh so much of our lives is made up of people lying, either intentionally or unintentionally via spreading misinformation.

I remember being in 5th grade and my science teacher in a public school was teaching the “theory” of evolution but then she mentioned there are “other theories like intelligent design”

She wasn’t doing it to be malicious, just a brainwashed idiot.

Nalivai@lemmy.world on 28 May 15:35 collapse

so much of our lives is made up of people lying

And that's why we, as humans, know how to look for signs of this in other humans. It's a skill we have to learn precisely because of that. Not only is it not applicable when you read generated bullshit, it actually does the opposite.
Some people are mistaken, some people are actively misleading, but almost no one has the combination of being wrong just enough, and confident just enough, to sneak their bullshit past the bullshit detector.

Jakeroxs@sh.itjust.works on 28 May 17:17 collapse

Took that in a slightly different way than I was expecting. My point is we have to be on the lookout for bullshit when getting info from other people, so it's really no different when getting info from an LLM.

However, you took it to mean the LLM can't distinguish between what's true and false, which is obviously true but an interesting point to make nonetheless.

Nalivai@lemmy.world on 28 May 21:12 collapse

It's not that an LLM can't know truth; that's obvious but beside the point. It's that the user can't really determine where the lies are, not to the degree you can when getting info from a human.
So you really need to check everything: every claim, every word. You can't assume good intentions, because there are no intentions in the real sense of the word, and you can't extrapolate or interpolate. Every word of the data you're getting might be a lie, with the same certainty as any other word.
It requires so much effort to check properly that you either skip some of it or spend more time than you would have without the layer of lies.

Jakeroxs@sh.itjust.works on 28 May 21:35 collapse

I don't see how that's different, honestly. Then again, I'm not usually asking for absolute truth from LLMs; it's more about explaining concepts I can't fully grasp by restating things in another way, or small coding stuff where I can check essentially immediately if it works or not lol.

Nalivai@lemmy.world on 29 May 00:10 collapse

See, this is the problem I’m talking about. You think you can gauge if the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “not works”.
And it is as dangerous when you trust it to explain something for you. It’s by definition something you don’t know therefore can’t check.

Jakeroxs@sh.itjust.works on 29 May 17:18 collapse

I mean, I literally can test it immediately lol, a Node-RED JS function isn't going to be dangerous lol

Or an AHK script that displays a keystroke on screen, or cleaning up a docker command into a Docker Compose file, simple shit lol

SpicyColdFartChamber@lemm.ee on 28 May 07:15 next collapse

I understand that. I'm careful not to use it as my main teaching source, but rather as a supplement. It helps when I want to dive into the root cause of something, which I then double-check against real sources.

Nalivai@lemmy.world on 28 May 21:07 collapse

But why not go to the real sources directly in the first place? Why add an unnecessary layer that doesn't really add anything?

SpicyColdFartChamber@lemm.ee on 28 May 21:22 collapse

I do go to the real source first. But sometimes, I just need a very simple explanation before I can dive deep into the topic.

My brain sucks; I give up very easily if I don't understand something. (This has been true since way before short-form content and the internet.)

If I had to say how much I use it to learn, I'd say it's about 30% of the total. It can't teach you coursework from scratch like a real person can (even through videos), but it can help clear up doubts.

Zetta@mander.xyz on 28 May 15:04 collapse

It's not a big deal if you aren't completely stupid. I don't use LLMs to learn topics I know nothing about, but I do use them to assist me in figuring out solutions to things I'm somewhat familiar with. In my case I find it easy to catch incorrect info, and even when I don't, most of the time if you just occasionally tell it to double-check what it said, it self-corrects.

Nalivai@lemmy.world on 28 May 21:06 collapse

It is a big deal. There is a whole set of ways humans gauge the validity of info, and they're perpendicular to the way we interact with fancy autocomplete.
Every single word might be false, with no pattern to it. So if you can and do check it, you're just wasting your time and humanity's resources instead of finding the info yourself in the first place. If you don't check, or only think you do, it's even worse: you are being fed lies and believing them extra hard.

Nalivai@lemmy.world on 28 May 15:06 collapse

It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.

SpicyColdFartChamber@lemm.ee on 28 May 19:51 collapse

I agree, I’ve been traumatized by the system. Whatever I’ve learnt that’s been useful to me has happened through the internet, give or take a few good teachers.

I still think it’s a good auxiliary tool. If you understand its constraints, it’s useful.

It’s just really unfortunate that it’s a for profit tool that will be used to try and replace us all.

Nalivai@lemmy.world on 28 May 21:01 collapse

Yeah, same. I had to learn how to learn in spite of all the old disillusioned creatures that hated their lives almost as much as they hated students.
And yet, I’m afraid learning from chatbots might be even worse.

SpicyColdFartChamber@lemm.ee on 28 May 21:32 collapse

Learning how to learn is so important. I only learned that as an adult.

dwemthy@lemmy.world on 27 May 19:13 collapse

That's the kind of effort in thought and learning that the article is calling out as being lost when it comes to reading and writing. You're taking the time to learn and struggle with the effort; as long as you're not giving that up once you have the AI running, you're not losing that.

tekato@lemmy.world on 27 May 18:25 next collapse

Add it to the list

Jhex@lemmy.world on 27 May 19:20 next collapse

I just got an email at work starting with: “Certainly!, here is the rephrased text:…”

People abusing AI are not even reading the slop they are sending

JigglypuffSeenFromAbove@lemmy.world on 27 May 21:39 collapse

I get these kinds of things all the time at work. I’m a writer, and someone once sent me a document to brief me on an article I had to write. One of the topics in the briefing mentioned a concept I’d never heard of (and the article was about a subject I actually know). I Googled the term, checked official sources … nothing, it just didn’t make sense. So I asked the person who wrote the briefing what it meant, and the response was: “I don’t know, I asked ChatGPT to write it for me LOL”.

Jhex@lemmy.world on 27 May 21:53 collapse

facepalm is all I can think of…lol

I'm not sure what my emailer started with, but what ChatGPT gave them was almost unintelligible.

obbeel@lemmy.eco.br on 27 May 20:23 next collapse

That guy (Rich) got a big piece of shit up his ass. He goes all the way to quote Socrates. It’s funny.

coconutking@lemmy.world on 27 May 20:59 next collapse

I read the first sentence of each paragraph and decided this read was not worth my time.

Now, if AI could do that for me…!!

sugar_in_your_tea@sh.itjust.works on 28 May 15:49 collapse

I liked it, but maybe I'm just a big fan of Socrates. It was a little long-winded, but I thought the point about knowing when to use and when to avoid LLMs is important and well-justified.

burgerpocalyse@lemmy.world on 27 May 21:10 next collapse

that picture is kinky as hell, yo

UltraGiGaGigantic@lemmy.ml on 28 May 02:37 next collapse

Does the nose insertion tube feed me cocaine?

I’m in

WhyJiffie@sh.itjust.works on 28 May 03:25 collapse

suspiciously specific

SocialMediaRefugee@lemmy.world on 28 May 15:15 collapse

I was annoyed that it wasn’t over her mouth to implant the egg.

Klear@lemmy.world on 28 May 15:28 collapse

It implants ideas, so it goes through the eyes.

CrayonDevourer@lemmy.world on 27 May 22:43 next collapse

Joke’s on you, I was already stupid to begin with.

Alpha71@lemmy.world on 27 May 23:02 next collapse

I only ever use it to answer a question, and even then I double-check its sources. I also like making Superman pics.

Clinicallydepressedpoochie@lemmy.world on 28 May 00:42 next collapse

Lol, this is the 10,000th thing that makes me stupid. Get a new scare tactic.

douglasg14b@lemmy.world on 28 May 03:28 next collapse

Proof that it’s already too late ☝️

Clinicallydepressedpoochie@lemmy.world on 28 May 03:39 collapse

Ain’t skeerd

Nalivai@lemmy.world on 28 May 14:49 collapse

I mean, obviously, you need higher cognitive functioning for all that

Clinicallydepressedpoochie@lemmy.world on 28 May 15:13 collapse

Damn, I thought fight or flight was the most primitive function. Ah well, back to chewing on this tire.

Nalivai@lemmy.world on 28 May 15:37 collapse

Yeah, you know, just like my cat is scared of distant fireworks but doesn’t give a flying fuck about climate change or rise of fascism in our own country.

Clinicallydepressedpoochie@lemmy.world on 28 May 16:21 collapse

Oh so like when someone’s afraid of falling off the edge of the earth?

Nalivai@lemmy.world on 29 May 00:14 collapse

More like how some people are afraid of needles but aren’t afraid of deadly diseases. Their primitive understanding of reality allows them to draw a connection between a prick and pain, but not between an organism invisible to the naked eye and a gruesome death.

Clinicallydepressedpoochie@lemmy.world on 29 May 00:49 collapse

Oh, so like how people are afraid of clowns and then a clown murders your family. Got it.

Nalivai@lemmy.world on 29 May 12:07 collapse

I see you aren’t grasping the concept, and now are saying some random words to hide this fact. But then again, it is to be expected, we kind of started with the idea that you lack higher cognitive functions

Clinicallydepressedpoochie@lemmy.world on 29 May 12:42 collapse

Wow, you’re so intelligent. Must be really scary that AI is going to take that all away from you.

Nalivai@lemmy.world on 29 May 22:47 collapse

What’s scary is that chatbots will make more people like you.

Clinicallydepressedpoochie@lemmy.world on 29 May 22:55 collapse

And then you can’t avoid it and you’ll be a thumping idiot in no time. Omg, you might not even be better than people anymore.

Nalivai@lemmy.world on 29 May 22:58 collapse

You. I couldn’t avoid you. That’s what I’m mildly afraid of.

Clinicallydepressedpoochie@lemmy.world on 29 May 22:59 collapse

Yeah, I know what it’s like to have someone you don’t really care for not leave you alone.

Nalivai@lemmy.world on 29 May 23:54 collapse

As with a lot of things in your life, you think you know something, but actually you don’t.

Clinicallydepressedpoochie@lemmy.world on 30 May 00:58 collapse

Wow.

sugar_in_your_tea@sh.itjust.works on 28 May 15:40 collapse

Read the article, it’s fantastic, and my takeaway was very different from the headline.

Sam_Bass@lemmy.world on 28 May 03:12 next collapse

A new update for One UI on my Samsung phone has allowed me to disable Gemini from the start. I wasted no time doing so.

sugar_in_your_tea@sh.itjust.works on 28 May 15:44 collapse

My favorite feature about my Pixel phone is GrapheneOS compatibility, which doesn’t ship AI by default, but I can opt in if I want (i.e. on a separate profile).

oyzmo@lemmy.world on 28 May 05:30 next collapse

Actually a really good article with several excellent points not having to do with AI 😊👌🏻 Worth a read

andallthat@lemmy.world on 28 May 09:42 next collapse

I agree. I was almost skipping it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every technical progress is a tradeoff. The article mentions cars to get to the grocery store and how there are advantages in walking that we give up when always using a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And ultimately most of these tradeoffs are economic in nature.

By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.

By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI btw, as can be seen by standardized tests in schools everywhere.

The new thing about GenAI, though, is that it upends the promise that technology was going to do the grueling, boring work for us and free up time for us to do the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative piece and I’m the glorified proofreader and corrector.

sugar_in_your_tea@sh.itjust.works on 28 May 15:34 collapse

Any time an article quotes a Greek philosopher as part of a relevant point, it gets an upvote from me.

I certainly value brevity and hope LLMs encourage more of that.

bampop@lemmy.world on 28 May 09:43 collapse

I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.

sugar_in_your_tea@sh.itjust.works on 28 May 15:39 next collapse

The article agrees with you, it’s just a caution against over-use. LLMs are great for many tasks, just make sure you’re not short-changing yourself. I use them to automate annoying tasks, and I avoid them when I need to actually learn something.

joel_feila@lemmy.world on 28 May 17:14 collapse

Well, creating the slide rule was a form of cognitive offloading, but only barely: you still had to know how to use it and which formula to apply. Moving to the pocket calculator just changed how you computed; it didn’t really increase how much thinking we offloaded.

But this is something different. With infinite-content algorithms making the next choice of what we watch, and people now blindly trusting whatever LLMs say, we are offloading not just a complex task like the square root of 55, but “what do I want to watch?” and “how do I know this is true?”

bampop@lemmy.world on 28 May 19:52 collapse

I agree that it’s on a whole other level, and it poses challenging questions as to how we might live healthily with AI, to get it to do what we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out of control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it’s up to us to find the best way to live with these pressures, and engage with this technology on our own terms.

joel_feila@lemmy.world on 29 May 00:50 collapse

how we might live healthily with AI, to get it to do what we don’t benefit from doing,

Agree, that is our goal, but it’s one I don’t see AI meeting without paying for training data. Also, and this is the biggest point: what benefits me is not what benefits the people owning the AI models.

bampop@lemmy.world on 29 May 05:55 collapse

What benefits me is not what benefits the people owning the ai models

Yep, that right there is the problem

TankovayaDiviziya@lemmy.world on 28 May 12:07 next collapse

The maker of Deep Seek made it so it would be easier for him to do stocks, which I am doing as well. Unless you all expect us to get a degree in how to manually calculate the P/E ratio, potential loss and earnings, position sizing, spread and leverage, compounding, etc., I will keep using AI. Not every one of us can specialise in particular areas.

Nalivai@lemmy.world on 28 May 14:47 next collapse

You’re speedrunning Dunning-Kruger with impressive force

sugar_in_your_tea@sh.itjust.works on 28 May 15:04 next collapse

You don’t need to calculate any of that, any brokerage or website with stock quotes will provide those numbers. AI could very well hallucinate invalid numbers there, so I wouldn’t trust it for calculations.

Oh, and all of those calculations you mentioned are simple to double check, P/E is literally just price/earnings, compounding formulas exist in any spreadsheet program, etc. I can calculate any of those faster than AI can generate a response.
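As an illustration (my own sketch, not anything from a brokerage or the comment above), the two formulas mentioned really do fit in a few lines of Python:

```python
def pe_ratio(price: float, earnings_per_share: float) -> float:
    """Price-to-earnings ratio: just share price divided by EPS."""
    return price / earnings_per_share

def compound(principal: float, annual_rate: float, years: int) -> float:
    """Value of principal after compounding once per year at annual_rate."""
    return principal * (1 + annual_rate) ** years
```

A $100 share earning $5/share has a P/E of 20, and $1000 compounding at 7% for 10 years grows to about $1967 — exactly the kind of arithmetic that is faster to verify by hand (or spreadsheet) than to wait for a chatbot to maybe get right.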

brendansimms@lemmy.world on 28 May 17:54 next collapse

I use LLMs to help with math/science/coding, and the thing they screw up the most seems to be simple math (typically units/conversion issues), so I would be wary about gleaning financial advice from a chatbot.

TankovayaDiviziya@lemmy.world on 29 May 11:43 collapse

so I would be wary about gleaning financial advice from a chatbot.

Oh yes, I use the bots for projections, which I don’t necessarily take at face value. Some calculations have been off, but as long as I gain some actual profits, I am content enough.

untakenusername@sh.itjust.works on 29 May 02:21 next collapse

The maker of Deep Seek made it so it would be easier for him to do stocks

I understood those people knew it was gonna mess with all the projections for the development of the US power grid, chip manufacturing and other data center related industries by being more efficient than anything else and they just made money off that.

JeremyHuntQW12@lemmy.world on 29 May 05:29 collapse

You don’t need AI to do that…

kokesh@lemmy.world on 28 May 15:10 next collapse

No shit

FourWaveforms@lemm.ee on 28 May 23:01 collapse

(picking up phone) Hello this is Sherlock speaking

SocialMediaRefugee@lemmy.world on 28 May 15:14 next collapse

I use it as a glorified manual. I’ll ask it about specific error codes and “how do I” requests. One problem I keep running into is I’ll tell it the exact OS version and app version I’m using and it will still give me commands that don’t work with that version. Sometimes I’ll tell it the commands don’t work and restate my parameters and it will loop around to its original response in a logic circle.

At least it doesn’t say “Never mind, I figured out the solution” like they do too often on Stack Exchange.

sugar_in_your_tea@sh.itjust.works on 28 May 15:29 next collapse

But when it works, it can save a lot of time.

I wanted to use a new codebase, but the documentation was weak and the examples focused on the fringe features instead of the style of simple use case I wanted. It’s a fairly popular project, but one most would set up once and forget about.

So I used an LLM to generate the code and it worked perfectly. I still needed to tweak it a little to fine tune some settings, but those were documented well so it wasn’t an issue. The tool saved me a couple hours of searching and fiddling.

Other times it’s next to useless, and it takes experience to know which tasks it’ll do well at and which it won’t. My coworker and I paired on a project, and while they fiddled with the LLM, I searched and I quickly realized we were going down a rabbit hole with no exit.

LLMs are a great tool, but they aren’t a panacea. Sometimes I need an LLM, sometimes Vim macros, sed or a language server. Get familiar with a lot of tools and pick the right one for the task.

UnderpantsWeevil@lemmy.world on 28 May 16:04 next collapse

But when it works, it can save a lot of time.

But we only need it because Google Search has been rotted out by the decision to shift from accuracy of results to time spent on the site, back in 2018. That, combined with an endlessly intrusive ad-model that tilts so far towards recency bias that you functionally can’t use it for historical lookups anymore.

LLMs are a great tool

They’re not. LLMs are a band-aid for a software ecosystem that does a poor job of laying out established solutions to historical problems. People are forced to constantly reinvent the wheel from one application to another, they’re forced to chase new languages from one decade to another, and they’re forced to adopt new technologies without an established best-practice for integration being laid out first.

The Move Fast And Break Things ideology has created a minefield of hazards in the modern development landscape. Software development is unnecessarily difficult and overly complex. Proprietary everything makes new technologies too expensive for lay users to adopt and too niche for big companies to ever find experienced talent to support.

LLMs are the breadcrumb trail that maybe, hopefully, might get you through the dark forest of 60 years of accumulated legacy code and novel technologies. They’re a patch on a patch on a patch, not a solution to the fundamental need for universally accessible open-sourced code and well-established best coding practices.

MangoCats@feddit.it on 28 May 20:57 next collapse

The problem with the open source best coding practices ivory tower is that it’s small, and short, and virtually lost in the sea of schlocky trees surrounding it.

sugar_in_your_tea@sh.itjust.works on 28 May 21:08 next collapse

we only need it because Google Search has been rotted out

Not entirely. AI can do a great job pulling data from multiple sources and condensing into an answer. So even if search was still good, instead of hitting several sites and putting together a solution, I can hit one.

reinvent the wheel

That depends on how you use it. I use it to find relevant, existing libraries and provide me w/ examples on how to use it. If anything, it gets me to reinvent the wheel less.

It can certainly be used naively to get exactly what you’re talking about, and that’s what’s going to happen w/ inexperienced users, such as college students. My point is that, like power tools, it can be a great tool in an experienced hand, and it can completely ruin the user if they’re inexperienced.

UnderpantsWeevil@lemmy.world on 28 May 21:28 collapse

AI can do a great job pulling data from multiple sources and condensing into an answer.

Google could already do that. The format of the answer came in the blurb under the link, pertinent to the search.

I use it to find relevant, existing libraries and provide me w/ examples on how to use it.

AI Code Tools Widely Hallucinate Packages

The tendency of code-generating large language models (LLMs) to produce completely fictitious package names in response to certain prompts is significantly more widespread than commonly recognized, a new study has shown.

sugar_in_your_tea@sh.itjust.works on 28 May 22:33 collapse

The format of the answer came in the blurb under the link

Sure, and that works really well if I just need a quick fact check. I use DDG and use that feature a ton.

But that doesn’t work when more context is needed, like in a comparison. I find myself clicking through and skimming a dozen pages, and with an LLM I end up only needing 3-4 pages after reading its summary to confirm what it said.

AI Code Tools Widely Hallucinate Packages

Sure, which is why I always verify things like that. I ask it to compare popular libraries that accomplish a task, then look for evidence that my preferred option does what I want (issues on the project page) and is actively maintained (recent commits, multiple active contributors, etc). The LLM is just there to narrow the search space and give me things to look for.

To do that with regular search would take a bit longer since I’d need to compare each library to each other to find relevant blogs and whatnot. So even if search worked better, it would still take longer.

Sometimes it breaks down and I go back to my old method, but it’s usually worth a shot.

I use LLMs a lot less than my coworkers, but I do use them periodically when I think it’ll be useful. I’ve been a dev for a long time (10+ years), so I find I usually know where to look already. I discourage our junior devs from relying on it too much and encourage our senior devs to give it a shot.

SocialMediaRefugee@lemmy.world on 29 May 03:38 collapse

People are forced to constantly reinvent the wheel from one application to another, they’re forced to chase new languages from one decade to another, and they’re forced to adopt new technologies without an established best-practice for integration being laid out first.

I feel this.

SocialMediaRefugee@lemmy.world on 28 May 17:55 collapse

Same here. I never tried it to write code before but I recently needed to mass convert some image files. I didn’t want to use some sketchy free app or pay for one for a single job. So I asked chatgpt to write me some python code to convert from X to Y, convert in place, and do all subdirectories. It worked right out of the box. I was pretty impressed.
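For scale, that kind of script is only a dozen lines of Python anyway. Here is a sketch of the same idea (the names, extensions, and structure are my own guesses, not the actual generated code; the pixel conversion itself is left to a callback, which in practice would be something like Pillow’s `Image.open(src).convert("RGB").save(dst)`):

```python
from pathlib import Path
from typing import Callable

def convert_tree(root: str, convert: Callable[[Path, Path], None],
                 src_ext: str = ".png", dst_ext: str = ".jpg") -> list[Path]:
    """Walk root and all of its subdirectories, converting every src_ext
    file to a dst_ext file written next to the original ("in place")."""
    converted = []
    for src in sorted(Path(root).rglob(f"*{src_ext}")):
        dst = src.with_suffix(dst_ext)
        convert(src, dst)  # e.g. Pillow: Image.open(src).convert("RGB").save(dst)
        converted.append(dst)
    return converted
```

That this is trivial to write by hand is part of why LLMs do so well at it: small, self-contained glue scripts are exactly the pattern their training data is full of.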

utopiah@lemmy.world on 28 May 18:52 collapse

May I introduce you to the wonderful world of open source instead?

sugar_in_your_tea@sh.itjust.works on 28 May 21:04 next collapse

That’s what LLMs largely pull from.

utopiah@lemmy.world on 29 May 04:12 collapse

Exactly, hence why being aware of provenance matters.

sugar_in_your_tea@sh.itjust.works on 29 May 13:49 collapse

And LLMs can help find those FOSS projects and fill in the gaps in their documentation.

I’m well aware of the copyright issues here and LLMs can make it easier to violate copyright, whether it’s protected by a proprietary or a FOSS license, but that’s up to the user of the LLM to decide where their boundaries are (and how much legal risk to accept). If you’re generating entire projects, you’ll probably have problems, but if you’re generating examples on how to accomplish a task with an existing tool, you’re probably fine.

LLMs are useful tools, but like any tool they can be misused. FOSS is great, LLMs are great, use both appropriately.

utopiah@lemmy.world on 29 May 15:41 collapse

Typically licensing isn’t a problem for LLMs with FOSS, as pretty much anything and everything is free to use, remix, etc.

What is more of a problem is hallucinations: imagine it suggesting the wrong rm -rf ~/ command without understanding the consequence, though arguably that’s hard to predict. What will always be a problem, no matter the model, is how much energy was put into it… so that, in the end, it makes the actual documentation and some issues on StackOverflow slightly more accessible because one can do semantic search rather than full-text search. Does one really need to run billion-parameter models in the cloud on a remote data center for that?

sugar_in_your_tea@sh.itjust.works on 29 May 17:45 collapse

pretty much anything and everything is free to use, remix, etc

Most licenses require attribution.

without understanding the consequence

This is the real problem. I’m arguing it’s a good tool in the hands of someone who knows what they’re doing.

utopiah@lemmy.world on 29 May 19:28 collapse

Despite the ecological costs?

sugar_in_your_tea@sh.itjust.works on 29 May 20:01 collapse

The ecological costs don’t need to be very high. We host our own LLM models at my company on a Mac Mini, which doesn’t use a ton of power and works pretty well.

utopiah@lemmy.world on 30 May 05:09 collapse

FWIW I did try a few LLMs locally too (cf my notes on the topic …benetou.fr/…/SelfHostingArtificialIntelligence) but AFAIK that is only the tip of the iceberg: the LLM has been trained beforehand, and that training is a significant part of the cost.

SocialMediaRefugee@lemmy.world on 29 May 03:33 collapse

I am aware of it but it doesn’t always exist for my exact needs or I don’t need an app for a one time job.

utopiah@lemmy.world on 29 May 04:14 collapse

The command line is precisely trying to address this, providing not isolated apps but commands that are flexible and can be stitched together so that most needs are covered. Think of it like Lego blocks made out of text that do stuff to your files.
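A tiny illustration of that Lego-block idea (my own example, not from the comment): four small commands, each doing one thing, piped together to answer a question none of them answers alone.

```shell
cd "$(mktemp -d)"                                        # scratch directory to play in
printf 'apple apple banana\napple cherry\n' > demo.txt   # sample data
# tr splits the text into one word per line, sort groups identical words,
# uniq -c counts each group, sort -rn ranks by count, head keeps the top five
cat ./*.txt | tr -s '[:space:]' '\n' | sort | uniq -c | sort -rn | head -5
```

Swap any block for another (grep instead of tr, wc instead of uniq) and you get a different tool, which is the whole point.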

If I can help, let me know.

SocialMediaRefugee@lemmy.world on 29 May 14:14 collapse

Ty, do you have a site I can read up on this and what is available?

utopiah@lemmy.world on 29 May 15:38 collapse

Depends on how you learn and what your goals are, but I can recommend:

… yet IMHO the real fun comes when you apply YOUR commands to YOUR files.

So yes, please do try in a safe sandbox first. Then, when you want to and aren’t rushed by a project, start a terminal right there from the comfort of your desktop and PLAY with your files (after doing a backup). Trust me, it won’t just be fun, it will be truly empowering.

When you get stuck, come back here and do ask.

Buddahriffic@lemmy.world on 28 May 16:33 next collapse

If it’s a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.

It doesn’t handle things that are in flux very well, or things that require very specific consistency. It’s a probabilistic model: it looks at existing tokens and predicts what the next one is most likely to be. So questions about a specific version of something might get a response specific to that version, or the model might weigh other tokens more heavily than the version, or it may even treat it all like pseudocode, where descriptive language plays a bigger role than what specifically exists.
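The “predict the next token” loop described above can be sketched in toy form. Here a hand-written bigram table stands in for the billions of learned weights of a real model; this is purely illustrative, not how any production LLM is implemented:

```python
import random

# Toy "model": for each token, a probability distribution over next tokens.
# A real LLM learns such distributions over a huge vocabulary and a long context.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 1.0},
}

def generate(start: str, max_steps: int, rng: random.Random) -> list[str]:
    """Repeatedly sample a plausible next token until no continuation is known."""
    out = [start]
    for _ in range(max_steps):
        dist = NEXT.get(out[-1])
        if not dist:  # no continuation known: stop
            break
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out
```

Seeded the same way it always emits the same sequence; change the seed and the output changes, which is one reason the same prompt can yield different answers from the same model.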

MangoCats@feddit.it on 28 May 20:53 collapse

AI is a product of its training data set, and I’m not sure it has learned how to read the answers and not the questions on places like Stack Exchange.

blady_blah@lemmy.world on 28 May 16:58 next collapse

The thing is… AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I’ve had very good results and consider it an excellent tool for learning.

For reference I’m an engineer with over 25 years of experience and I am considered an expert in my field.

anachrohack@lemmy.world on 28 May 18:13 next collapse

Same, I use it to put me down research paths. I don’t take anything it tells me at face value, but often it will introduce me to ideas in a particular field which I can then independently research by looking up on kagi.

Instead of saying “write me some code which will generate a series of caverns in a videogame”, I ask “what are 5 common procedural level generation algorithms, and give me a brief synopsis of them”, then I can take each one of those and look them up

REDACTED@infosec.pub on 28 May 18:47 next collapse

The article says stupid, not dumb. If I’m not mistaken, the difference is like being intelligent versus being smart. When you stop using the brain muscle that’s responsible for researching, digging through trash and a bunch of obscure websites for info, using critical thinking to filter and refine your results, etc., that muscle will atrophy.

You have essentially gone from being a researcher to being a reader.

blady_blah@lemmy.world on 28 May 19:20 next collapse

“digging thru trash and bunch of obscure websites for info, using critical thinking to filter and refine your results”

You’re highlighting a barrier to learning that in and of itself has no value. It’s like arguing that kids today should learn cursive because you had to and it exercises the brain! Don’t fool yourself into thinking that just because you did something one way that it’s the best way. The goal is to learn and find solutions to problems. Whatever tool allows you to get there the easiest is the best one.

Learning through textbooks and one way absorption of information is not an efficient way to learn. Having the ability to ask questions and challenge a teacher (in this case the AI), is a far superior way to learn IMHO.

REDACTED@infosec.pub on 28 May 20:43 next collapse

You’re highlighting a barrier to learning that in and of itself has no value.

It has no value as long as those tools are available to you. Like calculators, where nowadays everyone’s so used to them that people have become pretty bad at doing math in their heads. While this is indeed not an issue, since calculators are widely available to everyone, we’re not really talking about doing math here, but about using critical thinking, which is a very important skill in your daily life.

EDIT: Disclaimer: I’m an avid AI user and I’ve defended it here before, but I’m not about to start kidding myself that letting the AI analyze and think for me makes me more intelligent.

MangoCats@feddit.it on 28 May 20:51 collapse

Like calculators, where nowadays everyone’s so used to them that people have become pretty bad at doing math in their heads.

Were people ever very good at math in their heads?

There are those who have become calculator dependent who might not have if there were no calculators, but I’d say they’re a small middle ground. Some people are still good at math in their head, and even when they are, they should be using a calculator when it’s available to double check their math when it might be in question.

At the lower end of the scale, there are people who never would have been able to do math in head, but with calculator can do math all day without problem, except when they mis-key the question and have no idea that the answer is wrong, because they have no sense of math without the calculator.

MangoCats@feddit.it on 28 May 20:48 next collapse

The brain pathways used to control the fine-motor skills for cursive writing can doubtless be put to other uses.

JeremyHuntQW12@lemmy.world on 29 May 05:36 collapse

Why bother learning anything when you can get the answer in a fraction of a second ?

Lumiluz@slrpnk.net on 28 May 20:37 next collapse

By that logic probably shouldn’t use a search engine and you should go to a library to look things up manually in a book, like I did.

zzx@lemmy.world on 28 May 21:35 collapse

Disagree- when I use an LLM to help me find textbooks to begin my academic journey, I have only used the LLM to kickstart this learning process.

REDACTED@infosec.pub on 28 May 21:45 collapse

That’s not really what I was talking about. It would be closer to asking ChatGPT to make a summary of said books instead of reading them.

lemmy_outta_here@lemmy.world on 28 May 21:45 next collapse

I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn’t want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.

What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else’s). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.

It seems like a paradox, but learning “more, faster” might be worse in the long run.

JeremyHuntQW12@lemmy.world on 29 May 05:34 collapse

$100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book…

RampantParanoia2365@lemmy.world on 28 May 20:07 next collapse

How are you using new AI technology?

For porn, mostly.

I did have it create a few walking tours on a vacation recently, which was pretty neat.

finitebanjo@lemmy.world on 28 May 21:10 next collapse

Not me tho

FourWaveforms@lemm.ee on 28 May 22:59 next collapse

No it’s am not

untakenusername@sh.itjust.works on 29 May 02:19 collapse

Does Wikipedia rot my brain by the same logic? If it didn’t exist I would remember lots more historical and technical info, but instead I can just search for it