Why Copilot is Making Programmers Worse at Programming (www.darrenhorrocks.co.uk)
from pnutzh4x0r@lemmy.ndlug.org to programming@programming.dev on 11 Sep 2024 17:21
https://lemmy.ndlug.org/post/1104646

Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real-time, saving developers hours of work. While these tools have obvious benefits in terms of productivity, there’s a growing concern that they may also have unintended consequences on the quality and skillset of programmers.

#programming

threaded - newest

refalo@programming.dev on 11 Sep 2024 17:23 next collapse

sfconservancy.org/GiveUpGitHub/

0x0@programming.dev on 11 Sep 2024 20:40 collapse

codeberg.org

ArtVandelay@lemmy.world on 11 Sep 2024 23:20 collapse

I migrated about 2 weeks ago and couldn’t be happier

Dunstabzugshaubitze@feddit.org on 11 Sep 2024 18:03 next collapse

I’ve seen enough programmers blindly copypasting code from stackoverflow and other forums without thinking and never understanding the thing they just “wrote”, to know that tools like copilot won’t make programmers worse, they will allow more people to be bad programmers.

people need to read more code, play around with it, break it and fix it to become better programmers.

Spzi@lemm.ee on 13 Sep 2024 16:58 collapse

Hehe, good point.

people need to read more code, play around with it, break it and fix it to become better programmers.

I think AI bots can help with that. It’s easier now to play around with code which you could not write by yourself, and quickly explore different approaches. And while you might shy away from asking your colleagues a noob question, ChatGPT will happily elaborate.

In the end, it’s just one more tool in the box. We need to learn when and how to use it wisely.

mox@lemmy.sdf.org on 11 Sep 2024 18:13 next collapse

What’s Copilot? ;)

thingsiplay@beehaw.org on 11 Sep 2024 18:55 next collapse

Copilot is a tool for programmers who don’t want to program.

RiikkaTheIcePrincess@pawb.social on 12 Sep 2024 03:06 collapse

I’ll never forget attending CS courses with a guy who got violently angry at having to write code. I assume he’s either thrilled with Copilot or in prison for attacking somebody over its failure to reliably write all of his code for him.

groucho@lemmy.sdf.org on 11 Sep 2024 20:06 next collapse

A thing that hallucinates uncompilable code but somehow convinces your boss it’s a necessary tool.

Kuinox@lemmy.world on 12 Sep 2024 08:57 collapse

An LLM that proposes autocompletion for whole lines/functions.
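To make that concrete, here’s a hypothetical sketch (in Python) of the kind of whole-function completion such a tool proposes; the function name and body are invented for illustration, and real suggestions vary with context:

```python
import re

# A developer types the signature and docstring...
def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    # ...and a Copilot-style tool proposes a body like the one below.
    # (Hypothetical suggestion, not actual Copilot output.)
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics to "-"
    return slug.strip("-")

print(slugify("Hello, World!"))  # hello-world
```

The point of the tool is that the developer only writes the first two lines; accepting or rejecting the proposed body is still their call.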

kureta@lemmy.ml on 13 Sep 2024 17:54 collapse

This is the right answer.

Kuinox@lemmy.world on 13 Sep 2024 22:50 collapse

Of course. I don’t understand why people think it’s “unnecessary”.
Do they never do exploratory work, or things they’re uncomfortable with?
It’s a tool; if I’m in a codebase I know well, it’s often pretty useless.
But I started writing some Python, and as a Python noob, Copilot is a gigantic productivity booster.

dinckelman@lemmy.world on 11 Sep 2024 18:57 next collapse

Anything that allows people to blindly and effortlessly get results inherently makes them more stupid. Your brain is like any muscle. You need to repeatedly use it for it to work well

DScratch@sh.itjust.works on 11 Sep 2024 20:08 collapse

I’ll bet people said the same thing when Intellisense started suggesting line completions.

And when errors were highlighted in the code rather than console output.

And when high-level languages started appearing.

JackGreenEarth@lemm.ee on 11 Sep 2024 20:15 next collapse

And they may have been right. But working code is usually the end goal, not proving you’re some better programmer. And useful tools may be used to help you reach that goal.

leisesprecher@feddit.org on 11 Sep 2024 20:26 next collapse

And when people started writing books instead of memorizing epic poems.

dinckelman@lemmy.world on 11 Sep 2024 22:07 next collapse

This really isn’t a good comparison at all. One gives you a list of choices you can make, and the other gives you a blind answer.

If seeing what argument types a function takes makes me a worse engineer, so be it, I guess

MajorHavoc@programming.dev on 12 Sep 2024 01:29 next collapse

I’ll bet people said the same thing when Intellisense started suggesting line completions.

They did.

And when errors were highlighted in the code rather than console output.

Yep.

And when high-level languages started appearing.

And yes.

That said, if you believed my mentors, we were barrelling towards a 2025 in which nothing running on software ever really worked reliably.

So they may have been grumpy, but they were also right, on that point.

vrighter@discuss.tchncs.de on 12 Sep 2024 03:34 collapse

I mean with the “move fast and break things” mentality of most companies nowadays, I’d say he was spot-on

u_tamtam@programming.dev on 12 Sep 2024 05:55 collapse

I’ll bet people said the same thing when Intellisense started suggesting line completions.

I’m sure many did, but I’m also pretty sure it’s easy to draw a line between code assistance and LLM-infused code generation.

thesmokingman@programming.dev on 11 Sep 2024 20:24 next collapse

I have heard the same rhetoric about IDEs, autocomplete (Intellisense, Jedi, etc.), DevOps, and frameworks. The kernel of truth across all of them is the separation between a dev and a good dev. It is getting easier and easier to have something built for you using AI, in your IDE, in a framework that abstracts all the things away, dumped into a prebuilt pipeline that deploys your artifacts for you. A dev can do that. A good dev understands the tools and knows when to dig into things.

I have yet to see a decrease in the number of good devs I meet even though IDEs slowly replaced text editors (and editors became strong enough to become IDEs). Frameworks have enabled more good devs to focus on business logic. DevOps provides solid guard rails for everything.

I don’t know if there’s an increase in the number of superficial devs. I haven’t interviewed junior dev candidates in a while. I do know the market is flooded right now, so I’d argue there might be other factors.

Also overall I do agree with the idea that letting copilot do everything for you means you don’t understand anything. Shit was the same way when cookbooks were common.

Kuinox@lemmy.world on 12 Sep 2024 08:59 next collapse

I browsed the author’s own codebase and the first thing I saw was 150 lines of C# reimplementing functions available in the .NET standard library.
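For a sense of what that pattern looks like, here’s an invented example (sketched in Python rather than C#): a hand-rolled helper next to the one-liner the standard library already provides.

```python
# Hand-rolled reimplementation of functionality the standard library
# already ships (hypothetical example of the pattern described above):
def join_with_commas(items):
    result = ""
    for i, item in enumerate(items):
        if i > 0:
            result += ", "
        result += item
    return result

# The standard-library equivalent is a single call:
print(join_with_commas(["a", "b", "c"]))  # a, b, c
print(", ".join(["a", "b", "c"]))         # a, b, c
```

Multiply that by a few helpers and you get to 150 lines of code the runtime already provides, plus the bugs the stdlib version already fixed.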

fuzzzerd@programming.dev on 12 Sep 2024 17:06 next collapse

Link? I’d like to see. Always amusing to see that kind of thing.

lysdexic@programming.dev on 14 Sep 2024 18:51 collapse

the first thing I saw was 150 lines of C# reimplementing functions available in the .NET standard library.

Once again: en.wikipedia.org/wiki/Dunning–Kruger_effect

fuzzzerd@programming.dev on 12 Sep 2024 17:08 collapse

There are a LOT of superficial devs out there. You don’t even have to be interviewing junior devs; plenty of them are out there at mid and senior levels. They existed before LLMs were spitting out code like they do today, and this will undoubtedly lower the bar for bad developers to enter. It remains to be seen whether this can help good developers in a meaningful way.

lysdexic@programming.dev on 14 Sep 2024 18:50 collapse

They existed before LLMs were spitting out code like they do today, and this will undoubtedly lower the bar for bad developers to enter.

If LLMs allow bad programmers to deliver work with good enough quality to pass themselves off as good programmers, this means LLMs are fantastic value for money.

Also worth noting: programmers do learn by analysing the output of LLMs, just as the programmers of old learned by reading someone else’s code.

fuzzzerd@programming.dev on 14 Sep 2024 21:00 collapse

I think I could have stated my opinion better. The total value of LLMs remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers. Is that good or bad? I don’t know. What average and excellent developers can do with LLM assistance is less clear. Certainly it can help those developers in some situations.

lysdexic@programming.dev on 15 Sep 2024 09:50 collapse

I think I could have stated my opinion better. The total value of LLMs remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers.

This is a baseless assertion from your end, and a purely personal one.

My anecdotal evidence is that the best software engineers I know use these tools extensively to get rid of churn and drudge work, and they apply it anywhere and everywhere they can.

fuzzzerd@programming.dev on 15 Sep 2024 13:52 collapse

I don’t disagree. Good developers use the tools to do better, but for already-competent developers the improvements are incremental, not revolutionary.

beeng@discuss.tchncs.de on 12 Sep 2024 05:49 next collapse

You write machine code?

No, you only describe what you want the compiler to write in machine code.

With copilot it’s still a description.

magic_smoke@links.hackliberty.org on 12 Sep 2024 09:46 next collapse

Sure, but you’re also giving it direct instructions, which it will follow every time, to the T, based on predetermined logic.

That is nowhere near how an LLM works. Furthermore, most programming languages require effort to learn. They might not be machine language, or even assembly, but they’re still a skill you actually have to learn beyond speaking your native tongue.

Also one could make the argument that machine code is a “description” of what you want the CPU to do.

beeng@discuss.tchncs.de on 12 Sep 2024 10:43 collapse

The skill beyond your native tongue is knowing what a DB does and how to describe what your app does. Aka being a designer, with a design language. Good luck getting an LLM to do what you want with no domain-specific language.

“No, no, not like that, I meant bigger…”

ulkesh@beehaw.org on 12 Sep 2024 15:10 next collapse

And many programmers write some pretty stupid and horrible descriptions. LLMs don’t solve this, they just allow lazy programmers to be even lazier.

beeng@discuss.tchncs.de on 12 Sep 2024 16:05 collapse

Anybody that doesn’t write binary is lazy, said the compiler.

ulkesh@beehaw.org on 12 Sep 2024 17:53 collapse

I don’t even know how to respond to this. It makes no sense at all and doesn’t really relate to or respond to my comment except it happens to use the word “lazy”, I’m guessing in reference to my comment. Good luck trying to push LLMs, not sure what your agenda really is, other than to be argumentative here. Peace.

Snarwin@fedia.io on 12 Sep 2024 16:26 next collapse

If the compiler produces a program that doesn't match your description, you can debug the compiler. Can you debug an LLM?

[deleted] on 12 Sep 2024 17:33 next collapse

.

beeng@discuss.tchncs.de on 12 Sep 2024 17:46 collapse

Why wouldn’t a compiled program match your description (code)? Because the compiler is broken?? Compiled programs always match their description (code).

So more likely your translation from idea to function is wrong.

Re-read your description and step through it slowly: what did you assume that was wrong, or where did you add a mistake or typo? Sounds like I can do this in natural language or in Rust.

You can say that LLMs are not deterministic in what they produce, but that’s got nothing to do with making a programmer worse at their job.

If you can’t translate your idea into function and test its output to be what you want, then you are a bad programmer.

firelizzard@programming.dev on 12 Sep 2024 16:56 collapse

Copilot frequently produces results that need to be fixed. Compilers don’t do that. Anyone who uses copilot to generate code without understanding how that code works is a shit developer. The same is true of anyone who copies from stack overflow/etc without understanding what they’re copying.

beeng@discuss.tchncs.de on 12 Sep 2024 17:31 collapse

You’re missing the point. If the program doesn’t do what it’s meant to, it’s YOU who didn’t use the tools between you and the metal correctly. LLM involved or not, it comes down to how you’ve described it, in whatever ‘language’ you chose (natural or Rust).

firelizzard@programming.dev on 15 Sep 2024 17:53 collapse

The key difference is that compilers don’t fuck up, outside of the very rare compiler bug. LLMs do fuck up, quite often.

BatmanAoD@programming.dev on 12 Sep 2024 15:00 next collapse

I was hoping this might start with some actual evidence that programmers are in fact getting worse. Nope, just a single sentence mentioning “growing concern”, followed by paragraphs and paragraphs of pontification.

fuzzzerd@programming.dev on 12 Sep 2024 16:53 next collapse

Welcome to the Internet. Pontification is all we’ve got. Now we’ve got LLMs regurgitating the old pontifications to make new ones.

I came in with your same expectations and found the same shit. Just some opinion formed on the basis of “concern”.

trolololol@lemmy.world on 13 Sep 2024 01:25 next collapse

Thx for saving me a click. We are full of opinions and nobody has data. Downvoting the post.

nebeker@programming.dev on 13 Sep 2024 12:56 next collapse

We’ve all read this post multiple times. Isn’t it just the “young people are lazy” trope that’s been going around for thousands of years?

historyhustle.com/2500-years-of-people-complainin…

At most it’s a tangent on it…

pixeltree@lemmy.blahaj.zone on 13 Sep 2024 16:35 next collapse

I don’t think it’s making devs worse; however, I do think it’s significantly lowering the bar to entry, to the point where people who don’t have enough knowledge to actually do the job well are becoming increasingly common. Theoretically they should get weeded out by a good interview process, but corporate be corporate.

Not that my opinion is worth anything, it’s not like I have anything to back it up.

Please disregard any takes I may have

BatmanAoD@programming.dev on 14 Sep 2024 02:14 collapse

I mean, at least you acknowledge that you’re presenting an opinion. This blog post just tries to gloss over the fact that it’s pure speculation.

Fades@lemmy.world on 18 Sep 2024 05:51 collapse

Exactly what I suspected. How could you even truly prove such a thing

BatmanAoD@programming.dev on 20 Sep 2024 20:50 collapse

It’s probably not “provable” one way or the other, but I’d like to see more empirical studies in general within the software industry, and this seems like a fruitful subject for that.

dsilverz@thelemmy.club on 12 Sep 2024 17:03 next collapse

I’m a dev with 10+ (cumulative) years of experience. While I’ve never used GitHub Copilot specifically, I’ve been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analysing my human-written poetry to get insights for my own writing. I’ve also done the same for code I wrote, asking LLMs to “analyze and comment” it for the sake of insights. There were moments when I asked for code snippets, and almost every snippet they generated either worked as-is or needed only a few fixes.

They’ve been getting good at this, but not good enough to really replace my own coding and analysis. Instead, they’re getting genuinely better at poetry (maybe because their training data is mostly books and poetry) and sentiment analysis. I use many LLMs simultaneously in order to compare them:

  • The free version of Google Gemini is becoming lazy (short answers, superficial analysis, trouble keeping context, drafts less diverse than they used to be, among other problems)
  • The free version of ChatGPT is a bit better (keeps context, gives detailed answers) but still not enough (it hallucinates sometimes: fine for surrealist poetry, bad for code and other technical matters where precision and coherence matter)
  • Claude is laughably hypersensitive, self-censoring on certain words regardless of context (got code or text that merely mentions the word “explode”, as in PHP’s explode function? “Sorry, can’t comment on texts alluding to dangerous practices such as involving explosives”. I mean, WHAT?!?!)
  • Bing Copilot has web search, but a context limit of 5 messages, so it’s only usable for quick, short things
  • The same goes for Perplexity
  • Mixtral is very hallucination-prone (i.e. does not properly cohere)
  • Llama has been the best of all (via DDG’s “AI Chat” feature), although it sometimes glitches (i.e. starts outputting repeated strings ad aeternum)
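(For context on the “explode” point above: PHP’s `explode` is an ordinary string-splitting function, roughly equivalent to Python’s `str.split`, with nothing explosive about it. A Python sketch of the same operation:)

```python
# PHP's explode(",", "name,email,age") splits a string on a delimiter,
# much like Python's str.split:
fields = "name,email,age".split(",")
print(fields)  # ['name', 'email', 'age']
```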

As you can see, I’ve tried almost all of them. In summary, while it’s good to have such tools, they should never replace human intelligence… or, at least, they shouldn’t…

Problem is, dev companies generally focus on “efficiency” over “efficacy”, demanding the shortest deadlines while expecting something close to perfection. Understandable demands, but humans are humans, not robots. We need time to deliver, and we need to walk cautiously through all the steps before finally deploying something (especially big things), or it becomes XGH programming (Extreme Go Horse). And machines can’t do that perfectly yet. For now, LLM-driven development is XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).

lysdexic@programming.dev on 14 Sep 2024 18:47 collapse

Claude is laughably hypersensitive, self-censoring on certain words regardless of context (…)

That’s not a problem, nor Claude’s main problem.

Claude’s main problem is that it is frequently down, unreliable, and extremely buggy. Overall I think it might be better than ChatGPT and Copilot, but it’s simply so unstable it becomes unusable.

Phegan@lemmy.world on 13 Sep 2024 14:48 collapse

As someone who thinks we are in an AI bubble about to burst, this article has “old man angry at the kids using new technology” vibes.

lysdexic@programming.dev on 14 Sep 2024 18:44 collapse

I agree. Those who make bold claims like “AI is making programmers worse” have neither any first-hand experience with AI tools nor any contact with how programmers are using them in their day-to-day work.

Let’s think about this for a second: one feature of GitHub Copilot is the /explain command, which puts together a synthetic description of what a codebase does. Please, someone tell me how a programmer gets worse at their job by having a tool that helps them understand any codebase anywhere.

Big_Boss_77@lemmynsfw.com on 15 Sep 2024 03:13 collapse

I honestly wonder if they’re trying to imply that, by virtue of digging up the info yourself, you’re somehow better for it… some real boomer shit