Ignoring the Lemmy hate, are programmers really using AI to be more efficient?
from bridgeenjoyer@sh.itjust.works to programming@programming.dev on 15 Aug 16:02
https://sh.itjust.works/post/44161444

I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which makes sense given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted to learn to code my whole life but can’t grasp it very well). But yes, a lot of the time it’s wrong.

#programming


Sicklad@lemmy.world on 15 Aug 16:07 next collapse

From my experience it’s great at doing things that have been done 1000x before (which makes sense given the training data), but when it comes to building something novel it really struggles, especially if there are 3rd party libraries involved that aren’t commonly used. So you end up spending a lot of time and money hand-holding it through things that likely would have been quicker to do yourself.

kewjo@lemmy.world on 15 Aug 18:17 next collapse

The 1000x-before bit has quite a few side effects as well.

  • Lesser-used languages suffer because there’s not enough training data. This gets annoying quickly when it overrides your static analysis tools and suggests nonsense.
  • Larger training sets contain more vulnerabilities, since most code is pretty terrible and may just be snippets that someone used once and threw away. OWASP has a top 10 for a reason. Take input validation, for example: if I’m working on parsing a string, there’s usually context, such as whether the data is trusted or untrusted. If I don’t have that mental model where I’m thinking about the data, I might see generated code and think it looks correct when in reality it’s doing something extremely nefarious (see the sketch below for the kind of check I mean).
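
A rough sketch of that kind of context-aware check; the field name, length limit, and pattern here are made up for illustration:

```python
# Hypothetical example: parse a user-supplied identifier differently depending on
# whether the data is trusted. The limits and pattern are invented for illustration.
import re

MAX_LEN = 64
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def parse_user_id(raw: str, trusted: bool) -> str:
    """Return a normalized id, rejecting anything suspicious from untrusted sources."""
    if trusted:
        # Data we generated ourselves: a light clean-up is enough.
        return raw.strip()
    # Untrusted input (query params, form fields, file uploads): validate strictly
    # before it goes anywhere near SQL, shell commands, templates, or file paths.
    if len(raw) > MAX_LEN:
        raise ValueError("input too long")
    candidate = raw.strip()
    if not ID_PATTERN.fullmatch(candidate):
        raise ValueError("input contains disallowed characters")
    return candidate
```

Generated code tends to look like the trusted branch even when the data is untrusted, and it reads as correct unless you’re holding that distinction in your head.
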
mesamunefire@piefed.social on 15 Aug 19:31 next collapse

It’s also trained on old stuff.

And because it’s old, you get some very strange side effects and less maintainability.

MagicShel@lemmy.zip on 15 Aug 23:09 collapse

It’s decent at reviewing its own code, especially if you give it different lenses to look through.

“Analyze this code and look for security vulnerabilities.” “Analyze this code and look for ways to reduce complexity.”

And then… think about the response like it’s a random dude online reviewing your code. A lot of the time it raises good issues, but sometimes it tries too hard to find little shit that is at best a sidegrade.

Eq0@literature.cafe on 15 Aug 16:18 collapse

The PyCharm AI integration completes each line. That’s very useful when you are repeating a well-known algorithm and not distracting when you are doing something unusual. So overall, for small things, AI is a speed-up. I haven’t tried asking ChatGPT for bigger code chunks; I haven’t had the greatest experience with it up to now and I don’t want to spend more time debugging than I already am.

ripcord@lemmy.world on 16 Aug 06:17 collapse

Oh man, the Codeium autocomplete in PyCharm has been just awful for me. It’s slow enough that it never comes up fast enough for me to expect it (and rarely comes up when I pause to wait for it), then goes away instantly when I invariably continue typing as it appears. Then it won’t come back if I backspace, erase the word and start retyping it, etc. And it sometimes competes with the old-school PyCharm autocomplete, which adds another layer of fun.

notarobot@lemmy.zip on 15 Aug 16:51 next collapse

Yes. But I’m not paying for premium like some of my coworkers. I use it to avoid the grunt work, and to avoid things I know I’d have to google.

I used a coworker’s account for a while and the autocomplete is amazing. If it guesses wrong, you just keep typing as usual. If it’s right, you hit tab and it saves you like 20 seconds.

On the other hand, I have coworkers who do not check the ChatGPT output and whose PRs make no sense. Example: instead of making a variable type any (which is forbidden in our codebase) they did

let a: Date | number | string | object | (…) = fetchData()

criss_cross@lemmy.world on 15 Aug 17:20 next collapse

From my experience it’s really great at bootstrapping new projects for you. It’s good at getting you sample files and, if you’re using Cursor, just building out a sample project.

It’s a decent alternative to Google/SO for syntax or previously encountered errors. There are a few things it hallucinates, but generally it can save time as long as you don’t trust it blindly.

It struggles when you give it complex tasks or not-straightforward items, or things that require a lot of domain knowledge. I once wanted to see which CSS classes were still in use across a handful of React components and it just shat the bed.

The people who champion AI as a human replacement will build a quick proof of concept with it and proclaim “oh shit this is awesome!” without realizing that that’s the easy part of software engineering.

AMillionMonkeys@lemmy.world on 15 Aug 17:24 next collapse

I’ve had good luck having it write simple scripts that I could easily handle myself. For example, I needed a script to chop a directory full of log files up into archives, with some constraints. That sort of thing.
I haven’t tried it on anything more substantial.
This was using Copilot because I haven’t found a good coding model that will run locally on 16GB VRAM.
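
The gist was something like the sketch below; the paths, the group-by-day rule, and the archive naming here are made up, not my actual constraints:

```python
# Rough sketch of the kind of log-archiving script meant above. The directory,
# the group-by-day rule, and the archive naming are hypothetical.
import tarfile
from collections import defaultdict
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")       # hypothetical location
ARCHIVE_DIR = LOG_DIR / "archive"

def archive_logs() -> None:
    ARCHIVE_DIR.mkdir(exist_ok=True)
    # Group *.log files by a date prefix in the filename, e.g. "2025-08-15-worker.log".
    groups: dict[str, list[Path]] = defaultdict(list)
    for log in LOG_DIR.glob("*.log"):
        groups[log.name[:10]].append(log)
    for day, files in groups.items():
        # Bundle each day's logs into one compressed archive, then delete the originals.
        with tarfile.open(ARCHIVE_DIR / f"{day}.tar.gz", "w:gz") as tar:
            for f in files:
                tar.add(f, arcname=f.name)
        for f in files:
            f.unlink()

if __name__ == "__main__":
    archive_logs()
```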

RheumatoidArthritis@mander.xyz on 15 Aug 17:41 next collapse

Programmers are promoted to architects who write high-level specs, with a subordinate (the AI) to do the legwork. I think the hate is because not everyone is good at planning and some people are better at perfecting implementation details, and AI isn’t helpful there.

FizzyOrange@programming.dev on 15 Aug 18:05 next collapse

Some definitely are. But I think a lot aren’t. Hell, a lot of programmers still don’t even use an IDE.

I don’t know why it would make you ill.

bridgeenjoyer@sh.itjust.works on 15 Aug 18:14 collapse

The same reason I won’t use autotune or Melodyne. It’s just gross to me.

FizzyOrange@programming.dev on 15 Aug 19:41 collapse

I don’t feel like it’s the same - autotune can make me more in tune than I could ever achieve. Current LLMs definitely can’t write better code than me, they can just do it faster.

dotslashme@infosec.pub on 15 Aug 18:33 next collapse

I use it sparingly and only to automate things I know how to do very well, so reviewing its work becomes easier.

JakenVeina@midwest.social on 15 Aug 18:56 next collapse

Definitely depends on the sub-sector of the industry you’re in. There’s no shortage of stories of people who swear by it, or who are having it forced on them by management.

Me personally, I’ve never wanted or been pressured to use it, and I work for a company with “AI” in the damn name. To be fair, though, this company was around doing general machine-learning stuff before the current LLM craze exploded. Also, I work with a small team that was only bought by this company a few years ago, and thus far has been allowed to remain practically independent. Also also, the business domain my team works in is finance and accounting, where there’s not much practical application for ML, and where you REALLY can’t afford to fuck around and find out with business data.

Naich@lemmings.world on 15 Aug 19:03 next collapse

You can either spend your time generating prompts, tweaking them until you get what you want, and then using more prompts to refine the code until you end up with something that does what you want…

or you can just fucking write it yourself. And there’s the bonus of understanding how it works.

AI is probably fine for generating boilerplate code or repetitive simple stuff, but personally I wouldn’t trust it any further than that.

MagicShel@lemmy.zip on 15 Aug 23:02 collapse

There is a middle ground. I have one prompt I use. I might tweak it a little for different technologies, languages, etc., only so I can fit more standards, documentation, and example code in the upload limit.

And I ask it questions rather than asking it to write code. I have it review my code, suggest other ways of doing something, have it explain best practices, ask it to evaluate the maintainability, conformance to corporate standards, etc.

Sometimes it takes me down a rabbit hole when I’m outside my experience (so does Google and stack overflow for what it’s worth), but if you’re executing a task you understand well on your own, it can help you do it faster and/or better.

Kissaki@programming.dev on 15 Aug 19:03 next collapse

I’ve been using phind as a technical-focused AI search engine, which is a great addition to my toolset.

I’m mindful about when to use it versus searching reference docs etc., not only in terms of the kind of search and answer I’m looking for but also the energy-consumption impact, but it’s definitely very useful. I’m a senior dev though, so I know what to expect, I’m able to assess plausibility, and phind provides sources I can inspect too.

As for code assistance, I find it plausible that it can be useful, even if from my personal experience I’m skeptical.

I watched a Microsoft talk from two devs, which was technically sound and plausible in that it was not just marketing; they talked about their experience, including the limits of AI, and where and to what degree they had to deal with hallucinations and cleanup. They talked about where they see usefulness in AI. They were both senior, able to assess plausibility, and able to make corrections where necessary. What I remember: they used it to bounce ideas back and forth, to do an implementation draft they then go over and complete, etc.

Microsoft can afford the investment of AI setup, sharing code with the model, AI instructions/meta-description setup, etc.

My personal experience was in using Copilot for Rust code, for Nushell plugins. I’m not very familiar with Rust, and it was very confusing, with a lot of hallucinations.

The PR descriptions CodeRabbit wrote were verbose and not useful for the smaller PRs I made. That was a while ago, though.

At work we have a voluntary work group exploring AI. The whole “generate your whole app” kind of thing seems plausible for UI prototypes, but nothing more. And for that it’s probably expensive.

I’m not sure how much the whole thing does or can do for efficiency. Seems situational - in terms of environment, setup, capabilities, and kind of work and approach.

pelya@lemmy.world on 15 Aug 19:15 next collapse

I’m okay with AI-powered autocomplete, or with an AI-powered mock project generator. Anything beyond that seems like management’s misguided attempt at having ~~more meetings~~ raising productivity.

I’m not using AI, and I rarely use an IDE, because ugh, the code editor is not fullscreen, and I don’t need a separate panel to navigate the project tree and edit makefiles; I can perfectly well use the shell for that, and I don’t even need to wiggle the mouse like some graphic designer to debug my code.

Kissaki@programming.dev on 15 Aug 22:24 collapse

I’ve found in-line completions/suggestions useful at times, but multi-line completions always irritating to the point that I disabled them completely. Much more often I want to read surrounding and following code, and not have it be pushed out of view, and rarely was it useful to me.

Of course, that may be largely the project and use case. (And quite limited experience with it.)

daniskarma@lemmy.dbzer0.com on 15 Aug 19:16 next collapse

It’s good at what it’s good at, and bad at what it’s bad at.

If you only use it for what it’s good at, I suppose it would be easy to be more productive. Sometimes it’s faster to ask an LLM than to surf through pages of SO “repeated question” threads to get an answer.

I mostly use it for things like that: questions, translation between languages (for instance, having some working code in one language that you want to quickly translate into another), and boilerplate for well-known algorithms and functions.

For full program development I’ve had no luck making it work. And trusting it to refactor all your code would be hilarious.

JoeKrogan@lemmy.world on 15 Aug 19:18 next collapse

I use it now and again, but not integrated into an IDE and not to write large bits of code.

My uses are like so

Rewrite this rant to shut the PO/PM up. Explain why this is a waste of time

Convert this Excel row into a custom model.

Given these tables, give me the SQL to do xyz.

Sometimes for troubleshooting an environment issue

Do I need it? No. But if it saves me some time on bullshit tasks, then that’s more time for me.

andyburke@fedia.io on 15 Aug 19:29 next collapse

My brother in programming,

please don't use AI for data format conversion when deterministic, energy-efficient means are readily available.

  • an old man.
JoeKrogan@lemmy.world on 15 Aug 19:36 next collapse

It was just an example to illustrate the point. I use specific converters for actual format conversions. Actual uses have been mapping data to a custom data model.

You are right though, right tool for the job and all that.

atzanteol@sh.itjust.works on 15 Aug 19:55 next collapse

🙄

MagicShel@lemmy.zip on 15 Aug 23:13 collapse

I’d never trust it to make the change but I recently asked for a Python script to do a change I needed and it did it perfectly first try (verified the diff).

Also I don’t know Python at all.

— a fellow old man

hono4kami@piefed.social on 15 Aug 21:35 collapse

Sorry, I agree with the other replier, but why would you use AI to convert to and from XML... when more objective, reliable, and deterministic tools have existed for a long time? You know how often LLMs make stuff up...
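
For instance, a rough sketch of the deterministic route with nothing but the Python standard library (the sample document and tag names here are invented):

```python
# Deterministic XML-to-dict conversion using only the standard library.
# The sample document and tag names are invented for illustration.
import json
import xml.etree.ElementTree as ET

SAMPLE = "<order id='42'><item sku='abc' qty='3'/><item sku='def' qty='1'/></order>"

def element_to_dict(elem: ET.Element) -> dict:
    """Recursively convert an element, its attributes, and its children into a dict."""
    node = {"tag": elem.tag, **elem.attrib}
    text = (elem.text or "").strip()
    if text:
        node["text"] = text
    children = [element_to_dict(child) for child in elem]
    if children:
        node["children"] = children
    return node

print(json.dumps(element_to_dict(ET.fromstring(SAMPLE)), indent=2))
```

Same input, same output, every time, and no made-up fields.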

JoeKrogan@lemmy.world on 15 Aug 22:26 collapse

Clarified my point in the reply above.

beejjorgensen@lemmy.sdf.org on 15 Aug 19:25 next collapse

I’m pretty sure every time you use AI for programming your brain atrophies a little, even if you’re just looking something up. There’s value in the struggle.

So it can definitely speed you up, but be careful how you use it. There’s no value in a programmer who can only blindly recite LLM output.

There’s a balance to be struck in there somewhere, and I’m still figuring it out.

0x1C3B00DA@piefed.social on 15 Aug 22:48 collapse

I'm pretty sure every time you use AI for programming your brain atrophies a little, even if you're just looking something up. There's value in the struggle.

I assume you were joking, but some studies have come out recently that found this is exactly what happens, and for more than just programming. (Sorry, it was a while ago so I don't have links.)

ripcord@lemmy.world on 16 Aug 04:11 collapse

Doesn’t sound like they’re joking to me.

shalafi@lemmy.world on 15 Aug 19:42 next collapse

Not a programmer, but I used it at my last job to get over humps where I was stuck on PowerShell scripts. AI can show you a path you didn’t know or hadn’t thought about. The developers seemed to be using it the same way. Great tool if you don’t completely lean on it and you know enough to judge the output.

bridgeenjoyer@sh.itjust.works on 15 Aug 20:06 next collapse

That’s the key: use it to learn, not to do your thinking.

MagicShel@lemmy.zip on 15 Aug 23:05 collapse

I find they excel at one-off scripts. Those are simple enough that every parameter and line of code fits in a small bit of memory. They are really bad at complex tasks, but they can help if you use them judiciously.

ripcord@lemmy.world on 16 Aug 06:13 collapse

I used ChatGPT to write some fairly straightforward bash scripts last week and it was mostly awful. I ended up massaging it enough to do what I needed, but I would have been better off just writing them myself and maybe asking it a couple of syntax questions (although the regex I needed was one of 8 things it stumbled over).

traches@sh.itjust.works on 15 Aug 19:56 next collapse

I used Supermaven (a Copilot competitor) for a while and it was sorta OK sometimes, but I turned it off when I realized I’d forgotten how to write a switch case. Autocomplete doesn’t know your intent, so it introduces a lot of noise that I prefer to do without.

I’ve been trying out Claude Code for a couple of months and I think I like it OK for some tasks. If you use it to do your typing rather than your thinking, then it’s pretty decent. Give it small tasks with detailed instructions and you generally get good results. The problem is that it’s most tempting to use when you don’t have the problem figured out and you’re hoping it will, but that’s when it gives you overconvoluted garbage. About half the time this garbage is more useful than starting from scratch.

It’s good at sorting out boilerplate and following explicit patterns that you’ve already created. It’s not good at inventing and implementing those patterns in the first place.

VoterFrog@lemmy.world on 15 Aug 21:12 next collapse

My favorite use is actually just to help me name stuff. Give it a short description of what the thing does and get a list of decent names. Refine if they’re all missing something.

Also useful for finding things quickly in generated documentation, by attaching the documentation as context. And I use it when trying to remember some of the more obscure syntax stuff.

As for coding assistants, they can help quickly fill in boilerplate or maybe autocomplete a line or two. I don’t use it for generating whole functions or anything larger.

So I get some nice marginal benefits out of it. I definitely like it. It’s got a ways to go before it replaces the programming part of my job, though.

sobchak@programming.dev on 15 Aug 21:56 next collapse

In the grand scheme of things, I think AI code generators make people less efficient. Some studies have come out that indicate this. I’ve tried to use various AI tools, as I do like fields of AI/ML in general, but they would end up hampering my work in various ways.

asm@programming.dev on 15 Aug 22:10 next collapse

I’m somewhat new to the field (~1.5 years), so my opinion doesn’t hold too much weight.

But in the embedded field I’ve found AI to not be as helpful as it seems to be for many others. The one BIG thing it has helped me with is that I can give it a datasheet and it’ll spit out all the register fields that I need, or help me quickly find information that I’m too lazy to Ctrl-F for, which saves a couple of minutes.
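
Think boring definitions along these lines (shown as Python constants; the peripheral, offsets, and field names are invented, not from a real datasheet):

```python
# Invented example of the kind of register/bit-field definitions meant above.
# The peripheral, offsets, and field names are made up, not from a real datasheet.
CTRL_REG    = 0x00           # control register offset
CTRL_EN     = 1 << 0         # peripheral enable bit
CTRL_MODE   = 0b11 << 1      # two-bit mode field
CTRL_IRQ_EN = 1 << 3         # interrupt enable bit

STATUS_REG  = 0x04           # status register offset
STATUS_RDY  = 1 << 0         # data-ready flag
STATUS_ERR  = 1 << 1         # error flag
```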

It has not proven its worth when it comes to the firmware itself. I’ve tried to get it to instantiate some peripheral instances and they never ended up working, no matter how I phrased the prompt or what context I’ve given it.

fubarx@lemmy.world on 15 Aug 22:21 next collapse

I use it mainly to tweak things I can’t be bothered to dig into, like Jekyll or WordPress templates. A few times I let it run and do a major refactor of some async back-end code. It botched the whole thing. Fortunately, it was easy to rewind everything from the remote git repo.

Last week I started a brand new project, thought I’d have it write the boilerplate starter code. Described in detail what I was looking for. It sat there for ten minutes saying ‘Thinking’ and nothing happened. Killed it and created it myself. This was with Cursor using Claude. I’ve noticed it’s gotten worse lately, maybe because of the increased costs.

Kolanaki@pawb.social on 15 Aug 23:07 next collapse

I don’t see how it could be more efficient to have AI generate something that you then have to review and make sure actually works, over just writing the code yourself, unless you don’t know enough to code it yourself and just accept the AI-generated code as-is without further review.

Quibblekrust@thelemmy.club on 16 Aug 06:02 collapse

I don’t see how it could be more efficient to have [a junior developer write] something that you then have to review and make sure actually works over just writing the code yourself…

patrick@lemmy.bestiver.se on 16 Aug 03:35 next collapse

My AI Skeptic Friends Are All Nuts - fly.io/blog/youre-all-nuts/

djmikeale@feddit.dk on 16 Aug 04:30 collapse

Not total bullshit, but it’s not great for all use cases:

For coding tasks the output looks good on the surface, but often I end up changing stuff, meaning it would have been faster to do it myself.

For coding I know little about (currently writing some GitHub Actions), it’s great at explaining alternatives and their pros and cons, giving me a rudimentary understanding of stuff.

I’ve also used it to transcribe tutorial screencasts, and then afterwards had a secondary LLM use the transcription to generate documentation (including in the prompt: “when relevant, generate examples, use markdown tables, generate PlantUML, etc.”).