Vibe coding has turned senior devs into ‘AI babysitters,’ but they say it’s worth it | TechCrunch (techcrunch.com)
from kinther@lemmy.world to technology@lemmy.world on 15 Sep 00:41
https://lemmy.world/post/35931768

#technology

threaded - newest

Sxan@piefed.zip on 15 Sep 01:05 next collapse

Uh huh. And independent studies show vibe coders believe þey're more efficient, but þey're actually less efficient.

This sounds like hell to me. Bug fixing and digging þrough someone else's shitty code is þe worst part of software development, and vibe coding maximizes it?

I don't care how many propaganda pieces AI companies pay to have written about vibe coding, it's still shit which makes projects worse, and developers' jobs worse.

123@programming.dev on 15 Sep 04:27 next collapse

Not related to the topic at hand but interestingly (?) I’ve gotten used to your weird “th” as 1 character. I could read the entire thing without noticing it. I wonder if others have started to do the same since the upvote ratio seems better than what I remember it being before when people always questioned the usage.

TheRedSpade@lemmy.world on 15 Sep 05:19 next collapse

I originally saw it as somebody using 2 different symbols for the 2 different sounds that “th” can make. That at least makes sense. Simply replacing the letters with one character (one not on a standard keyboard, btw) regardless of which sound they make is just extra effort for the sake of it.

lennivelkant@discuss.tchncs.de on 15 Sep 05:53 collapse

Nerds doing something unnecessarily complicated for the fun of it? I’m not particularly surprised.

123@programming.dev on 15 Sep 09:33 collapse

I think there was some mention of poisoning the AI crawlers or at least confusing them/requiring special handling as a possible side effect, so I stopped caring, but yes it can be somewhat of an annoyance until you remember that it’s basically just a contraction.

lennivelkant@discuss.tchncs.de on 15 Sep 10:20 collapse

If true, that’s an intent I can get behind. But even if it isn’t, given my own inclination towards contrived shenanigans to scratch some weird itch in my brain, I’ve come to accept such things as harmless quirks and treat them with the same patience I’d want others to treat my own with.

And every now and ðen, I try someþing myself and realise what fun it can be ;-)

Sxan@piefed.zip on 15 Sep 12:26 collapse

what fun it can be

Pegged it in one. I'd hope þat's þe main reason most of us are here, after all.

Sxan@piefed.zip on 15 Sep 12:24 collapse

Not related to your point, but interestingly I have vote ratios turned off - mainly because every client I've tried has þem off by default. I assumed it was just Reddit refugees who paid attention to þose, because votes have value on Reddit and it's conditioned behavior. Now I wonder what percentage of FediVerse users do pay attention to votes.

It is interesting þat you've gotten used to it. We must overlap in a lot of communities.

Tollana1234567@lemmy.today on 15 Sep 06:36 next collapse

Trying to cope with being a money sink rather than profitable.

hisao@ani.social on 15 Sep 07:16 collapse

how many propaganda pieces AI companies pay to have written about vibe coding,

Imo there’s orders of magnitude more anti-AI propaganda and stigma than pro-AI. If you’re ok with AI, it’s very dangerous to admit that in a professional setting IRL; you have to use careful language and a lot of conditionals.

shaiatan@midwest.social on 15 Sep 01:07 next collapse

I’m well aware the plural of “anecdote” isn’t “data”, but literally no dev I know (senior or otherwise) thinks this. Give me a junior to work with - most of them at least actually learn.

ThePowerOfGeek@lemmy.world on 15 Sep 01:45 next collapse

Amen. I’ve tried the vibe coding thing but it’s frustrating because a) too often the AI output has some profound problems and it gets annoying ‘babysitting’ it; and b) I usually prefer the challenge of figuring out syntax and implementation issues myself.

If something is taking too long I’ll ask the LLM. But I feel like if I do this too much my skill set will atrophy and I’ll lose my sharpness. So it’s a balancing act.

But this brings up another wider question: where is the line between “occasionally getting AI help” and “vibe coding”? Surely it’s subjective.

pheonixdown@sh.itjust.works on 15 Sep 02:02 next collapse

I’d say use cases like mundane-but-time-consuming work, pointed inquiries, or interactive rubber ducking are all getting AI help. Offloading a design where you don’t have a clear understanding of how it should be done is vibing.

MagicShel@lemmy.zip on 15 Sep 03:22 next collapse

I don’t think the two cross, really. A vibe coder asks for a bunch of features and then starts refining the output, fixing bugs and adding features. A developer knows the specific architecture and from years of writing tasks knows how to break work into manageable chunks and uses AI to implement something they have already defined and know where it fits in. The skill to write a good story isn’t far off from writing a good prompt.

I use AI all the time, and every time I hear someone describing vibe coding it makes my skin crawl.

Buffalobuffalo@reddthat.com on 15 Sep 03:51 collapse

The definition may have changed, but I feel like originally it was only vibe coding when the “dev” did not know what they were doing - when someone with little to no programming background is able to build an app on “vibes” alone.

webghost0101@sopuli.xyz on 15 Sep 05:17 next collapse

My first interpretation of vibe coding was coding for fun and personal enjoyment without worrying about industry standards or deployability. More often seen with self-taught youth.

I feel like I had been doing this for years before AI became a thing.

dis_honestfamiliar@lemmy.sdf.org on 15 Sep 05:28 next collapse

Well then, I’ve been doing this all wrong.

Serinus@lemmy.world on 15 Sep 06:27 collapse

Also applies when the dev could know what they’re doing, but just doesn’t care to.

criss_cross@lemmy.world on 15 Sep 13:51 collapse

Yeah, it’s the same skill set I use with junior devs, except I don’t have the hope that AI will grow out of its bad habits.

themeatbridge@lemmy.world on 15 Sep 01:18 next collapse

Senior devs love vibe coding because they have the knowledge and skills to recognize and fix errors. They hate it because it makes morons think they don’t need the knowledge and skills to recognize and fix errors.

muntedcrocodile@hilariouschaos.com on 15 Sep 01:50 next collapse

Yep this

5C5C5C@programming.dev on 15 Sep 04:37 collapse

As a senior dev I hate vibe coding. I can write code an order of magnitude faster than I can review it, because reviewing code forces you to piece together a mental model for something made by someone else, whereas when I write the code myself I get to start with the mental model already in my head.

Writing code is never the bottleneck for me. If I understand the problem well enough to write a prompt for an LLM, then I understand the problem well enough to write the code for it.

Serinus@lemmy.world on 15 Sep 06:29 next collapse

I understand how to turn the results of a select statement into an update statement, but the AI does it a hell of a lot faster.

I find if you give it small enough chunks, it’s easy enough to review. And even if you do have to correct, it’s generally easier to correct than it would be to write it all by hand.
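
For a concrete idea of the kind of chunk I mean, here's a minimal sketch using Python's sqlite3 (the products table and the price tweak are invented examples, not anything real):

    import sqlite3

    conn = sqlite3.connect("example.db")  # hypothetical database

    # The SELECT I'd start from: find the rows I care about.
    rows = conn.execute(
        "SELECT id, price FROM products WHERE category = ?", ("books",)
    ).fetchall()

    # The UPDATE I'd ask the AI to derive: same filter, new values.
    with conn:
        conn.execute(
            "UPDATE products SET price = price * 1.1 WHERE category = ?",
            ("books",),
        )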

5C5C5C@programming.dev on 15 Sep 15:01 collapse

Outside of my own specialty I can see people in the software industry bogged down by managing excessive boilerplate. I think this happens most often in web dev and data science.

In my opinion this is an indication that the software tools for those ecosystems need improvement, but rather than putting in the design effort to improve the tools in the ecosystem, these Big Data companies see an opportunity to just throw LLMs at it and call it a commercial product.

KryptonNerd@slrpnk.net on 15 Sep 10:08 collapse

I’m a junior and even I feel the same way; reading and understanding someone else’s code not only takes me longer but is far less rewarding than just writing it myself. There’s also the issue, as a junior, that if I read AI code with issues I maybe don’t notice or recognise, but that compiles fine, it could teach or reinforce poor practices that I may then put into my own work.

BlameTheAntifa@lemmy.world on 15 Sep 06:03 next collapse

If you are wondering how it could possibly be “worth it”, the end of the article has this:

The Fastly survey found that senior developers were twice as likely to put AI-generated code into production compared to junior developers, saying that the technology helped them work faster.

So vibes. Vibe coding is “worth it” because people got good vibes.

The research shows that - while engineers think AI makes them about 20% more productive - it actually causes an approximate 20% slow-down.

AI cannot use logic or reason. Everything it outputs is a hallucination, even if it’s sometimes accurate. You cannot trust anything it outputs.

victorz@lemmy.world on 15 Sep 06:21 next collapse

But surely you test the code and review it, right? That’s how you reinstate trust in what it outputs?

Disclaimer: I’ve never used AI to code, not even copilot.

BlameTheAntifa@lemmy.world on 15 Sep 06:26 next collapse

You mean rewrite it all from scratch? If you have any kind of standards, that is what you end up doing. If you know what you’re doing, you do it right the first time and move on. Using AI for coding is like trying to babysit the most inept, inexperienced intern to ever walk the earth. It wastes time and the end result is far worse.

victorz@lemmy.world on 15 Sep 18:23 collapse

That’s what I’m afraid of, and it doesn’t seem like employers are aware of this in general. Irks me especially as a consultant.

Serinus@lemmy.world on 15 Sep 06:35 next collapse

It’ll sometimes do dumb and/or redundant or too complicated shit. Pile up a couple of those and your codebase can get unmaintainable fast.

I find if you give it small chunks and keep an eye on it, it’s great.

I think one of my recent prompts was “Create a procedure that creates an example configuration file with placeholder values. If a config file doesn’t exist on start, give a warning and create the example config.”
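
The sort of thing that prompt produces looks roughly like this sketch (the file name and keys here are placeholders I made up, not its actual output):

    import os
    import sys

    CONFIG_PATH = "config.ini"  # placeholder path

    EXAMPLE_CONFIG = """\
    [server]
    host = CHANGE_ME
    port = 8080
    """

    def ensure_config():
        """Warn and create an example config if none exists on start."""
        if not os.path.exists(CONFIG_PATH):
            print(f"warning: {CONFIG_PATH} not found; creating an example "
                  "config with placeholder values", file=sys.stderr)
            with open(CONFIG_PATH, "w") as f:
                f.write(EXAMPLE_CONFIG)

    if __name__ == "__main__":
        ensure_config()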

It also works great as a replacement for an ORM.

CXORA@aussie.zone on 15 Sep 10:19 collapse

Based on my coworkers… no.

They get the AI to write the code, and the tests.

Then hand it over to me to review and test.

It’s all overly verbose, does things that are not required or desirable, and insists on re-writing existing code to match its own style.

I hate it passionately.

victorz@lemmy.world on 15 Sep 17:30 collapse

Damn. 😢

Sounds awful. I would just reject these PRs, dude. Tell them that AI is good for scaffolding and creating a draft, but you gotta maintain the human quality assurance, and that’s not your job, it’s theirs.

hisao@ani.social on 15 Sep 07:08 next collapse

senior developers were twice as likely to put AI-generated code into production compared to junior developers, saying that the technology helped them work faster

Perhaps senior devs are more likely to use more granular, step-by-step, controlled prompting: asking it to write specific functions in specific ways and to follow specific approaches and conventions, instead of just “do me an app, robot bro”.

Bababasti@feddit.org on 15 Sep 07:36 collapse

That’s actually how I am using AI for my work (web dev, pls don’t hate me). If I am stuck or have some tiny function missing for a task, I ask the AI and check its output - if it’s garbage I continue on my own again, or if it’s usable I review the output and continue from there. Also, I think AI can be neat for „rubberducking“ when I am debugging some stupid shit; it can point me in directions I haven’t looked before.

fuzzzerd@programming.dev on 15 Sep 13:35 collapse

Similar to how I have found success with it. Is it revolutionary? No, not at all. But it’s a variably sized incremental tool (big for some uses, nonexistent for others) that requires a new skill set to use effectively.

Mix in all of the hype and it’s easy to see why people are confused and why some get different results.

Technus@lemmy.zip on 15 Sep 09:11 next collapse

I’m a senior dev and I want nothing to do with AI. By the time I understand what I want well enough to describe it in a complete sentence or paragraph, I can just write the fucking code myself. I figure it out as I go.

The whole point of having devs under you is to be able to trust them to get the job done and do it right. You want to be able to delegate tasks to them and not have to peek over their shoulder every five fucking minutes to be certain they’re not making a mess of things.

I seriously doubt AI will ever be able to replace that. Not until they figure out how to make it afraid of fucking up.

slevinkelevra@sh.itjust.works on 15 Sep 10:46 collapse

As a senior dev I have found AI useful for auto completion (where you see beforehand what it wants to write directly in Visual Studio) and code analysis (as it does find some bugs and can give good hints for code structure). I would never trust it with anything even remotely complex though.

It kinda scares me that people trust it enough for “agent mode”, as giving it access to change stuff directly has, simply put, never worked.

kiku@feddit.org on 15 Sep 14:41 collapse

Yes. It’s extremely helpful when I’m doing a refactor and can just go TAB TAB TAB TAB Oops not that TAB TAB done. Saves me a lot of time with the boilerplate, but is very bad at the logic portions.

slevinkelevra@sh.itjust.works on 15 Sep 20:10 collapse

You do refactoring with auto complete?

rozodru@piefed.social on 15 Sep 11:05 next collapse

As someone right there in the trenches getting hired specifically to clean the slop up, I don't buy this survey at all, and I'd be very suspicious of any "senior dev" that participated in it cause...where are they? I'm not seeing them when I go into my clients' offices, because they all got axed. I do see a lot of junior prompt monkeys though.

BrianTheeBiscuiteer@lemmy.world on 15 Sep 11:31 next collapse

If I try to get it to do more than predict the next two lines of code it’s gonna fuck something up. A nervously laughable thing I saw at work was someone using a long spec file to generate a series of other files and getting high praise for it. It was the equivalent of mustache templates but slower and with a 30% chance of spitting out garbage. There was also no way to verify if you were in that 30% zone without looking through the dozens of files it made.
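
For contrast, the deterministic version of that job is about ten lines (the spec format and output names here are invented for illustration):

    import json
    from string import Template

    # Hypothetical spec: a JSON list of {"name": ..., "port": ...} entries.
    with open("spec.json") as f:
        services = json.load(f)

    # The same fill-in-the-blanks work a mustache template does,
    # with zero chance of the renderer spitting out garbage.
    line = Template("service $name listens on port $port\n")
    for svc in services:
        with open(f"{svc['name']}.conf", "w") as out:
            out.write(line.substitute(name=svc["name"], port=svc["port"]))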

gandalf_der_12te@discuss.tchncs.de on 15 Sep 21:23 collapse

The research shows that - while engineers think AI makes them about 20% more productive - it actually causes an approximate 20% slow-down.

AI cannot use logic or reason. Everything it outputs is a hallucination, even if it’s sometimes accurate. You cannot trust anything it outputs.

Research shows that - while people think having more people in the household gets the housework done faster - babies actually cause an approximate 100% increase in time spent on housework.

Children cannot use logic or reason. Everything they output is babbling, even if it sometimes resembles actual words. You cannot trust anything they say. Parents are stupid for having them. (/s)


Developers see AI as a “child” that might need many years to grow up, but it’s still worth all the trouble they go through. It’s an emotional choice, not a rational one.

Simulation6@sopuli.xyz on 15 Sep 09:22 next collapse

I have never tried to use AI to develop software, just looked at the output that sometimes shows up in google searches. Noises are starting to come from on-high about an AI ‘push’, so I may need to show some basic awareness. Any suggestions on how to get started or should I just ask the AI?

percent@infosec.pub on 15 Sep 11:03 next collapse

I’d suggest Cursor. I was somewhat anti-AI-coding until my job encouraged it, and Cursor (using Claude 4 Sonnet) gave me that “ohh, now I get it” moment.

It’s still plenty capable of generating bad code, so it can take a bit of practice to get a feel for how to use it productively.

thirteene@lemmy.world on 15 Sep 12:13 next collapse

I’ve been using Copilot. Potential is there, but getting a result is more art than science. I’ve found it helpful to document desired workflows in readmes and ask for unit tests, then run the unit tests until it works out.

  • Use a premium model like Sonnet and put it in agent mode
  • Ask it to review the project
  • Ask it to review the ticket/requirements
  • Ask it to research existing solutions and write a design document that meets the requirements with high certainty
  • Let it write the document and make sure it stays on task
  • Review the output and send build errors back; roll forward or undo the code and re-submit
  • Identify what works and reduce scope

cevn@lemmy.world on 15 Sep 12:45 collapse

I will say Claude Code may be at the forefront of AI coding assistants. It runs in your terminal. Try loading it on one of your side projects and see what you can accomplish.

Ghostie21@lemmy.world on 15 Sep 13:47 collapse

Is there a difference between Claude in the VS Code extension and Claude Code? I mostly use chat mode but will sometimes try agent mode, and neither really makes me happy. I’d say if a task could be given to a high school programmer, the AI agents can do it about 30% of the time.

cevn@lemmy.world on 15 Sep 15:30 collapse

I feel like the experience is different and it feels more integrated with the project than simply running a Claude model with Cursor, which is a VS Code fork. Right now I have it working on a long-running CLI app task in Rust, and it’s been implementing feature after feature consistently.

drmoose@lemmy.world on 15 Sep 13:50 next collapse

This topic is always twisted and based on some random bait surveys. Yes, I’d commit AI code, but mostly because that code does a test or implements some one-off function that I read through anyway.

Do I enjoy babysitting AI? Eh, it’s a mixed bag. It’s great for writing tests and boilerplate and bootstrapping you into real solutions, but I dread any codebase that claims to be mostly written by Claude Code. The AI is still incredibly stupid.

I think rubber ducking is really the best feature of AI. I’ve been working remotely for over 20 years now and it’s such a game changer just to bounce ideas and architecture designs off a chatbot. This feature should be revolutionary enough without the need for independent agents.

criss_cross@lemmy.world on 15 Sep 13:52 next collapse

This feels like one of those paid fluff pieces companies put out so that smaller ones feel like they’re “missing out”

wulrus@lemmy.world on 15 Sep 15:22 next collapse

Currently, I write all production code at work without any AI assistance. But to keep up with things, I do my own projects.

Main observation: When I use it (Claude Code + IDE-assistant) like a fancy code completion, it can save a lot of time. But: It must be in my own area of expertise, so I could do it myself just as well, only slower. It makes a mistake about 10-20% of the time, most of them not obvious ones like compile errors, so it would turn the project into a disaster over time. Still, it seems like a senior developer could be about 50-100% more productive in the heat of the implementation phase. The most important job is to say “STOP” when it’s about to do nonsense. The resulting code is pretty much exactly how I would have done it, and it saved time.

I also tried “vibe coding” by using languages and technologies that I have no experience with. It resulted in seemingly working programs, e.g. to extract and sort photos from an outdated data file format, or to parse nice statistics out of 1000 lines of annual private bank statements. Especially the latter resulted in 500 lines of unmaintainable Python spaghetti code. Still nice for my private use, but nobody in the world can guarantee that there aren’t pennies missing, or income and expenses switched in the calculation. So it’s unusable for the accounting of a company or anything like that.

I think it will remain code completion for the next 5 years. The bubble of trying more than next-gen code completion for seniors will burst. What happens then is hard to say, but it takes significant breakthroughs to replace a senior and work independently.

0x0@lemmy.zip on 15 Sep 17:19 next collapse

It makes a mistake about 10-20%

Anecdotally, Copilot does the reverse for me.

corsicanguppy@lemmy.ca on 15 Sep 19:32 collapse

Copilot leads me on flights of fanciful code that is absolutely not possible, and the joy turns to tragedy when I find out it lied insidiously about a particular niche function the entire time.

squaresinger@lemmy.world on 15 Sep 17:43 collapse

In real code - so, after the first week of development - typing really isn’t what I spend most of my time on. Fancy autocomplete can sometimes be right, and then it saves a few seconds, but that’s not nearly 50-100% added productivity. Maybe more like 1-2%.

If I get a single unnecessary failed compile from the autocomplete code, it loses me more time than it saved.

But it does feel nice not having to type out stuff.

That’s why all research on this topic says that AI assistance feels like a 20-30% productivity boost (when the developers are asked to estimate how much time they saved) while the actual time spent on the task goes up by 20-30% (so productivity is lost).

wulrus@lemmy.world on 15 Sep 19:07 collapse

I find it also saves a certain “mental energy”.

E.g. when I worked on a program to recover data from the old discontinued Windows photo app: I started 2 years ago and quickly had a proof of concept - found out it’s just sqlite format, checked out the table structure, made a query to list the files from one album. So at that point it was clear that it was doable, but the remaining 90% would be boring.

So after 2 years on pause, I just gave Gemini 2.5 Pro the general problem and the two queries I had. It one-shot a working PowerShell script, no changes required. It reads directly from the sqlite file (imagine the annoyance of researching that when you never ever use PowerShell!) and puts the files into folders named after the former albums. My solution would have been worse; I would probably have gone with just hacking together some copy commands from the SELECT and running them all once.
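
Hand-rolled, the whole job would have been something like this in Python (the table and column names are my guesses for illustration, not the app's real schema):

    import shutil
    import sqlite3
    from pathlib import Path

    # Hypothetical schema: albums joined to the files they contain.
    conn = sqlite3.connect("photos.db")
    rows = conn.execute(
        "SELECT a.name, f.path FROM albums a "
        "JOIN files f ON f.album_id = a.id"
    )

    # Copy each photo into a folder named after its former album.
    for album, src in rows:
        dest = Path("recovered") / album
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(src, dest / Path(src).name)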

That was pretty nice: I got to do the interesting part of building the SQL queries, and it did the boring, tiring things for me.

Overall, I remain sceptical as you do. There is definitely a massive bullshit-bubble, and it’s not clear yet where it ends. I keep it out of production code for now, but will keep experimenting on the side with an “it’s just code completion” approach, which I think might be viable.

squaresinger@lemmy.world on 15 Sep 20:28 collapse

Yours is pretty much the best-case scenario for AI:

  • Super small project, maybe a few dozen lines at most
  • Greenfield: no dependencies, no old code, nothing to consider apart from the problem at hand
  • Disposable: once the job is done you discard it and won’t need to maintain it
  • Someone most likely already did the same thing or did something very similar and the LLM can draw on that, modify it slightly and serve it as innovation
  • It’s a subject where you are good enough that you can verify what the LLM spits out, but where you’d have to spend hours and hours to read into how to do it

For that kind of stuff it’s totally OK to use an LLM. It’s like googling, finding a ready-made solution on Stackexchange, running that once and discarding it, just in a more modern wrapping. I’ve done something similar too.

But for real work on real projects, LLM is more often than not a time waster and not a productivity gain.

wulrus@lemmy.world on 16 Sep 07:48 collapse

That’s completely true; it’s hard for me to judge on a small scale when I won’t (for good reasons) let it touch my customer’s production code.

vala@lemmy.dbzer0.com on 15 Sep 15:51 next collapse

This headline made me a little nauseous.

vane@lemmy.world on 15 Sep 15:57 next collapse

Looks like every senior developer is building a vibe-coded startup and their children are selling machine learning models on marketplaces. Anyone know of such a marketplace, or is it as fake as the article?

traceur402@lemmy.blahaj.zone on 15 Sep 20:29 collapse

I’ve noticed nary a thing except vapid media and social buzz. I’ve tried the tools themselves and they seem to waste time too often to be worthwhile

ICastFist@programming.dev on 15 Sep 16:33 next collapse

Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. Rover has been in the industry for 15 years, mainly working as a web developer. She’s now building a startup, alongside her son, that creates custom machine learning models for marketplaces.

Using AI to sell AI, infinite money glitch! /s

“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you.

No, a kid will learn if s/he fucks up and, if pressed, will spill the beans. AI, despite being called “intelligent”, is not learning anything from its mistakes and often forgets things because of limitations - consistency is still one of the key problems for all LLMs and image generators.

squaresinger@lemmy.world on 15 Sep 16:42 next collapse

If you bring a 6yo into office and tell them to do your work for you, you should be locked up. For multiple reasons.

Not sure why they thought that was a positive comparison.

Knock_Knock_Lemmy_In@lemmy.world on 16 Sep 06:40 collapse

AI is, despite being called “intelligent”, not learning anything from its mistakes

Don’t they also train new models on past user conversations?

ICastFist@programming.dev on 16 Sep 16:12 collapse

Considering how many AI models still can’t correctly count how many ‘r’s there are in “strawberry”, I doubt it. There’s also the seahorse emoji doing the rounds at the moment; you’d think that the models would get “smart” after repeatedly failing and realize it’s an emoji that never existed in the first place.
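
The check they keep failing is, of course, a one-liner:

    # Trivial string work; models stumble at it, reportedly because
    # they see tokens rather than individual characters.
    assert "strawberry".count("r") == 3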

Knock_Knock_Lemmy_In@lemmy.world on 16 Sep 17:23 collapse

ChatGPT 5 can count the number of ‘r’s, but that’s probably because it has been specifically trained to do so.

I would argue that the models do learn, but only over generations. So slowly and specifically.

They definitely don’t learn intelligently.

hark@lemmy.world on 16 Sep 18:36 collapse

That’s the P in ChatGPT: Pre-trained. It has “learned” based on the set of data it has been trained on, but prompts will not have it learn anything. Your past prompts are kept to use as “memory” and to influence output for your future prompts, but it does not actually learn from them.

Knock_Knock_Lemmy_In@lemmy.world on 16 Sep 20:35 collapse

The next generation of GPT will include everyone’s past prompts (ever been A/B tested on openAI?). That’s what I mean by generational learning.

hark@lemmy.world on 16 Sep 21:39 collapse

Maybe. It’s probably not high quality training data for the most part, though.

kokesh@lemmy.world on 15 Sep 17:37 next collapse

That will make Taco very angry

jonesey71@lemmus.org on 15 Sep 17:57 next collapse

That picture accompanying the article is backwards. Why is it portraying the AI as the babysitter and not the baby that needs to be supervised by a human?

gandalf_der_12te@discuss.tchncs.de on 15 Sep 21:19 collapse

You’re absolutely right!

I completely messed up the picture. It should be the other way around. Do you want me to correct my mistake and generate a new picture?

/s

aesthelete@lemmy.world on 16 Sep 18:13 collapse

<<proceeds to produce a derivative of the same picture>>

podbrushkin@mander.xyz on 15 Sep 19:37 next collapse

A day will come when I get to know what vibecoding is. Or maybe this word will die out sooner. You never know.

kkj@lemmy.dbzer0.com on 15 Sep 22:34 collapse

Vibe coding is when you ask a chatbot to code for you, then ask it to fix the errors it generated, and repeat until you can’t find any more errors. Later, someone notices that your application was coded by a chatbot, exploits one of the many security flaws, and steals all your data and credentials.

podbrushkin@mander.xyz on 16 Sep 06:52 collapse

I thought this was just normal coding. Then what do you call those who heavily rely on Google and SO?

Dumhuvud@programming.dev on 16 Sep 08:42 next collapse

Imposters.

x00z@lemmy.world on 16 Sep 09:34 next collapse

Copy pasta chefs.

kkj@lemmy.dbzer0.com on 16 Sep 14:59 collapse

That’s regular programming. You still have to fit everything together, so you end up reading the code much more closely. Chatbot enjoyers don’t read it at all.

CaptainBlinky@lemmy.myserv.one on 15 Sep 20:41 next collapse

I keep seeing “vibe coding.” WTF is vibe coding? ELI5

NocturnalEngineer@lemmy.world on 15 Sep 20:43 next collapse

A security nightmare waiting to happen.

sugar_in_your_tea@sh.itjust.works on 15 Sep 23:47 collapse

Also performance, maintenance, and regression.

PlantJam@lemmy.world on 15 Sep 20:52 collapse

Vibe coding is asking GPT for code, copying it into your code environment, then telling GPT about any errors or issues. The problem is that it actually works a significant amount of the time - let’s be generous and say 80%. Another 15% of the time it cannot solve a problem itself. And finally, the worst possible outcome is the last 5%, where it creates a seemingly working solution that actually breaks on edge cases or has potential security issues.

ngcbassman@sh.itjust.works on 16 Sep 08:20 collapse

One important aspect of vibe coding that I always see missing in explanations is that vibe coding is the generation of code through AI without understanding what the code is doing. The effect of this is that you are totally dependent on the AI to keep generating the code, so if any error happens you have no fucking idea what to do. If you generated the code using AI but understood what the AI did, it’s not vibe coding.

EnsignWashout@startrek.website on 15 Sep 01:35 next collapse

they say it’s worth it

Narrator: They did not.

MoonRaven@feddit.nl on 16 Sep 06:47 next collapse

As a senior dev, I say it’s not worth it. Our junior devs rely on it too much and I spent most of yesterday trying to figure out for my junior dev why their code didn’t work. Eventually came to the conclusion that they just have to redo most of it because it’s utter garbage and invents new code to do what the architecture already has.

themaninblack@lemmy.world on 16 Sep 09:06 collapse

As a senior dev, I agree but am impressed that you’re dealing with a functional architecture to begin with

fodor@lemmy.zip on 16 Sep 07:49 next collapse

Interesting how this article is contradicted by hundreds of others.

3abas@lemmy.world on 16 Sep 09:28 collapse

Because if we say anything positive about AI programming we get downvoted to hell…

I’m not a supporter of the companies making LLMs and how they profit off others’ intellect, I’m not a supporter of their use of the technology for fascism, genocide, and pure evil. I’m not a moron that thinks LLMs are intelligent.

But I recognize it as a very useful technological advancement, it’s a very useful tool and to pretend otherwise is foolish. LLMs are an amazing coding aid, and when used correctly and fed the right context, they can save hours of frustration and research dead-ends.

Edit: see? I just called them a useful tool and got downvoted…

wulrus@lemmy.world on 16 Sep 17:57 next collapse

I am generally a sceptic myself, especially in my own area, which is software development. But recently in a board game community, someone was scolded for asking ChatGPT about a rule dispute (and it was wrong). All the upvotes went to unhelpful “AI bad” comments. I pointed out that while this was true 3 months ago, ChatGPT 5 (and only that one) can very accurately answer such questions when asked the right way, showed how to phrase the question along with the (now correct) response, and mentioned my 35 board game test questions and the results with major LLM flagship models. (Almost all LLMs did horribly, under 70% even on yes/no questions, but ChatGPT 5 with specific instructions or the “Thinking” model got 100%.)

Even as a sceptic, I can acknowledge that LLMs just jumped from completely useless to perfect in the past few months when it comes to this specific niche.

Dalraz@lemmy.ca on 16 Sep 18:36 collapse

I share the same opinion as yourself and gave you an upvote. LLMs are a truly fascinating technology, and I’m amazed by how little we understand about how they even do what they do. That said, the process that got us here and what they are currently being used for are amoral at best.

AmanitaCaesarea@slrpnk.net on 16 Sep 09:02 next collapse

Gotta love how devs and engineers are supposed to be on the front lines of innovation and progression, but most of the time it’s just moaning and calling the next gen dumb. 15 years ago the current devs would have been called dumb for using frameworks, and told it’s cheating since it’s not self-written. Do your part and educate and guide the next gen instead of complaining about tech evolving and being used.

nutsack@lemmy.dbzer0.com on 16 Sep 09:10 next collapse

as a programmer I feel like this would be pretty cool. but this isn’t really how it is at all. I’m usually asking Claude code to do something very specific and then I’m throwing whatever it does away because it’s not correct. if I could have a little baby that I had to babysit I think that would be better

javiwhite@feddit.uk on 16 Sep 09:25 next collapse

Conversely, I’d imagine there are babysitters out there who at times wish they could just throw the baby away.

nutsack@lemmy.dbzer0.com on 16 Sep 15:01 collapse

I’ve thrown the baby away many times

Garbagio@lemmy.zip on 16 Sep 17:48 collapse

Eyoooo

MangoCats@feddit.it on 16 Sep 18:30 collapse

I was “there” with Claude as you describe about 3 months ago. Since then, Claude has stepped up to being able to create fully functional microservices. It helps if you completely specify what you want, it helps if you don’t specify funky libraries or other tech that has poor support on the internet, it helps if your total ask amounts to 1000 lines of code or less - but I have gotten up around 3000 lines before Sonnet 4 choked a few times.

Before this, my AI queries were mostly limited to specific API function call syntax, and they would only be right about 2/3 of the time, which beats randomly trying things myself until I eventually guess the right variation… Yes, it’s better to consult the documentation - when it’s available - it’s not always available.

nutsack@lemmy.dbzer0.com on 17 Sep 02:28 collapse

yea so maybe the resulting future is that the tools can only work with really popular libraries that have lots of people talking about them on stack overflow in the year 2024 or whatever, and new smaller potentially interesting libraries will have a harder time seeing adoption

MangoCats@feddit.it on 17 Sep 14:23 collapse

maybe the resulting future is that the tools can only work with really popular libraries that have lots of people talking about them on stack overflow in the year 2024 or whatever, and new smaller potentially interesting libraries will have a harder time seeing adoption

Yeah, that’s the future I’ve been living since about 2005. The alternative to letting the world be your support desk via stack overflow and similar is to develop killer examples and API documentation for your own libraries so the AI (and everyone else) can learn from that. Qt was a great example of this starting in the early 2000s.

The dark future is where you have competitors “poisoning the well” spreading false information about your tech in the normally reliable channels, then having AI amplify that for them. This, too, is already happening to some extent - more in the political sphere than the technical space, but it’s everywhere to some extent.

hark@lemmy.world on 16 Sep 18:46 next collapse

I’ve used it to explore some avenues without having to write a complete implementation. If the approach shows promise, then I go through the code and mostly rewrite it because the code it generates is terrible. I also use it if I don’t care about the project I’m on. They want to “do test-driven development” while having poorly-defined requirements that constantly change on a whim while also setting unreasonable unit test coverage thresholds? Cool, I’ll let the AI shit out a bunch of unit tests and waffle stomp it to satisfy your poorly thought out project requirements.

kinther@lemmy.world on 17 Sep 01:59 collapse

I agree with you on this. Let it handle things you don’t care about and massage the output if necessary. Anything I do care about, I code myself, but will ask for help if I get stuck on something. I’m a novice programmer at best, 18/100 skill score.

iAvicenna@lemmy.world on 16 Sep 18:52 collapse

All this depends critically on one premise: that sometime in the near future, AI coders will become fully automated and produce senior-level code. If not, we are wholly fucked, because companies are currently employing fewer and fewer junior coders, which means we will be running very low on senior coders in a decade or so. If LLMs still need supervision by then, there won’t be enough senior coders to do it.