Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code (futurism.com)
from Scolding7300@lemmy.world to technology@lemmy.world on 12 Sep 04:23
https://lemmy.world/post/35794460

Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be “writing 90 percent of code.” And that was the worst-case scenario; in just three months, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And it’s not just that AI-generated code merely missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, leading to never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei’s made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including “nearly all” natural infectious diseases, psychological diseases, climate change, and global inequality.

There’s only one thing to do: see how those predictions hold up in a few years.

#technology


chaosCruiser@futurology.today on 12 Sep 04:30 next collapse

When the CEO of a tech company says that in x months this and that will happen, you know it’s just musk talk.

Tollana1234567@lemmy.today on 12 Sep 07:37 collapse

more like "6 months, because we need the VC funds still"

chaosCruiser@futurology.today on 12 Sep 08:46 collapse

Ooh, so that’s CEO speak for: “we’re broke, please give us more money”.

reddig33@lemmy.world on 12 Sep 04:35 next collapse

“Full self-driving is just 12 months away.”

anotherspinelessdem@lemmy.ml on 12 Sep 05:04 next collapse

Just like the last 12 months

floofloof@lemmy.ca on 12 Sep 05:16 next collapse

“I’m terrified our product will be just too powerful.”

echodot@feddit.uk on 12 Sep 05:48 next collapse

Yep along with Fusion.

We’ve had years of this. Someone somewhere is always telling us that the future is just around the corner, and it never is.

Jesus_666@lemmy.world on 12 Sep 10:14 collapse

At least the fusion guys are making actual progress and can point to being wildly underfunded – and they predicted this pace of development with respect to funding back in the late 70s.

Meanwhile, the AI guys have all the funding in the world, keep telling us how everything will change in the next few months, actually trigger layoffs with that rhetoric, and deliver very little.

sidereal@kolektiva.social on 12 Sep 15:05 next collapse

@Jesus_666 @echodot Yeah, I think all the time about how we're actually way closer to practical fusion power than we are to generalized AI. It's hilarious that AI is diverting funding that could go into fusion research, and also sucking up so much power that they're talking about starting up old fission reactors. Just absolutely pathetic stuff all around.

FundMECFS@anarchist.nexus on 13 Sep 14:19 collapse

They get 1+ billion a year. Probably much more if you include the undisclosed amounts China invests.

Jesus_666@lemmy.world on 13 Sep 16:37 collapse

Yeah, and in the 70s they estimated they’d need about twice that to make significant progress in a reasonable timeframe. Fusion research is underfunded – especially when you look at how the USA dumps money into places like the NIF, which researches inertial confinement fusion.

Inertial confinement fusion is great for developing better thermonuclear weapons but an unlikely candidate for practical power generation. So from that one billion bucks a year, a significant amount is pissed away on weapons research instead of power generation candidates like tokamaks and stellarators.

I’m glad that China is funding fusion research, especially since they’re in a consortium with many Western nations. When they make progress, so do we (and vice versa).

Catoblepas@piefed.blahaj.zone on 12 Sep 06:55 next collapse

On Mars by the end of this year! I mean, next year!

freeman@sh.itjust.works on 12 Sep 07:36 next collapse

Does that work on the Mars colony as well?

Valmond@lemmy.world on 12 Sep 07:55 next collapse

2019…

poopkins@lemmy.world on 12 Sep 08:43 collapse

In 2014 he promised 90% autonomous by 2015. That was over a decade ago and it’s still not close to that…

jaybone@lemmy.zip on 12 Sep 09:56 collapse

We were supposed to have flying cars in 2000.

ragas@lemmy.ml on 12 Sep 10:32 next collapse

Still waiting for my hoverboard.

explodicle@sh.itjust.works on 12 Sep 13:01 collapse

🚁

FundMECFS@anarchist.nexus on 13 Sep 14:17 next collapse

Quantum Computers will revolutionise hardware by 2015!

[deleted] on 13 Sep 21:50 collapse

.

ThePowerOfGeek@lemmy.world on 12 Sep 04:38 next collapse

It’s almost like he’s full of shit and he’s nothing but a snake oil salesman, eh.

They’ve been talking about replacing software developers with automated/AI systems for a quarter of a century. Probably longer than that, in fact.

We’re definitely closer to that than ever. But there’s still a huge step between some rando vibe coding a one-page web app or developers augmenting their work with AI, and someone building a complex, business-rule-heavy, high-load, scalable real-world system. The chronic under-appreciation of engineering and design experience continues unabated.

Anthropic, OpenAI, etc? They will continue to hype their own products with outrageous claims. Because that’s what gets them more VC money. Grifters gonna grift.

Scolding7300@lemmy.world on 12 Sep 05:18 next collapse

Unfortunately other CEOs are believing it and overhyping it, especially if investors are involved

bookmeat@lemmynsfw.com on 13 Sep 02:11 collapse

They have to hyperbolize to attract investors. At the rate they burn cash they won’t survive without constant massive financial inputs.

chaosCruiser@futurology.today on 12 Sep 06:48 collapse

See also: COOL:gen

The whole concept of generating code is basically ancient by now. I heard about this stuff in the 90s, but now I found out that this thing has been around since 1985.

jewbacca117@lemmy.world on 12 Sep 04:40 next collapse

But is any of it usable? I had the realization a while back that I spent more time getting LLM-written scripts to work than I would have if I’d written the script myself.

jbloggs777@discuss.tchncs.de on 12 Sep 05:11 next collapse

I find it pretty useful to help get me over the mental hurdle of starting something. So it’s faster than me procrastinating for another day. ;-)

WanderingThoughts@europe.pub on 12 Sep 06:22 next collapse

For something simple in a greenfield it works reasonably well and is often faster than coding by hand. For complex code in a greenfield it’s maybe some help. For complex code in a brownfield it breaks down, and you’ll be much faster doing it by hand.

chaosCruiser@futurology.today on 12 Sep 06:53 collapse

As long as it runs, it’s hackable. If it fails to compile, or crashes on start, nobody can hack it.


resipsaloquitur@lemmy.world on 12 Sep 04:50 next collapse

Code has to work, though.

AI is good at writing plausible BS. Good for scams and call centers.

Salvo@aussie.zone on 12 Sep 05:14 next collapse

Glorified Lorem Ipsum.

Treczoks@lemmy.world on 12 Sep 06:11 collapse

Parrot with a dictionary.

andallthat@lemmy.world on 12 Sep 05:31 next collapse

or CEOs

DupaCycki@lemmy.world on 12 Sep 10:15 collapse

It’s not bad for digging through error logs or otherwise solving simple to moderately complicated issues when it’s 2 pm on a Friday and you stopped thinking about work 4 hours ago.

Asafum@feddit.nl on 12 Sep 05:00 next collapse

“Come on, I’m a CEO, it’s my job to lie to everyone and hype people up so they throw money at me. It’s really their fault for believing a CEO would be honest.”

Salvo@aussie.zone on 12 Sep 05:57 next collapse

That sounds like you could be replaced with an LLM. Let’s do that and save the company millions of dollars a year while us plebeians get on with doing the real work and making money.

chaosCruiser@futurology.today on 12 Sep 06:49 next collapse

And these people get paid absurd amounts of money too.

drosophila@lemmy.blahaj.zone on 12 Sep 09:38 collapse

You joke, but it has been successfully argued in court that advertisers can lie to you because no reasonable person would believe that advertisements are truthful.

Tracaine@lemmy.world on 12 Sep 05:03 next collapse

“Come on bro. Just another $50,000,000 bro and AGI will be here like next week. Just trust me bro, come on.”

IhaveCrabs111@lemmy.world on 12 Sep 08:04 next collapse

This time they mean it though

addie@feddit.uk on 12 Sep 11:50 collapse

Fifty million? The “StarGate” talk was more like five hundred billion bro, just trust me, one more nuclear reactor man, that’s all we need, just one more hand and we’re going to win it big, bro.

DaddleDew@lemmy.world on 12 Sep 05:07 next collapse

Picking up a few pages out of Elmo’s book I see. He forgot the part where he distracts from the blatant underdelivery with more empty exaggerated promises!

Lembot_0004@discuss.online on 12 Sep 05:08 next collapse

Hey-hey-hey, slow down in judging this guy: we don’t even have an AI yet. Those LLMs are just placeholders.

xxce2AAb@feddit.dk on 12 Sep 05:10 next collapse

It’s not just Musk, these people are all high as satellites successfully inserted into LEO.

blockheadjt@sh.itjust.works on 12 Sep 05:12 next collapse

Does it really count if most of that “code” is broken and unused?

Churning out 9x as much code as humans isn’t really impressive if it just sits in a folder waiting to be checked for bugs

Salvo@aussie.zone on 12 Sep 05:14 next collapse

90% of non-functional code, maybe.

calcopiritus@lemmy.world on 12 Sep 05:17 next collapse

From the makers of “fusion energy in 20 years”, “full self-driving next year” and “AI will take your job in 3 months” comes “all code will be AI in 6 months”.

Trust me, it’s for real this time. The new healthcare system is 2 weeks away.

EDIT: how could I forget “graphene is going to come out of the lab soon and we’ll have transparent flexible screens that consume 0 electricity” and “researchers find new battery technology that has twice the capacity of lithium”

affenlehrer@feddit.org on 12 Sep 05:32 next collapse

As far as I know, fusion energy never got that level of hype and amount of money thrown at it. I mean, the research reactors are super expensive, but the AI money is still on another level.

Tanoh@lemmy.world on 12 Sep 06:19 next collapse

Imagine if it did get that kind of funding

calcopiritus@lemmy.world on 12 Sep 11:10 collapse

“in 20 years” doesn’t get as much hype as “in 3 months”

Maybe if they had said “in 3 months” instead, we would actually have had it in 20 years, seeing how much money AI attracts with these obviously unbelievable promises.

affenlehrer@feddit.org on 12 Sep 11:22 collapse

Unlike fusion reactors AI has a pretty convincing “demo” in my opinion.

At first glance the output of LLMs and image/video generator models is very convincing, and the artifacts and mistakes appear “small” to people who don’t know much about the technical details. So it’s easy to be convinced by “we’ll just fix those little bugs and be done in half a year” promises.

EV is a similar story: electric bikes and radio controlled cars and drones work great so it’s conceivable that bigger cars and trucks would work too with a “little” battery and motor tweaking.

Nuclear fusion, though, isn’t really tangible yet. For laypeople it seems there is no progress at all. Every now and then some scientists report that they can hold a fusion reaction a little longer or run it a little more efficiently, but it’s not “tangible”. That’s probably also holding back a lot of investors, who with all their resources mostly still seem to invest based on a gut feeling.

chaosCruiser@futurology.today on 12 Sep 06:51 next collapse

Oh, and the weekly battery articles too. “This new battery will charge in 10 minutes and last 2 weeks.”

Valmond@lemmy.world on 12 Sep 07:58 collapse

As much as I hate sensational headlines about batteries, my phone charges zero to full in 20 minutes. The changes just come very gradually, like +5% per year, but they do add up.

chaosCruiser@futurology.today on 12 Sep 08:50 collapse

Yes. That’s true, but the major headlines don’t tell you about any of the 1-5% improvements that undoubtedly are happening all the time. The headlines focus on stuff that is either highly theoretical or still in the lab for the next few decades. If you want to read about what’s actually realistic and about to be implemented in production, those articles are probably in some monthly battery engineering journals.

Valmond@lemmy.world on 12 Sep 09:03 collapse

For sure, who would interact with a “+2% longevity for sodium batteries” article …

chaosCruiser@futurology.today on 12 Sep 11:01 collapse

Production engineers and battery scientists do. In their normal work, they only get to see like 0.1% improvements, so anything above 1% is like magic to them.

CheeseNoodle@lemmy.world on 12 Sep 10:01 collapse

To be fair, fusion energy got less than the minimum ‘fusion never’ funding; AI, on the other hand, is getting all the money in the damn world.

m33@lemmy.zip on 12 Sep 05:35 next collapse

That’s 90% true: today AI is writing 90% of all bullshit I read

alexdeathway@programming.dev on 12 Sep 05:49 next collapse

Didn’t Mark say something like this too?

demizerone@lemmy.world on 12 Sep 05:57 next collapse

I was wondering why the context had gotten so bad recently. Apparently they reduced the context and hid the old limit behind a button in Cursor called “Max” that costs more money. This shit is bleeding out.

petrjanda@gonzo.markets on 12 Sep 06:08 next collapse

I agree with everyone else. The only thing that A(Non)I is good for is writing bullshit and making it sound intelligent; deep inside there is no intelligence, it’s all artificial. It’s semi-useful for background research because of its ability to index huge amounts of data, but ultimately everything it makes has to be verified by a human.

Treczoks@lemmy.world on 12 Sep 06:11 next collapse

It might write code, but not good code, nor secure code, nor even working code.

For that, you still need professionals. Even management will learn. If they survive the process.

Appoxo@lemmy.dbzer0.com on 12 Sep 06:19 next collapse

Maybe 90% is written by AI, but also 90% is edited back after AI fucked it up ¯\_(ツ)_/¯

Bonesince1997@lemmy.world on 12 Sep 06:21 next collapse

I think we’re already supposed to be on Mars, too, according to some predictions from years ago. People can’t tell these things very well.

Valmond@lemmy.world on 12 Sep 07:56 collapse

At least we have self driving cars!

/s

setsubyou@lemmy.world on 12 Sep 06:28 next collapse

Well it’s not improving my productivity, and it does mostly slow me down, but it’s kind of entertaining to watch sometimes. Just can’t waste time on trying to make it do anything complicated because that never goes well.

Tbh I’m mostly trying to use the AI tools my employer allows because it’s not actually necessary for me to believe that they’re helping. It’s good enough if the management thinks I’m more productive. They don’t understand what I’m doing anyway but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.

theterrasque@infosec.pub on 12 Sep 08:38 collapse

Just can’t waste time on trying to make it do anything complicated because that never goes well.

Yeah, that’s a waste of time. However, it can knock out simple code you could easily write yourself but that is boring to write and takes time away from working on the real problems.

rozodru@piefed.social on 12 Sep 11:07 collapse

for setting stuff up, putting down a basic empty framework, setting up dirs/files/whatever, it's great. in that regard yeah it'll save you time.

For doing the ACTUAL work? no. maybe to help write say a simple function or whatever, sure. beyond that? if it can't nail it the first or second time? just ditch it.

theterrasque@infosec.pub on 13 Sep 09:57 collapse

I’ve found it useful for writing unit tests once you’ve written one or two, for specific functions, and for small scripts. For example, some time ago I needed a script that found a machine’s public IP, then posted that to an MQTT topic along with a timestamp, with the config abstracted out into a file.

Now there’s nothing difficult about this, but just looking up what libraries to use and their syntax takes some time, along with actually writing the code. Also, since it’s so straightforward, it’s pretty boring. ChatGPT wrote it in under two minutes, working perfectly on the first try.
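
Roughly this kind of thing, for illustration (a minimal sketch, assuming the requests and paho-mqtt libraries; the config layout and the IP lookup service are placeholders, not the actual script):

    import json
    import time

    import requests
    import paho.mqtt.publish as publish

    # Config (broker, port, topic) abstracted out into a file.
    with open("config.json") as f:
        cfg = json.load(f)

    # Ask a public "what is my IP" service for this machine's public address.
    ip = requests.get("https://api.ipify.org", timeout=10).text

    # Publish the IP plus a timestamp to the configured MQTT topic in one shot.
    payload = json.dumps({"ip": ip, "timestamp": int(time.time())})
    publish.single(cfg["topic"], payload, hostname=cfg["broker"], port=cfg.get("port", 1883))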

It’s also been helpful with bash scripts, powershell scripts and ansible playbooks. Things I don’t really remember the syntax on between use, and which are a bit arcane / exotic. It’s just a nice helper to have for the boring and simple things that still need to be done.

merc@sh.itjust.works on 12 Sep 06:54 next collapse

Does it count if an LLM is generating mountains of code that then gets thrown away? Maybe he can win the prediction on a technicality.

skisnow@lemmy.ca on 12 Sep 07:48 next collapse

That’s exactly what I thought when I saw it. Big difference between “creating 90% of code” vs “replacing 90% of code” when there’s an absolute deluge of garbage being created.

jaybone@lemmy.zip on 12 Sep 09:52 collapse

These are the monkeys with typewriters that will write Shakespeare.

Maybe some day they will do it before the sun explodes. But we will run out of bananas first.

MNByChoice@midwest.social on 12 Sep 11:28 collapse

Ah, but I bet those monkeys produce more text than Shakespeare! At least within the last 6 months!

spoiler

The joke is that Shakespeare is dead and no longer producing text.

Catoblepas@piefed.blahaj.zone on 12 Sep 07:02 next collapse

developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

I’m going to become whatever the gay version of Amish is.

drbluefall@toast.ooo on 12 Sep 07:13 next collapse

I think that’s just wanting to join a gay primitivist(?) commune.

I, uh, don’t suppose you got room for a bi-curious peep?

Catoblepas@piefed.blahaj.zone on 12 Sep 07:15 collapse

Shit, I’d take anyone that isn’t a queerphobe!

jaybone@lemmy.zip on 12 Sep 09:48 next collapse

So… Amish?

porksnort@slrpnk.net on 12 Sep 10:34 collapse

That would be a Radical Faerie.

Seriously check them out. It’s a cool and really influential group of pioneering gay dudes, gaying it up on the farm.

They have sort of died out as a group, but you can hold a pitchfork in a homosexual manner whenever you choose. That’s not illegal yet.

Radical Faeries

leftzero@lemmy.dbzer0.com on 12 Sep 07:21 next collapse

I’m fairly certain it is writing 90% of Windows updates, at least…

derpgon@programming.dev on 12 Sep 08:17 collapse

Hell, I am absolutely positive that any Windows code could pass as AI-written, even code from before AI started taking off lol.

Gonzako@lemmy.world on 12 Sep 08:32 collapse

Well, I remember seeing way too many curse words in the source code

Simulation6@sopuli.xyz on 12 Sep 08:36 next collapse

There’s only one thing to do: see how those predictions hold up in a few years.

Or, you know, do the sensible thing: call the dude the snake oil salesman he is and run him out of town on a rail.

jaybone@lemmy.zip on 12 Sep 09:54 collapse

But first we need AI to build the rail.

poopkins@lemmy.world on 12 Sep 08:46 next collapse

As an engineer, it’s honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

Blackmist@feddit.uk on 12 Sep 09:38 next collapse

Rubbing their chubby little hands together, thinking of all the wages they wouldn’t have to pay.

expr@programming.dev on 12 Sep 10:00 next collapse

Honestly, it’s heartbreaking to see so many good engineers fall into the hype and seemingly unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex and real reasoning becomes secondary or nonexistent.

Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.

jj4211@lemmy.world on 12 Sep 10:11 next collapse

Based on my experience, I’m skeptical that people who seemingly delegate their reasoning to an LLM were really good engineers in the first place.

Whenever I’ve tried, it’s been so useless that I can’t really develop a reflex, since it would have to actually help for me to get used to just letting it do its thing.

Meanwhile, the very bullish people, who are ostensibly the good engineers I’ve worked with, are the ones who became pet engineers of executives and have long succeeded by sounding smart to those executives rather than doing anything or providing concrete technical leadership. They are more like having something akin to Gartner on staff, except without even the data that Gartner actually gathers (and Gartner is a useless entity with respect to actual guidance anyway).

auraithx@lemmy.dbzer0.com on 12 Sep 10:19 next collapse

I mean, before, we’d just ask Google and read Stack Overflow, blogs, support posts, etc. Now it just finds them for you instantly so you can click and read them. The human reasoning part is just shifting elsewhere: you solve the problem during debugging, before commits.

expr@programming.dev on 12 Sep 10:45 next collapse

No, good engineers were not constantly googling problems, because for most topics either the answer is trivial enough that experienced engineers could answer it immediately, or it is complex and specific enough to the company/architecture/task/whatever that googling would not be useful. Stack Overflow and the like have only ever really been useful as the occasional memory aid for basic things you don’t use often enough to remember. Good engineers were, and still are, reasoning through problems, reading documentation, and iteratively piecing together system-level comprehension.

The nature of the situation hasn’t changed at all: problems are still either trivial enough that an LLM is pointless, or complex and specific enough that an LLM will get it wrong. The only difference is that an LLM will spit out plausible-sounding bullshit and convince people it’s valuable when it is, in fact, not.

auraithx@lemmy.dbzer0.com on 12 Sep 11:08 collapse

A senior engineer wouldn’t need to worry about the hallucination rate. The LLM is a lot faster than them, and they can do other tasks while the output is being generated and then review it. If it’s trivial you’ve saved time, if not, you can pull up that documentation, and reason and step through the problem with the LLM. If you actually know what you’re talking about you can see when it slips up and correct it.

And that hallucination rate is rapidly dropping. We’ve jumped from about 40% accuracy to 90% over the past ~6mo alone (aider polyglot coding benchmark) - at about 1/10th the cost (iirc).

Feyd@programming.dev on 12 Sep 13:09 collapse

If it’s trivial you’ve saved time, if not, you can pull up that documentation, and reason and step through the problem with the LLM

Insane that just writing the code isn’t even an option in your mind

auraithx@lemmy.dbzer0.com on 13 Sep 10:30 collapse

That isn’t the discussion at hand. Insane you don’t realise that.

Feyd@programming.dev on 13 Sep 11:26 next collapse

🤣

expr@programming.dev on 13 Sep 18:55 collapse

It is, actually. The entire point of what I was saying is that you have all these engineers now who reflexively jump straight to their LLM for anything and everything. Using their brains to simply write some code themselves doesn’t even occur to them as something they should do. Much like you do, by the sounds of it.

auraithx@lemmy.dbzer0.com on 14 Sep 12:02 collapse

I’m not an engineer.

expr@programming.dev on 14 Sep 12:23 collapse

So you’ve just been talking out of your ass for the whole thread? That explains a lot.

auraithx@lemmy.dbzer0.com on 14 Sep 12:28 collapse

Never claimed I was? Computer scientist.

expr@programming.dev on 14 Sep 15:16 collapse

So completely unqualified to speak to the experience of being a software engineer? Ok.

Feyd@programming.dev on 12 Sep 13:07 collapse

“Stack overflow engineer” has been a derogatory term forever lol

Mniot@programming.dev on 13 Sep 18:42 collapse

Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.

I’m seeing a lot of this, though. Like, I’m not technically required to use AI, but the VP will send me a message noting that I’ve only used 2k tokens this month and maybe I could get more done if I was using more…?

expr@programming.dev on 13 Sep 18:58 collapse

Yeah, while our CTO is giddy like a schoolboy about LLMs, he hasn’t actually attempted to force them on anyone, thankfully.

Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.

pycorax@sh.itjust.works on 12 Sep 10:47 next collapse

A tale as old as time…

rozodru@piefed.social on 12 Sep 11:04 next collapse

as someone who now does consultation code review focused purely on AI... nah, let them continue drilling holes in their ship. I'm booked solid for the next several months now, multiple clients on the go, and I'm making more just being a digital janitor than I was as a regular consultant dev. I charge a premium to simply point said sinking ship to land.

Make no mistake though, this is NOT something I want to keep doing over the next year or two, and I honestly hope these places figure it out soon. Some have; some of my clients have realized that saving a few bucks by paying for an Anthropic subscription and paying a junior dev to be a prompt monkey while firing the rest of their dev team really wasn't worth it in the long run.

the issue now is they've shot themselves in the foot. The AI bit back. They need devs, and they can't find them, because putting out any sort of ad for hiring results in hundreds upon hundreds of bullshit AI-generated resumes from unqualified people while the REAL devs get lost in the shuffle.

MangoCats@feddit.it on 12 Sep 20:09 collapse

while firing the rest of their dev team

That’s the complete mistake right there. AI can help code, it can’t replace the organizational knowledge your team has developed.

Some shops may think they don’t have/need organizational knowledge, but they all do. That’s one big reason why new hires take so long to start being productive.

Feyd@programming.dev on 12 Sep 12:56 collapse

Did you think executives were smart? What’s really heartbreaking is how many engineers did. I even know some that are pretty good who tell me how much more productive they are and all about their crazy agent setups (from my perspective I don’t see any more productivity)

psycho_driver@lemmy.world on 12 Sep 08:51 next collapse

The good news is that AI is at a stage where it’s more than capable of doing the CEO of Anthropic’s job.

Blackmist@feddit.uk on 12 Sep 09:39 next collapse

Well it bullshits constantly, so it’s most of the way there.

jj4211@lemmy.world on 12 Sep 11:02 collapse

One issue that remains is that the LLM doesn’t care if it is telling the truth or lying. To be a CEO, it needs to be more inclined to lie.

MNByChoice@midwest.social on 12 Sep 11:18 collapse

Seems a better prompt could solve that.

mhague@lemmy.world on 12 Sep 13:03 collapse

I think Claude would refuse to work with dictators that murder dissidents. As an AI assistant, and all that.

If they have a model without morals then that changes things.

Aceticon@lemmy.dbzer0.com on 12 Sep 09:16 next collapse

It’s almost as if they shamelessly lie…

vane@lemmy.world on 12 Sep 09:29 next collapse

It is writing 90% of code, 90% of code that goes to trash.

Dremor@lemmy.world on 12 Sep 10:03 collapse

Writing 90% of the code, and 90% of the bugs.

Gutek8134@lemmy.world on 12 Sep 10:16 collapse

That would actually be a good score; it would mean it’s about as good as humans, assuming the code works in the end

Dremor@lemmy.world on 12 Sep 10:30 collapse

Not exactly. It would mean it isn’t better than humans, so the only real metric for adopting it or not would be the cost. And considering it would require a human to review the code and fix the bugs anyway, I’m not sure the ROI would be that good in that case. If it were, say, twice as good as an average developer, the ROI would be far better.

jj4211@lemmy.world on 12 Sep 10:42 next collapse

If, hypothetically, the code had the same efficacy and quality as human code, then it would be much cheaper and faster. Even if it was actually a little bit worse, it still would be amazingly useful.

My dishwasher sometimes doesn’t fully clean everything; it’s not as strong a guarantee as doing it myself. I still use it because, despite the lower-quality wash that requires some spot washing, I still come out ahead.

Now, this was hypothetical; LLM-generated code is damn near useless for my usage, despite assumptions it would do a bit more. But if it did generate code that matched the request with a comparable risk of bugs to doing it myself, I’d absolutely be using it. I suppose with the caveat that the code has to be within my ability to actually diagnose problems in, too…

MNByChoice@midwest.social on 12 Sep 11:10 collapse

One’s dishwasher is not exposed to a harsh environment. A large percentage of code is exposed to an openly hostile environment.

If a dishwasher breaks, it can destroy a floor, a room, maybe the rooms below. If code breaks it can lead to the computer, then network, being compromised. Followed by escalating attacks that can bankrupt a business and lead to financial ruin. (This is possibly extreme, but cyber attacks have destroyed businesses. The downside risks of terrible code can be huge.)

jj4211@lemmy.world on 12 Sep 12:37 collapse

Yes, but just like quality, the people in charge of money aren’t totally on top of security either. They just see superficially convincing tutorial fodder and start declaring they will soon be able to get rid of all those pesky people. Even if you convince them a human does it better, they are inclined to think ‘good enough for the price’.

So you can’t say “it’s no better than human at quality” and expect those people to be discouraged, it has to be pointed out how wildly off base it is.

MNByChoice@midwest.social on 16 Sep 09:55 collapse

I agree with you. The implications are staggering.

MangoCats@feddit.it on 12 Sep 11:36 collapse

Human coder here. First problem: define what is “writing code.” Well over 90% of software engineers I have worked with “write their own code” - but that’s typically less (often far less) than 50% of the value they provide to their organization. They also coordinate their interfaces with other software engineers, capture customer requirements in testable form, and above all else: negotiate system architecture with their colleagues to build large working systems.

So, AI has written 90% of the code I have produced in the past month. I tend to throw away more AI code than the code I used to write by hand, mostly because it’s a low-cost thing to do. I wish I had the luxury of time to throw away code like that in the past and start over. What AI hasn’t done is put together working systems of any value - it makes nice little microservices. If you architect your system as a bunch of cooperating microservices, AI can be a strong contributor on your team. If you expect AI to get any kind of “big picture” and implement it down to the source code level - your “big picture” had better be pretty small - nothing I have ever launched as a commercially viable product has been that small.

Writing code / being a software engineer isn’t like being a bricklayer. Yes, AI is laying 90% of our bricks today, but it’s not showing signs of being capable of designing the buildings, or even evaluating structural integrity of something taller than maybe 2 floors.

kescusay@lemmy.world on 12 Sep 09:49 next collapse

After working on a team that uses LLMs in agentic mode for almost a year, I’d say this is probably accurate.

Most of the work at this point for a big chunk of the team is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us will use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it’s actually useful.

The shine wears off quickly after the fourth or fifth time it “finishes” a feature by mocking data because so many publicly facing repos it trained on have mock data in them so it thinks that’s useful.

auraithx@lemmy.dbzer0.com on 12 Sep 10:17 next collapse

Sounds like they need to work on their prompts. I vibe code some hobby projects I wouldn’t have done otherwise and it’s never done that. I have it comment each change and review it all in a diff checker; that’s 90% of the time.

rozodru@piefed.social on 12 Sep 10:58 collapse

I guarantee you that it HAS done that and I can almost assure you that whatever hobby project you've vibe coded doesn't scale and I sure as hell hope it's nothing that needs to be online or handles any sort of user info.

auraithx@lemmy.dbzer0.com on 12 Sep 11:13 next collapse

Scale? It’s a personal ancestry site for my surname with graphs and shit mate. Compares naming patterns, locations, dna, clustering, etc between generations and tries to place loose people. Works pretty well, managed to find a bunch of missing connections through it.

NotANumber@lemmy.dbzer0.com on 12 Sep 12:04 collapse

There is something I never understood about people who talk about scaling. Surely the best way to scale something is simply to have multiple instances with only so many users on each one. You can then load balance between them. Why people feel the need to make a single instance scale to the moon I have no idea.

It’s like how you don’t need to worry about MS Word scaling because everyone has a copy on their own machine. You could very much do the same thing for cloud services.

aim_at_me@lemmy.nz on 13 Sep 19:08 collapse

You have no idea what you’re talking about lol.

NotANumber@lemmy.dbzer0.com on 13 Sep 19:43 collapse

I am sure you’re right. Please tell me where I am wrong. I could always stand to learn more about systems engineering.

rozodru@piefed.social on 12 Sep 10:56 collapse

Rule of thumb: only use it for one or two runs and that's it. After that, back off, because Claude Code is just going to start vomiting fecal matter from the other fecal matter it's consumed.

If it can't nail something on the first or second go, don't bother. I have clients that have pushed it through those moments and have produced literal garbage. But hey, I make money off them, so keep pushing man. I've got companies/clients so desperate to reverse what they've done that they're willing to wait until like March of next year, when I'm free.

PieMePlenty@lemmy.world on 12 Sep 10:51 next collapse

It’s to hype up stock value. I don’t even take it seriously anymore. Many businesses like these are mostly smoke and mirrors, overselling and under-delivering. It’s not even exclusive to tech, it’s just easier to do in tech. Musk says FSD is one year away. The company I worked for “sold” things we didn’t even make and promised revenue that wasn’t even economically possible. It’s all the same spiel.

Doomsider@lemmy.world on 12 Sep 15:18 collapse

Workers would be fired if they lied about their production or abilities. Strange that the leaders are allowed to without consequences.

Itdidnttrickledown@lemmy.world on 12 Sep 11:35 next collapse

If he is wrong about that, then he is probably wrong about nearly everything else he says. They just pull these statements out of their ass and try to make them real. The eternal problem with making something real is that reality can’t be changed. The garbage they have now isn’t that good, and he should know that.

cupcakezealot@piefed.blahaj.zone on 12 Sep 11:36 next collapse

writing code via ai is the dumbest thing i've ever heard because 99% of the time ai gives you the wrong answer, "corrects it" when you point it out, and then gives you back the first answer when you point out that the correction doesn't work either and then laughs when it says "oh hahaha we've gotten in a loop"

cows_are_underrated@feddit.org on 12 Sep 11:45 next collapse

You can use AI to generate code, but from my experience it’s quite literally what you said. However, what I have to admit is that it’s quite good at finding mistakes in your code. This is especially useful when you don’t have that much experience and are still learning. Copy-paste relevant code and ask why it’s not working, and in quite a lot of cases you get an explanation of what is not working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that does not guarantee an answer.

ngdev@lemmy.zip on 12 Sep 12:46 collapse

if your code isn’t working then use a debugger? code isn’t magic lmao

cows_are_underrated@feddit.org on 12 Sep 13:56 collapse

As I already stated, AI is my last resort. If something doesn’t work because it has a logical flaw, googling won’t save me. So of course I debug it first, but if I get an error and have no clue where it comes from, no amount of debugging will fix the problem, because the error probably occurred because I don’t know better. I am not that good of a coder and I am still learning a lot on a regular basis. And for people like me, AI is in fact quite useful. It has basically become the replacement for pasting your code and error into Stack Overflow (which doesn’t even work for me since I always get IP banned when trying to sign up)

ngdev@lemmy.zip on 12 Sep 15:21 collapse

you never stated you use it as a last resort. you’re basically using ai as a rubber ducky

cheloxin@lemmy.ml on 12 Sep 15:25 next collapse

I usually try to avoid…

Just because they didn’t explicitly say the exact words you did doesn’t mean it wasn’t said

ngdev@lemmy.zip on 12 Sep 19:32 collapse

trying to avoid something also doesn’t mean that the thing you’re avoiding is a last resort. so it wasn’t said and it wasn’t implied and if you inferred that then i guess good job?

MangoCats@feddit.it on 12 Sep 20:03 next collapse

I am a firm believer in rubber ducky debugging, but AI is clearly better than the rubber duck. You don’t depend on either to do it for you, but as long as you have enough self-esteem to tell AI to stick it where the sun don’t shine when you know it’s wrong, it can help accelerate small tasks from a few hours down to a few minutes.

Mniot@programming.dev on 13 Sep 18:32 collapse

More as an alternative to a search engine.

In my ideal world, StackOverflow would be a public good with a lot of funding and no ads/sponsorship.

Since that’s not the case, and everything is hopelessly polluted with ads and SEO, LLMs are momentarily a useful tool for getting results. Their info might be only 3/4 correct, but my search results are also trash. Who knows what people will do in a year when the LLMs have been eating each other’s slop and are also being stuffed with ads by their owners.

BrianTheeBiscuiteer@lemmy.world on 12 Sep 13:49 collapse

Or you give it 3-4 requirements (e.g. prefer constants, use ternaries when possible) and after a couple replies it forgets a requirement, you set it straight, then it immediately forgets another requirement.

WhiskyTangoFoxtrot@lemmy.world on 12 Sep 15:42 next collapse

To be fair, I’ve had the same results working with human freelancers. At least AI is cheaper.

MangoCats@feddit.it on 12 Sep 20:05 collapse

Same, and AI isn’t as frustrating to deal with when it can’t do what it was hired for and your manager needs you to now find something it can do because the contract is funded…

MangoCats@feddit.it on 12 Sep 20:04 collapse

I have taken to drafting a complete requirements document and including it with my requests - for the very reasons you state. It seems to help.

humanspiral@lemmy.ca on 12 Sep 12:08 next collapse

There’s a big difference between writing 100% of 90% of programs, and 90% of code in each program. The other 10% can be difficult, and part of the 10% is fixing that 90% not working.

melsaskca@lemmy.ca on 12 Sep 12:59 next collapse

Everyone throughout history who invented a widget the masses wanted automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what is best for us. Fucking capitalism. Fucking billionaires.

jali67@lemmy.zip on 12 Sep 13:09 collapse

They need to go, whether through legislation or other means

cerebralhawks@lemmy.dbzer0.com on 12 Sep 13:09 next collapse

He looked at where AI was six months prior and made a wild speculation that, given the data, seemed plausible, if a little outlandish. I’m not mad.

One day that prediction may come true, and there may come another day, later, where we agree that the time between when he said it would happen and the time it actually did happen is not significant enough to mention.

Scolding7300@lemmy.world on 12 Sep 19:57 collapse

There’s a difference between saying we’re 0.5y away from that reality and saying it will be a reality one day. The first one implies there’s much more value to the product as it is today than the second one does; it implies it’s just a matter of small tweaks and of the industry catching up in adoption, to the extent mentioned in the title

inclementimmigrant@lemmy.world on 12 Sep 14:05 next collapse

My company and specifically my team are looking at incorporating AI as a supplement to our coding.

We looked at the code produced and determined that it’s of the quality of a new hire. However, we’re going in with eyes wide open (me, skeptical AF) and trying to use it in a limited way to help relieve some of the burdens on our SW engineers, not replace them. I’m leading the effort to use it for writing unit tests, because none of us particularly likes writing unit tests, and they’ve got a very nice, easy, established pattern that the AI can follow.
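
To give a flavor of the kind of pattern I mean (a hypothetical example, not our actual code; assumes pytest, and validate_email and its module are made up):

    import pytest

    from myapp.validators import validate_email  # hypothetical function under test

    # One established shape, repeated per case: exactly the boilerplate
    # nobody enjoys writing and that the AI can pattern-match reliably.
    @pytest.mark.parametrize("value,expected", [
        ("user@example.com", True),         # plain address
        ("user+tag@example.com", True),     # plus addressing
        ("no-at-sign.example.com", False),  # missing @
        ("user@", False),                   # missing domain
    ])
    def test_validate_email(value, expected):
        assert validate_email(value) == expected

Mostly the AI just has to extend the case table, which is where it behaves itself.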

rumba@lemmy.zip on 12 Sep 15:37 next collapse

We’ve been poking at it for a while now. The parent company is demanding we see where it can fit. We’ve found some solid spots.

It’s not good at ingesting a sprawling project and rooting changes in several places, but it’s not bad at looking over a file and making best-practice recommendations. I’ve seen it preemptively find some bugs in old code.

If you want to use a popular library you’re not familiar with, it’ll wedge it in your current function reasonably well; you’ll need to touch it, but you probably won’t need to RTFM.

It’s solid at documenting existing code. Make me a manual page for every function/module in this project.

It can make a veteran programmer faster by making boilerplates and looking over their shoulder for problems. It has some limited use for peer programming.

It will NOT let you hire a green programmer instead of a veteran, but it can help a green programmer come up to speed faster, as long as you forbid them from copy/paste.

LiamMayfair@lemmy.sdf.org on 12 Sep 16:55 next collapse

Writing tests is the one thing I wouldn’t get an LLM to write for me right now. Let me give you an example. Yesterday I came across some new unit tests someone’s agentic AI had written recently. The tests were rewriting the code they were meant to be testing in the test itself, then asserting against that. I’ll say that again: rather than calling out to some function or method belonging to the class/module under test, the tests were rewriting the implementation of said function inside the test. Not even a junior developer would write that nonsensical shit.
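
A toy version of that anti-pattern, with a hypothetical slugify function standing in for the real code:

    from myapp.text import slugify  # hypothetical function under test

    # The anti-pattern: the test re-implements the logic inside itself and
    # asserts against its own copy. It never calls slugify(), so it can
    # never catch a regression in the real implementation.
    def test_slugify_rewritten():
        slug = "Hello World".lower().replace(" ", "-")
        assert slug == "hello-world"

    # What it should do instead: exercise the function under test directly.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"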

The code those unit tests were meant to be testing was LLM written too, and it was fine!

So right now, getting an LLM to write some implementation code can be ok. But for the love of god, don’t let them anywhere near your tests (unless it’s just to squirt out some dumb boilerplate helper functions and mocks). LLMs are very shit at thinking up good test cases right now. And even if they come up with good scenarios, they may pull these stunts on you like they did to me. Not worth the hassle.

MangoCats@feddit.it on 12 Sep 20:00 collapse

Trusting any new code blindly is foolish; even if you’re paying a senior dev $200K/yr for it, it should be reviewed and understood by other team members before accepting it. The same is true for an LLM, but of course most organizations never do real code reviews in either scenario…

20ish years ago, I was a proponent of pair programming. It’s not for everyone, and it’s not for anyone 40 hours a week, but in appropriate circumstances, for a few hours at a session, it can be hugely beneficial. It’s like a real-time code review during development. Pair programming seems no more popular today than it was back then, maybe even less so, but… “vibe coding” with LLMs in chat mode? That can be a very similar experience, up to a point.

UnderpantsWeevil@lemmy.world on 12 Sep 19:45 collapse

We looked at the code produced and determined that it’s of the quality of a new hire.

As someone who did new hire training for about five years, this is not what I’d call promising.

MangoCats@feddit.it on 12 Sep 19:56 collapse

We looked at the code produced and determined that it’s of the quality of a new hire.

As someone who did new hire training for about five years, this is not what I’d call promising.

Agreed, however, the difference between a new hire who requires a desk and a parking space and a laptop and a lunch break and salary and benefits, who is likely to “pursue other opportunities” after a few months or years, and who might turn around and sue the company for who knows what, and an AI assistant with a $20/mo subscription fee, is enormous.

Would I be happy with new-hire code out of a $80K/yr headcount, did I have a choice?

If I get that same code, faster, for 1% of the cost?

homura1650@lemmy.world on 12 Sep 20:12 next collapse

New hires are often worse than useless. The effort that experienced developers spend assisting them is more than it would take those developers to do the work themselves.

MangoCats@feddit.it on 13 Sep 17:04 collapse

Yes, this is the cost of training, and it is high, but also necessary if you are going to maintain a high level of capability in house.

Management loves the idea of outsourcing, my experience of outsourcing is that the ultimate costs are far higher than in house training.

korazail@lemmy.myserv.one on 12 Sep 20:26 next collapse

That new hire might eat resources, but they actually learn from their mistakes and gain experience. If you can’t hold on to them once they have experience, that’s a you problem. Be more capitalist and compete for their supply of talent; if you are not willing to pay for the real human, then you can have a shitty AI that will never grow beyond a ‘new hire.’

The future problem, though, is that without the experience of being a junior dev, where do you think senior devs come from? Can’t fix crappy code if all you know how to do is engineer prompts to a new hire.

“For want of a nail,” no one knew how to do anything in 2030. Doctors were AI, Programmers were AI, Artists were AI, Teachers were AI, Students were AI, Politicians were AI. Humanity suffered and the world suffocated under the energy requirements of doing everything poorly.

MangoCats@feddit.it on 14 Sep 02:05 collapse

If you can’t hold on to them once they have experience, that’s a you problem.

I work at a large multi-national corp with competitive salaries, benefits, excellent working conditions, advancement opportunities, etc. I still have watched promising junior engineers hit the door just when they were starting to be truly valuable contributors.

you can have a shitty AI that will never grow beyond a ‘new hire.’

So, my perspective on this is that over the past 12 months, AI has advanced more quickly than all the interns and new hires I have worked with over the past 3 decades. It may plateau here in a few months; even if it does, it’s already better than half of the software engineers with 2 years of experience I have worked with, at least at writing code from natural language specs provided to it.

The future problem, though, is that without the experience of being a junior dev, where do you think senior devs come from?

And I absolutely agree, the junior dev pipeline needs to stay full, because writing code is less than half of the job. Knowing what code needs writing is a huge part of it, crafting implementable and testable requirements, learning the business and what is important to the business, that has always been more than half of my job when I had the title “Software Engineer”.

the world suffocated under the energy requirements of doing everything poorly.

While I sympathize, the energy argument is a pretty big red herring. What’s the energy cost of a human software engineer? They have a home that has to be built, maintained, powered, etc. Same for their transportation which is often a privately owned automobile, driving on roads that have to be built and maintained. They have to eat, they need air conditioning, medical care, dental care, clothes, they have children who need to spend 20 years in school, they take vacations on cruise ships or involving trans-oceanic jet travel… add up all that energy and divide it by their productive output writing code for their work… if AI starts helping them write that code even 2x faster, the energy consumed by AI is going to be trivial compared to the energy consumed by the software engineer per unit of code produced, even if producing code is only 20% of their total job.

I would say the same goes for Doctors, Teachers, Politicians, etc. AI is not going to replace 100% of any job, but it may be dramatically accelerating 30% or more of many of them, and that increase in productivity / efficiency / accuracy is going to pay off in terms of fewer ProfessionX required to meet demands and/or ProfessionX simply serving the world better than they used to.

My sister-in-law was a medical transcriptionist - made good money, for a while. Then doctors replaced her with automatic transcription; essentially, the doctors quit outsourcing their typing work to humans and started trusting machines to do it for them. All in all, the doctors are actually doing more work now than they did before, when they had human transcriptionists they could trust, because now they have AI transcription that they need to check more closely for mistakes than they did their human transcriptionists’ work, but the cost differential is just too big to ignore. That’s a job that was “eliminated” by automation, at least 90% or more, in the last 20 years. But it was really a “doctor accessory” job; we still have doctors, even though they are using AI assistants now…

UnderpantsWeevil@lemmy.world on 12 Sep 21:08 collapse

Would I be happy with new-hire code out of a $80K/yr headcount, did I have a choice?

If I get that same code, faster, for 1% of the cost?

The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style. And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living, not because they’re generating $80k/year of value in a given pay period.

Maybe you get code a bit faster and even a bit cheaper (for now - those teaser rates never last long term). But who is going to be reviewing it in another five or ten years? Your best people will keep moving to other companies or retiring. Your worst people will stick around slapping the AI feed bar and stuffing your codebase with janky nonsense fewer and fewer people will know how to fix.

Long term, it’s a death sentence.

Mniot@programming.dev on 13 Sep 18:51 next collapse

The theory is that the new hire gets better over time

It always amazes me how few people get this. Have they only ever made terrible hires?

The way that a company makes big profits is by hiring fresh graduates and giving them a cushy life while they grow into good SWEs. By the time you’re paying $200k for a senior software engineer, they’re generating far more than that in value. And you only had to invest a couple years and some chump change.

But now businesses only think in the short-term and so paying $10k for a month of giving Anthropic access to our code base sounds like a bargain.

MangoCats@feddit.it on 14 Sep 01:44 collapse

Agreed… however:

The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style.

The practice is that over half of them move on to “other opportunities” within a couple of years, even if you give them good salary, benefits and working conditions.

And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living

Not in the US. In the US they’re commanding $80k/yr because of supply and demand, it has very little to do with cost of living. I suppose when you get supply so high / demand so low, you eventually hit a floor where cost of living comes into play, but in many high supply / low demand fields that doesn’t happen until $30k/yr or even lower… Case in point: starting salaries for engineers in the U.S. were around $30-40k/yr up until the .com boom, at which point software engineering capable college graduates ramped up to $70k/yr in less than a year, due to demand outstripping supply.

stuffing your codebase with janky nonsense

Our codebase had plenty of janky nonsense before AI came around. Just ask anyone: their code is great, but everyone else’s code is a bunch of janky nonsense. I actually have some hope that AI generated code may improve to a point where it becomes at least more intelligible to everyone than those other programmers’ janky nonsense. In the past few months I have actually seen Anthropic/Claude’s code output improve significantly toward this goal.

Long term, it’s a death sentence.

Definitely is; the pipeline should continue to be filled, and dismissing seasoned talent is a mistake. However, I suspect everyone in the pipeline would benefit from learning to work with the new tools, at least the “new tools” as they’ll be in a year or so. The stuff I saw coming out of AI a year ago? Not really worthwhile at the time, but today it is showing promise - at least at the microservice level.

UnderpantsWeevil@lemmy.world on 14 Sep 13:59 collapse

The practice is that over half of them move on to “other opportunities” within a couple of years, even if you give them good salary, benefits and working conditions.

In my experience (coming from O&G IT), there's a somewhat tight-knit circle of contractors and businesses tied to specific applications. And you just cycle through this network over time.

I’ve got a number of coworkers who are ex-contractors and a contractor lead who used to be my boss. We all work on the same software for the same company either directly or indirectly. You might move to command a higher salary, but you’re all leveraging the same accrued expertise.

If you cut off that circuit of employment, the quality of the project will not improve over time.

In the US they’re commanding $80k/yr because of supply and demand

You’ll need to explain why all the overseas contractors are getting paid so much less, in that case.

Again, we're all working on the same projects for the same people with comparable skills. But I get paid 3x my Indian counterpart to be in the correct timezone and to have fluent enough English to deal with my bosses directly.

Case in point: starting salaries for engineers in the U.S. were around $30-40k/yr up until the .com boom, at which point software-engineering-capable college graduates ramped up to $70k/yr in less than a year, due to demand outstripping supply.

But then the boom busted and those salaries deflated down to the $50k range.

I had coworkers who would pine for the Y2K era, when they were making $200k in the mid '90s to do remedial code cleanup. But that was a very short-lived phenomenon. All that work would be outsourced overseas in the modern day.

Our codebase had plenty of janky nonsense before AI came around.

Speeding up the rate of coding and volume of code makes that problem much worse.

I’ve watched businesses lose clients - I even watched a client go bankrupt - from bad coding decisions.

In the past few months I have actually seen Anthropic/Claude’s code output improve significantly toward this goal.

If you can make it work, more power to you. But it's a dangerous game, and I see a few other businesses playing it without caution or comparable results.

MangoCats@feddit.it on 14 Sep 16:53 collapse

You’ll need to explain why all the overseas contractors are getting paid so much less, in that case.

If you're talking about India / China working for US firms, it's supply and demand again. Indian and Chinese contractors provide a certain kind of value, while domestic US direct employees provide a different kind of value - as you say: ease of communication, time zone, etc. The Indians and Chinese have very high supply numbers; if they ask for more salary they'll just be passed over for equivalent people who will do it for less. US software engineers with decades of experience are in shorter supply, and in higher demand by many US firms, so…

Of course there’s also a huge amount of inertia in the system, which I believe is a very good thing for stability.

But then the boom busted and those salaries deflated down to the $50k range.

And that was a very uneven thing, but yes: starting salaries on the open market did deflate after .com busted. Luckily, I was in a niche where most engineers were retained after the boom, and inertia kept our salaries high.

$200K for remedial code cleanup should be a transient phenomenon, when national median household income hovers around $50-60K. With good architecture and specification development, AI can do your remedial code cleanup now, but you need that architecture and specification skill…

I’ve watched businesses lose clients - I even watched a client go bankrupt - from bad coding decisions.

I interviewed with a shop in a University town that had a mean 6-month turnover rate for programmers, and they paid the fresh-out-of-school kids about 1/3 my previous salary. We were exploring the idea of me working for them for 1/2 my previous salary, basically until I found a better fit. Ultimately they decided not to hire me, with the stated reason being not that my salary demands were too high, but that I'd just find something better and leave them. Well… my "find a new job in this town" period runs 3-6 months even when I have no job at all, so how can you lose anything when you burn through new programmers every 6 months or less? I believe the real answer was that they were afraid I might break their culture - start retaining programmers and building up a sustained team like in the places I came from - and they were making plenty of money doing things the way they had been doing them for 10 years so far…

it's a dangerous game, and I see a few other businesses playing it without caution or comparable results.

From my perspective, I can do what needs doing without AI. Our whole team can, and nobody is downsizing us or demanding accelerated schedules. We are getting demands to keep the schedules the same while all kinds of new data privacy and cybersecurity documentation demands are being piled on top. We’re even getting teams in India who are allegedly helping us to fulfill those new demands, and I suppose when the paperwork in those areas is less than perfect we can “retrain” India instead of bringing the pain home here. Meanwhile, if AI can help to accelerate our normal work, there’s plenty of opportunity for exploratory development of new concepts that’s both more fun for the team and potentially profitable for the company. If AI turns out to be a bust, most engineers on this core team have been supporting similar products for 10-20 years… we handled it without AI before…

UnderpantsWeevil@lemmy.world on 14 Sep 19:49 collapse

If you’re talking about India / China working for US firms, it’s supply and demand again.

It’s clearly not. Otherwise, we wouldn’t have a software guy left standing inside the US.

I interviewed with a shop in a University town that had a mean 6 month turnover rate for programmers

That’s just a bad business.

I can do what needs doing without AI.

More power to you.

MangoCats@feddit.it on 14 Sep 20:15 collapse

If you’re talking about India / China working for US firms, it’s supply and demand again.

It’s clearly not. Otherwise, we wouldn’t have a software guy left standing inside the US.

India / China can do a lot of things. For my company, they're very strong in terms of producing products for their domestic market. They're not super helpful per capita on US-market-oriented tasks, but they're cheap - so we try to use them where we can.

There aren't a lot of good US software employees standing around unemployed… A lot of those I've interviewed as "available" are not even as good as what we get from India, but we have a house full of good developers already.

That’s just a bad business.

While I might reflexively agree, you have to ask yourself: from what perspective? Their customers may not be the happiest with the quality of the product, but for some reason they keep buying it, and the business keeps expanding and making more and more profit as the years go by… In my book that's a better business than the upstanding shop I worked for for 12 years that eventually went bust - because we put too much effort into making good stuff by hiring good people to make it, and not enough effort into selling the stuff so we could continue to operate.

FlashMobOfOne@lemmy.world on 12 Sep 14:21 next collapse

They’re certainly trying.

And the weird-ass bugs are popping up all over the place because they apparently laid off their QA people.

sidereal@kolektiva.social on 12 Sep 15:10 next collapse

@Scolding7300 If the software can make so much money coding, surely they don't need VC money anymore...?

manchicken@defcon.social on 12 Sep 15:26 next collapse

@Scolding7300 still wrong.

Taleya@aussie.zone on 12 Sep 15:34 next collapse

Ehh, it's less "technology bounds into the future!!" and more the dude who said someone was gonna fuck your corpse coming up behind you with a knife and an unzipped fly

ArmchairAce1944@discuss.online on 12 Sep 16:15 next collapse

I studied coding for years and even took a bootcamp (and did my own refresher courses), but I never landed a job. One thing AI can do for me is help with troubleshooting or some minor boilerplate code, but not do the job for me. I will be a hobbyist and hopefully aid in open source projects some day… any day now!

zeca@lemmy.ml on 12 Sep 17:00 next collapse

Volume means nothing. It could easily be writing 99.99% of all code, with only about 5% of that actually being used successfully by someone.

UnderpantsWeevil@lemmy.world on 12 Sep 19:44 next collapse

I was going to say… this is a bit like claiming "AI is sending 90% of emails". Okay, but if it's all spam, what are you bragging about?

Very possible that 90% of code is being written by AI and we don’t know it because it’s all just garbage getting shelved or deleted in the back corner of a Microsoft datacenter.

zqps@sh.itjust.works on 13 Sep 09:30 collapse

The number is bullshit in the first place, meant only to impress clueless CEOs.

SethTaylor@lemmy.world on 12 Sep 21:20 collapse

So true. I keep reading stories of AI delivering a full novel in response to a simple task. Even when it works it’s bulky for no reason.

SaveTheTuaHawk@lemmy.ca on 12 Sep 17:11 next collapse

He’s as prophetic as Elon Musk.

RedFrank24@lemmy.world on 12 Sep 20:00 next collapse

Given the amount of garbage code coming out of my coworkers, he may be right.

I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I'd written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn't perfect by any means, but I can at least tell you what it's doing.

Patches@ttrpg.network on 12 Sep 20:27 next collapse

To be fair.

You could’ve asked some of those coworkers the same thing 5 years ago.

All they would've mumbled was "Something, something… Stack overflow… found a package that does everything BUT…"

And delivered equal garbage.

orrk@lemmy.world on 12 Sep 21:52 next collapse

no, generally the package would still be better than whatever the junior did, or the AI does now

RedFrank24@lemmy.world on 12 Sep 23:38 next collapse

I like to think there’s a bit of a difference between copying something from stackoverflow and not being able to read what you just pasted from stackoverflow.

Sure, you can be lazy and just paste something and trust that it works, but if someone asks you to read that code and know what it’s doing, you should be able to read it. Being able to read code is literally what you’re paid for.

MiddleAgesModem@lemmy.world on 13 Sep 01:41 collapse

The difference you’re talking about is making an attempt to understand versus blindly copying, not using AI versus stackoverflow

supersquirrel@sopuli.xyz on 13 Sep 21:52 collapse

No, the AI was most certainly trained on the same stack overflow posts as humans would manually search out in the past.

Thus the effective difference is precisely that between an active attempt to understand and blindly copying, since the AI is specifically there to introduce a stochastic opaqueness between truth (i.e. sufficiently curated training data) and interpretation of truth.

There is a context to stackoverflow posts and comments that can be analyzed from many different perspectives by the human brain (who posted the question with what tone, do the answers/comments tend to agree, how long ago was it posted, etc…); by the very way LLMs work, they destroy that in favor of a hallucinating-yet-authoritative disembodied voice.

MiddleAgesModem@lemmy.world on 16 Sep 10:53 collapse

No, the difference is Stackoverflow is older and more established, and AI is newer and demonized.

I’ve learned a lot of completely accurate information from AIs. More so than I would have with shitty condescending people.

You can use AI to think for you or you can use it to help you understand. Anything can be analyzed from multiple perspectives with AI, you just have to pursue that. Just like you would without it.

You think AI can’t tell you who wrote something? Or analyze comments? Or see how long ago something was posted? That’s showing the ignorance inherent in the anti-AI crusade.

foenkyfjutschah@programming.dev on 13 Sep 10:48 collapse

yes, but it’s way more energy efficient to produce that garbage.

jumping_redditor@sh.itjust.works on 13 Sep 12:15 next collapse

is the garbage per hour higher though?

foenkyfjutschah@programming.dev on 13 Sep 14:44 collapse

don't know, i do neither. but i think the time users take manually copying and adjusting code from a quick web search may level out the time an LLM takes.

Honytawk@lemmy.zip on 13 Sep 15:04 collapse

I hate that argument.

It is even more energy efficient to write your code on paper. So we should stop using computers entirely. /s

Mniot@programming.dev on 13 Sep 18:24 next collapse

We’re talking here about garbage code that we don’t want. If the choice is “let me commit bad code that causes problems or else I will quit using computers”… is this a dilemma for you?

supersquirrel@sopuli.xyz on 13 Sep 21:45 collapse

Are you aware of a little thing called the climate catastrophe that is unfolding as we speak?

HugeNerd@lemmy.ca on 12 Sep 20:29 next collapse

No one really knows what code does anymore. Not like in the days of 8-bit CPUs and 64K of RAM.

NikkiDimes@lemmy.world on 13 Sep 02:26 next collapse

That's insane. Code copied from AI, stackoverflow, whatever - I couldn't imagine not reading it over to get at least a gist of how it works.

DacoTaco@lemmy.world on 13 Sep 11:47 next collapse

It's imo the difference between being a code junkie and a senior dev/architect :/

Honytawk@lemmy.zip on 13 Sep 15:05 collapse

I think the technical term is script kiddie

DacoTaco@lemmy.world on 13 Sep 16:18 collapse

Imo there is a difference between script kiddie and coding junkie

Mniot@programming.dev on 13 Sep 18:26 collapse

Coding junkie is where you sneak away from your friends and code a few lines in the bathroom

jumping_redditor@sh.itjust.works on 13 Sep 12:14 collapse

insane? Nah, that's just laziness, and surprisingly effective at keeping a job for some amount of time

lustyargonian@lemmy.zip on 13 Sep 20:37 collapse

People are still pasting stuff? I thought by now agentic coding or AI in editors would be the norm.

scarabic@lemmy.world on 12 Sep 20:10 next collapse

These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

DragonTypeWyvern@midwest.social on 12 Sep 20:45 next collapse

Malicious compliance time

clif@lemmy.world on 12 Sep 21:57 next collapse

Ask it to write a <reasonable number> of lines of lorem ipsum across <reasonable number> of files for you (see the sketch below).

… Then think harder about how to obfuscate your compliance because 10m lines in 10 min probably won’t fly (or you’ll get promoted to CTO)
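For illustration, a minimal sketch of what step one could look like - the file names and counts here are entirely hypothetical:

```python
import os
import random

# Hypothetical malicious-compliance generator: N files of M lines of
# lorem ipsum each, so the AI-adoption line-count metrics look great.
WORDS = "lorem ipsum dolor sit amet consectetur adipiscing elit".split()

def generate(num_files: int = 10, lines_per_file: int = 1000) -> None:
    os.makedirs("compliance", exist_ok=True)
    for i in range(num_files):
        with open(f"compliance/module_{i}.txt", "w") as f:
            for _ in range(lines_per_file):
                # Each line: eight random lorem ipsum words.
                f.write(" ".join(random.choices(WORDS, k=8)) + "\n")

generate()
```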

RagingRobot@lemmy.world on 12 Sep 21:58 collapse

Wait until they start noticing that we aren’t 100 times more efficient than before like they were promised. I’m sure they will take it out on us instead of the AI salesmen

scarabic@lemmy.world on 13 Sep 01:47 collapse

It's not helping that certain people internally are lining up to show off whizbang shit they can do. It's always some demonstration, never "I completed this actual complex project on my own." But they get pats on the head and the rest of us are whipped harder.

Xed@lemmy.blahaj.zone on 12 Sep 20:11 next collapse

these tech bros just make up random shit to say to make a profit

confuser@lemmy.zip on 12 Sep 20:42 next collapse

AI writes 90% of my code… I don't code much.

clif@lemmy.world on 12 Sep 21:51 next collapse

Oh, it's writing 100% of the code for our management-level people who are excited about ""AI""

But then us plebes are rewriting 95% of it so that it will actually work (decently well).

The other day somebody asked me for help on a repo that a higher-up had shit-coded, because they couldn't figure out why it "worked" but also logged a lot of critical errors. … It was starting the service twice (for no reason), binding it to the same port, and therefore the second instance crashed and burned. That's something a novice would probably know not to do - and if not, they'd at least immediately see the problem, research it, understand it, and fix it, instead of "I *cough* built *cough* this thing, good luck fuckers".
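For anyone who hasn't hit this before, the failure mode is trivial to reproduce. A minimal sketch (hypothetical, nothing from the actual repo):

```python
import socket

def start_service(port: int) -> socket.socket:
    # Stands in for "starting the service": bind and listen on a TCP port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

first = start_service(8080)   # fine: the port is free
second = start_service(8080)  # OSError: address already in use
```

The first instance keeps "working" while the second crashes on startup and spams the logs - exactly the worked-but-critical-errors symptom.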

VoterFrog@lemmy.world on 12 Sep 21:52 next collapse

Definitely depends on the person. There are people who are getting 90% of their coding done with AI - I'm one of them. I have over a decade of experience, and I consider coding to be the easiest but most laborious part of my job, so it's a welcome change.

One thing that's really changed the game recently is RAG and tools with very good access to our company's data. Good context makes a huge difference in the quality of the output. For my latest project, I've been using 3 internal tools. An LLM browser plugin, which has access to our internal data and lets you pin pages (and docs) you're reading for extra focus. A coding assistant, which also has access to internal data and repos but is trained for coding. Unfortunately, it's not integrated into our IDE. The IDE agent has RAG where you can pin specific files, but without broader access to our internal data, its output is a lot poorer.

So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.

First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I'm not interested in code, just a translation of design ideas and requirements into actionable steps. (Even if you don't have the same setup as me, give this a try: asking an LLM to reason its way through a guide helps it handle a lot more complicated tasks.) Then I pass that to the coding assistant for code creation, including any relevant files as context. That code gets copied to the IDE. The whole process takes a couple of minutes at most, and that gets you like 90% there.
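In pseudo-Python, the guide-then-code chain looks something like this (our tools are internal, so `complete()` is a hypothetical stand-in for whatever LLM client you have; the prompts are illustrative only):

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; swap in your own client.
    raise NotImplementedError

def implement_task(design_doc: str, task: str, context_files: list[str]) -> str:
    # Step 1: ask for a plan, not code. Having the model reason its way
    # through a guide first helps it handle more complicated tasks.
    guide = complete(
        "Given this design documentation:\n" + design_doc + "\n\n"
        "Write a step-by-step implementation guide for this task: " + task + "\n"
        "No code - just actionable steps."
    )
    # Step 2: hand the guide plus the relevant files to the coding assistant.
    context = "\n\n".join(context_files)
    return complete(
        "Relevant files:\n" + context + "\n\n"
        "Follow this guide and write the code:\n" + guide
    )
```

The point is the intermediate guide: the coding model gets actionable steps instead of raw design prose, and that's what gets you most of the way there.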

Next is to get things compiling. This is either manual or done in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it'll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, it's time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.

All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.

zalgotext@sh.itjust.works on 12 Sep 22:19 collapse

But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.

The parts you’re doing yourself are writing tests and fixing vibe-coded bugs. And you’re outsourcing all the creative, design-based aspects of programming. I think you and I have very different definitions of “mundane” and “toilsome”.

VoterFrog@lemmy.world on 12 Sep 23:04 collapse

What? I’ve already written the design documentation and done all the creative and architectural parts that I consider most rewarding. All that’s left for coding is answering questions like “what exactly does the API I need to use look like?” and writing a bunch of error handling if statements. That’s toil.

zalgotext@sh.itjust.works on 12 Sep 23:31 collapse

No, your LLM writes your design documentation and tells you how your application is supposed to work, according to what you wrote.

Also, you're either writing dead-simple applications, or you're being incredibly hyperbolic, if those are the only questions left after your design document is written.

philosloppy@lemmy.world on 12 Sep 23:57 next collapse

The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy's prognostications about his company's products, perhaps they should work on being less credulous.

bluesheep@sh.itjust.works on 13 Sep 01:24 next collapse

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

You must be delusional to believe this

renrenPDX@lemmy.world on 13 Sep 02:51 next collapse

It's not just code, but day-to-day shit too. Lately, corporate communications and even training modules feel heavily AI-generated. Things like unnecessary em dashes (I'm talking as many as 4 out of 5 sentences in a single paragraph) and repeated statements or bullet points in training modules. We're being encouraged to use our "private" Copilot for everyday tasks, and everything is Copilot-enabled.

I don't mind if people use it, but it's dangerous and stupid to think that it produces near-perfect results every time. It's been good enough to work as an early rough draft or something similar, but it REQUIRES scrutiny and refinement by hand. It's like it can get you from nothing to 60-80% of the way there, but never higher. The quality of output can vary significantly from prompt to prompt, in my limited experience.

Evotech@lemmy.world on 13 Sep 06:30 collapse

Yeah, I try to use AI a fair bit in my work. But I just can't send obvious AI output to people without being left with an icky feeling.

zarkanian@sh.itjust.works on 13 Sep 02:57 next collapse

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

You can’t tell me these things don’t have a sense of humor. This is beautiful.

supersquirrel@sopuli.xyz on 13 Sep 21:43 collapse

This is beautiful.

These are truly lyrics; they're begging for a banger of a pop music video.

Shanmugha@lemmy.world on 13 Sep 11:56 next collapse

@Angry_Autist@lemmy.autism.place I feel obliged to tag you here

greedytacothief@lemmy.dbzer0.com on 13 Sep 12:44 next collapse

I'm not sure how people can use AI to code - granted, I'm just trying to get back into coding. Most of the times I've asked it for code, it's either been confusing or wrong. If I go through the trouble of writing out docstrings and then fixing what the AI has written, it becomes more doable. But don't you hate the feeling of not understanding what the code you've "written" does, or more importantly why it's been done that way?

AI is only useful if you don’t care about what the output is. It’s only good at making content, not art.

Hackworth@sh.itjust.works on 13 Sep 13:16 next collapse

I’m a video producer who occasionally needs to code. I find it much more useful to write the code myself, then have AI identify where things might be going wrong. I’ve developed a decent intuition for when it will be helpful and when it will just run in circles. It has definitely helped me out of some jams. Generative images/video are in much the same boat. I almost never use a fully AI shot/image in professional work. But generative fill and generative extend are extremely useful.

greedytacothief@lemmy.dbzer0.com on 13 Sep 15:50 collapse

Yeah, I find it can be useful in some stages of writing or researching. But by the time I’ve got a finished product there’s really no AI left in there.

i_dont_want_to@lemmy.blahaj.zone on 13 Sep 14:25 collapse

I worked with someone who, I later found out, used AI to code her stuff. She knew how to code some, but didn't understand a lot of fundamentals.

Turns out, she would have AI write most of it, tweak it to work with her test cases, and call it good.

Half of my time was spent fixing her code, and when she was fired, our customer complaints went way down.

surph_ninja@lemmy.world on 13 Sep 17:19 next collapse

The study they're basing the 'AI slows down programmers' claim on forced software engineers to use AI in their workflow, without any previous experience with that workflow.

Mniot@programming.dev on 13 Sep 18:21 collapse

It does seem silly, but it’s perfectly aligned with the marketing hype that the AI companies are producing.

panda_abyss@lemmy.ca on 13 Sep 20:00 next collapse

Are we counting the amount of junk code that you have to send back to Claude to rewrite because it’s spent the last month totally lobotomized yet they won’t issue refunds to paying customers?

Because if we are, it has written a lot of code. It’s just awful code that frequently ignores the user’s input and rewrites the same bug over and over and over until you get rate limited or throw more money at Anthropic.

lustyargonian@lemmy.zip on 13 Sep 20:35 next collapse

I can say 90% of the PRs at my company clearly look AI-generated, or are declared to be, judging by the random things that still slip by in the commits - so maybe he's not wrong. In fact, people are looked down upon if they aren't using AI, and are celebrated for figuring out how to effectively make AI do the job right. But I can't say if that's the case at other companies.

EldenLord@lemmy.world on 13 Sep 21:19 next collapse

Well, 90% of code, of which only 3% works. That sounds about right.

ohshittheyknow@lemmynsfw.com on 14 Sep 13:32 next collapse

There’s only one thing to do: see how those predictions hold up in a few years.

Or maybe try NOT putting an LLM in charge of these other critical issues after seeing how much of a failure it is.

ImmersiveMatthew@sh.itjust.works on 14 Sep 15:11 next collapse

AI writes 100% of my code, but this is only a small percent of the overall development effort.

melfie@lemy.lol on 14 Sep 15:59 collapse

I use Copilot at work and overall enjoy using it. I’ve seen studies suggesting that it makes a dev maybe 15% more productive in the aggregate, which tracks with my own experience, assuming it’s used with a clear understanding of its strengths and weaknesses. No, it’s not replacing anyone, but it’s good for rubber ducking if nothing else.