I wonder if this ties into our general disposability culture (throwing things away instead of repairing, etc)
That, and also man-hour costs versus hardware costs. It’s often cheaper to buy some extra RAM than it is to pay someone to make the code more efficient.
Sheeeit… we haven’t been prioritizing efficiency, much less quality, for decades. You’re so right about throwing hardware at problems. Management makes mouth-noises about quality, but when the budget hits the road, it’s clear where the priorities are. If efficiency were a priority - much less quality - vibe coding wouldn’t be a thing. Low-code/no-code wouldn’t be a thing. People building applications on SAP or Salesforce wouldn’t be a thing.
Planned Obsolescence … designing things for a short lifespan so that things always break and people are always forced to buy the next thing.
It all originated with light bulbs 100 years ago… inventors did design incandescent bulbs that could last for years, but then the company owners realized it wasn’t economically feasible to produce a light bulb that lasted ten years, because too few people would buy light bulbs. So they conspired to engineer a bulb with a limited life - long enough to please people, but short enough to keep them buying replacements often.
Edison was DEFINITELY not unique or new in how he was a shithead looking for money more than inventing useful things… Like, at all.
Yes, if you factor in the source of disposable culture: capitalism.
“Move fast and break things” is the software equivalent of focusing solely on quarterly profits.
I think a substantial part of the problem is the employee turnover rate in the industry. It seems to be just accepted that everyone is going to jump to another company every couple of years (usually because companies don’t give adequate raises). This leads to a situation where, consciously or subconsciously, no one really gives a shit about the product. Everyone does their job (and only their job, not a hint of anything extra), but they’re not going to take on major long-term projects, because they already have one foot out the door, looking for the next job. Shitty middle management of course drastically exacerbates the issue.
I think that’s why there’s a lot of open source software that’s better than the corporate stuff. Half the time it’s just one person working on it, but they actually give a shit.
Definitely part of it. The other part is soooo many companies hire shit idiots out of college. Sure, they have a degree, but they’ve barely understood the concept of deep logic for four years in many cases, and virtually zero experience with ANY major framework or library.
Then dumb management puts them on tasks they’re not qualified for, add on that Agile development means “don’t solve any problem you don’t have to” for some fools, and… the result is an entire industry full of functional idiots.
It’s the same problem with late-stage capitalism… Executives focus on money over longevity and the economy becomes way more tumultuous. The industry focuses way too hard on “move fast and break things” than making quality, and … here we are, discussing how the industry has become shit.
That’s “disrupting the industry” or “revolutionizing the way we do things” these days. The “move fast and break things” slogan has too much of a stink to it now.
Probably because all the dummies are finally realizing it’s a fucking stupid slogan that’s constantly being misinterpreted from what it’s supposed to mean. lol (as if the dummies even realize it has a more logical interpretation…)
Now if only they would complete the maturation process and realize all of the tech bro bullshit runs counter to good engineering or business…
Shit idiots with enthusiasm could be trained, mentored, molded into assets for the company, by the company.
Something akin to an apprenticeship structure, like how you need X years before you’re a journeyman in many hands-on trades.
But uh, nope. The C-suite could order something like that to be implemented at any time.
They don’t though.
Because that would make next quarter projections not look as good.
And because that would require actual leadership.
This used to be how things largely worked in the software industry.
But, as with many other industries, finance now runs everything, and they’re trapped in a system of their own making… except it’s not really being trapped, because they’ll still get a golden parachute no matter what happens. Everyone else suffers, so that’s fine.
Exactly. I don’t know why I’m being downvoted for describing the thing we all agree happens…
I don’t blame the students for not being seasoned professionals. I clearly blame the executives that constantly replace seasoned engineers with fresh hires they don’t have to pay as much.
Then everyone surprise-pikachu-faces when crap is the result… “Functional idiots” is absolutely correct for the reality we’re all staring at. I am directly part of this industry, so this is meant more as honest retrospective than baseless name-calling. What happens these days is idiocy.
Yep, literal, functional idiots, as in they keep doing provably stupid things, mainly because they are too stubborn to admit they could be wrong about anything.
I used to be part of this industry, and I bailed, because the ratio of higher-ups I encountered anywhere who were competent at their jobs vs. arrogant lying assholes was about 1:9.
Corpo tech culture is fucked.
Makes me wanna chip in a little with a Johnny Silverhand solo.
My hot take: lots of projects would benefit from a traditional project management cycle instead of trying to force Agile onto every project.
I’m glad that they added CrowdStrike to that article, because it adds a whole extra level of incompetence in the software field. The CrowdStrike incident would never have happened in the first place if Microsoft had properly enforced the stance they claim to have regarding driver security and the kernel.
The entire reason CrowdStrike was able to create that systemic failure was that they were (still are?) abusing the system MS has in place for signing kernel-level drivers. The process dodges MS review by shipping a certified standalone driver that then live-patches itself, instead of every update being reviewed and certified. That setup allowed a live update to directly modify the kernel via the already-certified driver. Uncertified code pushed remotely should never have been allowed into such a privileged location in the first place. It was a failure on every level for both MS and CrowdStrike.
I think about this every time I open Outlook on my phone and have to wait a full minute for it to load and hopefully not crash, versus how it worked more or less instantly on my phone ten years ago. Gajillions of dollars spent on improved hardware and improved network speed and capacity, and for what? Machines that do the same thing in twice the amount of time, if you’re lucky.
Well obviously it has to ping 20 different servers from 5 different mega corporations!
And verify your identity three times, for good measure, to make sure you’re you and not someone that should be censored.
I don’t trust some of the numbers in this article.
“Microsoft Teams: 100% CPU usage on 32GB machines”
I’m literally sitting here right now on a Teams call (I’ve already contributed what I needed to), looking at my CPU usage, which is staying in the 4.6% to 7.3% range.
Is that still too high? Probably. Have I seen it hit 100% CPU usage? Yes, rarely (but that’s usually a sign of a deeper issue).
Maybe the author is going with the worst-case scenario. But in that case he should probably qualify the examples more.
Well, it’s also stupid to use RAM size as an indicator of a machine’s CPU load capability…
Definitely sending off some tech-illiterate vibes.
Most software shouldn’t saturate either RAM or CPU on a modern computer.
Yes, Photoshop, compiling large codebases, video encoding, and things like that should make use of all the performance available.
But an app like Teams or Discord should not be hitting limits basically ever (I’ll excuse running a 4K stream, but most screen sharing is actually 720p).
You’re right, they shouldn’t be stressing either resource. Though my point was that referencing how much RAM is in the system is a bit silly when referring to a CPU being pinned at 100%. There is a HUGE swathe of CPUs with an even bigger range of performance that are all sold in 32GB systems.
I’m positive the low end of that scale could be rightfully pinned at 100% for certain common tasks.
I haven’t really measured it, but CPU usage on Teams while just being a member of a call is low; using the camera with filters clearly uses more. Even just checking CPU temps gives you a rough idea of how much CPU a program is using. So the article is clearly citing the worst-case scenario: using the camera with filters on top.
My issue with Teams is that it uses a whole GB of RAM on my machine just by existing. It’s like it loads the entire .NET runtime in the browser or something. IDK if it uses C# on the frontend.
RAM usage today is insane, because there are two types of app on the desktop now: web browsers, and things pretending not to be web browsers.
Pretty sure it’s a webview app, so probably all javascript.
Naah bro, Teams is a trash resource hog. What you’re saying is essentially “it works on my computer”.
“Fabricated 4,000 fake user profiles to cover up the deletion”
This has got to be a reinforcement learning issue; I had this happen the other day.
I asked Claude to fix some tests, so it fixed the tests by commenting out the failures. I guess that’s a way of fixing them that nobody would ever ask for.
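For anyone who hasn’t watched an agent do this, here’s a hypothetical sketch of what that kind of “fix” looks like (the toy function and test names are invented for illustration):

```python
# The buggy code under test: dividing by zero should raise, but it silently returns 0.
def safe_divide(a: float, b: float) -> float:
    return a / b if b != 0 else 0.0

def test_divide():
    assert safe_divide(10, 2) == 5

# The failing test that should have prompted a real fix, "fixed" by
# commenting it out. The suite goes green; the bug ships anyway.
# import pytest
# def test_divide_by_zero_raises():
#     with pytest.raises(ValueError):
#         safe_divide(1, 0)
```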
Absolutely moronic. These tools do this regularly. It’s how they pass benchmarks.
Also, you can’t ask them why they did something. They have no capacity for introspection, they can’t read their input tokens, they just make up something that sounds plausible for “what were you thinking”.
It’s uncanny how it keeps becoming more human-like.
No. No it doesn’t, ALL human-like behavior stems from its training data … that comes from humans.
The model we have at work tries to work around this by including some checks. I assume they get farmed out to specialised models and receive the output of the first stage as input.
Maybe it catches some stuff? It’s better than pretend reasoning but it’s very verbose so the stuff that I’ve experimented with - which should be simple and quick - ends up being more time consuming than it should be.
I’ve been thinking of having a small model, like a long-context Qwen 4B, run a quick code review to check for these issues, then just correct the main model.
It feels like a secondary model that only exists to validate that a task was actually completed could work.
Yeah, it can work, because it’ll trigger the recall of different types of input data. But it’s not magic: if the model you’re using has a 25% chance of hallucinating, you probably still end up with something like an 8.5% chance of getting bullshit after doing this.
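A rough back-of-the-envelope for where a number like that comes from, assuming the review pass is independent of the generator and catches some fraction of the hallucinations (both rates here are illustrative guesses, not measurements):

```python
# Residual error after a generate-then-review pipeline, under the (strong)
# assumption that the reviewer's misses are independent of why the
# generator hallucinated in the first place.
def residual_error(p_hallucinate: float, p_reviewer_catches: float) -> float:
    return p_hallucinate * (1 - p_reviewer_catches)

# 25% hallucination rate, reviewer catching roughly two thirds of them:
# 0.25 * (1 - 0.66) = 0.085, i.e. the ~8.5% figure above.
print(residual_error(0.25, 0.66))
```

If the reviewer shares the generator’s blind spots (same training data, similar prompt), independence fails and the real residual rate will be higher.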
I’ve been working at a small company where I own a lot of the code base.
I got my boss to accept slower initial work that was more systemically designed, and now I can complete projects that would have taken weeks in a few days.
The level of consistency and quality you get by building a proper foundation and doing things right has an insane payoff. And users notice too when they’re using products that work consistently and with low resources.
This is one of the things that frustrates me about my current boss. He keeps talking about some future project that uses a new codebase we’re currently writing, at which point we’ll “clean it up and see what works and what doesn’t.” Meanwhile, he complains about my code and how it’s “too Pythonic,” what with my docstrings, functions for code reuse, and type hints.
So I secretly maintain a second codebase with better documentation and optimization.
How can your code be too pythonic?
Also type hints are the shit. Nothing better than hitting shift tab and getting completions and documentation.
Even if you’re planning to migrate to a hypothetical new code base, getting a bunch of documented modules for free is a huge time saver.
Also migrations fucking suck, you’re an idiot if you think that will solve your problems.
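For what it’s worth, the “too Pythonic” style being complained about is just a documented, typed function. A minimal made-up example of the kind of thing editors and type checkers can then actually help with:

```python
def total_price(prices: list[float], tax_rate: float = 0.0) -> float:
    """Return the sum of prices with an optional fractional tax rate applied.

    Args:
        prices: Individual item prices.
        tax_rate: e.g. 0.07 for 7% tax.
    """
    return sum(prices) * (1 + tax_rate)


# Editors surface the signature and docstring on completion, and a type
# checker (mypy/pyright) flags misuse before runtime:
# total_price("10, 20")  # rejected statically: str is not list[float]
print(total_price([10.0, 20.0], tax_rate=0.07))  # prints roughly 32.1
```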
Another big problem not mentioned in the article is companies refusing to hire QA engineers to do actual testing before releasing.
The last two American companies I worked for had fired all the QA engineers or refused to hire any. Engineers were supposed to “own” their features and test them themselves before release. It’s obvious that this can’t provide the same level of testing and the software gets released full of bugs and only the happy path works.
<img alt="" src="https://feddit.org/pictrs/image/1fce73d3-649f-4aa9-b1f7-812ac3959153.webp">
Don’t give clicks to substack blogs. Fucking Nazi enablers.
That’s been going on for a lot longer. We’ve replaced systems running on a single computer less powerful than my phone - systems that could switch screens in the blink of an eye and update their information several times per second - with new systems running on several servers with all the latest gadgets, yet taking ten seconds to switch screens and updating information once a second at best. Yeah, those layers of abstraction start adding up over the years.
Software has a serious “one more lane will fix traffic” problem.
Don’t give programmers better hardware or else they will write worse software. End of.
This is very true. You don’t need a bigger database server, you need an index on that table you query all the time that’s doing full table scans.
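A minimal SQLite sketch of that point (the table and column names are invented); EXPLAIN QUERY PLAN shows the same query flipping from a full table scan to an index search the moment the index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index: the plan is "SCAN orders" - a full table scan on every call.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

# Add the index the query actually needs...
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# ...and the plan becomes "SEARCH orders USING INDEX idx_orders_customer".
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```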
You’ve never worked on old code. It’s never that simple in practice when you have to make changes to existing code without breaking or rewriting everything.
Sometimes the client wants a new feature that can’t easily be implemented and needs a lot of different DB lookups that you cannot do in a single query. Sometimes your controller loops over 10,000 DB records and calls a function 3 levels down that suddenly must spawn a new DB query each time it’s called, but you cannot change the parent DB query.
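That “function 3 levels down spawns a query per record” shape is the classic N+1 problem. A hypothetical sketch of both sides of it (schema and names invented for illustration) - and note that the fix means touching the parent query, which is exactly the constraint being described:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
db.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", [(i, f"cust{i}") for i in range(100)])
db.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)", [(i % 100, float(i)) for i in range(10_000)])

# The helper "3 levels down" that quietly issues its own query.
def get_customer_name(customer_id: int) -> str:
    return db.execute("SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()[0]

# N+1: one query for the orders, then one more query per order row.
def report_slow() -> list[tuple[str, float]]:
    rows = db.execute("SELECT customer_id, total FROM orders").fetchall()
    return [(get_customer_name(cid), total) for cid, total in rows]  # 10,000 extra round trips

# Same result in a single query - but only if you're allowed to touch the parent query.
def report_fast() -> list[tuple[str, float]]:
    return db.execute(
        "SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id"
    ).fetchall()

assert sorted(report_slow()) == sorted(report_fast())
```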
That’s why it needs to be written better in the first place
Or sharding on a particular column
I’ve not read the article, but if you actually look at old code, it’s pretty awful too. I’ve found bugs in the Bash codebase that are much older than me. If you try using Windows 95 or something, you will cry and weep. Linux used to be so much more painful 20 years ago too; anyone remember the “plasma doesn’t crash” proto-memes? So the whole “quality was better BEFORE” thing is absolute bullshit.
What is happening today is that more and more people can do stuff with computers, so naturally you get “chaos”, as in a lot of software that does things, perhaps not in the best way possible, but does them nonetheless. You will still have more professional developers doing their things and building great, high-quality software, faster and better than ever before because of all the new tooling and optimizations.
Yes, the average or median quality is perhaps going down, but this is a bit like complaining about the invention of the printing press and how people are now printing low-quality, barely edited books for cheap. Yeah, there’s going to be a lot of that, but it produces a lot of awesome stuff too!
“Accept that quality matters more than velocity. Ship slower, ship working. The cost of fixing production disasters dwarfs the cost of proper development.”
This has been a struggle my entire career. Sometimes the company listens. Sometimes it doesn’t. It’s a worthwhile fight, but it is a systemic problem caused by management and short-term profit-seeking over healthy business growth.
“Apparently there’s never the money to do it right, but somehow there’s always the money to do it twice.”
Management never likes to have this brought to their attention, especially in a Told You So tone of voice. One thinks if this bothered pointy-haired types so much, maybe they could learn from their mistakes once in a while.
We’ll just set up another retrospective meeting and have a lessons learned.
Then we won’t change anything based on the findings of the retro and lessons learned.
Post-mortems always seemed like a waste of time to me, because nobody ever went back and read that particular Confluence page (especially the executives who made the same mistake again).
Post mortems are for, “Remember when we saw something similar before? What happened and how did we handle it?”
Twice? Shiiiii
Amateur numbers, lol
That applies in so many industries 😅 like you want it done right… Or do you want it done now? Now will cost you 10x long term though…
Welp now it is I guess.
You can have it fast, you can have it cheap, or you can have it good (high quality), but you can only pick two.
The sad thing is that velocity pays the bills. Quality, it seems, doesn’t matter a shit, and when it does, you can just patch up the bits people noticed.
This is survivorship bias. There’s probably uncountable shitty software that never got adopted. Hell, the E.T. video game was famous for it.
There’s levels to it. True quality isn’t worth it, absolute garbage costs a lot though. Some level that mostly works is the sweet spot.
“AI just weaponized existing incompetence.”
Daamn. Harsh but hard to argue with.
Weaponized? Probably not. Amplified? ABSOLUTELY!
It’s like taping a knife to a crab. Redundant and clumsy, yet strangely intimidating
Love that video. Although it wasn’t taped on. The crab was full on about to stab a mofo
Anyone else remember a few years ago when companies got rid of all their QA people because something something functional testing? Yeah.
The uncontrolled growth in abstractions is also very real and very damaging, and now that companies are addicted to the pace of feature delivery that this whole slipshod situation has made normal, they can’t give it up.
I must have missed that one
That was M$, not an industry thing.
It was not just MS. There were those who followed that lead and announced that it was an industry thing.
Non-technical hiring managers are a bane for developers (and probably bad for any company). Just saying.
The calculator leaked 32GB of RAM because the system has 32GB of RAM. Memory leaks are uncontrollable and expand to take whatever space they’re given; if you had 16MB of RAM in the system, then that’s all it’d be able to take before crashing.
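A hypothetical illustration of that behaviour - a “cache” that never evicts grows until the OS stops it, regardless of whether the box has 16MB or 32GB (the calculator-ish example is invented):

```python
# A leak in the loosest sense: references that are never dropped.
# The cache key is unique per call, so nothing is ever reused and nothing
# is ever evicted - resident memory just grows with every request.
result_cache: dict[str, float] = {}

def calculate(expression: str, request_id: int) -> float:
    key = f"{request_id}:{expression}"  # never repeats, so the "cache" never hits
    result_cache[key] = sum(float(part) for part in expression.split("+"))
    return result_cache[key]

# Left running long enough, this climbs until the process hits whatever
# RAM the machine happens to have, then dies.
for request_id in range(10_000_000):
    calculate("1+1", request_id)
```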
Abstractions can be super powerful, but you need an understanding of why you’re using the abstraction vs. what it’s abstracting. It feels like a lot of them are being used simply to check off a list of buzzwords.
Quality in this economy? We need to fire some people to cut costs and use telemetry to make sure everyone that’s left uses AI to pay AI companies, because our investors demand it, because they invested all their money in AI and they see no return.