It’s true, although the smart companies aren’t laying off workers in the first place, because they’re treating AI as a tool to enhance their productivity rather than a tool to replace them.
So conversely, we’ll need more workers now that generative AI is hindering productivity.
pelespirit@sh.itjust.works
on 07 Jul 15:49
Does anyone have numbers on that? Microsoft just announced they’re laying off around 10k.
SnotFlickerman@lemmy.blahaj.zone
on 07 Jul 15:54
Doesn’t that have more to do with Gamepass eating game studios’ lunch though? And a lot less with AI? Just regular ol’ dumbass management decisions.
DeathsEmbrace@lemmy.world
on 07 Jul 18:14
It’s Microsoft, so management decisions would make the most sense, considering they’ve recently pulled out all the stops to guarantee the software can’t get any shittier. They even made all their software spyware now.
Microsoft did the June layoffs we knew were coming since January and pinned it on “AI cost savings” so that doing so would raise their stock price instead of lower it.
Tollana1234567@lemmy.today
on 08 Jul 00:48
they also admitted that their AI isn’t generating profit either.
ShittyBeatlesFCPres@lemmy.world
on 07 Jul 15:57
I don’t know if it even helps with productivity that much. A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc. I mean, it’s fine for a quick Python script or whatever but that might save an experienced developer 20 minutes max.
And if you “write” me an email using Chat GPT and I just read a summary, what is the fucking point? All the nuance is lost. Specialized A.I. is great! I’m all for it combing through giant astronomy data sets or protein folding and stuff like that. But I don’t know that I’ve seen generative A.I. without a specific focus increase productivity very much.
MisterNeon@lemmy.world
on 07 Jul 16:03
I was a frontend developer and UI/UX designer that specialized in JavaScript and Typescript with emphasis on React. I’m learning Python for Flask. I’m skipping meals so I can afford Udemy courses then AWS certifications. I don’t enjoy any of this and I’m falling apart.
Hey there. Of course, I am in no position to say “do this, and it will be all right”, but I will say that if there is any other way to live that won’t put this kind of load on you - do it. You being happier is way way more needed in this world than you getting those certificates
Fuck. Sorry to hear. Though that means all this ai bullshit won’t drown you, since you are after actual knowledge and skill. And if this makes any difference, I for one wish your life to be as sparing as it can possibly get
As a senior developer, my most productive days are genuinely when I remove a lot of code. This might seem like negative productivity to a naive beancounter, but in fact this is my peak contribution to the software and the organization. Simplifying, optimizing, identifying what code is no longer needed, removing technical debt, improving maintainability, this is what requires most of my experience and skill and contextual knowledge to do safely and correctly. AI has no ability to do this in any meaningful way, and code bases filled with mostly AI generated code are bound to become an unmaintainable nightmare (which I will eventually be paid handsomely to fix, I suspect)
6nk06@sh.itjust.works
on 07 Jul 16:54
That’s what I suspect. ChatGPT is never wrong; even if it doesn’t know, it “knows” and still answers something. I guess it’s no different for source code: always add, never delete.
Yesterday it tried to tell me Duration.TotalYears() and Number.IsNaN() were M functions in the first few interactions. I immediately called it out and, for the first time ever, it doubled down.
I think I’m at a level where, for most cases, what I ask of LLMs for coding is too advanced, else I just do it myself. This results in a very high counts of bullshit. But even for the most basic stuff, I have to take the time to read all of it and fix or optimise mistakes.
My company has a policy to try to make use of LLMs for work. I’m not a fan. Most of the time I spend explaining the infrastructure and whatnot would be better spent just working, because half the time the model suggests something that flies in the face of what’s needed, or outright suggests changes that we can’t implement.
Getting to deprecate legacy support… Yes please, let me get my eraser.
I find most tech debt resolution adds code though.
jjjalljs@ttrpg.network
on 07 Jul 17:19
A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc.
A lot of leadership is incompetent. In a reasonable, just, world they would not be in these decision making positions.
I just watched an interview of Karen Hao and she mentioned something along the lines of executives being oversold AI as something to replace everyone instead of something that should exist alongside people to help them, and they believe it.
And if you “write” me an email using Chat GPT and I just read a summary, what is the fucking point?
Fuuuck, this infuriates me. I wrote that shit for a reason. People already don’t read shit before replying to it and this is making it so much worse.
WanderingThoughts@europe.pub
on 07 Jul 19:02
So some places started forcing developers to use AI with a quota and monitor the usage. Of course the devs don’t go checking each AI generated line for correctness. That’s bad for the quota. It’s guaranteed to add more slop to the codebase.
It helps with translating. My job is basically double-checking the translation quality and people are essentially paying me for my assurance. Of course, I take responsibility for any mistakes.
Productivity will go up, wages will remain the same, and no additional time off will be given to employees. They’ll merely be required to produce 4x as much and compensation will not increase to match.
It seems the point of all these machines and automation isn’t to make our individual lives easier and more prosperous, but instead to increase and maximize shareholder value.
Grandwolf319@sh.itjust.works
on 07 Jul 18:21
Idk about it improving productivity.
If your job is just doing a lot of trivial code that just gets used once, yeah I can see it improving productivity.
If your job is more tackling the least trivial challenges and constantly needing to understand the edge cases or uncharted waters of the framework/tool/language, it’s completely useless.
This is why you get a lot of newbies loving AI and a lot of seniors saying it’s counter productive.
It’s technically closer to Schrödinger’s truth: it goes both ways depending on “when” you look at it. Publicly traded companies are more or less expected to adopt AI as it is the next “cheap” labor… so long as it is the cheapest of any option. See the very related: slave labor and its variants, child labor, and “outsourcing” to “less developed” countries.
The problem is they need to dance between this experimental technology and … having a publicly “functional” company. The line demands you cut costs but also increase service. So basically overcorrection hell. Mass hirings into mass firings. Every quarter / two quarters depending on the company… until one of two things becomes true: ai works or ai no longer is the cheapest solution. I imagine that will rubberband for quite some time. (saas shit like oracle etc)
In short - I’d not expect this to be more than a brief reprieve from a rapidly drying well. Take advantage of it for now - but I’d recommend not expecting it to remain.
CosmicTurtle0@lemmy.dbzer0.com
on 07 Jul 17:50
The line demands you cut costs but also increase service.
The line demands it go up. It doesn’t care how you get there. In many cases, decreasing service while also cutting costs is the way to do it so long as line goes up.
Absolutely. I should have used the term productivity rather than service. Lack of caffeine had blunted my vocabulary. In essence: more output for less work. Output in this case is profit.
Enshitification is, in essence, the push beyond diminishing returns into the ‘lossy’ space … sacrificing a for b. The end result is an increasingly shitty experience.
CosmicTurtle0@lemmy.dbzer0.com
on 07 Jul 21:19
I think what makes enshittification is “give users less and charge more”. It’s about returning shareholder value instead of customer value.
Netflix is a great example. They have pulled back on content, made password sharing more challenging, and increased cost. They still report increases in paying users.
They’ve done the math. They know they can take a loss in users because they know they’ll make up for it. That’s the sad part in all of this.
They’ve done the math. They know they can take a loss in users because they know they’ll make up for it. That’s the sad part in all of this.
They really haven’t taken massive hits because we are creatures of habit: it’s more convenient to hang around even if we know we’re getting ripped off. There is a conversion rate - but it’s low enough that clearly they believe the market will bear more abuse.
burgerpocalyse@lemmy.world
on 07 Jul 17:57
jobs are for suckers, be a consultant and charge triple
I’m absolutely not charismatic enough to pull that off.
burgerpocalyse@lemmy.world
on 07 Jul 18:24
youre in luck, i offer consultation for consultancing, now give me money
some_designer_dude@lemmy.world
on 07 Jul 19:01
This person sounds confident! You’d be stupid not to take them up on it.
logicbomb@lemmy.world
on 07 Jul 15:35
Same thing happened with companies that used outsourcing expecting it to be a magic bullet.
expatriado@lemmy.world
on 07 Jul 16:09
Or more generalized: management going all-in on their decisions, forgetting there is a sweet spot for everything, and then backtracking, losing employee time and company money. Sometimes these cause huge backlash, like Wells Fargo’s pushy sales practices, or great losses, like Meta with the Metaverse.
I worked in one of these companies. Within months, we went from a company I would be proud to recommend to friends to a service I would never use myself, just due to the horrendous route they took to hire overseas support.
The line of tech work I was in required about a month of training after passing the interview process, and even then you had to take a test at the end to prove you’d absorbed the material before you ever speak to a customer.
When they outsourced, they just bought a company of like 30 people in an adjacent industry and gave them a week of training. Our call queues were never worse and every customer was angry with everyone by the time they talked to someone who had training.
I don’t blame the overseas agents. I blame all the companies that treat them like cattle.
ArchmageAzor@lemmy.world
on 07 Jul 15:53
Vibe coding is 5% asking for code and 95% cleaning up the code, turns out replacing people with AI is exactly the same.
Yup. But the same goes for developers that go way too fast when setting up a project or library. 2-3 months in and everything is a mess: weird function names, all one-letter vars, no inversion of control, hardcoded things, etc. Good luck fixing it.
gravitas_deficiency@sh.itjust.works
on 07 Jul 18:33
This is what I fight against every goddamn day, and I get yelled at for fighting against it, but I’m not going to stop. I want to build shit that I can largely forget about (because, you know, it’s reliable and logically extensible and maintainable) after it gets to a mature state, and I’m not shy about making that known. This has led to more than a few significant conflicts over the course of my career. It has also led to me saying “I fucking told you so” more than a few times.
It has also led to me saying “I fucking told you so” more than a few times.
I have had several situations where I didn’t even have to give knowing looks, everybody in the room knew I told them so six months ago and here it is. When that led to problems working with my leadership in the future (which happened more often than not), that was a 100% reliable sign that I would be happier and more successful elsewhere.
surewhynotlem@lemmy.world
on 07 Jul 18:41
I’m still not sure how this is any different than when I used stack exchange for exactly the same thing.
Well, SE code usually compiled and did what it said. I guess that part is different.
Stack Exchange coding is 5% finding solutions to try and 95% copy-pasting those solutions into your project, discovering why they don’t work for you, and trying the next solution on the search list.
redsunrise@programming.dev
on 07 Jul 15:54
I wonder if there’s a market here. I feel like a company that cleans up AI bullshit would make bank lol
Nah, I came here to make this comment and you already have it well in hand. It’s not really any different other than the marketing spin, though. Companies have always had bad code and hired specialists to sort it out. And over half of the specialists suck, too, and so the merry-go-round spins.
jjjalljs@ttrpg.network
on 07 Jul 15:57
All the leadership who made this mistake should be fired. They are clearly incompetent
But I guess it’s always labor that pays the price
magic_lobster_party@fedia.io
on 07 Jul 16:01
You know they’re just going to get bonuses and promotions.
At least in my area they’ve decided to walk back the walk back.
They went from “Self checkouts are now only for ten items or less” to “Self checkouts are permanently closed” and now they’ve gone to “Self checkouts can be used for any number of items and also we added four more”.
Mushroomm@sh.itjust.works
on 07 Jul 17:01
They should have just asked me. I knew that would be the result years ago. Writing has been on the screaming wall of faces while the faces also screamed it.
WanderingThoughts@europe.pub
on 07 Jul 19:38
Management doesn’t ask the people they want to fire whether firing them is a good idea. They themselves would lie like crazy to keep their jobs, and so they assume everything the developers say would be a lie too.
Grandwolf319@sh.itjust.works
on 07 Jul 18:15
As someone who has been a consultant/freelance dev for over 20 years now this is true. Lately I’ve been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.
A lot of it is pretty bad. It’s a mess. But like I said, I’ve been at it for a while and I’ve seen this before, when companies were offshoring anything and everything to India, and surprise, surprise, they didn’t learn anything. It’s literally the exact same thing. Instead of an Indian guy who claims they know everything and will work for peanuts, it’s AI pretty much stating the same shit.
I’ve been getting so many requests for gigs that I’ve been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I’ve burned through all my contacts, so now I’m just reaching out to absolute strangers to get them work.
yes it’s that bad (well bad for companies, it’s fantastic for developers.)
EDIT: Since my comment has gained a lot of traction I’ve marked down peoples user names and portfolios/emails to my dev list. If something more comes up (and trust me, it will) I’ll shoot you an email or msg on here. Currently I’ve already shoved off a bunch of stuff to others and have nothing as of now but I imagine that will change by next week so if more stuff comes up I’ll shoot you an email or DM.
That’s what happens when you have Intel inside ;o)
(Yes, yes, I know, it’s the whole binary based floating point thing, not just Intel, although my Atari 800 BASIC interpreter implemented floating point in BCD, so it didn’t have that issue.)
austinfloyd@ttrpg.network
on 08 Jul 18:43
Would you happen to be willing to throw work to random out-of-work devs who aren’t in your city? I may know a couple over here in England…
LovableSidekick@lemmy.world
on 07 Jul 19:23
Retired dev here, I’m curious about the nature of “the mess”. Is it buggy AI-generated code that got into production? I know an active dev who uses ChatGPT every day and says it saves him a hell of a lot of work. What he does sounds like “vibe coding”. If you’re using AI for grunt work and keep a human in the workflow to verify the code, I don’t see how it would differ from junior devs working under a senior. Have some companies been using poorly managed all-AI tools or what? Sorry for the long question.
Knock_Knock_Lemmy_In@lemmy.world
on 07 Jul 20:29
Think of AI as a hard working, arrogant, knowledgeable, unimaginative junior intern.
The vibe coding is great for small, self contained tasks. It doesn’t scale to a codebase (yet?).
An example from work a few weeks ago. I fixed some vibe coded UI code that had made it to prod. The layout of the UI was basically just meant to be an easy overview of information relevant to an item. The LLM had done everything right except it assumed a weird mix of tailwind and bootstrap, mixing and matching css classes from both. After I implemented the classes myself it went from a single column view to grids and nested grids grouping the data intuitively.
I talked with the dev who implemented it, and basically it was just something quickly cobbled together with AI until it was passable. The AI had added a lot of extras that served no function and didn’t conform to a single css framework, but looked like it could. For months no one questioned it despite talk about that part of the UI needing a facelift.
I don’t know how representative it is, but about half the time I’m thoroughly confused about a piece of code and why it was written the way it was, the answer has turned out to be AI. And unlike when a developer wrote it, there rarely is any reason to have written it the weird way.
LovableSidekick@lemmy.world
on 07 Jul 21:54
TBH that sounds like a lot of code I’ve seen from outsourcing companies in India. Their typical approach is to copy an existing program, module, web page or whatever and modify it as quickly as possible to turn it into what’s needed. The result is often a mishmash of irrelevant code, giant data queries that happen to retrieve some field that’s needed along with a ton of unnecessary crap, mixing frameworks, etc.
Essentially, from what I’ve been dealing with, most of it is their offshore people using the AI to completely do the job from start to finish and no one verifying anything. So it’s not even vibe coding, it’s “here’s a prompt, build it, I’m pushing it to production” coding.
LovableSidekick@lemmy.world
on 08 Jul 17:10
LOL, sort of like hiring the CEO’s unemployed brother in law to build your new factory because he has a friend who knows about construction.
I imagine you aren’t talking about large companies that just let ai loose in their code base. Are these like companies that fired half their staff and realized llms couldn’t make up for the difference, or small companies that tried to make new apps without a proper team and came up short?
primarily medium to large companies. the smaller startups seem to know better. the former laid off a bunch of staff and in most cases offshored the work to people who ONLY use AI to build things. A few rare cases it’s been a Project Manager who paid for a Claude.ai subscription and had it build things from start to finish then push to production. If I see something that has a gradient background I know they had Claude build it.
dependencyinjection@discuss.tchncs.de
on 07 Jul 19:34
Throw us some work if you like, although I already work as software engineer but wouldn’t turn down a side gig cleaning up after LLMs.
_haha_oh_wow_@sh.itjust.works
on 07 Jul 20:05
They learned that by the time all of their shitty decisions ruin everything, they’ll be able to bail with their golden parachute while everyone else has to deal with the fallout.
Landless2029@lemmy.world
on 07 Jul 20:37
Sounds like you need to start a company and hire per diem staff.
traceur301@lemmy.blahaj.zone
on 07 Jul 21:10
Send them my way! I’m freelance currently and good at cleaning up that kind of stuff
ICastFist@programming.dev
on 08 Jul 12:21
What these companies didn’t take the time to understand is, A.I. is a tool to make employees more efficient, not to replace them. Sadly the vast majority of these companies will also fail to learn this lesson now and will get rid of A.I. systems altogether rather than use them properly.
When I write a document for my employer I use A.I. as a research and planning assistant, not as the writer. I still put in the work writing the document, I just use A.I. to simplify the tedious data gathering and organizing.
I just use A.I. to simplify the tedious data gathering and organizing.
If you’re conscientious, you check AI’s output the same way a conscientious licensed professional checks the work of an assistant before signing their name to it.
If you’re more typical… you’re at even greater risk trusting AI than you are when trusting an assistant who is trying to convince your bosses that they can do your job better than you.
rebelsimile@sh.itjust.works
on 07 Jul 18:56
yes, 100%, do not use an LLM for anything you’re not prepared to vet and verify all of. The longer an LLM’s response the higher the odds it loses context and starts repeating or stating total gibberish or makes up data to keep going. If that’s what you want (like a list of fake addresses and phone numbers to prototype an app), great, but that’s about all it’s going to really do.
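For what it’s worth, throwaway fixture data like that doesn’t even need an LLM. Here’s a minimal stdlib-Python sketch (every name in it is made up for illustration) that churns out fictional addresses and 555-01XX phone numbers, with a seed so the same fixtures come back every run:

```python
import random

# Invented sample pools; swap in whatever your prototype needs.
STREETS = ["Maple Ave", "Oak St", "Cedar Ln", "Elm Dr"]
CITIES = ["Springfield", "Riverton", "Lakeview"]

def fake_address(rng):
    # Plausible-looking but entirely fictional street address.
    return f"{rng.randint(1, 9999)} {rng.choice(STREETS)}, {rng.choice(CITIES)}"

def fake_phone(rng):
    # 555-01XX numbers are reserved for fictional use in North America.
    return f"({rng.randint(200, 999)}) 555-01{rng.randint(0, 99):02d}"

def fake_contacts(n, seed=0):
    rng = random.Random(seed)  # seeded, so fixtures are reproducible
    return [{"address": fake_address(rng), "phone": fake_phone(rng)}
            for _ in range(n)]
```

Unlike LLM output, this never loses context halfway through a long list, and two runs with the same seed produce identical data.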
Oh I check the citations. I’m fully aware of A.I. hallucinations.
LovableSidekick@lemmy.world
on 07 Jul 19:28
My daughter has used AI a lot to write grant proposals, which she cleans up and rewords before submitting. In her prompts she tells it to ask her questions and incorporate her answers into the result, which she says works very well, produces high quality writing, and saves her a ton of time. She’s actually a very competent writer herself, so when she compliments the quality I know it means something.
That’s a good way to use the tool. I generally use the OpenAI option to set up a custom gpt and tell it to become an expert on the subject I’m writing about, then set the parameters. Then once I’ve tested it on a piece of the subject matter I already understand and confirm it’s working properly, I begin asking it questions. When I’m out of questions or just need a break, I go back and check the citations for each answer just to make sure I’m not getting bad data.
Once I’ve run out of questions and all the data is verified, I have it create an outline with a brief summary of each section. Then I take that outline and use that to guide me as I write. Also it seems like the A.I. always puts at least one section in the wrong place so that’s just another reason I like to write it myself and just use an A.I. summary outline.
Nah, all they have to say is “that is what the guy from the XYZ consultancy suggested. He told me that everyone is replacing their coding teams with 95% AI assistants and a single newly graduated programmer who works for food.”
TankovayaDiviziya@lemmy.world
on 07 Jul 19:29
McNamara fallacy at its finest. They hear figures and potential savings and then jump into the hype without considering the context. It is the same as when they heard of lean manufacturing or the Toyota Way: companies thought it was about cost saving rather than process improvement.
some_guy@lemmy.sdf.org
on 07 Jul 19:42
Well what ends up happening is some company will have a CEO.
He’ll make all the stupid decisions. But they’re only stupid from everybody ELSE’S perspective.
From his perspective, he uses AI and tanks the company’s future in the chase of large short-term stock gains. Then he gives himself a huge bonus, leaves the company, gets hired somewhere else, and gets to say “See how that company is failing without me? That’s because I bring value to the brand.”
So he gets hired at the neeeext place, meanwhile that first company is failing because of the actions of a CEO no longer employed there, and who bailed because he knew what was coming.
These actions aren’t stupid. They’re plotted corruption for the benefit of one.
AnarchistArtificer@slrpnk.net
on 07 Jul 22:46
What’s really stupid about this cycle is that some of these fail-upward executives genuinely believe the crap they’re spewing. Weirdly, I think I respect the grifting executives more
Edit: by grifting executives, I mean the ones who participate in that cycle you describe, and are aware of the harms they cause in their wake, but don’t care because they’ve gotten good at knowing when to skip out
I used to work with a supplier that hired a former Monsanto executive as their CEO. When his first agenda came out, I told their sales team he was an idiot and to have fun looking for a new job in a few months.
The CEO bailed after 2 years to start his own “consulting business.”
1 year later the company lost 75% of their market share and was laying off people left and right. They are still afloat barely.
After a couple years of “consulting”, the CEO went to another company in 2023. He didn’t bounce fast enough and got caught on this one. He was fired 2 weeks ago and the company shut their doors except for a handful of staff to facilitate the firesale of the company’s assets.
_haha_oh_wow_@sh.itjust.works
on 07 Jul 20:03
AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don’t need a human touch.
derpgon@programming.dev
on 07 Jul 20:31
This 1000x. I am a PHP developer, and I found out about two months ago that the AI assistant is included in my JetBrains subscription (the All pack; it was a separate thing before). I also recently found out about Junie, their AI agent that has deep thinking (or whatever the hell it is called). I tried it the same day to refactor part of my tests that had to be migrated to stop using a deprecated function call.
To my surprise, it required only very minor changes, but what would’ve taken me about 3 hours was done in half an hour. What I also liked was that it actually asked if it could run a terminal command to verify the test results, and it went back and fixed a broken test or two.
Finally I have faith in AI being useful to programmers.
For a test, I took our dev exam (for potential candidates) and just sent it to see what it does just based on the document, and besides a few mistakes it even used modern tools and not some 5 year old stuff (like PSR standards) and implemented core systems by itself using well known interfaces (from said PSRs). I asked it to change Dependency Injection to use Symfony DI instead of the self-made thing, and it worked flawlessly.
Of course, the code has to be reviewed or heavily specified to make sure it does what it is told to, but all in all it doesn’t look like just a gimmick anymore.
Absolutely, this matches my experience. I think this is also the experience of most coders who willingly use AI. I feel bad for the people who are forced to use it by their companies. And those who are laid off because of C-levels who think AI is capable of replacing an experienced coder.
There is no value in arguing about subjective topics. It feels useful to me if you know how to use it, though it used to generate some of the worst, most random pieces of code you could imagine.
Some good examples from the bookkeeping/accounting industry is automating the matching of payments to the invoices and using AI to extract and process invoices.
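The payment-matching idea is simple enough to sketch without any AI at all. This is a toy first-pass matcher (class names and fields are mine, not from any real bookkeeping product): pair a payment with an open invoice when the amounts match exactly and the invoice number appears in the bank reference line, and route everything else to a human:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    amount: int  # cents, to avoid float rounding issues

@dataclass
class Payment:
    reference: str  # free-text bank reference line
    amount: int

def match_payments(payments, invoices):
    """Greedy first pass: exact amount plus the invoice number
    appearing in the payment reference. Anything else goes to review."""
    open_invoices = {inv.number: inv for inv in invoices}
    matched, review = [], []
    for pay in payments:
        hit = next(
            (num for num, inv in open_invoices.items()
             if inv.amount == pay.amount and num in pay.reference),
            None,
        )
        if hit:
            matched.append((pay, open_invoices.pop(hit)))
        else:
            review.append(pay)
    return matched, review
```

The AI angle in practice is usually the messy part this sketch skips: extracting amounts and references from scanned or free-form invoices before a deterministic matcher like this takes over.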
Awkwardparticle@programming.dev
on 08 Jul 22:07
The biggest point is that you must be an expert in the field you are using it in. I rarely get fooled by hallucinations and stupid bugs because they are glaringly obvious to me. The best use case is having the LLM write code for a library that has poor documentation, that I am going to use once, and am too lazy to learn.
These tools are scary when used by juniors, they are creating more work for everyone by using llms to code. I just imagine myself using this when I was a fresh grad, it is terrifying. It would have only been one step up from vibe coding.
I feel so bad for recent grads. First COVID then AI/LLMs, it’s such a bad time to be starting out. I feel so fortunate that I’m well into my career and can use AI responsibly without having to worry too much about it.
psycho_driver@lemmy.world
on 07 Jul 22:22
Let them burn.
Phoenix3875@lemmy.world
on 07 Jul 22:52
The BBC report cited mainly focused on the marketing industry, with the people fixing the mistakes being the copywriters. This gives a strong vibe of Mad Men, with the “old-fashioned” copywriters and their tension with market research.
Tollana1234567@lemmy.today
on 08 Jul 00:47
that’s because the main peddlers are the CEOs/C-suites of these tech companies, and the customers aren’t people like you or me, it’s other corporate heads. In the case of Palantir it would be the government.
AnotherPenguin@programming.dev
on 08 Jul 01:51
TuffNutzes@lemmy.world
on 08 Jul 06:55
Very expected. It’s fine. I’ll come back at 10 times my previous rate. And you’ll thank me for it. Fucking chads.
funkyfarmington@lemmy.world
on 08 Jul 08:05
Yet their reputations will somehow never return…
reluctant_squidd@lemmy.ca
on 08 Jul 08:48
The even brighter side of it is that it should be easier to spot these companies when job hunting.
IMO: Demand higher wages and iron clad contracts from them because they already demonstrated how they feel about paying people.
They’ll surely cut anyone they can again as soon as they can.
Jhuskindle@lemmy.world
on 08 Jul 13:40
Same thing happened during the outsourcing craze of the early 2000s. Everything, and I mean everything, moved to India or the Philippines. There’s even a movie about it because it was so common. I and everyone else lost our jobs. About a year later the contracts expired, we all got our jobs back, and outsourcing is now used in balance. Eventually AI use will be balanced, I hope. It cannot replace us. Not yet anyways.
I hope this is true. I would like to have a job again.
It’s true, although the smart companies aren’t laying off workers in the first place, because they’re treating AI as a tool to enhance their productivity rather than a tool to replace them.
Fewer workers are required when their productivity is enhanced.
So conversely, we’ll need more workers now that generative AI is hindering productivity.
Does anyone have numbers on that? Microsoft just announced they’re laying off around 10k.
Doesn’t that have more to do with Gamepass eating game studios’ lunch though? And a lot less with AI? Just regular ol’ dumbass management decisions.
It’s Microsoft would make most sense its mangement decisions considering recently theyve pulled all the stops out to guarantee the software cant be shittier. They even made all there software spyware now.
Microsoft did the June layoffs we knew were coming since January and pinned it on “AI cost savings” so that doing so would raise their stock price instead of lower it.
they also admitted that thier AI isnt generating profit too.
I don’t know if it even helps with productivity that much. A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc. I mean, it’s fine for a quick Python script or whatever but that might save an experienced developer 20 minutes max.
And if you “write” me an email using Chat GPT and I just read a summary, what is the fucking point? All the nuance is lost. Specialized A.I. is great! I’m all for it combing through giant astronomy data sets or protein folding and stuff like that. But I don’t know that I’ve seen generative A.I. without a specific focus increase productivity very much.
I was a frontend developer and UI/UX designer who specialized in JavaScript and TypeScript, with an emphasis on React. I’m learning Python for Flask. I’m skipping meals so I can afford Udemy courses and then AWS certifications. I don’t enjoy any of this, and I’m falling apart.
Hey there. Of course, I am in no position to say “do this, and it will be all right”, but I will say that if there is any other way to live that won’t put this kind of load on you, do it. You being happier is way, way more needed in this world than you getting those certificates.
I can’t think of any other options that don’t end in the best case scenario of myself being elderly and destitute.
Fuck. Sorry to hear that. Though that means all this AI bullshit won’t drown you, since you are after actual knowledge and skill. And if this makes any difference, I for one wish your life to be as gentle as it can possibly get.
As a senior developer, my most productive days are genuinely when I remove a lot of code. This might seem like negative productivity to a naive beancounter, but in fact this is my peak contribution to the software and the organization. Simplifying, optimizing, identifying what code is no longer needed, removing technical debt, improving maintainability: this is what requires most of my experience, skill, and contextual knowledge to do safely and correctly. AI has no ability to do this in any meaningful way, and code bases filled with mostly AI-generated code are bound to become an unmaintainable nightmare (which I will eventually be paid handsomely to fix, I suspect)
That’s what I suspect. ChatGPT is never wrong, and even if it doesn’t know, it acts like it knows and still answers something. I guess it’s no different for source code: always add, never delete.
Yesterday it tried to tell me Duration.TotalYears() and Number.IsNaN() were M functions in the first few interactions. I immediately called it out and, for the first time ever, it doubled down.
<img alt="" src="https://lemmy.world/pictrs/image/80d85c30-d36d-4fc5-b2f1-9316c88e4870.jpeg"> <img alt="" src="https://lemmy.world/pictrs/image/9b26d56c-c807-4460-84b3-893ecbd6835c.jpeg">
I think I’m at a level where, for most cases, what I ask of LLMs for coding is too advanced, or else I just do it myself. This results in a very high rate of bullshit. But even for the most basic stuff, I have to take the time to read all of it and fix or optimise mistakes.
My company has a policy to try to make use of LLMs for work. I’m not a fan. Most of the time I spend explaining the infrastructure and whatnot would be better spent just working, because half the time the model suggests something that flies in the face of what’s needed, or outright suggests changes that we can’t implement.
It’s such a waste of time and resources.
Getting to deprecate legacy support… Yes please, let me get my eraser.
I find most tech debt resolution adds code though.
A lot of leadership is incompetent. In a reasonable, just, world they would not be in these decision making positions.
Verbose blogger Ed Zitron wrote about this. He called them “Business Idiots”: wheresyoured.at/the-era-of-the-business-idiot/
I just watched an interview of Karen Hao and she mentioned something along the lines of executives being oversold AI as something to replace everyone instead of something that should exist alongside people to help them, and they believe it.
Fuuuck, this infuriates me. I wrote that shit for a reason. People already don’t read shit before replying to it and this is making it so much worse.
So some places have started forcing developers to use AI with a quota, and they monitor the usage. Of course the devs don’t go checking each AI-generated line for correctness; that’s bad for the quota. It’s guaranteed to add more slop to the codebase.
It helps with translating. My job is basically double-checking the translation quality and people are essentially paying me for my assurance. Of course, I take responsibility for any mistakes.
Productivity will go up, wages will remain the same, and no additional time off will be given to employees. They’ll merely be required to produce 4x as much and compensation will not increase to match.
It seems the point of all these machines and automation isn’t to make our individual lives easier and more prosperous, but instead to increase and maximize shareholder value.
Idk about AI enhancing productivity.
If your job is just doing a lot of trivial code that just gets used once, yeah I can see it improving productivity.
If your job is more tackling the least trivial challenges and constantly needing to understand the edge cases or uncharted waters of the framework/tool/language, it’s completely useless.
This is why you get a lot of newbies loving AI and a lot of seniors saying it’s counter productive.
It’s technically closer to Schrodinger’s truth. It goes both ways depending on “when” you look at it. Publicly traded companies are more or less expected to adopt AI as it is the next “cheap” labor… so long as it is the cheapest of any option. See the very related: slave labor and its variants, child labor, and “outsourcing” to “less developed” countries.
The problem is they need to dance between this experimental technology and … having a publicly “functional” company. The line demands you cut costs but also increase service. So basically overcorrection hell: mass hirings into mass firings, every quarter or two depending on the company… until one of two things becomes true: AI works, or AI is no longer the cheapest solution. I imagine that will rubberband for quite some time. (SaaS shit like Oracle, etc.)
In short - I’d not expect this to be more than a brief reprieve from a rapidly drying well. Take advantage of it for now - but I’d recommend not expecting it to remain.
The line demands it go up. It doesn’t care how you get there. In many cases, decreasing service while also cutting costs is the way to do it so long as line goes up.
See: enshittification
Absolutely. I should have used the term productivity rather than service. Lack of caffeine had blunted my vocabulary. In essence: more output for less work. Output in this case is profit.
Enshittification is, in essence, the push beyond diminishing returns into the “lossy” space, sacrificing A for B. The end result is an increasingly shitty experience.
I think what makes enshittification is “give users less and charge more”. It’s about returning shareholder value instead of customer value.
Netflix is a great example. They have pulled back on content, made password sharing more challenging, and increased cost. They still report increases in paying users.
They’ve done the math. They know they can take a loss in users because they know they’ll make up for it. That’s the sad part in all of this.
They really haven’t taken massive hits because we are creatures of habit: it’s more convenient to hang around even if we know we’re getting ripped off. There is a conversion rate - but it’s low enough where clearly they believe the market will bear more abuse.
jobs are for suckers, be a consultant and charge triple
I’m absolutely not charismatic enough to pull that off.
youre in luck, i offer consultation for consultancing, now give me money
This person sounds confident! You’d be stupid not to take them up on it.
Same thing happened with companies that used outsourcing expecting it to be a magic bullet.
Or, more generally: management going all-in on their decisions, forgetting there is a sweet spot for everything, and then backtracking, losing employee time and company money. Sometimes these cause huge backlash, like Wells Fargo’s pushy sales practices, or great losses, like Meta with the Metaverse.
I worked in one of these companies. Within months, we went from a company I would be proud to recommend to friends to a service I would never use myself, just due to the horrendous route they took to hire overseas support.
The line of tech work I was in required about a month of training after passing the interview process, and even then you had to take a test at the end to prove you’d absorbed the material before you ever speak to a customer.
When they outsourced, they just bought a company of like 30 people in an adjacent industry and gave them a week of training. Our call queues were never worse and every customer was angry with everyone by the time they talked to someone who had training.
I don’t blame the overseas agents. I blame all the companies that treat them like cattle.
Vibe coding is 5% asking for code and 95% cleaning up the code, turns out replacing people with AI is exactly the same.
Yup. But the same goes for developers who go way too fast when setting up a project or library. 2-3 months in and everything is a mess: weird function names, all one-letter vars, no inversion of control, hardcoded things, etc. Good luck fixing it.
This is what I fight against every goddamn day, and I get yelled at for fighting against it, but I’m not going to stop. I want to build shit that I can largely forget about (because, you know, it’s reliable and logically extensible and maintainable) after it gets to a mature state, and I’m not shy about making that known. This has led to more than a few significant conflicts over the course of my career. It has also led to me saying “I fucking told you so” more than a few times.
I have had several situations where I didn’t even have to give knowing looks, everybody in the room knew I told them so six months ago and here it is. When that led to problems working with my leadership in the future (which happened more often than not), that was a 100% reliable sign that I would be happier and more successful elsewhere.
I’m still not sure how this is any different than when I used stack exchange for exactly the same thing.
Well, SE code usually compiled and did what it said. I guess that part is different.
Practically negligible then…
However, how the heck have you all been using Stack Exchange? My questions are typically something along the lines of:
“How to use a numpy mask with pandas dataframes”
Not something that gives me 50 lines of code.
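For reference, the answer to that kind of question usually fits in a couple of lines, roughly like this (a sketch with made-up data, not any particular answer from the site):

```python
import numpy as np
import pandas as pd

# Toy data for illustration.
df = pd.DataFrame({"price": [10, 250, 40], "qty": [5, 1, 12]})

# A NumPy boolean mask indexes a DataFrame the same way it indexes an array.
mask = np.array([True, False, True])
print(df[mask])  # keeps rows 0 and 2

# More often the mask comes from a vectorized comparison on a column.
print(df[df["price"] < 100])
```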
Oh, yeah. But I assumed that’s how competent coders use chatgpt. For edge cases and boilerplate.
Stack Exchange coding is 5% finding solutions to try and 95% copy-pasting those solutions into your project, discovering why they don’t work for you, and trying the next solution on the search list.
I wonder if there’s a market here. I feel like a company that cleans up AI bullshit would make bank lol
You son of a bitch, I’m in!
Nah, I came here to make this comment and you already have it well in hand. It’s not really any different other than the marketing spin, though. Companies have always had bad code and hired specialists to sort it out. And over half of the specialists suck, too, and so the merry-go-round spins.
All the leadership who made this mistake should be fired. They are clearly incompetent
But I guess it’s always labor that pays the price.
You know they’re just going to get bonuses and promotions.
The power to fire lies within the leadership themselves though…
Oh, you mean an actual fire?! I like your way of thinking.
What’s sad is that the AI hype did inflate stock prices.
Most c suite’s job is to look out for the interests of investors.
Technically they did a good job. I hate capitalism
AI: Confidently Incorrect
Kinda like Wal-Mart trying to “save money” with self check out and now they are walking it back.
At least in my area they’ve decided to walk back the walk back.
They went from “Self checkouts are now only for ten items or less” to “Self checkouts are permanently closed” and now they’ve gone to “Self checkouts can be used for any number of items and also we added four more”.
Hiring people at lower wages that is.
They should have just asked me. I knew that would be the result years ago. Writing has been on the screaming wall of faces while the faces also screamed it.
Management doesn’t ask the people they want to fire whether firing them is a good idea. They themselves would lie like crazy to keep their jobs, and they assume everything the developers say would therefore be a lie too.
Ah so AI does create jobs, it’s the Zorg logic
Jean-Baptiste
Emmanuel
Zorg
<img alt="" src="https://lemmy.dbzer0.com/pictrs/image/94143cde-ad5b-4b4e-8c3b-d975f4d8c6a8.webp">
Pretty damn good jobs too, tbh.
As someone who has been a consultant/freelance dev for over 20 years now this is true. Lately I’ve been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.
A lot of it is pretty bad. It’s a mess. But like I said, I’ve been at it for a while, and I’ve seen this before, when companies were offshoring anything and everything to India, and surprise, surprise, they didn’t learn anything. It’s literally the exact same thing. Instead of an Indian guy that claims he knows everything and will work for peanuts, it’s AI pretty much stating the same shit.
I’ve been getting so many requests for gigs I’ve been hitting up random out of work devs on linkedin in my city and referring the jobs to them. I’ve burned through all my contacts that now I’m just reaching out to absolute strangers to get them work.
Yes, it’s that bad (well, bad for companies; it’s fantastic for developers).
EDIT: Since my comment has gained a lot of traction, I’ve added people’s usernames and portfolios/emails to my dev list. If something more comes up (and trust me, it will) I’ll shoot you an email or a message on here. Currently I’ve already shoved off a bunch of stuff to others and have nothing as of now, but I imagine that will change by next week, so if more stuff comes up I’ll shoot you an email or DM.
We’ve hired a bunch of Indian guys who are using AI to do their work… the results are marginally better than either approach independently.
a negative times a negative is a positive?
More like 0.10 + 0.05 = 0.20, in this case.
To be fair, 0.2 + 0.1 = 0.30000000000000004
That’s what happens when you have Intel inside ;o)
(Yes, yes, I know, it’s the whole binary based floating point thing, not just Intel, although my Atari 800 BASIC interpreter implemented floating point in BCD, so it didn’t have that issue.)
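For the curious, the effect is easy to reproduce in Python, and the standard-library decimal module works in base 10, much like the BCD floats mentioned above:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so the tiny error surfaces in the sum.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Decimal arithmetic works in base 10, so the same sum is exact.
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```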
Welcome to the secret robot Internet
Absent Indians using AI? The AI ouroboros?
Sometimes it is a bunch of Indian guys pretending to be AI!
Would you happen to be willing to throw work to random out-of-work devs who aren’t in your city? I may know a couple over here in England…
Retired dev here, and I’m curious about the nature of “the mess”. Is it buggy AI-generated code that got into production? I know an active dev who uses ChatGPT every day and says it saves him a hell of a lot of work. What he does sounds like “vibe coding”. If you’re using AI for grunt work and keep a human in the workflow to verify the code, I don’t see how it would differ from junior devs working under a senior. Have some companies been using poorly managed all-AI tools, or what? Sorry for the long question.
Think of AI as a hard working, arrogant, knowledgeable, unimaginative junior intern.
The vibe coding is great for small, self contained tasks. It doesn’t scale to a codebase (yet?).
An example from work a few weeks ago: I fixed some vibe-coded UI code that had made it to prod. The layout of the UI was basically just meant to be an easy overview of information relevant to an item. The LLM had done everything right, except it assumed a weird mix of Tailwind and Bootstrap, mixing and matching CSS classes from both. After I implemented the classes myself, it went from a single-column view to grids and nested grids grouping the data intuitively. I talked with the dev who implemented it, and basically it was just something quickly cobbled together with AI until it was passable. The AI had added a lot of extra markup that served no function and that didn’t conform to a single CSS framework, but looked like it could. For months no one questioned it, despite talk about that part of the UI needing a facelift.
I don’t know how representative it is, but about half the time I’m thoroughly confused about a piece of code and why it was written the way it was, the answer has turned out to be AI. And unlike when a developer wrote it, there rarely is any reason to have written it the weird way.
TBH that sounds like a lot of code I’ve seen from outsourcing companies in India. Their typical approach is to copy an existing program, module, web page or whatever and modify it as quickly as possible to turn it into what’s needed. The result is often a mishmash of irrelevant code, giant data queries that happen to retrieve some field that’s needed along with a ton of unnecessary crap, mixing frameworks, etc.
Essentially, from what I’ve been dealing with, most of it is their offshore people using AI to completely do the job from start to finish, and no one is verifying anything. So it’s not even vibe coding; it’s “here’s a prompt, build it, I’m pushing it to production” coding.
LOL, sort of like hiring the CEO’s unemployed brother in law to build your new factory because he has a friend who knows about construction.
I imagine you aren’t talking about large companies that just let ai loose in their code base. Are these like companies that fired half their staff and realized llms couldn’t make up for the difference, or small companies that tried to make new apps without a proper team and came up short?
primarily medium to large companies. the smaller startups seem to know better. the former laid off a bunch of staff and in most cases offshored the work to people who ONLY use AI to build things. A few rare cases it’s been a Project Manager who paid for a Claude.ai subscription and had it build things from start to finish then push to production. If I see something that has a gradient background I know they had Claude build it.
Throw us some work if you like, although I already work as software engineer but wouldn’t turn down a side gig cleaning up after LLMs.
They learned that by the time all of their shitty decisions ruin everything, they’ll be able to bail with their golden parachute while everyone else has to deal with the fallout.
Sounds like you need to start a company and per diem staff.
Send them my way! I’m freelance currently and good at cleaning up that kind of stuff
How much is the pay for those gigs?
What these companies didn’t take the time to understand is, A.I. is a tool to make employees more efficient, not to replace them. Sadly the vast majority of these companies will also fail to learn this lesson now and will get rid of A.I. systems altogether rather than use them properly.
When I write a document for my employer I use A.I. as a research and planning assistant, not as the writer. I still put in the work writing the document, I just use A.I. to simplify the tedious data gathering and organizing.
If you’re conscientious, you check AI’s output the same way a conscientious licensed professional checks the work of an assistant before signing their name to it.
If you’re more typical… you’re at even greater risk trusting AI than you are when trusting an assistant who is trying to convince your bosses that they can do your job better than you.
Yes, 100%: do not use an LLM for anything you’re not prepared to vet and verify in full. The longer an LLM’s response, the higher the odds it loses context and starts repeating itself, stating total gibberish, or making up data to keep going. If that’s what you want (like a list of fake addresses and phone numbers to prototype an app), great, but that’s about all it’s going to reliably do.
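For what it’s worth, that kind of throwaway fixture data (fake addresses and phone numbers) is also easy to produce deterministically with a few lines of standard-library Python; every name and format below is made up for illustration:

```python
import random

random.seed(42)  # deterministic fixtures for repeatable prototyping

STREETS = ["Oak St", "Maple Ave", "Birch Rd"]
CITIES = ["Springfield", "Riverton", "Lakeview"]

def fake_contact():
    """Return one made-up address/phone record for UI prototyping."""
    return {
        "address": f"{random.randint(1, 999)} {random.choice(STREETS)}",
        "city": random.choice(CITIES),
        "phone": f"555-{random.randint(100, 999)}-{random.randint(1000, 9999)}",
    }

contacts = [fake_contact() for _ in range(3)]
for c in contacts:
    print(c)
```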
Oh I check the citations. I’m fully aware of A.I. hallucinations.
My daughter has used AI a lot to write grant proposals, which she cleans up and rewords before submitting. In her prompts she tells it to ask her questions and incorporate her answers into the result, which she says works very well, produces high quality writing, and saves her a ton of time. She’s actually a very competent writer herself, so when she compliments the quality I know it means something.
That’s a good way to use the tool. I generally use the OpenAI option to set up a custom gpt and tell it to become an expert on the subject I’m writing about, then set the parameters. Then once I’ve tested it on a piece of the subject matter I already understand and confirm it’s working properly, I begin asking it questions. When I’m out of questions or just need a break, I go back and check the citations for each answer just to make sure I’m not getting bad data.
Once I’ve run out of questions and all the data is verified, I have it create an outline with a brief summary of each section. Then I take that outline and use that to guide me as I write. Also it seems like the A.I. always puts at least one section in the wrong place so that’s just another reason I like to write it myself and just use an A.I. summary outline.
Oh noes, who could have seen this coming
And no doubt struggling to blame their bad decisions on each other and preserve their salary bonuses.
Or just declare victory and move on to the next project quickly
Nah, all they have to say is “that is what the guy from the XYZ consultancy suggested. He told me that everyone is replacing their coding teams with 95% AI assistants and a single newly graduated programmer who works for food.”
McNamara fallacy at its finest. They hear figures and potential savings and then jump into the hype without considering the context. It is the same when they heard of lean manufacturing or Toyota way. Companies thought it is cost saving rather than process improvement.
Companies with stupid leaders deserve to fail.
Well what ends up happening is some company will have a CEO.
He’ll make all the stupid decisions. But they’re only stupid from everybody ELSES perspective.
From his perspective, he uses AI, tanks the companies future in the chase of large short term stock gains. Then he gives himself a huge bonus, leaves the company, gets hired somewhere else, and gets to say “See how that company is failing without me? That’s because I bring value to the brand.”
So he gets hired at the neeeext place, meanwhile that first company is failing because of the actions of a CEO no longer employed there, one who bailed because he knew what was coming.
These actions aren’t stupid. They’re plotted corruption for the benefit of one.
What’s really stupid about this cycle is that some of these fail-upward executives genuinely believe the crap they’re spewing. Weirdly, I think I respect the grifting executives more
Edit: by grifting executives, I mean the ones who participate in that cycle you describe, and are aware of the harms they cause in their wake, but don’t care because they’ve gotten good at knowing when to skip out
No that never happens /S
I used to work with a supplier that hired a former Monsanto executive as their CEO. When his first agenda came out, I told their sales team he was an idiot and to have fun looking for a new job in a few months.
The CEO bailed after 2 years to start his own “consulting business.”
1 year later the company lost 75% of their market share and was laying off people left and right. They are still afloat barely.
After a couple years of “consulting”, the CEO went to another company in 2023. He didn’t bounce fast enough and got caught on this one. He was fired two weeks ago, and the company shut its doors except for a handful of staff to facilitate the firesale of the company’s assets.
AI: The new outsourcing?
AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don’t need a human touch.
This 1000x. I am a PHP developer. I found out about two months ago that the AI Assistant is included in my JetBrains subscription (the All Products Pack; it was a separate thing before). And I recently found out about Junie, their AI agent with deep thinking (or whatever the hell it is called). I tried it the same day to refactor part of my test that had to be migrated to stop using a deprecated function call.
To my surprise, it required only very minor changes, but what would’ve taken me about 3 hours was done in half an hour. What I also liked was that it actually asked if it could run a terminal command to verify the test results, and it went back and fixed a broken test or two.
Finally I have faith in AI being useful to programmers.
For a test, I took our dev exam (for potential candidates) and just sent it to see what it does just based on the document, and besides a few mistakes it even used modern tools and not some 5 year old stuff (like PSR standards) and implemented core systems by itself using well known interfaces (from said PSRs). I asked it to change Dependency Injection to use Symfony DI instead of the self-made thing, and it worked flawlessly.
Of course, the code has to be reviewed or heavily specified to make sure it does what it is told to, but all in all it doesn’t look like just a gimmick anymore.
Absolutely, this matches my experience. I think this is also the experience of most coders who willingly use AI. I feel bad for the people who are forced to use it by their companies. And those who are laid off because of C-levels who think AI is capable of replacing an experienced coder.
It still does 😞
There is no value in arguing about subjective topics. It feels useful to me if you know how to use it; it used to generate some of the worst, most random code you could imagine.
Some good examples from the bookkeeping/accounting industry is automating the matching of payments to the invoices and using AI to extract and process invoices.
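To make that concrete, here is a minimal sketch of the payment-matching half; the record shapes are assumptions, and real systems also match on references, dates, and partial amounts:

```python
def match_payments(payments, invoices):
    """Greedily match each payment to the first open invoice with the
    same amount; return (matches, unmatched_payments)."""
    open_invoices = list(invoices)
    matches, unmatched = [], []
    for pay in payments:
        hit = next((inv for inv in open_invoices
                    if inv["amount"] == pay["amount"]), None)
        if hit:
            matches.append((pay["id"], hit["id"]))
            open_invoices.remove(hit)  # each invoice matches at most once
        else:
            unmatched.append(pay)
    return matches, unmatched

payments = [{"id": "P1", "amount": 120.0}, {"id": "P2", "amount": 99.5}]
invoices = [{"id": "INV-7", "amount": 99.5}, {"id": "INV-8", "amount": 250.0}]
print(match_payments(payments, invoices))
# P2 pairs with INV-7; P1 has no matching invoice
```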
The biggest point is that you must be an expert in the field you are using it in. I rarely get fooled by hallucinations and stupid bugs because they are glaringly obvious to me. The best use case is having the LLM write code for a library that has poor documentation, that I am going to use once, and that I am too lazy to learn. These tools are scary when used by juniors; they are creating more work for everyone by using LLMs to code. I just imagine myself using this when I was a fresh grad, and it is terrifying. It would have only been one step up from vibe coding.
I feel so bad for recent grads. First COVID then AI/LLMs, it’s such a bad time to be starting out. I feel so fortunate that I’m well into my career and can use AI responsibly without having to worry too much about it.
It may well be a Gell-Mann amnesia simulator when used improperly.
This is the best description of current AI I’ve seen so far.
In the situation outlined, it can be pretty effective.
<img alt="" src="https://lemmy.world/pictrs/image/ff07f9f7-54c1-448a-b472-b3f66f4c68cb.png">
.
Let them burn.
The BBC report cited mainly focused on the marketing industry, with the people fixing the mistakes being the copywriters. This gives a strong Mad Men vibe, where you have the “old-fashioned” copywriters and the tension with market research.
That’s because the main peddlers are the CEOs and C-suites of these tech companies, and the customers aren’t people like you or me; they’re other corporate heads. In the case of Palantir, it would be the government.
Deserved and expected
no surprise
Very expected. It’s fine. I’ll come back at 10 times my previous rate. And you’ll thank me for it. Fucking chads.
Yet their reputations will somehow never return…
The even brighter side of it is that it should be easier to spot these companies when job hunting.
IMO: Demand higher wages and iron clad contracts from them because they already demonstrated how they feel about paying people.
They’ll surely cut anyone they can again as soon as they can.
Same thing happened during the outsourcing craze of the early 2000s. Everything, and I mean everything, moved to India or the Philippines. There’s even a movie about it because it was so common. I and everyone else lost our jobs. About a year later the contracts expired, we all got our jobs back, and outsourcing is now used in balance. Eventually AI use will be balanced, I hope. It cannot replace us. Not yet, anyway.
AI needs to be used as a tool for workers, not a replacement for workers. They will figure it out eventually.
Hope they lose billions!!