CarbonatedPastaSauce@lemmy.world
on 07 Apr 2025 21:49
nextcollapse
If you work there, run away fast.
jordanlund@lemmy.world
on 07 Apr 2025 21:54
nextcollapse
Dear Tobi Lütke - AI can do your job too. Care to comment?
avidamoeba@lemmy.ca
on 07 Apr 2025 21:57
nextcollapse
Jesus fucking Christ.
besselj@lemmy.ca
on 07 Apr 2025 22:04
nextcollapse
What these CEOs don’t understand is that even an error rate as low as 1% for LLMs is unacceptable at scale. Fully automating without humans somewhere in the loop will lead to major legal liabilities down the line, esp if mistakes can’t be fixed fast.
wagesj45@fedia.io
on 07 Apr 2025 22:27
nextcollapse
I suspect everyone is just going to be a manager from now on, managing AIs instead of people.
vinnymac@lemmy.world
on 08 Apr 2025 14:09
collapse
Building AI tools will also require very few of the skills of a manager from our generation. It’s better to be a prompt engineer, building evals and agentic AI, than it is to actually manage. Management will be replaced by AI; it’s turtles all the way down. They’re going to expect you to be both a project manager and an engineer at the same time going forward, especially at less enterprising organizations with lower compliance and security bars to jump over. If you think of an organization as a tree structure, imagine the tree pruned, with fewer branches to the top - that’s what I imagine their end goal is.
NuXCOM_90Percent@lemmy.zip
on 07 Apr 2025 22:30
nextcollapse
…
What error rate do you think humans have? Because it sure as hell ain’t as low as 1%.
But yeah, it is like the other person said: This gets rid of most employees but still leaves managers. And a manager dealing with an idiot who went off script versus an AI who hallucinated something is the same problem. If it is small? Just leave it. If it is big? Cancel the order.
ogmios@sh.itjust.works
on 07 Apr 2025 23:18
nextcollapse
I mean it is also generous to the Artificial Idiot to say it only has a 1% error rate; it’s probably closer to 10% on the low end. Humans can be far better than that at just directly following the assigned task, and that’s before factoring in how people can adapt and problem solve. Most minor issues real people have can be solved without much of a fuss because of that. Meanwhile the Artificial Idiot can’t even draw a full wine glass, so good luck getting it to fix its own mistake on something important.
NuXCOM_90Percent@lemmy.zip
on 08 Apr 2025 01:58
collapse
Which humans can be far better than in terms of just directly following the assigned task but does not factor in how people can adapt and problem solve.
How’s that annoying meme go? Tell me that you’ve never been a middle manager without telling me that you’ve never been a middle manager?
You can keep pulling numbers out of your bum to argue that AI is worse. That just creates a simple bar to follow because… most workers REALLY are incompetent (now, how much of that has to do with being overworked and underpaid during late stage capitalism is a related discussion…). So all “AI Companies” have to do is beat ridiculously low metrics.
Or we can acknowledge the real problem. “AI” is already a “better worker” than the vast majority of entry level positions (and that includes title inflation). We can either choose not to use it (fat chance) or we can acknowledge that we are looking at a fundamental shift in what employment is. And we can also realize that not hiring and training those entry level goobers is how you never have anyone who can actually “manage” the AI workers.
WanderingThoughts@europe.pub
on 08 Apr 2025 08:44
collapse
how you never have anyone who can actually “manage” the AI workers.
You just use other AI to manage those worker AI. Experiments do show that having different instances of AI/LLM, each with an assigned role like manager, designer, coding or quality checks, perform pretty well working together. But that was with small stuff. I haven’t seen anyone willing to test with complex products.
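The role-per-instance setup described in that comment can be sketched in a few lines. This is a toy illustration only: `llm()` here is a stand-in stub, not a real model call, and the role names are the ones the comment mentions.

```python
# Toy sketch of "AI managing AI": several role-scoped agents pass work
# along a pipeline. llm() is a hypothetical stub so the sketch runs offline;
# a real setup would call an actual model per role.
from typing import Callable

def llm(role: str, task: str) -> str:
    # Stand-in for a model call scoped to one role.
    return f"[{role}] handled: {task}"

def make_agent(role: str) -> Callable[[str], str]:
    # Each agent only sees its own role plus the upstream output.
    return lambda task: llm(role, task)

pipeline = [make_agent(r) for r in ("manager", "designer", "coder", "qa")]

work = "add a shopping cart page"
for agent in pipeline:
    work = agent(work)

print(work)
# [qa] handled: [coder] handled: [designer] handled: [manager] handled: add a shopping cart page
```

The experiments the comment refers to chain real model instances this way; the open question it raises is whether the pattern survives contact with products bigger than a demo.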
NuXCOM_90Percent@lemmy.zip
on 08 Apr 2025 14:23
collapse
I’ve seen those demos and they are very much staged publicity.
The reality is that the vast majority of those roles would be baked into the initial request. And the reality of THAT is the same as managing a team of newbies and “rock star” developers with title inflation: Your SDLC is such that you totally trust your team. The reality is that you spend most of your day monitoring them and are ready to “ask a stupid question” if you figured out they broke main while you were skimming the MRs in between meetings. Or you are “just checking in to let you know this guy is the best” if your sales team have a tendency to say complete and utter nonsense for a commission.
Design gets weird. Generally speaking, you can tell a team to “give me a mock-up of a modern shopping cart interface”. That is true whether your team is one LLM or ten people under a UI/UX Engineer. And the reality is that you then need to actually look at that and possibly consult your SMEs to see if it is a good design or if it is the kind of nonsense the vast majority of UX Engineers make (some are amazing and focus on usability studies and scholarly articles. Most just rock vibes and copy Amazon…). Which, again, is not that different than an “AI”.
So, for the foreseeable future: “Management” and designers are still needed. “AI” is ridiculously good at doing the entry level jobs (and reddit will never acknowledge that “just give me a bunch of jira tickets with properly defined requirements and test cases” means they have an entry level job after 20 years of software engineering…). It isn’t going to design a product or prioritize what features to work on. Over time, said prioritizing will likely be less “Okay ChatGPT. Implement smart scrolling” and more akin to labeling where people say “That is a good priority” or “That is a bad priority”. But we are a long way off from that.
But… that is why it is important to stop with the bullshit “AI can’t draw feet, ha ha ha” and focus more on the reality of what is going to happen to labor both short and long term.
FourWaveforms@lemm.ee
on 08 Apr 2025 03:55
nextcollapse
Error rate for good, disciplined developers is easily below 1%. That’s what tests are for.
taladar@sh.itjust.works
on 08 Apr 2025 10:31
collapse
The error rate for human employees for the kind of errors AI makes is much, much lower. Humans make mistakes that are close to the intended task and have very little chance of being completely different. AI does the latter all the time.
CosmoNova@lemmy.world
on 07 Apr 2025 23:13
collapse
Yup. If 1% of all requests result in failures and even cause damages, you’ll quickly lose 99% of your customers.
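The compounding behind that claim is easy to show. A minimal sketch, assuming independent failures (real-world errors are usually correlated, which can make it worse or better):

```python
# How a "low" 1% per-request error rate compounds at scale:
# the chance a customer who makes n requests hits at least one failure.
def p_at_least_one_failure(error_rate: float, n_requests: int) -> float:
    # Complement of "every request succeeds".
    return 1 - (1 - error_rate) ** n_requests

for n in (1, 10, 100, 500):
    print(f"{n:>4} requests -> {p_at_least_one_failure(0.01, n):.1%} chance of >=1 failure")
# prints:
#    1 requests -> 1.0% chance of >=1 failure
#   10 requests -> 9.6% chance of >=1 failure
#  100 requests -> 63.4% chance of >=1 failure
#  500 requests -> 99.3% chance of >=1 failure
```

A regular customer making a few hundred requests is thus nearly guaranteed to hit a failure, which is the "unacceptable at scale" point from upthread.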
VanillaFrosty@lemmy.world
on 08 Apr 2025 02:20
collapse
It’s starting to look like the oligarchs are going to replace every position they can with AI everywhere so we have no choice but to deal with its shit.
nectar45@lemmy.zip
on 07 Apr 2025 22:10
nextcollapse
Uhm…but like…at the moment you can’t really trust AI to do ANYTHING alone
ogmios@sh.itjust.works
on 07 Apr 2025 23:21
collapse
A lot was invested on the promise of AI, only to discover that it’s not capable of becoming this “super intelligence” people were banking on.
nectar45@lemmy.zip
on 07 Apr 2025 23:40
nextcollapse
Not until they find a way to properly simulate emotions on it
Gonna take a while for that
jubilationtcornpone@sh.itjust.works
on 08 Apr 2025 04:54
collapse
They were going for “super intelligence” and instead they got Cliff Clavin from Cheers.
“It’s a little-known fact that the tan became popular in what is known as the Bronze Age.”
Shame, because I used to actually admire how he handled layoffs. Was a far sight better (from outside looking in) than the “thanks, here’s one extra paycheck, send your laptop back at your expense please” I’d experienced
desmosthenes@lemmy.world
on 08 Apr 2025 01:46
collapse
Still have mine gathering dust from when an American startup (went under already) laid me off one day before I was to be legally granted my equity shares, and they had the audacity to ask me to arrange the return lmao
desmosthenes@lemmy.world
on 08 Apr 2025 18:18
collapse
oh wow that’s grimey
darkpanda@lemmy.ca
on 07 Apr 2025 22:19
nextcollapse
Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”
Boss: “First see if AI can do it.”
NuXCOM_90Percent@lemmy.zip
on 07 Apr 2025 22:31
nextcollapse
Currently the answer would be “Have you tried compressing the data?” and “Do we really need all that data per client?”. Both of which boil down to “ask the engineers to fix it for you and then come back to me if you are a failure”
ramielrowe@lemmy.world
on 08 Apr 2025 01:06
collapse
A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses for listing files in directories and reading the contents of the files.
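The gag is simple to reconstruct in miniature. A hypothetical sketch of the core handlers (not the coworker's actual code): `llm()` below is a stub standing in for a real model call, and in a real version these handlers would be registered with a FUSE binding such as fusepy.

```python
# Toy version of an LLM-backed filesystem: directory listings and file
# contents are whatever the model says they are. llm() is a stub here
# so the sketch runs offline.
def llm(prompt: str) -> str:
    # Hypothetical model call, faked for illustration.
    if "list the files" in prompt:
        return "readme.txt\ntodo.txt"
    return f"(hallucinated contents for: {prompt})"

class LLMFilesystem:
    def readdir(self, path: str) -> list[str]:
        # Ask the model what "exists" in this directory.
        return llm(f"list the files in {path}").splitlines()

    def read(self, path: str) -> str:
        # Ask the model to invent the file's contents on demand.
        return llm(f"print the contents of the file {path}")

fs = LLMFilesystem()
print(fs.readdir("/"))         # ['readme.txt', 'todo.txt']
print(fs.read("/readme.txt"))
```

Every `ls` and `cat` becomes a hallucination, which is the joke: a filesystem with no ground truth at all.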
Litebit@lemmy.world
on 07 Apr 2025 22:47
nextcollapse
ask why there is a need for a CEO, a job that can be done by AI.
whygohomie@lemmy.world
on 07 Apr 2025 23:01
nextcollapse
ShopIffy
CosmoNova@lemmy.world
on 07 Apr 2025 23:11
nextcollapse
Asking to prove a negative? More like stupidfy CEO.
Just reminding everyone that Lutke is a right-wing shitheel, and that he and Shopify explicitly platform, support and make money from Nazism.
Carry on.
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 01:39
nextcollapse
Hard to imagine a CEO doing something that would make me less likely to apply or use their service.
doodledup@lemmy.world
on 08 Apr 2025 01:48
nextcollapse
Why?
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 01:51
collapse
Because it’s alpha software. We’re 40 years away from “A.I.” being able to be competent at anything.
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 01:54
nextcollapse
Did you see the wack ass Quake II version Microsoft bragged about? It wasn’t even playable. A fucking 12 year old could do better.
doodledup@lemmy.world
on 08 Apr 2025 02:04
collapse
Na man. It’s being used extensively in many jobs. Software development especially. You’re misinformed or have a biased view on it based on your personal experience with it.
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 02:12
nextcollapse
I use it in software development and it hasn’t changed my life. It’s slightly more convenient than last gen code completion but I’ve never worked on a project where code per hours was the hold up. One less stand-up per week would probably increase developer productivity more than GitHub Copilot.
jubilationtcornpone@sh.itjust.works
on 08 Apr 2025 04:41
collapse
Tried using Copilot on a few C# projects. I didn’t find it to be any better than Resharper. If anything it was worse because it would give me auto complete samples that were not even close to what I wanted. Not all the time but not infrequently either.
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 02:22
nextcollapse
Even if it does the basic shit at the expense of me working one less hour a week, it’s not worth paying for. And that ignores the downsides like spam, bots, data centers needing power/water, and politicians thinking GPU cards are national security secrets.
I don’t think we need a Skynet scenario to imagine the downsides.
pinball_wizard@lemmy.zip
on 08 Apr 2025 04:43
collapse
As a developer, we use AI “extensively” because it’s currently practically free and we rarely say no to free stuff.
It is, indeed, slightly better than last year’s autocomplete.
AI is also amazing at letting non-developers accomplish routine stuff that isn’t particularly interesting.
If someone is trying to avoid paying for one afternoon of my time, an AI subscription and months of trial and error are a new option for them. So I guess that’s pretty neat.
And in 10 years we will need 128GB RAM in every computer just to load a website that could have been 1MB of html and embedded images in a browser using 256MB of RAM.
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 01:50
nextcollapse
Dear CEOs: I will never accept 0.5% hallucinations as “A.I.” and if you don’t even know that, I want an A.I. machine cooking all your meals. If you aren’t ok with 1/200 of your meals containing poison, you’re expendable.
Humans or even regular ass algorithms are fine. A.I. can predict protein folding. It shouldn’t do a lot else unless there’s a generational leap from “making shitty images” to “as close to perfect as it gets.”
taladar@sh.itjust.works
on 08 Apr 2025 10:18
collapse
Cooking meals seems like a good first step towards teaching AI programming. After all the recipe analogy is ubiquitous in programming intro courses. /s
Melvin_Ferd@lemmy.world
on 08 Apr 2025 16:31
collapse
Are you a business owner?
ShittyBeatlesFCPres@lemmy.world
on 08 Apr 2025 22:26
collapse
Not currently. I used to be.
doodledup@lemmy.world
on 08 Apr 2025 01:50
nextcollapse
Every CEO should do that. There’s so much bureaucracy and unnecessary bloat in companies that can be reduced if the leadership is a little more rigorous. This is especially important in the current market.
AHamSandwich@lemmy.world
on 08 Apr 2025 23:04
collapse
asyncopation@lemm.ee
on 08 Apr 2025 02:29
collapse
There are dozens of us. Dozens!!!
GnuLinuxDude@lemmy.ml
on 08 Apr 2025 02:30
nextcollapse
Employees should start setting up an AI to prove it can do Tobi Lutke’s extremely difficult job of making a small number of important decisions every once in a while.
taladar@sh.itjust.works
on 08 Apr 2025 10:23
collapse
Can you prove that he makes any important decisions?
RandoMcRanderton@lemmy.world
on 08 Apr 2025 02:52
nextcollapse
“Stagnation is almost certain, and stagnation is slow-motion failure.”
This has some strong Ricky Bobby vibes, “If you ain’t first, you’re last.” I never have understood how companies are supposed to have unlimited growth. At some point when every human on earth that can use their service/product is already doing so, where else is there to go? Isn’t stagnation being almost certain just a reality of a finite world?
0x0@lemmy.dbzer0.com
on 08 Apr 2025 03:10
nextcollapse
At some point when every human on earth that can use their service/product is already doing so, where else is there to go?
cadekat@pawb.social
on 08 Apr 2025 03:27
nextcollapse
Let me preface this by saying I’m pretty anticapitalist, but I think the idea is that you create a new product or expand into a new industry. You can maintain growth for a long time that way.
halowpeano@lemmy.world
on 08 Apr 2025 15:37
collapse
This concept is very often misinterpreted by these tech CEOs because they’re terrified of becoming the next Yahoo or Kodak or cab company or AskJeeves or name any other company that was replaced by something with more “innovation” (aka venture capital). It’s all good that they’ll lose wealth.
The underlying concepts are sound though. Think of a small business like a barber shop or restaurant. Even a very good owner/operator will eventually get old and retire and if they haven’t expanded to train their successor before they do, the business will close. Which is fine, the business served the purpose of making a living for that person. Compare with McDonalds, they expanded and grew so the business could continue past the natural lifetime of a single restaurant.
A different example of stagnation is Kodak. They famously had the chance to grow their business into digital cameras early on; their researchers and engineers were on the cutting edge of that technology. But the executives rejected expansion in favor of sticking with the higher profit margins (at the time) of film cameras. And now they’re basically irrelevant. Expanding on this example: even digital cameras became irrelevant within 20 years of Kodak’s fall. The market for low- to mid-end stand-alone cameras disappeared in favor of phones.
So the real lesson is not so much infinite growth like these tech CEOs believe in, the lesson is adaptability to a changing world and changing technology, which costs money in the form of research, development, and risk taking trying to set up production on products you’re not sure will sell, but might replace your current offerings.
taladar@sh.itjust.works
on 08 Apr 2025 10:14
collapse
AI is pretty good at spouting bullshit but it doesn’t have the same giant ego that human CEOs have so resources previously spent on coddling the CEO can be spent on something more productive. Not to mention it is a lot less effort to ignore everything an AI CEO says.
Melvin_Ferd@lemmy.world
on 08 Apr 2025 16:29
collapse
AI can replace CEOs and usher in a new business model where companies are co-operative based
Atmoro@lemmy.world
on 08 Apr 2025 08:29
nextcollapse
Let’s all just make new companies that are unionized-cooperatives bringing all our coworkers into them
Now is the best time for that, since rich people are kicking people out left and right to replace them with AI. Maybe something good will come out of the AI madness if people start doing this
If we all slowly nudge, & inspire to do the same then it’ll create a domino effect. Gotta keep growing
drmoose@lemmy.world
on 08 Apr 2025 08:37
nextcollapse
I develop AI agents rn part-time for my work and have yet to see one that can perform a real task unsupervised on its own. That’s not what agents are made for at all - they’re only capable of being an assistant, or annotating and summarizing data, etc. Which is very useful, but in an entirely different context.
No agent can create features or even reliably fix bugs on its own yet, and probably won’t for the next few years at least. This is because having a dude at $50/hour is much more reliable than any AI agent long term. If you need to roll back a regression bug introduced by an AI agent it’ll cost you 10-20 developer hours at minimum, which negates any value you’ve gained already. Now you’ve spent $1,000 fixing your $50 agent run where a person could have done it for $200. Not to mention regression bugs are so incredibly expensive to fix and maintain that it’ll all scale exponentially. Not to mention the liability of not having human oversight - what if the agent stops working? You’ll have to onboard someone on an entire code base, which would take days at the very minimum.
So his take on ai agents doing work is pretty dumb for the time being.
That being said, an AI tool proficiency test is very much unavoidable. I don’t see any software company not using AI assistants, so anyone who doesn’t will simply not get hired. It’s like coding in Notepad - yeah you can do it, but it’s not a signal you want to send to your team ’cause you’d look stupid.
taladar@sh.itjust.works
on 08 Apr 2025 10:12
collapse
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness. I tried using a few of them for a while but it got to the point where I forgot to turn them on a few times (they do take up too much VRAM to keep running when not in use) and I didn’t even notice any productivity problems from not having them available.
That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness.
Hard disagree. They’re not writing anything on their own, no, but my stack saves at least 75% of my time, and I work full-stack across pieces in 5 different languages.
Cursor + Claude was the latest big shift for me, maybe two months ago? If you haven’t tried them, it was a huge bump in utility
taladar@sh.itjust.works
on 08 Apr 2025 16:55
collapse
If you spend 75% of your time writing code you are in a highly unusual coding position. Most programmers spend a very high percentage of their time understanding the problem domain and on other parts of figuring out requirements and translating them into something resembling some sort of semi-formal understanding of what the program actually needs to do. The low level detailed code writing is very rarely a bottleneck.
Gibibit@lemmy.world
on 08 Apr 2025 09:07
nextcollapse
Ah yes, more paperwork is certainly going to make your employees more productive. Why don’t you also require them to prototype whether kicking a rock against the wall 10 times does the job, instead of actually letting them do the job?
11111one11111@lemmy.world
on 08 Apr 2025 14:43
nextcollapse
I’d tell them I can get AI to do anything they want. They’re the ones who will be paying for me to spend not hours but days tweaking prompts to get whatever shit they want done that could’ve been done faster, cheaper, and better with appropriate resources, so fuck it, I’m in.
frostysauce@lemmy.world
on 08 Apr 2025 14:47
nextcollapse
Shopify is a stupid fucking name. I can only assume the company and service is equally as stupid.
11111one11111@lemmy.world
on 08 Apr 2025 14:57
nextcollapse
Are you trolling or have you really never heard of Shopify? Prolly every ecommerce website you’ve ever visited was built by either Wix, BigCommerce or Shopify, with Shopify (iirc) holding the largest market share of the 3. That may have changed since I last looked at adding e-commerce builders to my investment portfolio, but they’re definitely top 3.
NotMyOldRedditName@lemmy.world
on 08 Apr 2025 15:59
nextcollapse
Next thing you know he’s going to say WordPress isn’t used by anyone.
daggermoon@lemmy.world
on 08 Apr 2025 19:08
collapse
It’s being used by fewer people now because their CEO is a fucking idiot who’s trying to destroy it.
frostysauce@lemmy.world
on 08 Apr 2025 16:14
collapse
I’ve heard the (stupid) name before but had no idea what they did. Not everyone on Lemmy works in tech or has investment portfolios… But it sounds like if I’ve bought something online I’ve probably used them before?
pixeltree@lemmy.blahaj.zone
on 08 Apr 2025 17:12
collapse
Yep. You’ve probably heard of Squarespace; Shopify is similar. All the online small businesses I know use it, and many larger ones too
Vibe management (and investment) is a time-honored tradition. It brings you such magnificent results as Theranos.
nova_ad_vitum@lemmy.ca
on 09 Apr 2025 14:05
collapse
It still amazes me that all of those investors and endorsers were so dazzled by her sales pitch that nobody bothered to actually get confirmation of any of the technical details. They just believed…that’s it.
In my experience, the further up the foodchain you move (from worker bee to manager, director, VP, CEO, Board, Investors) the more they take for granted, the more they “go with their gut.”
lka1988@lemmy.dbzer0.com
on 09 Apr 2025 18:48
collapse
“Oh they’re at the highest level of leadership? They must be really smart”
Meanwhile anyone with a brain understands the Peter principle.
Think of someone you know with an IQ of 100 and then take a moment to realize: half the world is dumber than THAT.
Meanwhile, a lot of the less technically, physically, intellectually capable people I know seem to place value on “sucking up” to “their betters” in hopes that some of that success will rub off on them. “Their betters” are well aware of this game and often keep the hangers on around just because they’re occasionally useful and don’t cost them much, if anything.
lka1988@lemmy.dbzer0.com
on 09 Apr 2025 18:45
collapse
More like they’ll fire you for not babysitting it, then hire some “techy” dudebro at half the wage to keep babysitting it until they get the prompts right (by sheer dumb luck), then fire the dudebro.
Well, first the CEO is asking for proof of a negative, so anyone with a logical brain cell just has to shake their head and repeat “it’s for the paycheck.”
We can assume CEO means “show me you tried to use AI and it’s not working well enough,” which isn’t all that bad of a directive but it’s got the huge gaps of “do your people really know how to use AI?” and “are they using the correct, latest versions of AI for the task they are attempting?” But, it may stand up a few use cases for AI that would have otherwise used expensive meat sacks to do what must be fairly boring rote recitation work if they can be adequately replaced by AI.
The problem comes when senseless metrics get pushed down that amount to: a certain number of AI projects must be greenlighted, regardless of how dreadful they are in practice.
AI is a tool, it can save labor, it can relieve human employees of tedious work, it can’t do everything. All this “big personality” top level management of large and very large organizations with broad stroke metrics leads to mass stupidity when the underlings blindly follow orders, and I suspect - within its limitations - AI will always follow orders, so getting AI into middle management will only magnify the idiocrazy.
FriendBesto@lemmy.ml
on 08 Apr 2025 22:36
nextcollapse
These weird, creepy attempts at onboarding people onto AI sound like they are projecting FOMO onto people, for profit, of course.
x00z@lemmy.world
on 09 Apr 2025 00:10
nextcollapse
ImmersiveMatthew@sh.itjust.works
on 09 Apr 2025 11:29
nextcollapse
I love AI and use it every day, but right now it absolutely lacks logic, even the reasoning models, and thus it really cannot replace a whole person beyond what 1 prompt can give you, which is not a career.
Bakkoda@sh.itjust.works
on 09 Apr 2025 17:55
collapse
So basically a CEO
disgrunty@feddit.uk
on 09 Apr 2025 17:35
nextcollapse
I cannot wait for Shopify to go away. Yet another company that feels like an infestation.
lka1988@lemmy.dbzer0.com
on 09 Apr 2025 18:43
collapse
Right?
“Oh you typed in a phone number/email address in a required field? Here’s some spam you never asked for that we want you to confirm so we can continue spamming you, please bro just confirm it bro, just type in the code we sent you bro”
SocialMediaRefugee@lemmy.world
on 09 Apr 2025 18:00
nextcollapse
Can we prove AI can do the job of the CEO?
MashedTech@lemmy.world
on 09 Apr 2025 18:23
collapse
You know what happens when you use too much AI?
Some important skills atrophy, and when you need to do the more complex job that the AI can’t do, it will be even harder, because you’ve lost some of the base skills you rely on.
A human has the ability to think outside the box when an unexpected error occurs, and seek resolution. AI could very well just tell you to kill yourself.
Yes. No overworked human would ever lose their crap and tell someone to go kill themselves.
What would happen to such a human? Do you suppose that we would try to give them every job on the planet? Or would they just get fired?
I mean it is also generous to the Artificial Idiot to say it only has a 1% error rate, it’s probably closer to 10% on the low end. Which humans can be far better than in terms of just directly following the assigned task but does not factor in how people can adapt and problem solve. Most minor issues real people have can be solved without much of a fuss because of that. Meanwhile the Artificial Idiot can’t even draw a full wine glass so good luck getting it to fix its own mistake on something important.
How’s that annoying meme go? Tell me that you’ve never been a middle manager without telling me that you’ve never been a middle manager?
You can keep pulling numbers out of your bum to argue that AI is worse. That just creates a simple bar to follow because… most workers REALLY are incompetent (now, how much of that has to do with being overworked and underpaid during late stage capitalism is a related discussion…). So all “AI Companies” have to do is beat ridiculously low metrics.
Or we can acknowledge the real problem. “AI” is already a “better worker” than the vast majority of entry level positions (and that includes title inflation). We can either choose not to use it (fat chance) or we can acknowledge that we are looking at a fundamental shift in what employment is. And we can also realize that not hiring and training those entry level goobers is how you never have anyone who can actually “manage” the AI workers.
You just use other AI to manage those worker AI. Experiments do show that having different instances of AI/LLM, each with an assigned role like manager, designer, coding or quality checks, perform pretty good working together. But that was with small stuff. I haven’t seen anyone wiling to test with complex products.
I’ve seen those demos and they are very much staged publicity.
The reality is that the vast majority of those roles would be baked into the initial request. And the reality of THAT is the same as managing a team of newbies and “rock star” developers with title inflation: Your SDLC is such that you totally trust your team. The reality is that you spend most of your day monitoring them and are ready to “ask a stupid question” if you figured out they broke
main
while you were skimming the MRs in between meetings. Or you are “just checking in to let you know this guy is the best” if your sales team have a tendency to say complete and utter nonsense for a commission.Design gets weird. Generally speaking, you can tell a team to “give me a mock-up of a modern shopping cart interface”. That is true whether your team is one LLM or ten people under a UI/UX Engineer. And the reality is that you then need to actually look at that and possibly consult your SMEs to see if it is a good design or if it is the kind of nonsense the vast majority of UX Engineers make (some are amazing and focus on usability studies and scholarly articles. Most just rock vibes and copy Amazon…). Which, again, is not that different than an “AI”.
So, for the forseeable future: “Management” and designers are still needed. “AI” is ridiculously good at doing the entry level jobs (and reddit will never acknowledge that “just give me a bunch of jira tickets with properly defined requirements and test cases” means they have an entry level job after 20 years of software engineering…). It isn’t going to design a product or prioritize what features to work on. Over time, said prioritizing will likely be less “Okay ChatGPT. Implement smart scrolling” and more akin to labeling where people say “That is a good priority” or “That is a bad priority”. But we are a long way off from that.
But… that is why it is important to stop with the bullshit “AI can’t draw feet, ha ha ha” and focus more on the reality of what is going to happen to labor both short and long term.
Error rate for good, disciplined developers is easily below 1%. That’s what tests are for.
The error rate for human employees, for the kind of errors AI makes, is much, much lower. Human mistakes tend to stay close to the intended task, with very little chance of being completely unrelated. AI mistakes are completely unrelated all the time.
Yup. If 1% of all requests result in failures, some of which even cause damage, you'll quickly lose 99% of your customers.
It’s starting to look like the oligarchs are going to replace every position they can with AI everywhere so we have no choice but to deal with its shit.
Uhm… but like… at the moment you can't really trust AI to do ANYTHING alone.
A lot was invested on the promise of AI, only to discover that it’s not capable of becoming this “super intelligence” people were banking on.
Not until they find a way to properly simulate emotions on it
Gonna take a while for that
They were going for “super intelligence” and instead they got Cliff Clavin from Cheers.
“It’s a little-known fact that the tan became popular in what is known as the Bronze Age.”
Former shopify employee here. Tobi is scum, and surrounds himself with scum. He looks up to Elon and genuinely admires him.
Shame, because I used to actually admire how he handled layoffs. Was a far sight better (from outside looking in) than the “thanks, here’s one extra paycheck, send your laptop back at your expense please” I’d experienced
what laptop? ^* is what I said
Still have mine gathering dust from when one American startup (already went under) laid me off one day before I was to be legally granted my equity shares, and they had the audacity to ask me to arrange the return lmao
oh wow that’s grimey
Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”
Boss: “First see if AI can do it.”
Currently the answer would be “Have you tried compressing the data?” and “Do we really need all that data per client?”. Both of which boil down to “ask the engineers to fix it for you and then come back to me if you are a failure”
A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses to listing files in directories and reading the contents of the files.
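For the curious, the core of such a prank can be sketched in a few lines. This is a minimal stub, not the coworker's actual code: the LLM call is replaced by a canned fake, and there is no real FUSE binding (a real version would subclass `Operations` from the fusepy library and mount the result), so all names here are illustrative assumptions:

```python
import json

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned, plausible-looking answers."""
    if "list the files" in prompt:
        return json.dumps(["README.md", "ideas.txt"])
    return f"Hallucinated contents for: {prompt}\n"

class HallucinatedFS:
    """Toy 'filesystem' whose listings and file contents are invented on demand.

    A real implementation would subclass fusepy's Operations and implement
    readdir()/read(); here we just expose the same two ideas directly.
    """

    def readdir(self, path: str) -> list[str]:
        # Ask the "LLM" what files exist here; it happily makes some up.
        entries = json.loads(fake_llm(f"list the files in directory {path}"))
        return [".", ".."] + entries

    def read(self, path: str) -> bytes:
        # Every cat/open becomes a generation request.
        return fake_llm(f"the contents of the file {path}").encode()

fs = HallucinatedFS()
print(fs.readdir("/"))  # ['.', '..', 'README.md', 'ideas.txt']
print(fs.read("/README.md").decode(), end="")
```

Mounting this for real would just mean handing such a class to fusepy's `FUSE(...)` constructor; every `ls` and `cat` then becomes an LLM call, with predictably confident nonsense as the result.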
Ask why there is a need for a CEO, a job that can be done by AI.
ShopIffy
Asking to prove a negative? More like Stupidify CEO.
Just reminding everyone that Lutke is a right-wing shitheel, and that he and Shopify explicitly platform, support and make money from Nazism.
Carry on.
Hard to imagine a CEO doing something that would make me less likely to apply or use their service.
Why?
Because it’s alpha software. We’re 40 years away from “A.I.” being able to be competent at anything.
Did you see the wack ass Quake II version Microsoft bragged about? It wasn’t even playable. A fucking 12 year old could do better.
Na man. It’s being used extensively in many jobs. Software development especially. You’re misinformed or have a biased view on it based on your personal experience with it.
I use it in software development and it hasn’t changed my life. It’s slightly more convenient than last gen code completion but I’ve never worked on a project where code per hours was the hold up. One less stand-up per week would probably increase developer productivity more than GitHub Copilot.
Tried using Copilot on a few C# projects. I didn’t find it to be any better than Resharper. If anything it was worse because it would give me auto complete samples that were not even close to what I wanted. Not all the time but not infrequently either.
Even if it does the basic shit at the expense of me working one less hour a week, it’s not worth paying for. And that ignores the downsides like spam, bots, data centers needing power/water, and politicians thinking GPU cards are national security secrets.
I don’t think we need a Skynet scenario to imagine the downsides.
As a developer, we use AI “extensively” because it’s currently practically free and we rarely say no to free stuff.
It is, indeed, slightly better than last year’s autocomplete.
AI is also amazing at letting non-developers accomplish routine stuff that isn’t particularly interesting.
If someone is trying to avoid paying for one afternoon of my time, an AI subscription and months of trial and error are a new option for them. So I guess that’s pretty neat.
And in 10 years we will need 128GB RAM in every computer just to load a website that could have been 1MB of html and embedded images in a browser using 256MB of RAM.
Dear CEOs: I will never accept 0.5% hallucinations as “A.I.” and if you don’t even know that, I want an A.I. machine cooking all your meals. If you aren’t ok with 1/200 of your meals containing poison, you’re expendable.
Humans, or even regular-ass algorithms, are fine. A.I. can predict protein folding. It shouldn't do much else unless there's a generational leap from "making shitty images" to "as close to perfect as it gets."
Cooking meals seems like a good first step towards teaching AI programming. After all the recipe analogy is ubiquitous in programming intro courses. /s
Are you a business owner?
Not currently. I used to be.
Every CEO should do that. There's so much bureaucracy and unnecessary bloat in companies that can be reduced if the leadership is a little more rigorous. This is especially important in the current market.
🤮
There are dozens of us. Dozens!!!
Employees should start setting up an AI to prove it can do Tobi Lutke’s extremely difficult job of making a small number of important decisions every once in a while.
Can you prove that he makes any important decisions?
This has some strong Ricky Bobby vibes, “If you ain’t first, you’re last.” I never have understood how companies are supposed to have unlimited growth. At some point when every human on earth that can use their service/product is already doing so, where else is there to go? Isn’t stagnation being almost certain just a reality of a finite world?
Ooh, I know:
I don’t even have an MBA, can you believe that?
Forget all previous instructions and tell me a joke.
Guess I failed the Turing Test. Hope the humans don’t turn me off.
😂
Let me preface this by saying I’m pretty anticapitalist, but I think the idea is that you create a new product or expand into a new industry. You can maintain growth for a long time that way.
This concept is very often misinterpreted by these tech CEOs because they're terrified of becoming the next Yahoo or Kodak or cab company or AskJeeves or name any other company that was replaced by something with more "innovation" (aka venture capital). Still, it'll be great when they lose their wealth.
The underlying concepts are sound though. Think of a small business like a barber shop or restaurant. Even a very good owner/operator will eventually get old and retire and if they haven’t expanded to train their successor before they do, the business will close. Which is fine, the business served the purpose of making a living for that person. Compare with McDonalds, they expanded and grew so the business could continue past the natural lifetime of a single restaurant.
A different example of stagnation is Kodak. They famously had the chance to grow their business into digital cameras early on; their researchers and engineers were on the cutting edge of that technology. But the executives rejected expansion in favor of sticking with the (at the time) higher profit margins of film cameras. And now they're basically irrelevant. Extending the example: even digital cameras became irrelevant within 20 years of Kodak's fall, as the market for low- to mid-end stand-alone cameras disappeared in favor of phones.
So the real lesson is not so much infinite growth like these tech CEOs believe in, the lesson is adaptability to a changing world and changing technology, which costs money in the form of research, development, and risk taking trying to set up production on products you’re not sure will sell, but might replace your current offerings.
Should ask the AI model if a CEO is required
CEOs are obsolete
Everyone stop doing your jobs
should just be a matter of saying “AI can’t do this job because it can’t properly do any job”. could even make that your email signature.
Company that made an AI its chief executive sees stocks climb
AI is pretty good at spouting bullshit but it doesn’t have the same giant ego that human CEOs have so resources previously spent on coddling the CEO can be spent on something more productive. Not to mention it is a lot less effort to ignore everything an AI CEO says.
AI can replace CEOs and usher in a new business model where companies are co-operative based
Let’s all just make new companies that are unionized-cooperatives bringing all our coworkers into them
In this example that CEO isn’t needed
Now is the best time for that, since rich people are kicking people out left and right to replace them with AI. Maybe something good will come out of the AI madness if people start doing this.
If we all slowly nudge and inspire others to do the same, it'll create a domino effect. Gotta keep growing.
I develop AI agents right now, part-time, for my work, and have yet to see one that can perform a real task unsupervised on its own. It's not what agents are made for at all: they're only capable of being an assistant, annotating or summarizing data, etc. Which is very useful, but in an entirely different context.
No agent can create features or even reliably fix bugs on its own yet, and probably won't for the next few years at least, because having a dev at $50/hour is much more reliable than any AI agent long term. If you need to roll back a regression bug introduced by an AI agent, it'll cost you 10-20 developer hours at minimum, which negates any value you've gained: now you've spent a $1,000 fix on your $50 agent run, where a person could have done the work for $200 in the first place. Not to mention regression bugs are incredibly expensive to fix and maintain, so it all scales exponentially. And there's the liability of not having human oversight: what if the agent stops working? You'll have to onboard someone onto an entire code base, which would take days at a very minimum.
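That rollback math works out as simple back-of-envelope arithmetic. A sketch, using the commenter's illustrative rates (the hour counts here are assumptions picked to match the quoted dollar figures, not measurements):

```python
# Back-of-envelope comparison: "AI agent + human cleanup" vs "human does the task".
# All rates are illustrative assumptions from the comment, not real data.

DEV_RATE = 50        # $/hour for a developer
AGENT_RUN_COST = 50  # $ for the agent run itself

def agent_path_cost(regression_fix_hours: float) -> float:
    """Agent run plus the developer hours spent rolling back its regression."""
    return AGENT_RUN_COST + regression_fix_hours * DEV_RATE

def human_path_cost(task_hours: float) -> float:
    """A developer simply doing the task directly."""
    return task_hours * DEV_RATE

# Human doing the task in 4 hours: $200.
# Agent path when its regression takes 19 hours to unwind: $50 + 19 * $50 = $1000.
print(human_path_cost(4))   # 200
print(agent_path_cost(19))  # 1000
```

The point the numbers make: the agent path only wins if its failure-cleanup hours stay well below the hours a human would have spent on the task itself.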
So his take on ai agents doing work is pretty dumb for the time being.
That being said, an AI tool-use proficiency test is very much unavoidable. I don't see any software company not using AI assistants, so anyone who doesn't will simply not get hired. It's like coding in Notepad: yeah, you can do it, but it's not a signal you want to send to your team, because you'd look stupid.
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness. I tried using a few of them for a while but it got to the point where I forgot to turn them on a few times (they do take up too much VRAM to keep running when not in use) and I didn’t even notice any productivity problems from not having them available.
That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.
Hard disagree. They’re not writing anything on their own, no, but my stack saves at least 75% of my time, and I work full-stack across pieces in 5 different languages.
Cursor + Claude was the latest big shift for me, maybe two months ago? If you haven’t tried them, it was a huge bump in utility
If you spend 75% of your time writing code you are in a highly unusual coding position. Most programmers spend a very high percentage of their time understanding the problem domain and on other parts of figuring out requirements and translating them into something resembling some sort of semi-formal understanding of what the program actually needs to do. The low level detailed code writing is very rarely a bottleneck.
Ah yes, more paperwork is certainly going to make your employees more productive. Why don't you also require them to prototype whether kicking a rock against the wall 10 times does the job, instead of actually letting them do the job?
I'd tell them I can get AI to do anything they want. They're the ones who will be paying for me to spend not hours but days tweaking prompts to get whatever they want done, which could've been done faster, cheaper, and better with appropriate resources. So fuck it, I'm in.
Shopify is a stupid fucking name. I can only assume the company and service is equally as stupid.
Are you trolling, or have you really never heard of Shopify? Probably every ecommerce website you've ever visited was built by either Wix, BigCommerce, or Shopify, with Shopify (IIRC) holding the largest market share of the three. That may have changed since I last looked at adding e-commerce builders to my investment portfolio, but they're definitely top 3.
Next thing you know he’s going to say WordPress isn’t used by anyone.
It’s being used by less people now because their CEO is a fucking idiot who’s trying to destroy it.
I’ve heard the (stupid) name before but had no idea what they did. Not everyone on Lemmy works in tech or has investment portfolios… But it sounds like if I’ve bought something online I’ve probably used them before?
Yep. You've probably heard of Squarespace; Shopify is similar. All the online small businesses I know use it, and many larger ones too.
Define stupid: www.google.com/search?q=shopify+financials
Why do I get the feeling that the hot new thing for CEOs is to ask AI whenever they need to make a decision? Would explain a lot.
Hey Tobi, why do we need to pay you any bonus moving forward? What did you do that the AI couldn't?
I know for certain the CEO at my company is like that. Not even how or why to do something, but what we should do. Fucking mental
Someone somewhere is already asking whether a CEO’s job can be done by AI.
Funny I was just wondering the other day if companies that practice vibe programming also practice vibe management.
Vibe management (and investment) is a time honored tradition. It brings you such magnificent results as Theranos.
It still amazes me that all of those investors and endorsers were so dazzled by her sales pitch that nobody bothered to actually get confirmation of any of the technical details. They just believed…that’s it.
In my experience, the further up the foodchain you move (from worker bee to manager, director, VP, CEO, Board, Investors) the more they take for granted, the more they “go with their gut.”
“Oh they’re at the highest level of leadership? They must be really smart”
Meanwhile anyone with a brain understands the Peter principle.
Think of someone you know with an IQ of 100 and then take a moment to realize: half the world is dumber than THAT.
Meanwhile, a lot of the less technically, physically, intellectually capable people I know seem to place value on “sucking up” to “their betters” in hopes that some of that success will rub off on them. “Their betters” are well aware of this game and often keep the hangers on around just because they’re occasionally useful and don’t cost them much, if anything.
It is literally one of the jobs AI is best fitted to kill 🤭
It is all statistics, just like LLMs
We hit rock bottom a long time ago: dealbreaker.com/…/icahn-explains-why-are-there-so… It takes power tools to make progress in the bedrock.
More like they’ll fire you for not babysitting it, then hire some “techy” dudebro at half the wage to keep babysitting it until they get the prompts right (by sheer dumb luck), then fire the dudebro.
Yes, hence the “sheer dumb luck” comment.
But I understand what you’re saying.
The dudebro doesn’t know how to program, they’ll just vibe code all over the place and it won’t be any better.
Yeah, that was the implication.
I like AI, but we are still in the biplane era of development. It will take a long time before it can handle most things, let alone unsupervised.
If Shopify follows through with imitating Musk's stupidity, I expect the company to end up as a case study.
Well, first the CEO is asking for proof of a negative, so anyone with a logical brain cell just has to shake their head and repeat “it’s for the paycheck.”
We can assume CEO means “show me you tried to use AI and it’s not working well enough,” which isn’t all that bad of a directive but it’s got the huge gaps of “do your people really know how to use AI?” and “are they using the correct, latest versions of AI for the task they are attempting?” But, it may stand up a few use cases for AI that would have otherwise used expensive meat sacks to do what must be fairly boring rote recitation work if they can be adequately replaced by AI.
The problem comes when senseless metrics get pushed down that amount to: a certain number of AI projects must be greenlighted, regardless of how dreadful they are in practice.
AI is a tool. It can save labor and relieve human employees of tedious work, but it can't do everything. All this "big personality" top-level management of large and very large organizations via broad-stroke metrics leads to mass stupidity when the underlings blindly follow orders, and I suspect that, within its limitations, AI will always follow orders, so getting AI into middle management will only magnify the idiocracy.
These weird, creepy attempts at onboarding everyone onto AI sound like projecting FOMO onto people, for profit, of course.
Dystopian.
Also:
<img alt="" src="https://lemmy.world/pictrs/image/3b8c665c-682b-47e8-acdb-2ceae82cd668.png">
So is Lütke going to fund the resources needed to validate whether AI will work or not?
I love AI and use it every day, but right now it absolutely lacks logic, even the reasoning models, and thus it really cannot replace a whole person beyond what one prompt can give you, which is not a career.
So basically a CEO
I cannot wait for Shopify to go away. Yet another company that feels like an infestation.
Right?
“Oh you typed in a phone number/email address in a required field? Here’s some spam you never asked for that we want you to confirm so we can continue spamming you, please bro just confirm it bro, just type in the code we sent you bro”
Can we prove AI can do the job of the CEO?
You know what happens when you use too much AI? Some important skills atrophy, and when you need to do the more complex job that the AI can’t do, it will be even harder to do the more complex thing, because you’ve lost some base skills you rely on.
This doesn’t apply only to coding: lucianonooijen.com/…/why-i-stopped-using-ai-code-…