latenightnoir@lemmy.blahaj.zone
on 07 Oct 16:29
This is how AI will take over… not through wars or competence, but by being better at bureaucratic forgeries…
Edit: well, I guess the apple never falls far from the tree, as it were! Wa-hey! We wanted to create the ultimate worker, but we’ve managed to create the ultimate politician instead=))
This.
AI politicians might be the move after next.
Corporate personhood (you are here) ->
Corporation self advocates ->
Corporations run for office
I don’t like this future. I’d like to go back.
I hate to break it to you….
www.bbc.com/news/articles/cm2znzgwj3xo
I said I DON’T LIKE THIS FUTURE. I’D LIKE TO GO BACK
Lucidlethargy@sh.itjust.works
on 08 Oct 06:38
It’s easy when the first line of every reply is “oh, you’re so goddamn smart. Holy shit, are you the smartest person in the world for asking that question?..”
youtu.be/TuEKb9Ktqhc
tidderuuf@lemmy.world
on 07 Oct 16:38
Knowing the way our country is going, I would expect that in the end workers will have to pay an AI tax on their income and most workers will start working 50 hours a week.
I like your optimism that it won’t be worse than that. 😋
Grandwolf319@sh.itjust.works
on 07 Oct 17:10
If you have a job where you can be confidently wrong without any self-awareness after the fact, then yeah, I guess.
But I can’t think of many jobs like that except something that is mostly just politics.
Blackfeathr@lemmy.world
on 07 Oct 17:34
Don’t forget the vast majority of CEOs.
thisbenzingring@lemmy.sdf.org
on 07 Oct 19:09
IMO AI would probably do the job of CEO better than a human. It wouldn’t be as greedy and would be happy with any growth while being humble enough to make decisions that might be personally embarrassing
WanderingThoughts@europe.pub
on 07 Oct 18:31
Spam and astroturfing mostly.
expatriado@lemmy.world
on 07 Oct 17:11
the onion? looks like ai already took adviser to congress jobs
Kyle_The_G@lemmy.world
on 07 Oct 17:24
and then 115 million will be needed to unwind the half-assed implementation and inevitable damage.
Zephorah@discuss.online
on 07 Oct 17:30
Thus demonstrating the crux of the issue.
I was just looking for the name of a historical figure associated with the Declaration of Independence but not involved in the writing of it. Elizabeth Powel. Once I knew the name, I went through the ai to see how fast they’d get it. Duck.ai confidently gave me 9 different names, including people who were born in 1776 or soon thereafter and could not have been historically involved in any of it. I even said not married to any of the writers and kept getting Abigail Adams and the journalist, Goddard. It was continually distracted by “prominent woman” and would give Elizabeth Cady Stanton instead. Twice.
Finally, I gave the ai a portrait. It took the ai three tries to get the name from the portrait, and the portrait is the most used one under the images tab.
It was very sad. I strongly encourage everyone to test the ai. Easy-to-grab wikis that would be top of the search anyway are making the ai look good.
LOL Maybe AI will be the next big job creator. The AI solves a task super fast, but a human has to sort out the mistakes, and spends twice as long doing that as it would have taken to just do the task themselves.
DarkDarkHouse@lemmy.sdf.org
on 08 Oct 03:53
This is what’s happening in computer programming. The booming subfield is apparently slop cleaning.
If you understand how LLMs work, that’s not surprising.
LLMs generate a sequence of words that makes sense in context. They’re trained on trillions(?) of words from books, Wikipedia, etc. In most of the training material, when someone asks “what’s the name of the person who did X?” there’s an answer, and that answer isn’t “I have no fucking clue”.
Now, if it were trained on a whole new corpus of data that had “I have no fucking clue” a lot more often, it would see that as a reasonable thing to print, and you’d get that answer a lot more often. However, it doesn’t actually understand anything; it just generates sequences of believable words. So it wouldn’t generate “I have no fucking clue” when it doesn’t know something, it would just generate it occasionally, whenever that seemed like an appropriate time. You’d ask “Who was the first president of the USA?” and it would sometimes say “I have no fucking clue”, because that’s sometimes what the training data says a response looks like for a question of that form.
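To make the “it just generates sequences of believable words” point concrete, here’s a toy sketch of next-token sampling; the tiny vocabulary and the probabilities are made up for illustration, not taken from any real model:

```python
# Toy next-token sampler. A real LLM produces a distribution like this
# over ~100k tokens at every step; there is no "do I actually know this?"
# check anywhere, only relative frequencies learned from the training data.
import random

next_token_probs = {
    "Washington": 0.92,              # well-attested answer dominates
    "Adams": 0.05,
    "Jefferson": 0.02,
    "I have no fucking clue": 0.01,  # rare in training data, so rarely sampled
}

def sample(probs: dict[str, float]) -> str:
    # Sample proportionally to probability, exactly as described above.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(sample(next_token_probs))  # usually "Washington", occasionally anything else
```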
I wouldn’t put it entirely outside the realm of possibility, but I think that’s probably unlikely.
The entire US only has about 161 million people working at the moment. In order for a 97-million-job shift to happen, you’d have to transition most human-done work in the US to machines, using one particular technology, in 10 years.
Is that technically possible? I mean, theoretically.
I’m pretty sure that to do something like that, you’d need AGI. Then you’d need to build systems that leveraged it. Then you’d need to get it deployed.
What we have today is most certainly not AGI. And I suspect that we’re still some ways from developing AGI. So we aren’t even at Step 1 of that three-part process, and I would not at all be surprised if AGI turns out to be a gradual development process rather than a “Eureka” moment.
I think the fad will die down a bit when companies figure out that AI is more likely than humans to make very expensive mistakes that the company has to pay for, and that “it was the AI” is not a valid cop-out.
I foresee companies going bankrupt on that account.
It doesn’t help to save $100k by cutting an employee if the AI causes damages for 10 or 100 times that amount.
When the bubble bursts, whoever is left standing is going to have to jack prices through the roof to put so much as a dent in their outlay. Their outlay so far. Can’t see many companies hanging in there at that point.
Not if the IP is purchased by another company leaving the original saddled with the debt, or spun off so the parent company can rebuy it thusly, or the government bails them out, or buys it to be the State AI too, or a bunch of other scenarios in this dark new world ahead.
That’s my favorite part. All the stolen IP.
simplejack@lemmy.world
on 07 Oct 20:43
Agreed, but I do think that some jobs are just going to be gone.
For example, low level CS agents. I worked for a company that replaced that first line of CS defense with a bot, and the end-of-call customer satisfaction scores went up.
I can think of a few other things in my company that had a similar outcome. If the role is gone, and the customers and employees are being served even better than when they had that support role, that role ain’t coming back.
I’m pretty sure customer service is also an area where I saw a computer make an expensive mistake: it promised the customer something very expensive, and a court decided the company had to honor the agreement the AI made. But I can’t find the story, because I’m flooded with product-placement articles about how wonderful AI is at saving costs in CS.
But yes, CS is absolutely an area where AI is massively pushed.
Not sure if it’s the one you are referring to, but AI gave discounts on flights.
No, it was a single case of reparation for damages of some sort, where the company wouldn’t honor the deal but lost the case in court.
But thanks for finding a concrete example of how AI makes stupid mistakes; this airline case looks like the bot suffered from the infamous hallucinations they are prone to.
And it’s absolutely disgusting that the airline won’t honor it, and instead makes all sorts of claims about when and where the complaint needs to be filed.
Very good example of the principle of what I meant, although it’s only a few hundred dollars.
The court honored it for now. I expect in the future it will be your problem.
Oh, but the EU?
Once they are done with North America, the EU will be a non-issue for them.
Oh, 100%. The question will be whether more opportunities come from it. Here’s my guess: if you can’t produce something interesting you will be fighting for scraps. Even that might not be good enough.
I put my money on the AI Act here in Europe and the willingness of local authorities to make a few examples.
That would help bring some accountability here and there and stir the pot a bit.
Eventually, as AI commoditizes, it will be less in the light. That will also help.
Good podcast about this bubble bursting: craphound.com/…/the-real-economic-ai-apocalypse-i…
OpenAI is pets.com. It has fairly crappy models in a very strong competition for models. The only difference from pets.com is that the US government is behind it to make Skynet for Israel’s control. Datacenters are meant to develop Skynet. The only pretense of economic strength in the US is the datacenter economy, and Skynet for Israel is an absolute mission for the US government.
Despite there being no possible business economics for the datacenter model, the sheer will behind Skynet for Israel ensures that there is no imminent bubble pop. Perplexity and Coreweave may get sacrificed, though.
Still, GPUs and specialized AI GPUs are here to stay, even if sales forecasts may be too high. Open-weight models are awesome. Smaller models can be trained after quantization to domain specialization, with hardware for small enough models accessible to individuals and businesses. The fatal flaw of using datacenter providers is that their purpose is to provide Skynet for Israel, to steal any information that might help in the process, and then to terminate/genocide anyone who would stand in their way.
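Aside: the “smaller models trained after quantization to domain specialization” workflow is roughly what QLoRA-style fine-tuning does. A minimal sketch, assuming the Hugging Face transformers/peft/bitsandbytes stack and a CUDA GPU; the model name is just a stand-in open-weight model:

```python
# Sketch: load an open-weight model 4-bit quantized, then attach small
# trainable LoRA adapters for domain specialization. The frozen base model
# stays quantized; only the tiny adapter weights train.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # stand-in; any open-weight causal LM works
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model
```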
thisbenzingring@lemmy.sdf.org
on 07 Oct 19:05
funny… i expected IT workers to be in that list but we’re not. AI couldn’t do my job but it could be my boss and that frightens me.
I drove Amazon Flex during Covid; having an AI as your boss is deeply and perpetually unsettling, but ultimately doable! Just do what the push notification tells you to do. If you want to say something to your boss, use the feedback form on the corporate website. So simple.
sexy_peach@feddit.org
on 07 Oct 20:36
What do you do?
thisbenzingring@lemmy.sdf.org
on 08 Oct 01:39
what don’t I do… some days… I tell you. My job is Systems Administrator
marshallbrain.com/manna1
I’m thinking William Gibson probably gets it right with the Neuromancer story
And over the next 50 years it will take 485 million jobs, and the unemployment rate will be 235%.
And we’ll all be dead.
Here’s hoping!
explodicle@sh.itjust.works
on 08 Oct 14:48
LOLLLLLLLL that’s like a third of the US population. Probably half of the number currently employed. There’s no way in hell this useless garbage will take 1/3 to 1/2 of all jobs. Companies that do this will go out of business fast.
You can tell how competent someone is at something by how good they think AI is at that thing.
This is so true.
I recently had a colleague - ignorant of this perspective - give a training presentation on using AI to update a kind of useless, bullshit-job document.
Dozens of peers attended their presentation. They spent 40 minutes demonstrating relatively mindless prompt inputting.
I keep remembering just how many people they shared their AI enthusiasm with.
I think they may honestly believe that AI has democratized the workplace, and that they will vibe code their way to successful startup CEO-ship in a year.
I also find it interesting how whenever I’ve expressed the above sentiment either here or on the Other Place, the up/downvote ratios seem to vary massively depending on the tech-bro quotient of the group. I’m mildly surprised to see it go entirely positive in a community called “technology”.
TankovayaDiviziya@lemmy.world
on 09 Oct 01:21
And this 1/3 is a perfect horde for fascist brainwashing and for consolidating the power of the techno-fascists. The fascists will tell the jobless that immigrants took their jobs, not robots.
CosmoNova@lemmy.world
on 07 Oct 23:07
Why even post this here? This is politics BS that‘s used as a diversion from the Epstein files and the government shutdown, which again only happened so they don‘t vote on the Epstein files.
The Epstein files are a distraction from dismantling our constitutional law. What laws are you going to try the pedos under? Which courts do you plan on using? You see where I’m going with this? We all know who’s on the list; who’s gonna hold them accountable? No one, thus it’s a stupid distraction.
SabinStargem@lemmy.today
on 08 Oct 04:46
I don’t think the numbers themselves are that important; the key bit is that AI is a technology that will keep advancing over this century. If we don’t rework our society to account for the oncoming future, people will get run over.
If there is an overhaul of my nation’s Constitution, I would like economics to be addressed. One such thing would be a mechanical ruleset that adjusts the amount of wealth and assets a company can hold according to employee headcount. If they downsize the number of working humans, their limit goes down. They can opt to join a lotto program that grants UBI to people whose occupations are displaced by AI, and each income that is lotto’ed by the company adds to its Capital Asset Limit.
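A toy sketch of that proposed mechanic, with made-up numbers purely to make the rule concrete:

```python
# Hypothetical "Capital Asset Limit" rule from the comment above:
# a company's allowed holdings scale with human headcount, plus a bonus
# for each UBI income it sponsors through the lotto program.
BASE_LIMIT_PER_EMPLOYEE = 2_000_000   # made-up figure
BONUS_PER_SPONSORED_UBI = 500_000     # made-up figure

def capital_asset_limit(employees: int, sponsored_ubi_incomes: int) -> int:
    return (employees * BASE_LIMIT_PER_EMPLOYEE
            + sponsored_ubi_incomes * BONUS_PER_SPONSORED_UBI)

# Downsizing 100 humans shrinks the cap; sponsoring 100 UBI incomes
# claws part of it back, as the comment describes.
print(capital_asset_limit(1_000, 0))   # 2,000,000,000
print(capital_asset_limit(900, 100))   # 1,850,000,000
```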
HeyThisIsntTheYMCA@lemmy.world
on 08 Oct 05:26
One such thing would be a mechanical ruleset that adjusts the amount of wealth and assets a company can hold, according to employee headcount.
Expert here. That’s a bad idea. Example: a small law firm, 10 employees including owners/partners/I don’t care how they’re organized. They have 3 bank accounts: their payroll account, their operating fund (where all their non-payroll expenditures are made), and their client liability account. None of the money in that last account is actually theirs; they just hold it while waiting for clients to cash their settlement checks.
Proportionally, at least at the firm I’ve consulted with, their client liability account is several orders of magnitude larger than either of the other accounts. Technically the money isn’t theirs, they are just custodians, and the interest from that account is their bar association dues.
My point is, certain asset caps may look appropriate for one industry and simultaneously be absolutely disruptive to others.
SabinStargem@lemmy.today
on 08 Oct 05:46
In that case, what would you believe to be an appropriate solution for your industry? I would like your viewpoint; it might refine my concept a bit further*.
*My approach assumes a scenario that can broadly be described as ‘What if FDR failed to save capitalism?’, or a total breakdown of the economic reality we know. That is the sort of thing that the Framers of America did when they made the Constitution. They formalized rules on preventing absolute political power, so I am looking for something similar regarding economic gaps.
ShittDickk@lemmy.world
on 08 Oct 06:05
I’ve thought a good one would be to have all publicly traded businesses allocate 15% equity to employees, and require a seat on the board for an employee-elected representative. Employees should be allowed to vote to sell off a certain amount every quarter, and any stock buybacks would go into the employee pool until the 15% is reached again.
I have a feeling that this is different from Beeg’s concern about client assets, but more about employee influence over the company? The idea of an equity limit might be a good addition to the Universal Ranked Income concept that I have cobbled together. Thank you. :)
In any case, my notes have two things about my own take:
1: Employees can vote for whether someone can obtain and retain their leadership position within their chapter and for higher rungs of the organization. Also, the pay grade of those leaders. Employees who are fired or retired from the company will receive 1:1 retirement pay over time, equal to the days and pay grade that they worked at the company, and can vote on any position of the company or those it has merged with. This essentially means that legacy employees can determine the leadership of the company, and cannot be made to ‘go away’ in a political sense.
2: Stocks, when sold, have two components. The first is that they pay out an amount, over a fixed time, that is more than what was paid for them. A share cannot be sold nor traded until it has been exhausted of this payback value. When exhausted of value, the share can be traded to another individual for money, or returned to the company for the value it was bought at. The company cannot refuse the return, nor offer an increased price. A share returned to the company can be reissued, which allows it to start paying the fixed value again. Secondly, people who hold a share can vote for company leadership*. People in leadership positions at the company cannot own stock in their own organization.
By requiring stocks to be held for a certain time before they can be traded, it makes it harder for stockholders to hoard and dispose of stocks when convenient. The gradual payout is a reward to people who buy stocks from the company. Presumably, the inability of stocks to have guaranteed value when they become tradeable will promote their return to the company.
*It is assumed that we are operating within an economic system where there are absolute wealth and asset caps. There are only so many shares a person can possess, and holding shares can prevent someone from owning a yacht or a bigger house - they have to lose the shares to make room within the cap for things they enjoy. This helps limit the influence of individual stockholders.
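A small sketch of that two-phase share lifecycle; it just encodes the rules described above with hypothetical figures, not any existing system:

```python
# Sketch of the proposed two-phase share (all figures made up).
class Share:
    def __init__(self, issue_price: float, payback_total: float):
        self.issue_price = issue_price           # company must refund this on return
        self.remaining_payback = payback_total   # guaranteed payout, > issue_price

    @property
    def tradeable(self) -> bool:
        # A share can only change hands once its payback value is exhausted.
        return self.remaining_payback <= 0

    def pay_installment(self, amount: float) -> float:
        # The fixed payout over time described in the comment.
        paid = min(amount, self.remaining_payback)
        self.remaining_payback -= paid
        return paid

    def return_to_company(self) -> float:
        # Company must accept the return at issue price, no more, no less.
        assert self.tradeable, "cannot return before payback is exhausted"
        return self.issue_price

s = Share(issue_price=100.0, payback_total=120.0)
for _ in range(12):
    s.pay_installment(10.0)
print(s.tradeable)  # True: the share can now be traded or returned
```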
HeyThisIsntTheYMCA@lemmy.world
on 08 Oct 07:15
There’s a few ways to account for it. i mean, if you are doing Net Assets (Assets - Liabilities), that’s just Equity, and having a limit on the total Equity a business is able to carry at specific sizes feels like it’s incentivizing the wrong things.
It’s kind of interesting to see the changes in investment rates that happened when the tax rate dropped from 90% on anything over a million in annual income. People would essentially buy losses (invest in businesses that were struggling) in order to keep from having to pay the government more. So struggling businesses got a little more capital to survive. Simply changing the top personal/corp income tax rate to something draconian at some arbitrarily high amount can have transformative effects on a society. that’s where i’d start.
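A quick back-of-the-envelope on that incentive, using a simplified 90%-over-$1M bracket rather than the actual historical schedule:

```python
# With a 90% marginal rate on income above $1,000,000, the owner of a
# top-bracket dollar keeps only 10 cents of it as income.
TOP_RATE = 0.90
top_bracket_income = 1_000_000

kept_as_income = top_bracket_income * (1 - TOP_RATE)
print(kept_as_income)  # $100,000 kept out of $1,000,000

# So redirecting that $1,000,000 into a struggling business instead of
# taking it as income only "costs" the $100,000 that would have been kept;
# a simplified version of the "buying losses" behavior described above.
```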
survirtual@lemmy.world
on 08 Oct 10:42
What is it you’re an expert of, here? Game theory? Or do you mean you’re a lawyer?
If you’re a lawyer, you are not an expert on formulating a society. We’ve let lawyers run things for a long time and look at where it’s gotten us.
The system needs to promote positive, human centric outcomes. Maybe having clients with that much wealth isn’t fundamentally a positive outcome? Perhaps that idea needs to be reworked as a part of the oncoming changes?
In other words, anyone dealing with a certain threshold of wealth needs to hire human beings in order to raise their cap. I like this idea a lot actually. The bigger the clients, the more they have to pay if they want legal representation. For billionaires, legal representation would cost an absolute fortune and provide income to thousands of people.
Honestly I haven’t thought of this pattern but the more I think about it, the better it seems.
HeyThisIsntTheYMCA@lemmy.world
on 08 Oct 16:14
Maybe having clients with that much wealth isn’t fundamentally a positive outcome?
let’s remove the ability of people to sue for damages when they’re injured, that’s ALSO a positive societal goal.
where do you think that money came from?
Preferably, yes. Ideally, we are all insured by a single-payer system and, in the case of an accident, people are compensated via that insurance.
No legal bank account needed.
Next point?
oh, you want to argue. accidents are a very small subset of legal injuries
I am not looking to argue. I just don’t think there is a future for the law profession in a post-scarcity society. Disagreements will occur and negotiations will exist, but there are better ways to resolve them.
Ideally, lawyers, marketers, bankers, and politicians will no longer be needed. They can all be automated.
HeyThisIsntTheYMCA@lemmy.world
on 08 Oct 23:34
i mean, ideally everything can be automated. the reason we have lawyers is because there is (usually intentional) wiggle room in the law, and people sometimes need more than “society runs better if we honor our word” to act with integrity, follow the law, or put their shopping cart back. some people need the stick of legal repercussions all of the time. automating politicians (unless you are going for a direct democracy, which no one has the time for) concentrates power in the hands of the people maintaining the automation. i agree with you on the other two, but i’m sure i could find justifications for human intervention in their processes if i tried. not to mention there’s a certain amount of ingenuity and talent that AI can’t duplicate. nearly everything i’ve seen that’s AI produced lacks soul.
also, i’m not a lawyer, i am just occasionally an expert witness or forensic analyst for some law firms and have some lawyers in the family. I specialize in one federal and two state titles, but again, i provide analysis, i don’t practice law. my career has spanned four or five marginally related disciplines so i’m not quite sure what to call me
As a more general principle, don’t build nitpicky implementation detail into a strategy document. That’s how you get brainfarts like the 3/5 compromise.
“If there is a massive overhaul, I would like to use this once-in-a-century event to enact minimal changes that will help to keep the capitalist system in place.”
The Senate will decide its fate.
Yes, but god forbid those jobs be stolen by another country. Can’t have that.
Good.
Having machines do the work for us is a good thing.
weirdbeardgame@lemmy.world
on 08 Oct 06:34
Yes, just kill the 96 million people because it’s not like the capitalists are ever going to share what they control and Americans are never going to vote for social safety nets. Not within the next 10 years anyway.
DamnianWayne@lemmy.world
on 08 Oct 10:49
Well my AI says it will take 96 or 98 million jobs, depending on what you want it to say, and only for $5,000.
phutatorius@lemmy.zip
on 08 Oct 10:57
Just look at who’s in charge of the Senate, and ask yourself if they are to be trusted to do anything but lie, steal and carry out witch hunts.
As for LLMs, unless driving contact-centre customer-satisfaction scores even further through the floor counts as an achievement, so far all there’s been is a vast volume of hype and wasted energy, with very little to show for it except some highly constrained point solutions that aren’t significant enough to make an economic impact. Even then, the ROI is questionable.
MonkderVierte@lemmy.zip
on 08 Oct 11:07
Stop calling LLMs AI. It creates false expectations.
The fact that “AI” training off other LLM slop produces worse and worse results is proof there is no “intelligence” going on, just clever parroting.
I.e., made up on the spot.
So they want to keep them terrified of losing their shitty, barely functioning status quo.
The reality is that these are the numbers the Republicans want, because they’re the numbers their billionaire owners want. ChatGPT is just accidentally letting us know how they’ve poisoned the models.
We need to stop with the stupid gimmicks from Bernie. Higher personal, corporate, and investment taxes to fund UBI. Welcome robots/automation to free us from useless work instead of looking at cannibal solutions to “pick me” for the one job there is.
Robot taxes are wrongheaded, because automation is hard to define. Tax pipes and wires and “full employment” becomes getting all your energy and water with buckets from the river and chopping down all the trees. Even if we strained to define narrow robot/automation categories, it would encourage more foreign production and no local robot-production economy. Why would those selling yachts to the robot owners not be taxed?
sugar_in_your_tea@sh.itjust.works
on 09 Oct 01:46
We don’t necessarily need higher taxes; we could probably put an income cap on SS benefits, remove the cap on SS taxes, and fund it with the excess.
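Rough numbers for that idea; the 6.2% employee rate and the ~$168,600 wage base are the 2024 figures, used here as assumptions:

```python
# US Social Security payroll tax: 6.2% employee share, collected only up to
# a wage base (~$168,600 in 2024). Removing that cap taxes all wage income.
RATE = 0.062
WAGE_BASE_2024 = 168_600

def ss_tax(wages: float, cap: float | None = WAGE_BASE_2024) -> float:
    taxable = wages if cap is None else min(wages, cap)
    return taxable * RATE

print(ss_tax(1_000_000))            # ~$10,453 under current law
print(ss_tax(1_000_000, cap=None))  # $62,000 with the cap removed
```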