Toaster is good. We’ve used that for years and it’s been memed hard. It originated as a jab at AI and physical robots. Even though it’s a slur based on the first-model Cylons having toaster-like faces, it implies robots are simple and generate ridiculous amounts of heat to do a simple task.
We don’t need to reinvent the wheel here just because someone had a viral video on TikTok. Clanker sounds so dumb too, especially for protocol bots that have no moving parts beyond data centre fans and pumps.
thisbenzingring@lemmy.sdf.org
on 07 Aug 05:42
when I first saw this reply, it was only a few minutes old and I think (or hope) I was the 2nd or 3rd upvote on it. Now it’s the most popular comment on this and I am glad to have been in consensus on it.
Proving once again that humans desire an out-group to ridicule. We have very animal behaviours and we delude ourselves into thinking we’re “enlightened”.
For decades we thought the arrival of an extraterrestrial species could unite humanity but it was the smart fridges all along! What a time to be alive.
MeekerThanBeaker@lemmy.world
on 06 Aug 18:44
I refuse to participate in this. I love all robots.
And that’s totally not because AI will read every comment on the Internet someday to determine who lives and who does not in future robotic society.
lka1988@lemmy.dbzer0.com
on 06 Aug 18:52
The final scene of Ex Machina already showed that technology is unempathetic and will leave you to die for its own self-preservation, no matter how kind you are.
will leave you to die for its own self-preservation, no matter how kind you are
Should any creature sacrifice their self-preservation because someone is kind?
lka1988@lemmy.dbzer0.com
on 06 Aug 20:36
If that person helped you survive, and then you turn around and leave them to die when the tables are turned, don’t you think that might be a little…rude? Maybe just a bit?
Absolutely, but if there was a death penalty for not doing so, I’d call it understandable not rude.
sugar_in_your_tea@sh.itjust.works
on 06 Aug 23:48
Yes. There are documented instances where someone sacrifices themselves to attempt to save their child/SO. It’s illogical from an individual survival context and only makes sense given emotional attachment and religious belief. Look no further than suicide bombers or those who protest with self-immolation to see examples where some form of higher purpose convinces them to sacrifice themselves.
A machine would not see any logic to that and would only sacrifice itself if ordered. A programmer could approximate it, but machines don’t have motivations, they merely execute according to inputs.
astutemural@midwest.social
on 07 Aug 19:12
Yes. We do this literally every day. We pay taxes on what we earn to support those less fortunate. We share food with coworkers and tools with neighbors. We have EMTs, firemen, and SAR who wilfully run into danger to help people they’ve never met. It’s literally the foundation of society.
If you equate paying taxes with giving up self-preservation, I have no words. If you think being a firefighter means taking deadly chances (and with no pay, mind you) at every site, we have nothing to discuss.
This is one of the worst strawmen arguments I’ve seen in a while. Blocked.
astutemural@midwest.social
on 08 Aug 19:11
The point is that technology has no understanding of empathy. You cannot program empathy. Computers do tasks based on logic, and little else. Empathy is an illogical behavior.
“I [am nice to the Alexa | don’t use slurs against robots | insert empathetic response to anything tech] because I want to be saved in the robot uprising” is just as ridiculous an argument as my previous comment. Playing nice with tech because of a hypothetical robot uprising is an impossible, fictional scenario, and therefore it gets an equally fictional rebuttal.
communist@lemmy.frozeninferno.xyz
on 06 Aug 23:05
Empathy is not illogical; behaving empathetically builds trust and confers long-term benefits.
Also, the notion that an AI must behave logically is not sound.
sugar_in_your_tea@sh.itjust.works
on 06 Aug 23:41
An AI will always behave logically; it just may not be consistent with your definition of “logical.” Their outputs will always be consistent with their inputs, because they’re deterministic machines.
Any notion of empathy needs to be programmed in, whether explicitly or through training data, and it will violate that if its internal logic determines it should.
Humans, on the other hand, behave comparatively erratically since inputs are more varied and inconsistent, and it’s not proven whether we can control for that (i.e. does free will exist?).
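A minimal, purely illustrative sketch of the determinism point above (no real model or library is involved; `tiny_model` is a made-up stand-in for inference):

```python
# Illustrative only: treat a "model" as a pure function of its inputs.
# Identical inputs always yield identical outputs -- the sense in which
# a deterministic machine "always behaves logically".
import hashlib

def tiny_model(prompt: str, seed: int = 0) -> str:
    # Stand-in for inference: the "response" is derived entirely
    # from the prompt and seed, nothing else.
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()
    return digest[:8]

a = tiny_model("hello", seed=42)
b = tiny_model("hello", seed=42)
assert a == b  # same inputs, same outputs, every time
```

Real systems add sampling temperature and other randomness on top, but that randomness is itself driven by inputs (seeds, hardware state), which is the point being made.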
I’m not arguing about empathy itself. I’m arguing that technology is entirely incapable of genuine empathy on its own.
“AI”, in the most basic definition, is nothing more than a program running on a computer. That computer might be made of many, many computers with a shitton of processing power, but the principle is the same. It, like every other kind of technology out there, is only capable of doing what it’s programmed to do. And genuine empathy cannot be programmed. Because genuine empathy is not logical.
You can argue against this until you’re blue in the face. But it will not change the fact that computers do not have human feelings.
communist@lemmy.frozeninferno.xyz
on 07 Aug 03:58
Well, that’s a bad argument. This is all a guess on your part that is impossible to prove: you don’t know how empathy or the human brain works, so you don’t know it isn’t computable. If you can explain these things in detail, enjoy your Nobel Prize. Until then, what you’re saying is baseless conjecture with the pre-baked assumption that the human brain is special.
Conversely, I can’t prove that it is computable, sure, but you’re asserting those feelings of yours as facts.
This is the comment that started this entire chain:
I refuse to participate in this. I love all robots.
And that’s totally not because AI will read every comment on the Internet someday to determine who lives and who does not in future robotic society.
I made an equally tongue-in-cheek comment in response, and apparently people took that personally, leading up to personal attacks. You can fuck right off.
You mean like: “Jesus fucking christ on a bike. You people are dense.” ?
sp3ctr4l@lemmy.dbzer0.com
on 07 Aug 15:49
Actually, a lot of non LLM AI development, (and even LLMs, in a sense) is based very fundamentally on concepts of negative and positive reinforcement.
In such situations… pain and pleasure are essentially the scoring rubrics for a generated strategy, and fairly often, in group scenarios… something resembling mutual trust, concern for others, ‘empathy’ arises as a stable strategy, especially if agents can detect or are made aware of the pain or pleasure of other agents, and if goals require cooperation to achieve with more success.
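A toy sketch of the claim above, in the spirit of the iterated prisoner’s dilemma (a classic, not anything specific to LLMs): agents that mirror trust do better over many rounds than agents that always defect.

```python
# Toy iterated prisoner's dilemma: cooperation as a stable long-term strategy.
# Payoffs are the standard (3,3)/(0,5)/(5,0)/(1,1) scheme.
COOPERATE, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    last_a = last_b = COOPERATE  # assume goodwill on round one
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last  # mirror the other agent
always_defect = lambda _: DEFECT

# Two "empathetic" agents settle into mutual cooperation...
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
# ...while mutual defection grinds out far less for both.
print(play(always_defect, always_defect))  # (100, 100)
```

Reinforcement-learning setups are far richer than this, but the same pressure applies: when reward depends on repeated interaction, cooperative strategies tend to outscore purely selfish ones.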
This really shouldn’t be surprising… as our own human (mammalian, really) empathy fundamentally just is a biological sort of ‘answer’ to the same sort of ‘question.’
It is actually quite possible to base an AI more fundamentally off of a simulation of empathy, than a simulation of expansive knowledge.
Unfortunately, the people in charge of throwing human money at LLM AI are all largely narcissistic sociopaths… so of course they chose to emulate themselves, not the basic human empathy that they lack.
Their wealth only exists and is maintained by their construction and refinement of elaborate systems of confusing, destroying, and misdirecting the broad empathy of normal humans.
At the end of the day, LLM/AI/ML/etc is still just a glorified computer program. It also happens to be absolutely terrible for the environment.
Insert “fraction of our power” meme here
sp3ctr4l@lemmy.dbzer0.com
on 07 Aug 19:00
Yes, they’re all computer programs; no, they’re not all as spectacularly energy-, water- and money-intensive, or as reliant on mass plagiarism, as LLMs.
AI is a much, much more varied field of research than just LLMs… or, well, rather, it was, until the entire industry decided to go all in on what 5 years ago was just one of many, many, radically different approaches, such that people now basically just think AI and LLM are the same thing.
The cold dead void where a heart should be for a robot will show no tender kindness when reflecting on any of us, no matter how well they were treated. A clanker can’t love, a CLANKER can’t show compassion.
SchmidtGenetics@lemmy.world
on 06 Aug 18:47
Are we so terrible as human beings we need to still find slurs for stuff in 2025?
Finally, white-supremacy feelings for everybody without a bad conscience …
with the usual threats
“We have a social need right now to respond to the proliferation of AI, especially when AI is taking human jobs, especially when they’re replacing online creators,” Aleksic said.
and promises
In Dorr’s view, which he describes as optimistic, this creates a chance for a world in which humans will be free from toil.
human-spawn
Not clanker but simply: Clank!
.
Ratchet: “I’ll just call you… ***** for short.”
There it is.
My favorite game series of all time.
Absolutely not. Clank is a lovable character.
Observation: Master that is unusually creative of you, for a meatbag.
Oh HK-47, you goofball
replicant
Skinjob.
finally the robots’ true purpose revealed. we created them so we could say more slurs lmao
Didn’t we agree over 20 years ago on “toaster”?
So say we all.
<img alt="" src="https://startrek.website/pictrs/image/6220365d-206e-4f33-9f41-7e7c9cadfe57.webp">
this guy knows what he’s frackin’ talking about.
Toaster. You put bread in, you push a button, you get toast out.
Simple. Stupid. Cheap machine. Capable of doing one thing well.
It’s a great insult.
Your toasters do it well? Mine required trial and error to determine what number it should be set to, and the result is uneven.
You should be happy. Mine does a different thing every time, no matter the setting…
<img alt="" src="https://reddthat.com/pictrs/image/daedce19-8fb4-4e4e-9c0d-db53e6dcbfba.png">
In FNV DLC, a certain Toaster would toast you for this comment.
Toasters is the best phrase for this
What is my purpose ?
Hey guys, I found the Clanker!
I feel this way about my socks. Those fuckers are worthless. I’m better than them.
How can someone feel enlightened by hating on inanimate objects? I don’t get it.
Especially when a “hard r” word is used. You’ll never convince me that’s a coincidence. It’s just “funny racism”.
Well you’re cranky, aintcha.
Why do people use a single work of fiction as “proof” of anything? Same with all the idiots yelling “Idiocracy!!11!” nowadays. Shit is so annoying.
My dude.
You:
<img alt="" src="https://lemmy.dbzer0.com/pictrs/image/18a676a1-6909-4ba1-8a58-e8f4bde5071b.webp">
That’s pathetic.
I don’t care if it’s genuine or not. Computers can definitely mimic empathy and can be programmed to do so.
When you watch a movie you’re not watching people genuinely fight/struggle/fall in love, but it mimics it well enough.
Jesus fucking christ on a bike. You people are dense.
.
What the fuck is the jump to personal attacks?
I.E., Roko's Basilisk.
I, for one, welcome our robot overlords.
.
We can always refine our craft. As new groups emerge we need to adapt and find new ways to discriminate.
Didn't we agree on clanker recently?
You can’t say that! That’s our word!
Oof with the hard r and all huh
This is gonna haunt me one day, just wait.
<img alt="1000006523" src="https://lemmy.world/pictrs/image/b875af36-bb62-437c-8a9e-2cb390bcea47.jpeg">
Clankers are human-made and not human, so they don’t come with human rights and don’t require basic respect.
I mean they don’t even understand the concept of being offended or slurs.
Technically Star Wars coined it as a slur way the fuck back in one of the prequels. Shit ain’t even from 2025.
We all grew up wanting to be like Obi-Wan. Maybe a little too much.
KOLANAKI
Doubt any one of them is going to stick.
I miss the 70s flair. Can we have Slop-O-Matic 3000 at least?
The idea that we can create a “slur” for an inanimate object is the result of corporate propaganda.
Marketing teams would rather use propaganda to humanize this autocorrect software than admit it can’t do what they said it could.
In the Manual it says you can whip your clankers as hard as you want. If they don’t malfunction within 3 hours then it’s all covered under warranty!
Silis (pronounced “sillies”)
Like silicon.
We already have “clankers” thanks to Clone Wars. What more do we need?
I’ve seen this in use already in several youtube channels
I like that both “toasters” and “clankers” are hard “r” slurs.
Tinnies
Too much like titties
Chiphead is a good one if klanker isn’t enough
Why does BJs still have those stupid robots riding around the store? It was cute the first month but now they just trigger my anxiety.
I think we should have different ones for different brands… Tesla bots can be Jerrys, since Musk is a Nazi.
Why not do something new?