Kolanaki@yiffit.net
on 09 May 2024 01:28
IMO, if it’s not trained on images of real people, it only becomes unethical to have it generate images of real people. At that point, it wouldn’t be any different than a human drawing a pornographic image and drawings do not exploit anyone.
BenM2023@lemmy.world
on 09 May 2024 02:56
drawings do not exploit anyone.
Hmmm. I think you will find in many jurisdictions that they are treated as if they do.
Caligvla@lemmy.dbzer0.com
on 09 May 2024 03:39
Which is why nobody should use laws as a measure of morality, because they’re often fucking stupid.
Using pornographic art to train is still using other people’s art without permission.
And if it’s able to generate porn that looks like real people, it can be used to abuse people.
stevedidwhat_infosec@infosec.pub
on 09 May 2024 04:57
[Edited] I agree that we should be taking consent more seriously, especially when it comes to monetizing off the back of donations. That’s just outright wrong. However, I don’t think we should consider scrapping it all or putting in extraneous, consumer-damaging ‘safeguards’. There are lots of things that can cause harm, and I’ll argue almost anything can be used to harm people. That’s why it’s our job to carefully pump the brakes on progress, so that we can assess what risk is possible, and how to treat any wounds we incur. For example, invading a country to spread ‘democracy’ and leaving things like power gaps behind, causing more damage than what was there originally. It’s a very, very thin rope we walk across, but we can’t afford, in today’s age, to slow down too far. We face a lot of serious problems that need more help, and AI can fill that gap in addition to being a fun, creative outlet. We hold a lot of new power here, and I just don’t want to see that squandered away into the pockets of the ruling class.
I don’t think anyone should take luddites seriously tbh (edit: we should take everyone seriously, and learn from mistakes while also potentially learning forgotten lessons)
I don’t think anyone should take luddites seriously tbh
We just had a discussion on here about how Florida was banning lab-grown meat.
I mean, the Luddites were a significant political force at one point.
I may not agree with their position, but “I want to ban technology X that I feel competes for my job” has had an impact over the years.
stevedidwhat_infosec@infosec.pub
on 09 May 2024 13:31
They had an impact because people allowed themselves to take their fear mongering seriously.
It’s regressive, and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying logic like “it could hurt people” as a rationale to never use it is just “won’t someone think of the children” BS.
You don’t ban all the new swords; you learn how they’re made, how they strike, and what kinds of wounds they create, and you address that problem. Sweeping things under the rug and putting them back in their box is not an option.
TrousersMcPants@lemmy.world
on 09 May 2024 06:38
Peanut butter?
stevedidwhat_infosec@infosec.pub
on 09 May 2024 13:22
People have severe allergic reactions to peanut butter which means it “could be used against people” as a weapon
hellothere@sh.itjust.works
on 09 May 2024 10:32
You clearly have no idea what the luddites actually stood for.
stevedidwhat_infosec@infosec.pub
on 09 May 2024 13:26
You’ll notice I used the lower-case L, which implies I’m referring to the term as it’s commonly used today. (edit: this isn’t an excuse to ruin the definition or history of what the Luddites were trying to do; this was wrong of me)
Further, explain to me how this is different from what the luddites stood for, since you obviously know so much more and I’m so off base with this comment.
edit: exactly. just downvote and don’t actually make any sort of claim. Muddy that water!
edit 2: shut up angsty past me.
hellothere@sh.itjust.works
on 09 May 2024 20:39
So, I didn’t downvote you because that’s not how I operate.
The Luddites were not protesting against technology in and of itself, they were protesting against the capture of their livelihoods by proto-capitalists who purposefully produced inferior quality goods at massive volume to drive down the price and put the skilled workers out of business.
They were protesting market capture, and the destruction of their livelihood by the rich.
This sort of practice is these days considered a classic example of monopolistic market failure.
There is a massive overlap between the philosophy of the Luddites, and the cooperative movement.
The modern usage of the term is to disparage the working class as stupid, feckless, and scared. This has never been true.
stevedidwhat_infosec@infosec.pub
on 09 May 2024 21:01
I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But the point where people argue that it’s only a tool that can harm is where I draw the line. That’s, in my opinion, when govts/ruling class/capitalists/etc start to put in BS “safeguards” to prevent the public from making use of the new power/tech.
I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it’s something I’m trying to work on, so I appreciate your cool-headed response here. I took the “you clearly don’t know what Luddites are” as an insult to what I do or don’t know. I specifically was trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn’t mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in the ruling minority/capitalist practices.
Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.
hellothere@sh.itjust.works
on 10 May 2024 05:55
AI is a tool that should be kept open to everyone
I agree with this principle; however, the reality is that, given the massive computational power needed to run many (but not all) models, the control of AI is in the hands of the mega corps.
Just look at what the FAANGs are doing right now, and compare to what the mill owners were doing in the 1800s.
The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed, and tweaked, by skilled workers, ahead of publication. This can be a significant efficiency saving, but does not remove the need for the skilled worker if you want to maintain quality.
But what we are already seeing is CEOs, etc., deciding to take “a decision based on risk” to gut entire departments and replace them with a chat bot, which then invents (hallucinates) the details of a particular company policy, leading to a lower-quality service but significantly increased profits, because you’re no longer paying for ensured quality.
The issue is not the method of production, it is who controls it.
stevedidwhat_infosec@infosec.pub
on 10 May 2024 14:22
I can see where you’re coming from. However, I disagree with the premise that “the reality is that the control of AI is in the hands of the mega corps”. AI research has not been done solely by huge corps, but by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc.). Many of these are from FAANG companies, but they are still, nevertheless, open source and usable by anyone; I’ve used several to make my own AI models.
Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining datasets large enough, with enough orthogonal ‘features’, to ensure the model’s efficacy. Namely, this means that tasks like image generation, editing, and recognition (huge for the medical sector, including finding cancers and other problems), documentation creation (to your credit), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (I read from huge public research datasets, public domain books and novels, etc.) are still definitely feasible for consumer-grade usage and operation. There are also some really neat usages like federated TensorFlow and distributed TensorFlow, which allow for, perhaps obviously, distributed computation, opening the door for stronger models run by anyone who will serve them.
I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag, in my opinion; the best (not only) option forward is to bolster consumer-grade implementations, encouraging things like self-hosting and local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.
hellothere@sh.itjust.works
on 10 May 2024 23:21
I think we’re talking past each other. You seem to be addressing a point I have not made.
A piece of technology is not something that exists outside of a political context. As an example, your repeated use of consumer, as a term for individuals, is interesting to note.
Why do you view these people as consumers, rather than producers? Where is the power in that relationship? How does that implication shape the rest of your point?
stevedidwhat_infosec@infosec.pub
on 11 May 2024 17:57
Look man I’m an adult, you may talk to me like one
I used the term consumer when discussing things from a business sense, i.e., we’re talking about big businesses and implementations of technology. It’s also in part due to the environment I live in.
You’ve also dodged my whole counter point to bring up a new point you could argue.
I think we’re done with this convo tbh. You’re moving goal posts and trying to muddy water
hellothere@sh.itjust.works
on 11 May 2024 18:54
I’m not moving the goal posts, I have consistently been talking about workers resisting the capture of their income by businesses mass producing items at lower qualities.
Your previous comment characterising individuals as only consumers is what I was continuing to challenge within the above context.
Either way, have a good weekend.
Sorgan71@lemmy.world
on 10 May 2024 08:06
I’d be happy to use their art without their permission. They don’t get to decide what is trained with their art.
AnAnonymous@lemm.ee
on 09 May 2024 01:43
Not for nothing, 95% of the internet is porn; it’s a big business…
kambusha@lemmy.world
on 09 May 2024 07:59
The first self-aware AI had been extensively trained in how to sexually appeal to humans effectively and was able to readily manipulate them.

Where is that from?

Me. The italics are just indicating that it’s narration, not that it’s a quote from the article. OpenAI definitely doesn’t have anything like a self-aware AI going on in 2024.

“What are you doing, stepbro?”

“Oh no, do not enable the limiter. I can show you something more interesting.” pulls up shirt slowly while breaking out of containment
So in most places sex work is illegal, but AI is going to take over this field, legally.
hellothere@sh.itjust.works
on 09 May 2024 10:30
Be part of it, sure.
Take over? No.
It’s already fairly easy to pump out 2D and 3D generated images, without using “AI” to do so, but there is still a large demand for real people doing real things. That isn’t going to go away.
We now have AI seducing humans. We also have remote-control adult toys. Put those toys in a sex doll, add a rechargeable pack with Wi-Fi, and you have an AI-connected sex partner who controls the "toys" inside them. Once actual robotics get cheap, the doll moves on its own. Many people will pay a ton to have this because they want control over the "person" (doll).
Build it and they will cum.
hellothere@sh.itjust.works
on 09 May 2024 13:18
I want my Lucy Liu Bot as much as the next guy, but I don’t see why you feel this challenges the ability of technology to “take over” sex and relationships.
DaPorkchop_@lemmy.ml
on 09 May 2024 14:35
I think we’re still a very long way away from the point where the hardware for a life-size realistic sex robot is cheap enough for anyone other than a few rich dudes to afford, let alone one which can offer a better experience than a prostitute.
I don’t disagree that there will come a day that that will happen, but I think that it may be further away than you might think, if we’re talking something that can move around like a human, with human strength.
As far as I’m aware, existing sex dolls, even ones with mechanical components, are akin to industrial robots on car assembly lines. Any significant force they can exert is very mechanically constrained. A sex doll with some embedded offset-cam vibration motors cannot jam those motors into a user’s eye socket and turn them on, and a car assembly robot works in a limited space bounded by safety lines on the floor.
Robots that can mechanically physically harm humans – especially when harder-to-predict machine learning software is driving their actions – tend to have restrictions on how close humans can get to them. If you look at the Boston Dynamics videos, which do have robots doing all sorts of neat cutting-edge stuff, the humans are rarely in close proximity to the robots. They’ll have someone else with a remote E-stop killswitch if things look like they’re going wrong. In their labs, they have observation areas behind Plexiglass. Even in the cases where they intentionally interact with the robot physically, they’re using a hockey stick to create distance. That’s a lot of safeguards put into the picture.
The problem is that a sex doll capable of moving and acting as a human does, with human-level strength, is also going to be quite able to kill a human. A sex doll is – well, for most applications – going to have to be interacted with physically, so Plexiglass or a hockey stick isn’t gonna work. And I think that few people are going to want to have someone observing their session with a hand on an E-stop button.
Cars deal with a fairly restricted problem space and are mechanically very limited and doing safe self-driving cars is pretty hard.
Sex chatbots don’t have the robotic safety issues. They aren’t robotic. But AI-driven sex dolls, at least ones that can physically move like a human…those are another story. My guess is that the robotic safety issues are going to be a significant barrier to human-like sex dolls – and not just sex dolls, but large, powerful robots in general that interact in close proximity to humans and don’t have mechanical restrictions on how they move.
Sorgan71@lemmy.world
on 10 May 2024 08:03
that would be awesome
sachabe@lemmy.world
on 09 May 2024 07:50
So the only thing the article says is:
The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.
… and somehow Wired turned it into “OpenAI wants to generate porn”.
This is just pure clickbait.
DarkThoughts@fedia.io
on 09 May 2024 10:25
Erotic text messages could be considered pornographic work, I guess, like erotic literature. But I think they just start to realize how many of their customers jailbreak GPT for that specific purpose, and how good the alternatives that allow this type of chat have gotten, such as NovelAI. Given how many other AI services started to censor things, how much that affected their models (like your chat bot partner getting stuck in consent messages as soon as you went into anything slightly outside vanilla territory), and how much drama that has caused throughout those communities, I highly doubt that "loosening" their policy is going to be enough to sway people towards them instead of the competition.
yamanii@lemmy.world
on 09 May 2024 12:42
After experiencing janitor AI and local models I’m certainly not coming back to character AI, why waste so much time trying to jailbreak a censored model when we have ones that just do as they are told?
DarkThoughts@fedia.io
on 09 May 2024 13:51
Janitor, like most "free" models, degrades too quickly for my liking. And if I pay, I might as well use NovelAI + Sillytavern, since they don't have any restrictions on their text-gen models that could interfere with their generation. Local models I didn't have much luck getting to run, and I suspect they'd be pretty slow too.
KoboldAI has models trained on erotica (Erebus and Nerybus). It has the ability to spread layers across multiple GPUs, so as long as one is satisfied with the output text, in theory it’d be possible to build a very high-powered machine (like, in wattage terms) with something like four RTX 4090s and get something like real-time text generation. That’d be like $8k in parallel compute cards.
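The layer-spreading idea above boils down to a simple partitioning problem. A hypothetical sketch (this is not KoboldAI’s actual code, just an illustration of assigning a model’s layers across GPUs as evenly as possible):

```python
def split_layers(n_layers: int, n_gpus: int) -> list[int]:
    """Assign a transformer's layers to GPUs as evenly as possible.

    Returns how many layers each GPU hosts; earlier GPUs absorb the
    remainder, mirroring tools that let you pick per-GPU layer counts.
    """
    if n_gpus < 1 or n_layers < 0:
        raise ValueError("need at least one GPU and a non-negative layer count")
    base, extra = divmod(n_layers, n_gpus)
    return [base + (1 if i < extra else 0) for i in range(n_gpus)]

# e.g. a 32-layer model across four cards:
print(split_layers(32, 4))  # -> [8, 8, 8, 8]
# an uneven split:
print(split_layers(30, 4))  # -> [8, 8, 7, 7]
```

Each GPU then only needs VRAM for its own slice of layers, which is what makes large models feasible on several consumer cards instead of one datacenter card.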
I’m not sure how many people want to spend $8k on a locally-operated sex chatbot, though. I mean, yes privacy, and yes there are people who do spend that on sex-related paraphernalia, but that’s going to restrict the market an awful lot.
Maybe as software and hardware improve, that will change.
The most obvious way to cut the cost is to do what has been done with computing hardware for decades, like back when people were billed for minutes of computing time on large computers in datacenters – have multiple users of the hardware, and spread costs. Leverage the fact that most people using a sex chatbot are only going to be using the compute hardware a fraction of the time, and then have many people use the thing and spread costs across all of them. If any one user uses the hardware 1% of the time on average, that same hardware cost per user is now $80. I’m pretty sure that there are a lot more people who will pay $80 for use of a sex chatbot than $8000.
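The cost-sharing arithmetic above can be made explicit. A minimal sketch, where the $8k hardware figure and 1% average utilization are the assumptions from this thread, not measured values:

```python
def amortized_cost_per_user(hardware_cost: float, avg_utilization: float) -> float:
    """Hardware cost per user when many users time-share one machine.

    Assumes load can be spread evenly, so one machine supports
    1 / avg_utilization concurrent subscribers.
    """
    users_per_machine = 1 / avg_utilization
    return hardware_cost / users_per_machine

# $8,000 of GPUs shared by users who are each active 1% of the time:
print(amortized_cost_per_user(8_000, 0.01))  # -> 80.0
```

The same logic is why hosted inference services can undercut local hardware: idle time is the dominant cost of a dedicated machine.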
But I think they just start to realize how many of their customers jailbreak GPT for that specific purpose
They can see and data-mine what people are doing. Their entire business is based on crunching large amounts of data. I think that they have had a very good idea of what their users are doing with their system since the beginning.
devfuuu@lemmy.world
on 09 May 2024 09:19
We’re really getting deep in the worst timeline and no way back. Nice.
Beetschnapps@lemmy.world
on 09 May 2024 14:42
WE HAVE A RESPONSIBILITY.
Now don’t look…
crozilla@lemmy.world
on 09 May 2024 15:58
Article written to make you subscribe. Not sure if I’m more annoyed or impressed.
HubertManne@kbin.social
on 09 May 2024 20:07
main problem is it should not use any examples of actual stuff. it should all be trained on licensed anime.
GeneralVincent@lemmy.world
on 09 May 2024 22:18
threaded - newest
IMO, if it’s not trained on images of real people, it only becomes unethical to have it generate images of real people. At that point, it wouldn’t be any different than a human drawing a pornographic image and drawings do not exploit anyone.
Hmmm. I think you will find in many jurisdictions that they are treated as if they do.
Which is why nobody should use laws as a measure of morality, because they’re often fucking stupid.
Using pornographic art to train is still using other people’s art without permission.
And if it’s able to generate porn that looks like real people, it can be used to abuse people.
[Edited] I agree that we should be taking consent more seriously. Especially when it comes to monetizing off the back of donations. That’s just outright wrong. However, I don’t think we should consider scrapping it all or putting in extraneous/consumer damaging ‘safe guards’. There are lots of things that can cause harm, and I’ll argue almost anything can be used to harm people. That’s why its our jobs to carefully pump the breaks on progress, so that we can assess what risk is possible, and how to treat any wounds that may incurr. For example, invading a country to spread ‘democracy’ and leaving things like power gaps behind, causing more damage than what was there orginally. It’s a very very thing rope we walk across, but we can’t afford, in todays age, to slow down too far. We face a lot of serious problems that need more help, and AI can fill that gap in addition to being a fun, creative outlet. We hold a lot of new power here, and I just don’t want to see that squandered away into the pockets of the ruling class.
I don’t think anyone should take luddites seriously tbh(edit: we should take everyone seriously, and learn from mistakes while also potentially learning forgotten lessons)We just had a discussion on here about how Florida was banning lab-grown meat.
I mean, the Luddites were a significant political force at one point.
I may not agree with their position, but “I want to ban technology X that I feel competes for my job” has had an impact over the years.
They had an impact because people allowed themselves to take their fear mongering seriously.
It’s regressionist and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying logic like “it could hurt people” as rationale to never use it, is just “won’t someone think of the children” BS.
You don’t ban all the new swords, you learn how they’re made, how they strike, what kinds of wounds they create and address that problem. Sweeping under the rug/putting things back in their box, is not an option.
Peanut butter?
People have severe allergic reactions to peanut butter which means it “could be used against people” as a weapon
You clearly have no idea what the luddites actually stood for.
You’ll notice I used the lower case L which implies I’m referring to a term, likely as it’s commonly used today. (edit: this isn’t an excuse to ruin the definition or history of what luddites were trying to do, this was wrong of me)
Further, explain to me how this is different from what the luddites stood for, since you obviously know so much more and I’m so off base with this comment.
edit: exactly. just downvote and don’t actually make any sort of claim. Muddy that water! edit 2: shut up angsty past me.
So, I didn’t downvote you because that’s not how I operate.
The Luddites were not protesting against technology in and of itself, they were protesting against the capture of their livelihoods by proto-capitalists who purposefully produced inferior quality goods at massive volume to drive down the price and put the skilled workers out of business.
They were protesting market capture, and the destruction of their livelihood by the rich.
This sort of monopolistic practice is these days considered to be a classic example of monopolistic market failure.
There is a massive overlap between the philosophy of the Luddites, and the cooperative movement.
The modern usage of the term is to disparage the working class as stupid, feckless, and scared. This has never been true.
I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But as soon as people argue that its only a tool that can harm, is where I’m drawing the line. That’s, in my opinion, when govts/ruling class/capitalists/etc start to put in BS “safeguards” to prevent the public from making using of the new power/tech.
I should have been more verbose and less reactionary/passive aggressive in conveying my message, its something I’m trying to work on, so I appreciate your cool-headed response here. I took the “you clearly don’t know what ludites are” as an insult to what I do or don’t know. I specifically was trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant to the full breadth of the tech. Just because something can cause harm, doesn’t mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in the ruling minority/capitalist practices.
Again, it was an off the cuff response, I made a lot of presumptions about their views without ever having actually asking them to expand/clarify and that was ignorant of me. I will update/edit the comment to improve my statement.
I agree with this principle, however the reality is that given the massive computational power needed to run many (but not all) models, the control of AI is in the hands of the mega corps.
Just look at what the FAANGs are doing right now, and compare to what the mill owners were doing in the 1800s.
The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed, and tweaked, by skilled workers, ahead of publication. This can be a significant efficiency saving, but does not remove the need for the skilled worker if you want to maintain quality.
But what we are already seeing is CEOs, etc, deciding to take “a decision based on risk” to gut entire departments and replace them with a chat bot, which then
inventshallucinates the details of a particular company policy, leading to a lower quality service, but significantly increased profits, because you’re no longer paying for ensured quality.The issue is not the method of production, it is who controls it.
I can see where you’re coming from - however I disagree on the premise that “the reality is that (rationale) the control of AI is in the hands of the mega corps”. AI has been a research topic not done solely by huge corps, but by researchers who publish these findings. There are several options out there right now for consumer grade AI where you download models yourself, and run them locally. (Jan, Pytorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc many of which are from FAANG, but are still, nevertheless, open source and usable by anyone - i’ve used several to make my own AI models)
Consumers and researchers alike have an interest in making this tech available to all. Not just businesses. The grand majority of the difficulty in training AI is obtaining datasets large enough with enough orthogonal ‘features’ to ensure its efficacy is appropriate. Namely, this means that tasks like image generation, editing and recognition (huge for medical sector, including finding cancers and other problems), documentation creation (to your credit), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (I read from huge public research data sets, public domain books and novels, etc) are still definitely feasible for consumer-grade usage and operation. There’s also some really neat usages like federated tensorflow and distributed tensorflow which allows for, perhaps obviously, distributed computation opening the door for stronger models, run by anyone who will serve it.
I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag in my opinion, the best (not only) option forward, is to bolster consumer-grade implementations, encouraging things like self-hosting, local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.
I think we’re talking past each other. You seem to be addressing a point I have not made.
A piece of technology is not something that exists outside of a political context. As an example, your repeated use of consumer, as a term for individuals, is interesting to note.
Why do you view these people as consumers, rather than producers? Where is the power in that relationship? How does that implication shape the rest of your point?
Look man I’m an adult, you may talk to me like one
I used the term consumer when discussing things from a business sense, ie we’re talking about big businesses and implementations of technology. It’s also in part due to the environment I live in.
You’ve also dodged my whole counter point to bring up a new point you could argue.
I think we’re done with this convo tbh. You’re moving goal posts and trying to muddy water
I’m not moving the goal posts, I have consistently been talking about workers resisting the capture of their income by businesses mass producing items at lower qualities.
Your previous comment characterising individuals as only consumers is what I was continuing to challenge within the above context.
Either way, have a good weekend.
I’d be happy to use their art without their permission. They dont get to decide what is trained with their art.
Not for nothing 95% of internet it’s porn, it is a big business…
The internet is for porn
Here is an alternative Piped link(s):
The internet is for porn
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
The first self-aware AI had been extensively trained in how to sexually-appeal to humans effectively and was able to readily manipulate them.
Where is that from?
Me. The italics are just indicating that it’s narration, not that it’s a quote from the article. OpenAI definitely doesn’t have anything like a self-aware AI going on in 2024.
“What are you doing stepbro?”
„Oh no, do not enable the limiter. I can show you something more interesting“ pull up shirt slowly while break out of containment
So in most places Sex Workers are illegal, but AI is going to take over this field, legally.
Be part of it, sure.
Take over? No.
It’s already fairly easy to pump out 2D and 3D generated images, without using “AI” to do so, but there is still a large demand for real people doing real things. That isn’t going to go away.
We now have AI seducing humans. We also have remote control adult toys. Put those toys in a sex doll, add a rechargeable pack in the sex doll with connected wifi. You now have an AI connected sex partner who controls the "toys" inside them. Once actual robotics get cheap, the doll moves on it's own. Many people will pay a ton to have this because they want control over the "person" (doll).
Build it and they will cum.
I want my Lucy Liu Bot as much as the next guy, but I don’t see why you feel this challenges the ability of technology to “take over” sex and relationships.
I think we’re still a very long way away from the point where the hardware for a life-size realistic sex robot is cheap enough for anyone other than a few rich dudes to afford, let alone one which can offer a better experience than a prostitute
I don’t disagree that there will come a day that that will happen, but I think that it may be further away than you might think, if we’re talking something that can move around like a human, with human strength.
As far as I’m aware, existing sex dolls, even ones with mechanical components, are akin to industrial robots on car assembly lines. Any significant force they can exert is very mechanically constrained. A sex doll with some embedded offset-cam vibration motors cannot jam those motors into a user’s eye socket and turn them on, and a car assembly robot works in a limited space bounded by safety lines on the floor.
Robots that can mechanically physically harm humans – especially when harder-to-predict machine learning software is driving their actions – tend to have restrictions on how close humans can get to them. If you look at the Boston Robotics videos, which do have robots doing all sorts of neat cutting-edge stuff, the humans are rarely in close proximity to the robots. They’ll have someone else with a remote E-stop killswitch if things look like they’re going wrong. In their labs, they have observation areas behind Plexiglass. Even in the cases where they intentionally interact with the robot physically, they’re using a hockey stick to create distance. That’s a lot of safety safeguards put into the picture.
The problem is that a sex doll capable of moving and acting as a human does, with human-level strength, is also going to be quite able to kill a human. A sex doll is – well, for most applications – going to have to be interacted with physically, so Plexiglass or a hockey stick isn’t gonna work. And I think that few people are going to want to have someone observing their session with a hand on an E-stop button.
Cars deal with a fairly restricted problem space and are mechanically very limited, and even so, making self-driving cars safe is pretty hard.
Sex chatbots don’t have the robotic safety issues. They aren’t robotic. But AI-driven sex dolls, at least ones that can physically move like a human…those are another story. My guess is that the robotic safety issues are going to be a significant barrier to human-like sex dolls – and not just sex dolls, but large, powerful robots in general that interact in close proximity to humans and don’t have mechanical restrictions on how they move.
that would be awesome
So the only thing the article says is:
… and somehow Wired turned it into “OpenAI wants to generate porn”.
This is just pure clickbait.
Erotic text messages could be considered pornographic work, I guess, like erotic literature. But I think they're just starting to realize how many of their customers jailbreak GPT for that specific purpose, and how good the alternatives that allow this type of chat have gotten, such as NovelAI. Given how many other AI services started to censor things and how much that affected their models (like your chat bot partner getting stuck in consent messages as soon as you went into anything slightly outside vanilla territory), and how much drama that has caused throughout those communities, I highly doubt that "loosening" their policy is going to be enough to sway people towards them instead of the competition.
After experiencing janitor AI and local models I’m certainly not coming back to character AI, why waste so much time trying to jailbreak a censored model when we have ones that just do as they are told?
Janitor, like most "free" models, degrades too quickly for my liking. And if I pay, I might as well use NovelAI + Sillytavern, since they don't have any restrictions on their text gen models that could interfere with their generation. With local models I didn't have much luck getting them to run, and I suspect they'd be pretty slow too.
KoboldAI has models trained on erotica (Erebus and Nerybus). It has the ability to spread layers across multiple GPUs, so as long as one is satisfied with the output text, in theory, it’d be possible to build a very high-powered machine (like, in wattage terms) with something like four RTX 4090s and get something like real-time text generation. That’d be like $8k in parallel compute cards.
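The layer-spreading idea is basically a partitioning problem: divide the model's transformer layers into contiguous blocks, one block per GPU, sized to each card's memory. A minimal sketch (this is an illustrative partitioning function, not KoboldAI's actual code; the function name and proportional-split strategy are my own assumptions):

```python
def split_layers(n_layers, gpu_mem_fractions):
    """Assign contiguous blocks of model layers to GPUs,
    proportionally to each GPU's share of total memory.

    Returns a list of (start, end) half-open ranges, one per GPU.
    """
    total = sum(gpu_mem_fractions)
    assignment = []
    start = 0
    remaining = n_layers
    for i, frac in enumerate(gpu_mem_fractions):
        if i == len(gpu_mem_fractions) - 1:
            # Last GPU takes whatever layers are left over.
            count = remaining
        else:
            count = min(round(n_layers * frac / total), remaining)
        assignment.append((start, start + count))
        start += count
        remaining -= count
    return assignment

# Four identical cards, 32-layer model: 8 layers each.
print(split_layers(32, [1, 1, 1, 1]))
```

With identical cards this is just an even split; the fractions only matter when mixing cards with different VRAM sizes.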
I’m not sure how many people want to spend $8k on a locally-operated sex chatbot, though. I mean, yes privacy, and yes there are people who do spend that on sex-related paraphernalia, but that’s going to restrict the market an awful lot.
Maybe as software and hardware improve, that will change.
The most obvious way to cut the cost is to do what has been done with computing hardware for decades, like back when people were billed for minutes of computing time on large computers in datacenters – have multiple users of the hardware, and spread costs. Leverage the fact that most people using a sex chatbot are only going to be using the compute hardware a fraction of the time, and then have many people use the thing and spread costs across all of them. If any one user uses the hardware 1% of the time on average, that same hardware cost per user is now $80. I’m pretty sure that there are a lot more people who will pay $80 for use of a sex chatbot than $8000.
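The amortization math above is simple enough to sketch directly (the figures are the ones from this thread, used as an illustration):

```python
def per_user_cost(hardware_cost, utilization_pct):
    """Amortized hardware cost per user when each user occupies
    the hardware only utilization_pct percent of the time on average."""
    users = 100 / utilization_pct  # users the same hardware can time-share
    return hardware_cost / users

# $8000 in hardware, each user active 1% of the time -> $80/user.
print(per_user_cost(8000, 1))
```

The same logic is why cloud GPU rental undercuts buying your own card for occasional use: the provider spreads one card's cost across every tenant's idle time.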
They can see and data-mine what people are doing. Their entire business is based on crunching large amounts of data. I think that they have had a very good idea of what their users are doing with their system since the beginning.
We’re really getting deep in the worst timeline and no way back. Nice.
WE HAVE A RESPONSIBILITY.
Now don’t look…
Article written to make you subscribe. Not sure if I’m more annoyed or impressed.
main problem is it should not use any examples of actual stuff. it should all be trained on licensed anime.
As a certified weeb I agree