WILSOOON@programming.dev
on 21 May 2024 08:13
nextcollapse
Fuckin good job
DmMacniel@feddit.de
on 21 May 2024 08:40
nextcollapse
Mhm I have mixed feelings about this. I know that this entire thing is fucked up but isn’t it better to have generated stuff than having actual stuff that involved actual children?
Retoffelnoster@lemmy.world
on 21 May 2024 08:46
nextcollapse
You know what’s better? Having none of this shit
DmMacniel@feddit.de
on 21 May 2024 08:48
nextcollapse
Yeah as I also said.
MxM111@kbin.social
on 21 May 2024 10:04
nextcollapse
Better for whom and why?
Thorny_Insight@lemm.ee
on 21 May 2024 10:25
collapse
Yeah, would be nice. Unfortunately it isn’t so, and it’s never going to be. Chasing after people generating distasteful AI pictures is not making the world a better place.
Do we know if it fuels the urge to get real children? Or do we just assume that through repetition, like the myth of “gateway drugs”?
Since no child was involved or harmed in the making of these images… on what grounds could it be forbidden to generate them?
Thorny_Insight@lemm.ee
on 21 May 2024 10:29
collapse
An alternative perspective: does watching normal porn make heterosexual men more likely to rape women? If not, then why would it be different in this case?
The vast majority of pedophiles never offend. Most people in jail for child abuse are just plain old rapists with no special interest towards minors; they’re just an easy target. Pedophilia just describes what they’re attracted to. It’s not a synonym for child rapist. It usually needs to coincide with psychopathy to create the monster that most people think about when hearing that word.
ricecake@sh.itjust.works
on 21 May 2024 21:49
collapse
That’s a bit of a difference in comparison.
A better comparison would be “does watching common heterosexual porn make common heterosexual men more interested in performing common heterosexual sexual acts?”, or “does viewing pornography long term satiate a man’s sex drive?”, or “does consumption of nonconsensual pornography correlate to an increase in nonconsensual sex acts?”
Comparing “viewing child sexual content might lead to engaging in sexual acts with children” to “viewing sexual activity with women might lead to rape” is disingenuous, apples to oranges.
a review of 19 studies published between 2013 and 2018 found an association between online porn use and earlier sexual debut, engaging with occasional and/or multiple partners, emulating risky sexual behaviours, assimilating distorted gender roles, dysfunctional body perception, aggression, anxiety, depression, and compulsive porn use [24]. Another study has shown that compulsive use of sexually explicit internet material by adolescent boys is more likely in those with lower self-esteem, depressive feelings and excessive sexual interest [1].
some porn use in adult men may have a positive impact by increasing libido and desire for a real-life partner, relieving sexual boredom, and improving sexual satisfaction by providing inspiration for real sex [7]
As for child porn, it’s not a given that there’s no relationship between consumption and abusing children. There are studies that indicate both outcomes, and are made much more complicated by one of both activities being extremely illegal and socially stigmatized making accurate tracking difficult.
It’s difficult to justify the notion that “most pedophiles never offend” when it can be difficult to identify both pedophiles and abuse.
Point being, you can’t just hand-wave the potential for a link away on the grounds that porn doesn’t cause rape amongst typical heterosexual men. There are too many factors making the statistics difficult to gather.
pavnilschanda@lemmy.world
on 21 May 2024 09:23
nextcollapse
A problem that I see getting brought up is that AI-generated images make it harder to notice photos of actual victims, making it harder to locate and save them
NatakuNox@lemmy.world
on 21 May 2024 13:42
nextcollapse
And doesn’t the AI learn from real images?
pavnilschanda@lemmy.world
on 21 May 2024 13:46
nextcollapse
True, but by their very nature their generations tend to create anonymous identities, and the sheer amount of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of crime location).
ricecake@sh.itjust.works
on 21 May 2024 20:41
nextcollapse
It does learn from real images, but it doesn’t need real images of what it’s generating to produce related content.
As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children. Without training on nudity, it’s unlikely to produce good results there as well.
However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of “bicycle” and that of “Neptune” and can readily enough imagine “Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat”.
Under the hood, this type of AI is effectively a very sophisticated “error correction” system. It changes pixels in the image to try to “fix it” so it matches the prompt, usually starting from a smear of random colors (static noise).
That’s how it’s able to combine different concepts from a wide range of images to create things it’s never seen.
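To make that “error correction” loop concrete, here’s a deliberately toy sketch in Python. This is a cartoon, not a real diffusion model: the `generate` function and the idea of nudging values straight toward a known target are simplifications just to show the shape of the process (start from noise, repeatedly correct what doesn’t match).

```python
import random

def generate(target, steps=50, step_size=0.2, seed=0):
    """Toy 'denoising': start from random noise, nudge toward the target."""
    rng = random.Random(seed)
    # Start from a smear of random values (the "static noise").
    image = [rng.uniform(0.0, 1.0) for _ in target]
    for _ in range(steps):
        # "Error correction": move each value a fraction of the way toward
        # whatever best matches the prompt (here, a fixed target pattern).
        image = [x + step_size * (t - x) for x, t in zip(image, target)]
    return image

target = [0.1, 0.9, 0.5]
result = generate(target)
# After enough steps, the noise has converged close to the target.
print(all(abs(x - t) < 0.01 for x, t in zip(result, target)))  # True
```

A real diffusion model replaces the fixed target with a learned network that predicts, at each step, which "errors" to remove given the prompt; that's what lets it steer toward concepts it has only seen separately.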
helpImTrappedOnline@lemmy.world
on 22 May 2024 10:43
collapse
Basically if I want to create …
(I’ll use a different example for obvious reasons, but I’m sure you could apply it to the topic)
… “an image of a miniature denim airjet with Taylor Swift’s face on the side of it”, the AI generators can do it despite no such thing existing in the training data.
It may take multiple attempts and effort with the text prompt to get exactly what you’re looking for, but you could eventually get a convincing image.
AI takes loads of preexisting data on airplanes, T.Swift, and denim and combines it all into something new.
TranscendentalEmpire@lemm.ee
on 21 May 2024 13:08
collapse
Well that, and the idea of cathartic relief is increasingly being dispelled. Behaviour once thought to act as a pressure relief for harmful impulsive behaviour is more than likely just a pattern of escalation.
Source? From what I’ve heard, recent studies are showing the opposite.
TranscendentalEmpire@lemm.ee
on 21 May 2024 21:04
collapse
Catharsis theory predicts that venting anger should get rid of it and should therefore reduce subsequent aggression. The present findings, as well as previous findings, directly contradict catharsis theory (e.g., Bushman et al., 1999; Geen & Quanty, 1977). For reducing anger and aggression, the worst possible advice to give people is to tell them to imagine their provocateur’s face on a pillow or punching bag as they wallop it, yet this is precisely what many pop psychologists advise people to do. If followed, such advice will only make people angrier and more aggressive.
But there are a lot more studies that have essentially said the same thing. The cathartic hypothesis is mainly a byproduct of the Freudian era of psychology, where hypotheses mainly just had to sound good to someone on too much cocaine.
Do you have a source of studies showing the opposite?
blanketswithsmallpox@lemmy.world
on 22 May 2024 00:53
nextcollapse
Yes, but I’m too lazy to sauce everything again. If it’s not in my saved comments someone else will have to.
E: couldn’t find it on my reddit either. I have too many saved comments lol.
9bananas@lemmy.world
on 22 May 2024 05:05
nextcollapse
your source is exclusively about aggressive behavior…
it uses the term “arousal”, which is not referring to sexual arousal, but rather a state of heightened agitation.
provide an actual source in support of your claim, or stop spreading misinformation.
TranscendentalEmpire@lemm.ee
on 22 May 2024 11:05
collapse
Lol, my source is about the cathartic hypothesis. So your theory is that it doesn’t work with anger, but does work for sexual deviancy?
Do you have a source that supports that?
9bananas@lemmy.world
on 22 May 2024 13:14
collapse
you made the claim that the cathartic hypothesis is poorly supported by evidence, which your source supports, but which is not relevant to the topic at hand.
your other claim is that sexual release follows the same patterns as aggression. that’s a pretty big claim! i’d like to see a source that supports that claim.
otherwise you’ve just provided a source that provides sound evidence, but is also entirely off-topic…
TranscendentalEmpire@lemm.ee
on 22 May 2024 13:31
collapse
but is not relevant to the topic at hand.
The belief that indulging in AI-created child porn relieves the sexual deviant behaviour of being attracted to actual minors relies on cathartic theory. Cathartic theory is typically understood to relate to an array of emotions, not just anger: “Further, the catharsis hypothesis maintains that aggressive or sexual urges are relieved by ‘releasing’ aggressive or sexual energy, usually through action or fantasy.”
follows the same patterns as aggression. that’s a pretty big claim! i’d like to see a source that supports that claim.
That’s not a claim I make, it’s a claim that cathartic theory states. As I said the cathartic hypothesis is a byproduct of Freudian psychology, which has largely been debunked.
Your issue is with the theory in and of itself, which my claim is already stating to be problematic.
but is also entirely off-topic…
No, you are just conflating colloquial understanding of catharsis with the psychological theory.
9bananas@lemmy.world
on 22 May 2024 13:56
collapse
and your source measured the effects of one single area that cathartic theory is supposed to apply to, not all of them.
your source does in no way support the claim that the observed effects apply to anything other than aggressive behavior.
i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn’t seem to hold up.
so either A: the theory is wrong, and so the association between aggression and sexuality needs to be called into question also;
or B: the theory isn’t wrong after all.
you are now claiming that the theory is wrong, but at the same time, the theory is totally correct! (when it’s convenient to you, that is)
so which is it now? is the theory correct? then your source must be irrelevant.
or is the theory wrong? then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.
you can’t have it both ways, but you’re sure trying to.
TranscendentalEmpire@lemm.ee
on 22 May 2024 15:56
collapse
understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn’t seem to hold up.
My original claim was that cathartic theory in and of itself is not founded on evidence based research.
but at the same time, the theory is totally correct! (when it’s convenient to you, that is)
When did I claim it was ever correct?
I think you are misconstruing my original claim with the claims made by the cathartic theory itself.
I don’t claim that cathartic theory is beneficial in any way, you are the one claiming that Cathartic theory is correct for sexual aggression, but not for violence.
Do you have a source that claims cathartic theory is beneficial for satiating deviant sexual impulses?
then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.
You want me to provide an evidence-based claim of a link between the two when I’ve already said the overarching theory is not based on evidence?
The primary principle to establish is the theory of cathartic relief itself, not whether it works for one emotion or the other. You have not provided any evidence to support that claim; I have provided evidence that disputes it.
Lmao. Says the guy who tried to use a study on aggression to address sexual urges.
TranscendentalEmpire@lemm.ee
on 23 May 2024 00:16
collapse
Reading comprehension is still hard for you? My argument was about cathartic theory, which covers several emotions including sexual urges… It is a theory from Freud; of course it covers sexual urges.
You and the other guy just have no idea what you’re talking about.
How about providing any kind of source instead of talking out of your ass?
Catoblepas@lemmy.blahaj.zone
on 21 May 2024 09:26
nextcollapse
Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.
DmMacniel@feddit.de
on 21 May 2024 09:35
nextcollapse
I didn’t know that, my bad.
Catoblepas@lemmy.blahaj.zone
on 21 May 2024 09:38
collapse
Fair but depressing, it seems like it barely registered in the news cycle.
Ragdoll_X@lemmy.world
on 21 May 2024 10:08
collapse
IIRC it was something like a fraction of a fraction of 1% that was CSAM, with the researchers identifying the images through their hashes but they weren’t actually available in the dataset because they had already been removed from the internet.
Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.
Catoblepas@lemmy.blahaj.zone
on 21 May 2024 16:58
collapse
What % do you think was used to generate the CSAM, though? Like, if 1% of the images were cups it’s probably drawing on some of that to generate images of cups.
And yes, you could technically do this with no CSAM training material, but we don’t know if that’s what the AI is doing because the image sources used to train it were mass scraped from the internet. They’re using massive amounts of data without filtering it and are unable to say with certainty whether or not there is CSAM in the training material.
retrospectology@lemmy.world
on 21 May 2024 09:44
nextcollapse
The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.
As a society we should never allow the normalization of sexualizing children.
nexguy@lemmy.world
on 21 May 2024 11:11
nextcollapse
Interesting. What do you think about drawn images? Is there a limit to how good the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.
retrospectology@lemmy.world
on 21 May 2024 11:25
nextcollapse
If it was photoreal and difficult to distinguish from real photos? Yes, it’s exactly the same.
And even if it’s not photo real, communities that form around drawn child porn are toxic and dangerous as well. Sexualizing children is something I am 100% against.
littlewonder@lemmy.world
on 22 May 2024 15:36
collapse
It feels like driving these people into the dark corners of the internet is worse than allowing them to collect in clearnet spaces where drawn csam is allowed.
acockworkorange@mander.xyz
on 21 May 2024 20:27
collapse
I’m in favor of specific legislation criminalizing drawn CSAM. It’s definitely less severe than photographic CSAM, but it’s definitely still harmful.
NewNewAccount@lemmy.world
on 21 May 2024 12:59
nextcollapse
networking between abusers absolutely emboldens them and results in more abuse.
Is this proven or a common sense claim you’re making?
bassomitron@lemmy.world
on 21 May 2024 13:35
nextcollapse
I wouldn’t be surprised if it’s a mixture of the two. It’s kind of like if you surround yourself with criminals regularly, you’re more likely to become one yourself. Not to say it’s a 100% given, just more probable.
So… it’s just a claim they’re making, and you’re hoping it has actual backing.
bassomitron@lemmy.world
on 21 May 2024 14:50
collapse
I’m not hoping anything, haha wtf? The comment above me asked if it was a proven statement or common sense and I said I wouldn’t be surprised if it’s both. I felt confident that if I googled it, there would more than likely be studies backing up a common sense statement like that, as I’ve read in the past how sending innocent people or people who committed minor misdemeanors to prison has influenced them negatively to commit crimes they might not have otherwise.
And look at that, there are academic articles that do back it up:
Who we’re around can influence who we are. Just being in a high-crime neighborhood can increase our chances of turning to crime ourselves.4 But being in the presence of criminals is not the only way our environment can affect our behaviors. Research reveals that simply living in poverty increases our likelihood of being incarcerated. When we’re having trouble making ends meet, we’re under intense stress and more likely to resort to crime.
But you didn’t say you had proof with your comment, you said it was probable. Basically saying it’s common sense that it’s proven.
Why are you getting aggressive about actually having to provide proof of something you say is obvious?
Also, that seems to imply that locking up people for AI offenses would then encourage truly reprehensible behavior by linking them with those who already engage in it.
Almost like lumping people together as one big group, instead of having levels of grey area, means people are more likely to just go all in instead of sticking to something more morally defensible.
bassomitron@lemmy.world
on 21 May 2024 17:08
collapse
Because it’s a casual discussion, I think it’s obnoxious when people constantly demand sources to be cited in online comments section when they could easily look it up themselves. This isn’t some academic or formal setting.
And I disagree; only the second source mentioned prisons explicitly. The first source mentions social environments as well. So it’s a damned if you do, damned if you don’t situation. Additionally, even if you consider the second source, that source mentions punishment reforms to prevent that undesirable side effect from occurring.
I find it ironic that you criticized me for not citing sources and then didn’t read the sources. But, whatever. Typical social media comments section moment.
NewNewAccount@lemmy.world
on 21 May 2024 19:06
collapse
I think it’s obnoxious when people constantly demand sources to be cited in online comments section when they could easily look it up themselves.
People request sources because people state their opinions as fact. If that’s how it’s presented, then asking for a source is ok. It’s either ask for a source or completely dismiss the comment.
bassomitron@lemmy.world
on 21 May 2024 20:42
collapse
Again, in casual conversation where no one was really debating, it’s obnoxious. When you’re talking to friends in real life and they say something, do you request sources from them? No, because it’d be rude and annoying. If you were debating them in earnest and you both disagreed on something, sure, that would be expected.
But that wasn’t the case here, the initial statement was common sense: If pedophiles are allowed to meet up and trade AI generated child sex abuse material, would that cause some of them to be more likely to commit crimes against real kids? And I think the answer is pretty obvious. The more you hang around people who agree with you, the more an echo chamber is cultivated. It’s like an alcoholic going into a bar without anyone there to support them in staying sober.
Anyway, it’s your opinion to think asking for sources from strangers in casual conversation is okay, and it’s mine to say it can be annoying in a lot of circumstances. We all have the Internet at our fingertips, look it up in the future if you’re unsure of someone’s assertion.
moitoi@lemmy.dbzer0.com
on 22 May 2024 05:54
collapse
The far right in France normalized its discourses and they are now at the top of the votes.
Also in France, people talked about pedophilia on TV in the 70s, 80s and at the beginning of the 90s. It was not just once in a while; it was frequent and open, without any trouble. Writers would casually speak about sexual relationships with minors.
The normalization will blur the limits between AI and reality for the worse. It will also make it more popular.
The other point is also that people always end up going for the original. Again, politics is a good example. Conservatives try to mimic the far right to gain votes, but in the end people vote for the far right…
And, say someone has a daughter. A pedophile takes a picture of her without asking and asks an AI to produce CP based on her. I don’t want to see things like this.
The conventional wisdom used to be, (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they’ve become more and more convinced that (normal) porn availability in fact reduces sexual assault.
I don’t see an obvious reason why it should be different in case of CP, now that it can be generated.
Obonga@feddit.de
on 22 May 2024 06:47
nextcollapse
It should be different because people can’t have it. It is disgusting, makes them feel icky, and that’s just why it has to be bad. Conventional wisdom sometimes really is just conventional idiocy.
littlewonder@lemmy.world
on 22 May 2024 15:38
collapse
I wonder if religiosity is correlated.
Cybermonk_Taiji@r.nf
on 21 May 2024 12:41
nextcollapse
Is “better than” the same as totally cool and legal?
DmMacniel@feddit.de
on 21 May 2024 13:08
nextcollapse
The system isn't perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?
You don't have to accept it as morally acceptable, but you don't have to treat them as completely equivalent either.
There are gradations of questionable activity, especially when there are no real victims involved. Treating everything exactly the same is, frankly speaking, insane. It’s like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you’re getting the needle.
Cybermonk_Taiji@r.nf
on 21 May 2024 19:23
nextcollapse
Wow. I didn’t say any of that, cool story though.
Go read what I said again and try replying to that instead of whatever this rant is on about
Ironically, you ask if everything is completely black and white for someone, without accepting that there’s nuance to the very issue you’re calling out. And “everything” is a very black and white term; not very nuanced, is it?
No, not EVERYTHING, but some things. And this is one of those things. Both forms should be illegal. Period. No nuance, no argument, NO grey area.
This does not mean that nuance doesn’t exist. It just means that some believe that it SHOULDN’T exist within the paradigm of child porn.
BrianTheeBiscuiteer@lemmy.world
on 21 May 2024 13:08
nextcollapse
I have trouble with this because it’s like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?
I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material “feels” wrong, and that’s no way to handle criminal misdeeds.
Chee_Koala@lemmy.world
on 21 May 2024 13:53
nextcollapse
If it’s not trained on CSAM or inpainted, but fully generated, I can’t really think of any other real legal argument against it except for: “this could be real”. Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM has very different victims and abuse, so it needs different sentencing.
Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.
PM_Your_Nudes_Please@lemmy.world
on 21 May 2024 15:30
nextcollapse
Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky so many people still think it should be illegal.
There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Ookami38@sh.itjust.works
on 21 May 2024 20:09
nextcollapse
Just a point of clarity, an AI model capable of generating csam doesn’t necessarily have to be trained on csam.
assassin_aragorn@lemmy.world
on 22 May 2024 15:46
collapse
That honestly brings up more questions than it answers.
Ookami38@sh.itjust.works
on 22 May 2024 15:54
collapse
Why is that? The whole point of generative AI is that it can combine concepts.
You train it on the concept of a chair using only red chairs. You train it on the color red, and the color blue. With this info and some repetition, you can have it output a blue chair.
The same applies to any other concepts. Larger, smaller, older, younger. Man, boy, woman, girl, clothed, nude, etc. You can train them each individually, gradually, and generate things that then combine these concepts.
Obviously this is harder than just using training data of what you want. It’s slower, it takes more effort, and results are inconsistent, but they are results. And then, you curate the most viable of the images created this way to train a new and refined model.
todd_bonzalez@lemm.ee
on 22 May 2024 21:44
collapse
Yeah, there are photorealistic furry photo models, and I have yet to meet an anthropomorphic dragon IRL.
Glass0448@lemmy.today
on 21 May 2024 21:54
nextcollapse
PM_Your_Nudes_Please@lemmy.world
on 22 May 2024 02:44
nextcollapse
I wasn’t arguing about current laws. I was simply arguing about public perception, and whether the average person believes it should be illegal. There’s a difference between legality and ethicality. Something unethical can be legal, and something illegal can be ethical.
Weed is illegal, but public perception says it shouldn’t be.
Glass0448@lemmy.today
on 24 May 2024 18:57
collapse
Everybody is American. They just don’t know it yet.
Gospel of the Jesus
JovialMicrobial@lemm.ee
on 22 May 2024 20:59
nextcollapse
I think one of the many problems with AI generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what was AI generated and what isn’t.
Banning all of it means authorities don’t have to sift through images trying to decipher between the two.
If one image is declared to be AI generated and it’s not… well… that doesn’t help the victims or create fewer victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there, because it can hide amongst all the AI-generated material, meaning authorities will have to go through far more images before finding ones with real victims in them. All of it being illegal prevents those sorts of problems.
PM_Your_Nudes_Please@lemmy.world
on 22 May 2024 21:46
collapse
And that’s a good point! Luckily it’s still (usually) fairly easy to identify AI generated images. But as they get more advanced, that will likely become harder and harder to do.
Maybe some sort of required digital signatures for AI art would help; Something like a public encryption key in the metadata, that can’t be falsified after the fact. Anything without that known and trusted AI signature would by default be treated as the real deal.
But this would likely require large scale rewrites of existing image formats, if they could even support it at all. It’s the type of thing that would require people way smarter than myself. But even that feels like a bodged solution to a problem that only exists because people suck. And if it required registration with a certificate authority (like an HTTPS certificate does) then it would be a hurdle for local AI instances to jump through. Because they would need to get a trusted certificate before they could sign their images.
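As a rough illustration of the sign-and-verify flow being described: a real system would use an asymmetric signature (the generator signs with a private key; anyone can verify with the public key, e.g. Ed25519), but this toy sketch uses an HMAC from Python’s standard library as a stand-in for “a key only the generator holds”. The function names and key here are made up for illustration.

```python
import hashlib
import hmac

# Hypothetical key held only by the AI image service (in a real scheme
# this would be an asymmetric private key, not a shared secret).
SECRET = b"generator-private-key"

def sign_image(image_bytes: bytes, metadata: str) -> str:
    """Produce a tag over the image bytes plus its generator metadata."""
    msg = image_bytes + metadata.encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, metadata: str, tag: str) -> bool:
    """Check that neither the image nor the metadata was altered."""
    expected = sign_image(image_bytes, metadata)
    return hmac.compare_digest(expected, tag)

img = b"\x89PNG...fake image bytes"
meta = "generator=example-model;date=2024-05-22"
tag = sign_image(img, meta)
print(verify_image(img, meta, tag))         # True: untouched image verifies
print(verify_image(img + b"x", meta, tag))  # False: any alteration breaks the tag
```

The catch the comment above already identifies applies here too: the tag only proves what the keyholder signed, so the whole idea hinges on a trust infrastructure for the keys themselves.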
Kalcifer@sh.itjust.works
on 22 May 2024 21:34
collapse
But it’s icky so many people still think it should be illegal.
Imo, not the best framework for creating laws. Essentially, it’s an appeal to emotion.
quindraco@lemm.ee
on 21 May 2024 11:48
nextcollapse
Yes, but the perp showed the images to a minor.
cley_faye@lemmy.world
on 21 May 2024 19:30
nextcollapse
Apparently he sent some to an actual minor.
Glass0448@lemmy.today
on 21 May 2024 21:53
nextcollapse
PhlubbaDubba@lemm.ee
on 22 May 2024 00:21
nextcollapse
I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.
Mastengwe@lemm.ee
on 22 May 2024 03:31
nextcollapse
It’s better to have neither.
forensic_potato@lemmy.world
on 22 May 2024 11:04
collapse
This mentality smells of “just say no” for drugs or “just don’t have sex” for abortions. This is not the ideal world and we have to find actual plans/solutions to deal with the situation. We can’t just cover our ears and hope people will stop
ImminentOrbit@lemmy.world
on 25 May 2024 22:42
collapse
It reminds me of the story of the young man who realized he had an attraction to underage children and didn’t want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.
I see this fake CP as only a positive for those people. That it might make it more difficult to find real offenders is a terrible reason to be against it.
Darkard@lemmy.world
on 21 May 2024 08:45
nextcollapse
And the Stable diffusion team get no backlash from this for allowing it in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
muntedcrocodile@lemm.ee
on 21 May 2024 08:48
nextcollapse
Not everything exists on the cloud (someone else’s computer)
DmMacniel@feddit.de
on 21 May 2024 08:51
nextcollapse
You can run the SD model offline, so on what service would that User be flagged?
yukijoou@lemmy.blahaj.zone
on 21 May 2024 10:02
nextcollapse
my main question is: how much csam was fed into the model for training so that it could recreate more
i think it’d be worth investigating the training data used for the model
Ragdoll_X@lemmy.world
on 21 May 2024 10:16
collapse
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren’t actually available in the dataset because they had already been removed from the internet.
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.
NeoNachtwaechter@lemmy.world
on 21 May 2024 09:13
nextcollapse
Bad title.
They caught him not simply for creating pics, but also for trading such pics etc.
Frozengyro@lemmy.world
on 21 May 2024 13:15
nextcollapse
That’s sickening to know there are bastards out there who will get away with it since they are only creating it.
NeoNachtwaechter@lemmy.world
on 21 May 2024 13:35
collapse
I’m not sure. Let us assume that you generate it on your own PC at home (not using a public service) and don’t brag about it and never give it to anybody - what harm is done?
Frozengyro@lemmy.world
on 21 May 2024 14:26
nextcollapse
Even if the AI didn’t train itself on actual CSAM that is something that feels inherently wrong. Your mind is not right to think that’s acceptable IMO.
DarkThoughts@fedia.io
on 21 May 2024 20:33
collapse
Laws shouldn't be about feelings though and we shouldn't prosecute people for victimless thought crimes. How often did you think something violent when someone really pissed you off? Should you have been prosecuted for that thought too?
Frozengyro@lemmy.world
on 21 May 2024 20:40
collapse
This goes way further than a thought
DarkThoughts@fedia.io
on 21 May 2024 20:58
collapse
Who are the victims of someone generating such images privately then? It's on the same level as all the various fan fiction shit that was created manually over all the past decades.
And do we apply this to other depictions of criminalized things too? Would we ban the depiction of violence & sexual violence on TV, in books, and in video games too?
horncorn@lemmynsfw.com
on 21 May 2024 09:24
nextcollapse
Article title is a bit misleading. Just glancing through I see he texted at least one minor in regards to this and distributed those generated pics in a few places. Putting it all together, yeah, arrest is kind of a no-brainer.
Ethics of generating csam is the same as drawing it pretty much. Not much we can do about it aside from education.
retrospectology@lemmy.world
on 21 May 2024 09:48
nextcollapse
Lemmy really needs to stop justifying CP.
We can absolutely do more than “eDuCaTiOn”. AI is created by humans, the training data is gathered by humans, it needs regulation like any other industry.
It’s absolutely insane to me how laissez-faire some people are about AI, it’s like a cult.
retrospectology@lemmy.world
on 21 May 2024 14:33
collapse
Ah yes, we need child porn because it’s a slippery slope.
msage@programming.dev
on 21 May 2024 10:04
nextcollapse
While I agree with your attitude, the whole ‘laissez-faire’ thing is probably a misunderstanding:
There is nothing we can do to stop the AI.
Nothing.
The genie is out of the bottle, the Pandora’s box has been opened, everything is out and it won’t ever return. The world will never be the same, and it’s irrelevant what people think.
That’s why we need to better understand the post-AI world we created, and figure out what to do now.
Also, to hell with CP. (feels weird to use the word ‘fuck’ here)
retrospectology@lemmy.world
on 21 May 2024 11:15
collapse
That’s not the question. The question is not “can we stop AI entirely”, it’s about regulating its development, and yes, we can make efforts to do that.
This attitude of “it’s inevitable, can’t do anything about it” is eerily similar logic to what is used in climate denial and other right-wing efforts. It’s a really poor attitude to have, especially about something as consequential as AI.
We have the best opportunity right now to create rules about its uses and development. The answer is not “do nothing” as if it’s some force of nature, as opposed to a tool created by humans.
msage@programming.dev
on 21 May 2024 13:47
nextcollapse
I hear you, and I don’t necessarily disagree with you, I just know that’s not how anything works.
Regulations work for big companies, but there isn’t a big company behind this specific case. And those small-time users have run away and you can’t stop them.
It’s like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It’s not that I would not like that, it’s just not possible.
retrospectology@lemmy.world
on 21 May 2024 14:35
collapse
This argument could be applied to anything though. A lot of people get away with murder, we should still try and do what we can to stop it from happening.
You can’t sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.
msage@programming.dev
on 21 May 2024 15:08
collapse
Physical things are much easier to regulate than software, much less serverless.
We already regulate certain images, and it matters very little.
The bigger payoff will be from educating the public and accepting that we can’t win every war.
retrospectology@lemmy.world
on 21 May 2024 15:37
collapse
So accept defeat from the start, that’s really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, code bases are hosted on hardware and on specific outlets.
It really does sound like you’re just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it’s not possible to try.
Dude the amount of open source, untrackable, distributed ai models is off the charts. This isn’t just about the models offered by subscription from the big players.
retrospectology@lemmy.world
on 21 May 2024 19:48
collapse
This is still one of the weaker arguments.
There is a lot of malware out there too, people are still prosecuted when they’re caught developing and distributing it, we don’t just throw up our hands and pretend there’s nothing that can be done.
Like, yeah, some pedophile who also happens to be tech savvy might build his own AI model to make CP, that’s not some self-evident argument against attempting to stop them.
No, like, the tools to do these things are common and readily available. It’s not malware, it’s generalized ai tools, completely embroiled with non image ai work.
Pandora’s box is wide open. All of this work can be done trivially, completely offline with a basic PC. Anyone motivated can be offline and up and running in a weekend
You’re asking to outlaw something like a spreadsheet.
You download a general purpose image ai model, then train and prompt it completely offline
The models used are not trained on CP. The model weights are distributed freely and anybody can train a LoRA on his computer. It’s already too late to ban open weight models.
autonomoususer@lemmy.world
on 21 May 2024 14:39
nextcollapse
One of two classic excuses, virtue signalling to hijack control of our devices, our computing, an attack on libre software (they don’t care about CP). Next, they’ll be banning more math, encryption, again.
It says gullible at the start of this page, scroll up and see.
DarkThoughts@fedia.io
on 21 May 2024 20:31
collapse
You don't need CSAM training data to create CSAM images. If your model knows what children look like, and what naked human bodies look like, then it can create naked children. That's simply how generative models like this work and has absolutely nothing to do with specifically trained models for CSAM using actual CSAM material.
So while I disagree with him, in that lack of education is the cause of CSAM or pedophilia... I'd say it could help with the general hysteria about LLMs, like the ones coming from you, who just let their emotions run wild when those topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.
ricecake@sh.itjust.works
on 21 May 2024 17:12
nextcollapse
Legally, a sufficiently detailed image depicting csam is csam, regardless of how it was produced. Sharing it is why he got caught, inevitably, but it’s still illegal even if he never brought a minor into it.
Glass0448@lemmy.today
on 21 May 2024 21:56
collapse
So if it’s art, we have to allow it under the constitution, right? It’s “free speech”, right?
SeattleRain@lemmy.world
on 21 May 2024 14:25
nextcollapse
Well yeah. Just because something makes you really uncomfortable doesn’t make it a crime. A crime has a victim.
Also, the vast majority of children are victimized because of the US’ culture of authoritarianism and religious fundamentalism. That’s why, far and away, children are victimized by either a relative or in a church. But y’all ain’t ready to have that conversation.
sugartits@lemmy.world
on 21 May 2024 14:39
collapse
That thing over there being wrong doesn’t mean we can’t discuss this thing over here also being wrong.
So perhaps pipe down with your dumb whataboutism.
SeattleRain@lemmy.world
on 21 May 2024 14:43
collapse
It’s not whataboutism. He’s being persecuted because of the idea that he’s hurting children, all the while law enforcement refuses to truly prosecute actual institutions victimizing children and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church’s crimes for generations.
How is this whataboutism.
sugartits@lemmy.world
on 21 May 2024 14:45
nextcollapse
Because it’s two different things.
We should absolutely go after the Catholic church for the crimes committed.
But here we are talking about the creation of child porn.
If you cannot understand this very simple premise, then we have nothing else to discuss.
SeattleRain@lemmy.world
on 21 May 2024 14:50
collapse
They’re not two different things. They’re both supposedly acts of pedophilia except one would take actual courage to prosecute (churches) and the other which doesn’t have any actual victims is easy and is a PR get because certain people find it really icky.
sugartits@lemmy.world
on 21 May 2024 14:58
collapse
I guess we’re done here then.
todd_bonzalez@lemm.ee
on 22 May 2024 17:52
collapse
Yes, case closed. You were wrong. Sucks to suck.
sugartits@lemmy.world
on 22 May 2024 21:08
collapse
Very mature response. Well done.
DarkThoughts@fedia.io
on 21 May 2024 20:25
collapse
Just to be clear here, he's not actually being prosecuted for generating such imagery, like the headline implies.
todd_bonzalez@lemm.ee
on 22 May 2024 18:15
collapse
First of all, it’s absolutely crazy to link to a 6 month old thread just to complain that you go downvoted in it. You’re pretty clearly letting this site get under your skin if you’re still hanging onto these downvotes.
Second, none of your 6 responses in that thread are logical, rational responses. You basically just assert that things that you find offensive enough should be illegal, and then just type in all caps at everyone who explains to you that this isn’t good logic.
The only way we can consider child porn prohibition constitutional is to interpret it as a protection of victims. Since both the production and distribution of child porn hurt the children forced into it, we ban it outright, not because it is obscene, but because it does real damage. This fits the logic of many other forms of non-protected speech, such as the classic “shouting ‘fire’ in a crowded theatre” example, where those hurt in the inevitable panic are victims.
Expanding the definition of child porn to include fully fictitious depictions, such as lolicon or AI porn, betrays this logic because there are no actual victims. This prohibition is rooted entirely in the perceived obscenity of the material, which is completely unconstitutional. We should never ban something because it is offensive, we should only ban it when it does real harm to actual victims.
I would argue that rape and snuff film should be illegal for the same reason.
The reason people disagree with you so strongly isn’t because they think AI generated pedo content is “art” in the sense that we appreciate it and defend it. We just strongly oppose your insistence that we should enforce obscenity laws. This logic is the same logic used as a cudgel against many other issues, including LGBTQ rights, as it basically argues that sexually disagreeable ideas should be treated as a criminal issue.
I think we all agree that AI pedo content is gross, and the people who make it and consume it are sick. But nobody is with you on the idea that drawings and computer renderings should land anyone in prison.
sugartits@lemmy.world
on 22 May 2024 19:13
collapse
First of all, it’s absolutely crazy to link to a 6 month old thread just to complain that you go downvoted in it. You’re pretty clearly letting this site get under your skin if you’re still hanging onto these downvotes.
No, I just… Remembered the thread? Wasn’t difficult to remember it. Took me a minute to find it.
This may surprise you but CP isn’t something I discuss very often.
I don’t lose sleep over people defending CP as “art”, nor did it get under my skin. I just think these are fucking idiots and are for some baffling reason trying to defend the indefensible and go about my day. I’m not going to do anything about it, but I’m sure glad I don’t have such dumb comments linked to a public account with my IP address logged somewhere…
I just raised it to make my point.
I didn’t bother reading the rest of your essay. It’s pretty clear from the first paragraph where you’re going to land.
This is tough, the goal should be to reduce child abuse. It’s unknown if AI generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children while for others it may satisfy their urges so they don’t abuse children. Like everything else AI, we won’t know the real impact for many years.
LadyAutumn@lemmy.blahaj.zone
on 21 May 2024 13:04
collapse
How do you think they train models to generate CSAM?
Some of y’all need to look up what a LoRA is
Dkarma@lemmy.world
on 21 May 2024 13:14
nextcollapse
Lol you don’t need to train it ON CSAM to generate CSAM. Get a clue.
LadyAutumn@lemmy.blahaj.zone
on 21 May 2024 13:37
nextcollapse
It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of people creating AI-generated child pornography?
The training doesn’t use csam, 0% chance big tech would use that in their dataset. The models are somewhat able to link concept like red and car, even if it had never seen a red car before.
AdrianTheFrog@lemmy.world
on 21 May 2024 19:37
collapse
Well, with models like SD at least, the datasets are large enough and the employees are few enough that it is impossible to have a human filter every image. They scrape them from the web and try to filter with AI, but there is still a chance of bad images getting through. This is why most companies install filters after the model as well as in the training process.
DarkThoughts@fedia.io
on 21 May 2024 20:24
collapse
You make it sound like it is so easy to even find such content on the www. The point is, they do not need to be trained on such material. They are trained on regular kids, so they know their sizes, faces, etc. They're trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don't need to specifically train a model on nude children to generate nude children.
barsquid@lemmy.world
on 21 May 2024 16:37
collapse
I don’t know if we can say for certain it needs to be in the dataset, but I do wonder how many of the other models used to create CSAM are also trained on CSAM.
DarkThoughts@fedia.io
on 21 May 2024 20:19
collapse
I suggest you actually download stable diffusion and try for yourself because it's clear that you don't have any clue what you're talking about. You can already make tiny people, shaved, genitals, flat chests, child like faces, etc. etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.
crazyminner@lemmy.ml
on 21 May 2024 13:59
nextcollapse
I had an idea when these first AI image generators started gaining traction. Flood the CSAM market with AI generated images( good enough that you can’t tell them apart.) In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.
Most people downvote the idea on their gut reaction tho.
Looks like they might do it on their own.
Itwasthegoat@lemmy.world
on 21 May 2024 14:27
nextcollapse
My concern is why would it put them out of business? If we just look at legal porn, there are already huge amounts created, and the market is still there for new content to be created constantly. AI porn hasn’t noticeably decreased the amount produced.
Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.
crazyminner@lemmy.ml
on 21 May 2024 15:17
collapse
The market is slightly different tho. Most CSAM is images; with porn there’s a lot of video and images.
It’s also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to a point that it isn’t worth it.
DarkThoughts@fedia.io
on 21 May 2024 20:16
nextcollapse
It's such an emotional topic that people lose all rationality.
I remember the Reddit arguments in the comment sections about pedos, already equalizing the term with actual child rapists, while others would argue to differentiate because the former didn't do anything wrong and shouldn't be stigmatized for what's going on in their heads but rather offered help to cope with it. The replies are typically accusations of those people making excuses for actual sexual abusers.
I always had the standpoint that I do not really care about people's fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy & getting their kicks from fictional stuff then I see that as better than using actual real life material, or even getting some hands on experiences, which all would involve actual real victims.
And I think that should be generally the goal here, no? Be it pedos, sadists, sociopaths, whatever. In the end it should be not about them, but saving potential victims. But people rather throw around accusations and become all hysterical to paint themselves sitting on their moral high horse (ironically typically also calling for things like executions or castrations).
Cupcake1972@mander.xyz
on 22 May 2024 07:26
collapse
Yeah, exact same feelings here. If there is no victim then who exactly is harmed?
Glass0448@lemmy.today
on 21 May 2024 21:51
collapse
It would be illegal in the United States. Artistic depictions of CSAM are illegal under the PROTECT act 2003.
TheGrandNagus@lemmy.world
on 21 May 2024 22:03
collapse
And yet it’s out there in droves on mainstream sites, completely without issue. Drawings and animations are pretty unpoliced.
Ibaudia@lemmy.world
on 21 May 2024 14:29
nextcollapse
Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.
SeattleRain@lemmy.world
on 21 May 2024 14:38
nextcollapse
America has some of the most militant anti-pedophilic culture in the world, yet far and away the highest rates of child sexual assault.
I think AI is going to reveal just how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial scale victimization, yet you won’t find a 1/10th of the righteous indignation against organized religions, where there is just as much evidence it is happening, as you will regarding one person producing images that don’t actually hurt anyone.
It’s pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims for weaponized politicization (it’s next to impossible to convincingly fight off pedo accusations if you’re being mobbed) and aren’t actually interested in fighting pedophilia.
Most states will let grown men marry children as young as 14. There is a special carve out for Christian pedophiles.
ricecake@sh.itjust.works
on 21 May 2024 17:04
collapse
Fortunately most instances are in the category of a 17 year old to an 18 year old, and require parental consent and some manner of judicial approval, but the rates of “not that” are still much higher than one would want.
~300k in a 20 year window total, 74% of the older partner being 20 or younger, and 95% of the younger partner being 16 or 17, with only 14% accounting for both partners being under 18.
There’s still no reason for it in any case, and I’m glad to live in one of the states that said “nah, never needed it.”
UnpluggedFridge@lemmy.world
on 21 May 2024 15:46
nextcollapse
These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse. Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
yamanii@lemmy.world
on 21 May 2024 17:49
nextcollapse
partly because they are obvious fictions
That’s it actually, all sites that allow it like danbooru, gelbooru, pixiv, etc. Have a clause against photo realistic content and they will remove it.
Corkyskog@sh.itjust.works
on 21 May 2024 17:58
nextcollapse
It comes back to distribution for me. If they are generating the stuff for themselves, gross, but I don’t see how it can really be illegal. But if you’re distributing them, how do we know they’re not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators’ mental health, I don’t know. I really don’t have an answer. I don’t know how they make it illegal, but it really feels like distribution should be.
Glass0448@lemmy.today
on 21 May 2024 21:40
nextcollapse
Currently, we do not outlaw written depictions nor drawings of child sexual abuse
KillingTimeItself@lemmy.dbzer0.com
on 21 May 2024 22:27
collapse
for some reason the US seems to hold a weird position on this one. I don’t really understand it.
It’s written to be illegal, but if you look at prosecution cases, i think there have been only a handful of charged ones. The prominent ones also included relevant previous offenses, or worse.
It’s also interesting when you consider that there are almost definitely large image boards hosted in the US that host what could be constituted as “cartoon CSAM” notably e621, i’d have to verify their hosting location, but i believe they’re in the US. And so far i don’t believe they’ve ever had any issues with it. And i’m sure there are other good examples as well.
I suppose you could argue they’re exempt on the publisher rules. But these sites don’t moderate against these images, generally. And i feel like this would be the rare exception where it wouldn’t be applicable.
The law is fucking weird dude. There is a massive disconnect between what we should be seeing, and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, being a proxy offense.
Glass0448@lemmy.today
on 22 May 2024 13:08
collapse
It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.
There is a massive disconnect between what we should be seeing, and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, being a proxy offense.
This can be attributed to no proper funding of CSAM enforcement. Pedos get picked up if they become an active embarrassment like the article dude. Otherwise all the money is just spent on the database getting bigger and keeping the lights on. Which works for congress. A public pedo gets nailed to the wall because of the database, the spooky spectre of the pedo out for your kids remains, vote for me please…
KillingTimeItself@lemmy.dbzer0.com
on 22 May 2024 16:58
collapse
It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.
ah that could be a possibility as well. Just ensuring reasonable flexibility in prosecution so you can be sure of what you get.
nucleative@lemmy.world
on 22 May 2024 04:39
nextcollapse
Well thought-out and articulated opinion, thanks for sharing.
If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.
When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either, I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.
Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.
To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.
Theharpyeagle@lemmy.world
on 22 May 2024 09:03
collapse
It feels incredibly gross to just say “generated CSAM is a-ok, grab your hog and go nuts”, but I can’t really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that’s a slippery slope that leads to “video games cause school shootings” type of logic.
I don’t know, it’s a very tough thing to untangle. I guess I’d just want to know if someone was doing that so I could stay far, far away from them.
peanuts4life@lemmy.blahaj.zone
on 21 May 2024 16:37
nextcollapse
It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut and dry, “guy used stable diffusion wrong” case. He was distributing it and grooming a kid.
The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use deliberately tuned to provide abstract, “artistic” styles, but they can generate semi realistic images.
Now, let’s say a criminal group uses novelai to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of novelai’s userbase.
Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW?” Or “young man, no facial body hair, naked, NSFW?”
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and throws the whole enterprise into question.
retrieval4558@mander.xyz
on 21 May 2024 17:17
nextcollapse
Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI?
I’ll throw that baby out with the bathwater to be honest.
Duamerthrax@lemmy.world
on 21 May 2024 19:29
collapse
Simulated crimes aren’t crimes. Would you arrest every couple that finds healthy ways to simulate rape fetishes? Would you arrest every person who watches Fast and The Furious or The Godfather?
If no one is being hurt, if no real CSAM is being fed into the model, if no pornographic images are being sent to minors, it shouldn’t be a crime. Just because it makes you uncomfortable doesn’t make it immoral.
Ookami38@sh.itjust.works
on 21 May 2024 19:59
nextcollapse
Or, ya know, everyone who ever wanted to decapitate those stupid fucking Skyrim children. Crime requires damaged parties, and with this (idealized case, not the specific one in the article) there is none.
DarkThoughts@fedia.io
on 21 May 2024 20:05
collapse
Those were demon children from hell (with like 2 exceptions maybe). It was a crime by Bethesda to make them invulnerable / protected by default.
helpImTrappedOnline@lemmy.world
on 21 May 2024 20:07
nextcollapse
Simulated crimes aren’t crimes.
If they were, any one who’s played games is fucked. I’m confident everyone who has played went on a total rampage murdering the townfolk, pillaging their houses and blowing everything up…in Minecraft.
Glass0448@lemmy.today
on 21 May 2024 21:48
nextcollapse
Simulated crimes aren’t crimes.
Artistic CSAM is definitely a crime in the United States. PROTECT act of 2003.
Duamerthrax@lemmy.world
on 21 May 2024 22:17
collapse
People have only gotten in trouble for that when they’re already in trouble for real CSAM. I’m not terribly interested in sticking up for actual CSAM scum.
Duamerthrax@lemmy.world
on 22 May 2024 04:04
nextcollapse
If no real child is involved in any way, who is hurt?
Meansalladknifehands@lemm.ee
on 22 May 2024 05:36
nextcollapse
For now. If you read the article, it states that he shared the pictures to form like-minded groups where they got emboldened and could support each other and legitimize/normalize their perverted thoughts. How about no thanks.
Obonga@feddit.de
on 22 May 2024 06:33
nextcollapse
wrong comment chain. people weren’t talking about the criminal shithead the article is about, but about the scenario of someone using (non-CSAM-trained) models to create questionable content (thus it is implied that there would be no victim).
we all know that there are bad actors out there, just like there are rapists and murderers. still we don’t condemn true crime lovers or rape fetishists until they commit a crime. we could do the same with pedos, but somehow we believe hating them into the shadows will somehow stop them from doing criminal stuff?
Meansalladknifehands@lemm.ee
on 22 May 2024 12:49
collapse
And I’m using the article as an example that it doesn’t just stop at “victimless” images, because they are not fucking normal people. They are mentally sick; they are sexually turned on not by the minor but by abusing the minor.
In what world would a person like that stop at looking at images? They actively search for victims and create groups where they share and discuss abusing minors.
Yes dude, they are fucking dangerous bro, life is not fair. You wouldn’t say the same shit if someone close to you was a victim.
Duamerthrax@lemmy.world
on 22 May 2024 08:40
collapse
Maybe you should focus your energy on normalized things that actually affect kids, like banning full contact sports that cause CTE.
Meansalladknifehands@lemm.ee
on 22 May 2024 12:38
collapse
What do you mean focus your energy? How much energy do you think I spend discussing perverts? And why should I spend my time discussing contact sports? It sounds like you are deflecting.
Pedophiles get turned on by abusing minors; they are mentally sick. It’s not like it’s a normal sexual desire, and they will never stop at watching “victimless” images. Fuck pedophiles, they don’t deserve shit, and I hope they eat shit the rest of their lives.
Duamerthrax@lemmy.world
on 22 May 2024 15:15
nextcollapse
they will never stop at watching “victimless” images.
How is that different from any other dangerous fetish? Should we be arresting adult couples that do Age Play? All the BDSM communities? Do we even want to bring up the Vore art communities? Victimless is victimless.
Meansalladknifehands@lemm.ee
on 22 May 2024 17:47
collapse
No, because that’s two consenting adults; otherwise it’s illegal. Wtf is vore art? Not going to google that. How do you know it’s victimless? Like I said, they are turned on by abusing minors, and I don’t know how else I can put it; I can’t be more clear.
Let me ask you this: do you think pedophiles care about their victims? If yes, then I want to hear why you think that. If no, why are we even having this argument?
Duamerthrax@lemmy.world
on 22 May 2024 23:11
collapse
Your ultimatum is flawed. Do you believe humans have impulse control? Yes or No.
Meansalladknifehands@lemm.ee
on 23 May 2024 07:32
collapse
I haven’t given you an ultimatum, I asked you a question, and you can answer it any way you want. Do or can pedophiles feel remorse for their victims? Are there pedophiles who feel remorse for their victims but still abuse children?
But let me say this again: pedophiles have no remorse towards their victims; they get turned on by it. I’m trying to tell you it’s not just a sexual desire. They like the abuse part of it, abusing someone helpless; that is why they are turned on by abusing children.
Bro I can’t continue this. You’re not willing to understand it’s not about the kid, it’s about abusing the kid; that is what they want. And if you can’t register that it’s not just a sexual desire, then we can agree to disagree.
Wes4Humanity@lemm.ee
on 23 May 2024 12:27
collapse
You’re correct, pedophilia is a mental illness. A very tragic one since there is no hope and no cure. They can’t even ask for help because everyone will automatically assume they are also child molesters. Which is what you’re describing, but not all child molesters are pedophiles, and most pedophiles will never become child molesters… Like you said, some people just get off on exploiting the power dynamic and aren’t necessarily sexually attracted to children. Those people are the real danger.
PotatoKat@lemmy.world
on 22 May 2024 09:10
collapse
Real children are in the training data regardless of whether there is CSAM in it (and there’s a high chance there is, considering how they get their training data), so real children are involved.
Duamerthrax@lemmy.world
on 22 May 2024 13:31
collapse
I’ve already stated that I do not support using images of real children in the models. Even if the images are safe/legal, it’s a violation of privacy.
Nobody is arguing that it’s moral. That’s not the line for government intervention. If it was then the entire private banking system would be in prison.
They’ve actually issued warnings and guidance, and the law itself is pretty concise regarding what’s allowed.
(8) “child pornography” means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where-
(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(11) the term “indistinguishable” used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.
If you’re going to be doing grey area things, you should do more than the five minutes of searching I did to find those, honestly.
It was basically born out of a supreme Court case in the early 2000s regarding an earlier version of the law that went much further and banned anything that “appeared to be” or “was presented as” sexual content involving minors, regardless of context, and could have plausibly been used against young looking adult models, artistically significant paintings, or things like Romeo and Juliet, which are neither explicit nor vulgar but could be presented as involving child sexual activity. (Juliet’s 14 and it’s clearly labeled as a love story).
After the relevant provisions were struck down, a new law was passed that factored in the justices rationale and commentary about what would be acceptable and gave us our current system of “it has to have some redeeming value, or not involve actual children and plausibly not look like it involves actual children”.
Glass0448@lemmy.today
on 21 May 2024 21:47
collapse
The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
The Protect Act of 2003 means that any artistic depiction of CSAM is illegal. The guidance is pretty clear: the FBI is gonna raid your house…eventually. We still haven’t properly funded the anti-CSAM departments.
badbytes@lemmy.world
on 21 May 2024 16:58
nextcollapse
Breaking news: Paint made illegal, cause some moron painted something stupid.
They lock it up because it’s frequently stolen. (Because of its use in graffiti, but still.)
cley_faye@lemmy.world
on 21 May 2024 19:26
nextcollapse
I’d usually agree with you, but it seems he sent them to an actual minor for “reasons”.
Glass0448@lemmy.today
on 21 May 2024 21:50
collapse
Asked whether more funding will be provided for the anti-paint enforcement divisions: it’s such a big backlog, we’d rather just wait for somebody to piss off a politician to focus our resources.
helpImTrappedOnline@lemmy.world
on 21 May 2024 17:54
nextcollapse
The headline/title needs to be extended to include the rest of the sentence
"and then sent them to a minor"
Yes, this sicko needs to be punished.
Any attempt to make him the victim of “the big bad government” is manipulative at best.
Edit: made the quote bigger for better visibility.
cley_faye@lemmy.world
on 21 May 2024 19:25
nextcollapse
That’s a very important distinction. While the first part is, to put it lightly, bad, I don’t really care what people do on their own. Getting real people involved, and minor at that? Big no-no.
DarkThoughts@fedia.io
on 21 May 2024 20:02
nextcollapse
All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.
helpImTrappedOnline@lemmy.world
on 21 May 2024 20:20
collapse
Sure is. I report the ones I come across as clickbait or misleading title, explaining the parts left out…such as this one where those 7 words change the story completely.
Whoever made that headline should feel ashamed for victimizing a groomer.
Glass0448@lemmy.today
on 21 May 2024 21:37
nextcollapse
Cartoon CSAM is illegal in the United States. Pretty sure the judges will throw his images under the same ruling.
Madison420@lemmy.world
on 21 May 2024 22:08
nextcollapse
It won’t. They’ll get them for the actual crime not the thought crime that’s been nerfed to oblivion.
ameancow@lemmy.world
on 22 May 2024 20:09
collapse
Based on the blacklists that one has to fire up before browsing just about any large anime/erotica site, I am guessing that these “laws” are not enforced, because they are flimsy laws to begin with. Reading the stipulations for what constitutes a crime is just a hotbed for getting an entire case tossed out of court. I doubt any prosecutors would lean hard on possession of art unless it was being used in another crime.
I’d be torn on the idea of AI generating CP, if it were only that. On one hand, if it helps them calm the urges while no one is getting hurt, all the better. But on the other hand it might cause them not to seek help; then again, the problem is already stigmatized severely enough that they are most likely not seeking help anyway.
But sending that stuff to a minor. Big problem.
Glass0448@lemmy.today
on 21 May 2024 21:43
nextcollapse
OMG. Every other post is saying they’re disgusted about the images part, but it’s a grey area, but he’s definitely in trouble for contacting a minor.
Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.
Madison420@lemmy.world
on 21 May 2024 22:07
nextcollapse
Yeah, that’s toothless. They decided there is no particular way to age a cartoon; they could be from another planet that simply seems younger but is in actuality older.
It’s bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I’d rather they stay active doing that than get active actually abusing children.
Outlaw shibari and I guarantee you’d have multiple serial killers btk-ing some unlucky souls.
sugar_in_your_tea@sh.itjust.works
on 21 May 2024 23:11
nextcollapse
Exactly. If you can’t name a victim, it shouldn’t be illegal.
The problem with AI CSAM generation is that the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.
So is it right to be using images of real children to train these AI? You’d be hard-pressed to find someone who thinks that’s okay.
Eezyville@sh.itjust.works
on 22 May 2024 00:41
nextcollapse
You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.
Are there any guarantees that harmful images weren’t used in these AI models? Based on how image generation works now, it’s very likely that harmful images were used in training.
And if a person is using a model based on harmful training data, they should be held responsible.
However, the AI owner/trainer has even more responsibility in perpetuating harm to children and should be prosecuted appropriately.
Eezyville@sh.itjust.works
on 22 May 2024 01:58
nextcollapse
And if a person is using a model based on harmful training data, they should be held responsible.
I will have to disagree with you for several reasons.
You are still making assumptions about a system you know absolutely nothing about.
By your logic, if something is born from something that caused others suffering (in this example, AI trained on CSAM), the users of that product should be held responsible for the crime committed to create it.
Does that apply to every product/result created from human suffering, or just the things you don’t like?
Will you apply that logic to the prosperity of Western Nations built on the suffering of indigenous and enslaved people? Should everyone who benefit from western prosperity be held responsible for the crimes committed against those people?
What about medicine? Two examples are The Tuskegee Syphilis Study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from these two examples but crimes were committed against the people involved. Should every patient from a cancer program that benefited from Ms. Lacks’ cancer cells also be subject to pay compensation to her family? The doctors that used her cells without permission didn’t.
Should we also talk about the advances in medicine found by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program paving the way to all the benefits we get from space technology.
aceshigh@lemmy.world
on 22 May 2024 04:11
nextcollapse
The topic that you’re choosing to focus on is really interesting. What are your values?
Eezyville@sh.itjust.works
on 22 May 2024 12:30
collapse
My values are none of your business. Try attacking my arguments instead of looking for something about me to attack.
aceshigh@lemmy.world
on 22 May 2024 13:59
collapse
At the root of it, beliefs aren’t based on logic; they’re based on your value system. So why dance around the actual topic?
PotatoKat@lemmy.world
on 22 May 2024 09:01
collapse
The difference between the things you’re listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least good came out of them beyond just giving people sexual gratification directly from the harm of others.
aesthelete@lemmy.world
on 22 May 2024 06:32
collapse
Are there any guarantees that harmful images weren’t used in these AI models?
Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.
There’s no guarantee that most regular porn sites don’t contain csam or other exploitative imagery and video (sex trafficking victims). There’s absolutely zero chance that there’s any kind of guarantee.
sugar_in_your_tea@sh.itjust.works
on 22 May 2024 00:42
nextcollapse
If the images were generated from CSAM, then there’s a victim. If they weren’t, there’s no victim.
this_1_is_mine@lemmy.world
on 22 May 2024 03:15
nextcollapse
I hate the “no victim” argument.
sugar_in_your_tea@sh.itjust.works
on 22 May 2024 03:26
collapse
Why? Can you elaborate?
PotatoKat@lemmy.world
on 22 May 2024 09:03
collapse
The images were created using photos of real children, even if those photos weren’t CSAM (which can’t be guaranteed). So the victims are the children whose photos were used to generate the CSAM.
dev_null@lemmy.ml
on 22 May 2024 11:15
nextcollapse
Sure, but isn’t the the perpetrator the company that trained the model without their permission? If a doctor saves someone’s life using knowledge based on nazi medical experiments, then surely the doctor isn’t responsible for the crimes?
PotatoKat@lemmy.world
on 22 May 2024 12:05
collapse
So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?
Your analogy doesn’t match the premise. (Again, assuming there is no CSAM in the training data, which is unlikely.) The training data is not the problem; it is how the data is used. Using those same pictures to generate photos of medieval kids eating ice cream with their family is fine. Using them to make CSAM is not.
It would be more like the doctor using the nazi experiments to do some other fucked-up experiments.
Sorry, my app glitched out and posted my comment multiple times, and got me banned for spamming…
Now that I got unbanned I can reply.
So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?
In this scenario no, because the crime was in how someone used the car, not in the creation of the car. The guy in this story did commit a crime, but for other reasons. I’m just saying that if you are claiming that children in the training data are victims of some crime, then that crime was committed when training the model. They obviously didn’t agree for their photos to be used that way, and most likely didn’t agree for their photos to be used for AI training at all. So by the time this guy came around, they were already victims, and would still be victims if he didn’t.
PotatoKat@lemmy.world
on 22 May 2024 19:54
collapse
I would argue that the person using the model for that purpose is further victimizing the children. Kinda like how with revenge porn the worst perpetrator is the person who uploaded the content, but every person viewing it from there is furthering the victimization. It is mentally damaging for the victim of revenge porn to know that their intimate videos are being seen/sought out.
sugar_in_your_tea@sh.itjust.works
on 22 May 2024 14:45
collapse
Let’s do a thought experiment, and I’d like you to tell me at what point a victim was introduced:
1. I legally acquire pictures of a child, fully clothed and everything
2. I draw a picture based on those legal pictures, but the subject is nude or doing sexually explicit things
3. I keep the picture for my own personal use and don’t distribute it
Or with AI:
1. I legally acquire pictures of children, fully clothed and everything
2. I legally acquire pictures of nude adults, some doing sexually explicit things
3. I train an AI on a mix of 1 & 2
4. I generate images of nude children, some of them doing sexually explicit things
5. I keep the pictures for my own personal use and don’t distribute any of them
6. I distribute my model, using the right to distribute from the legal acquisition of those images
At what point did my actions victimize someone?
If I distributed those images and those images resemble a real person, then that real person is potentially a victim.
I will say someone who does this is creepy and I don’t want them anywhere near children (especially mine, and yes, I have kids), but I don’t think it should be illegal, provided the source material is legal. But as soon as I distribute it, there absolutely could be a victim. Being creepy shouldn’t be a crime.
PotatoKat@lemmy.world
on 22 May 2024 19:59
collapse
I think it should be illegal to make porn of a person without their permission, regardless of whether it was shared or not. Imagine the person it is based on finds out someone is doing that. That causes mental strain on the person, just like how revenge porn doesn’t actively harm a person but causes mental strain (both the initial upload and continued use of it). For scenario 1 it would be at step 2, when the porn is made of the person. For scenario 2 it would be a mix between steps 3 and 4.
sugar_in_your_tea@sh.itjust.works
on 22 May 2024 23:00
collapse
Thanks for sharing! I’m going to disagree with pretty much everything, so please stop reading here if you’re not interested.
Imagine the person it is based off of finds out someone is doing that. That causes mental strain on the person…
Sure, and there are plenty of things that can cause mental strain, but that doesn’t make those things illegal. For example:
public display of affection - could cause mental strain for people who recently broke up or haven’t found love
drug use - recovering addicts could experience mental strain
finding out someone is masturbating to a picture of you
And so on. Those things aren’t illegal, but someone could experience mental strain from them. Experiencing that doesn’t make you a victim, it just means you experience it.
revenge porn doesn’t actively harm a person but causes mental strain
Revenge porn damages someone’s reputation, at the very least, which is a large part of why it’s illegal.
Someone keeping those images for private use doesn’t cause harm, therefore it shouldn’t be illegal.
Someone doing something creepy for their own use should never be illegal.
PotatoKat@lemmy.world
on 23 May 2024 00:16
collapse
Thanks for sharing! I’m going to disagree with pretty much everything, so please stop reading here if you’re not interested.
I’m not one to stop because of disagreement. You’re in good faith and that’s all that matters imo
Revenge porn damages someone’s reputation, at the very least, which is a large part of why it’s illegal.
Someone keeping those images for private use doesn’t cause harm, therefore it shouldn’t be illegal.
I believe consent is a larger factor. The person who made it consented to have their photos/videos seen by that person but did not consent to them sharing it.
That’s why it’s not illegal to call someone a slut (even though that also damages reputation)
Someone doing something creepy for their own use should never be illegal.
What if the recording was made without the person’s consent? Say someone records their one-night stand without the other person’s knowledge but doesn’t share it with anyone. Should that be illegal?
sugar_in_your_tea@sh.itjust.works
on 23 May 2024 00:37
collapse
Consent is certainly important, but they don’t need your consent if the image was obtained legally and thus subject to fair use, or if you gave them permission in the past.
That’s why it’s not illegal to call someone a slut (even though that also damages reputation)
It can be, if that constitutes defamation or libel. A passing statement wouldn’t, but a post on a popular website absolutely could. It all comes down to the damages that (false) statement caused.
What if the recording was made without the person’s consent. Say someone records their one night stand without the other person’s knowledge but they don’t share it with anyone. Should that be illegal?
That depends on whether there was a reasonable expectation of privacy. If it’s in public, there’s no reasonable expectation of privacy.
In general, I’d say intimacy likely occurs somewhere with a reasonable expectation of privacy, at which point it would come down to consent (whether implied or explicit).
PotatoKat@lemmy.world
on 23 May 2024 01:25
collapse
It can be, if that constitutes defamation or libel. A passing statement wouldn’t, but a post on a popular website absolutely could. It all comes down to the damages that (false) statement caused.
If the person is a slut it wouldn’t be libel, but it would still damage their reputation. The person being a slut is true, but calling them one still damages their reputation. If you release a home-made video of a pornstar it would still be illegal, even though it’s not something that would damage their reputation.
The reason for the illegality is the lack of consent, not the reputation damage.
That depends on whether there was a reasonable expectation of privacy. If it’s in public, there’s no reasonable expectation of privacy.
Even in a one-party-consent state, recording someone while you are having intercourse with them is illegal without their consent, because we make exceptions for especially sensitive subjects such as sex.
To go along with that, I also believe that people who uploaded photos of themselves/their children did not consent to having their photos used to make sexual content. If they did, it would be another matter to me entirely.
Edit: I also would like to say (and I really am sorry for bringing them into this), but from what you said you think it would be okay (not socially acceptable, but okay/fine) for someone to take pictures of your kids while they’re at the park and use them to make porn. Really think about that. Is that something you think should be allowed? Imagine someone taking pictures of them at Walmart, and when you ask what they’re doing they straight up tell you “I like how they look, I’m going to add them to my training data to make porn, don’t worry though, I’m not sharing it with anyone,” and you could do jack shit about it without facing legal consequences yourself. You think that is okay?
sugar_in_your_tea@sh.itjust.works
on 23 May 2024 02:52
collapse
If the person is a slut it wouldn’t be libel but it would still damage reputation
Sure, in which case the person wouldn’t legally be a victim. It’s completely legal to tell the truth.
But that strays a bit from the point. Making fake porn of someone falsely represents that person’s character, and is thus illegal, but only if it actually causes damage to their reputation (i.e. you distribute it). Or at least that’s the line of argumentation I think someone would use in states where “revenge porn” isn’t explicitly illegal.
Even if the person is a porn star, the damage is that the porn is coming from somewhere other than the approved channels, thus the damages. Or maybe it’s lost sales. Regardless, there are actual, articulable damages.
The reason for the illegality is the lack of consent not the reputation damage.
Maybe in states where it’s expressly illegal. I’m talking more from a theoretical standpoint where there isn’t an explicit law against it.
If there’s no explicit law, the standard is defamation/libel or violation of a reasonable expectation of privacy.
we make exceptions for especially sensitive subjects such as sex.
That’s the reasonable expectation of privacy standard (that applies inside houses when in bedrooms, bathrooms, etc, even if it’s not your house). If you’re doing it in public, there’s no reasonable expectation of privacy, so I think a court would consider filming in that context to be legal.
Then again, this could certainly vary by jurisdiction.
I also believe that people who uploaded photos of themselves/their children did not consent to having their photos used to make sexual content
They don’t need to consent for any use. If it’s made available for personal use, then any individual can use it for personal use, even for sexual content. As long as they don’t distribute it, they’re fine to use it as they please.
If you want control over how how content is used, don’t make it available for personal use.
but from what you said you think it would be okay
Yes. I certainly don’t want them to do that, but I really don’t want to live in a society with the surveillance necessary to prosecute such a law. Someone being creepy with pictures of my kids is disgusting, but it honestly doesn’t hurt me or my kids in any way, provided they don’t share those images with anyone.
So yes, I think it’s a necessary evil to have the kinds of privacy protections I think are valuable to have in a free society. Freedom means letting people do creepy things that don’t hurt anyone else.
PotatoKat@lemmy.world
on 23 May 2024 03:21
collapse
Even if the person is a porn star, the damage is that the porn is coming from somewhere other than the approved channels, thus the damages
The damages would be the mental harm done to the victim. Most porn stars have content available for free so that wouldn’t be a reason for damages
That’s the reasonable expectation of privacy standard (that applies inside houses when in bedrooms, bathrooms, etc, even if it’s not your house). If you’re doing it in public, there’s no reasonable expectation of privacy, so I think a court would consider filming in that context to be legal.
The expectation of privacy doesn’t apply in one-party-consent states, but you still can’t record someone’s sexual activities without their consent
If you want control over how how content is used, don’t make it available for personal use.
I don’t think people who uploaded pictures on Facebook consider that making it available for personal use
I really don’t want to live in a society with the surveillance necessary to prosecute such a law.
Did I say anything about surveillance? Just because something is made illegal doesn’t mean it’s actively pursued; it just means that if someone gets caught doing it or gets reported doing it, they can be stopped. Like you’d be able to stop the person from doing that to your children. Or if someone gets their house raided for something else, they can be charged for it. Not every person who has real CSAM creates it or shares it; many times they just get caught on another charge and then it gets found. Or the Geek Squad worker sees it on their computer and reports them.
It would give people avenues to stop others from using photos of their children in such a way. You wouldn’t need any extra surveillance
Freedom means letting people do creepy things that don’t hurt anyone else.
Do you think it’s okay for someone to have real csam? Let’s say the person who made it was properly prosecuted and the person who has the images/videos don’t share it, they just have it to use. Do you think that’s okay?
sugar_in_your_tea@sh.itjust.works
on 26 May 2024 14:27
collapse
I don’t think people who uploaded pictures on Facebook consider that making it available for personal use.
Then they shouldn’t have uploaded it to Facebook and made it publicly accessible.
Just because something is made illegal doesn’t make it actively pursued, it just makes it so if someone gets caught doing it or gets reported doing it they can be stopped.
It’s the next logical step for the pearl clutchers and amounts to “thought crime.”
These people aren’t doing anything to my children, they’re making their own images from images they have a right to use. It’s super creepy and I’d probably pick a fight with them if I found out, but I don’t think it should be illegal if there’s no victim.
The Geek Squad worker could still report these people, and it would be the prosecution’s job to prove that the images were acquired or created in an illegal way.
Do you think it’s okay for someone to have real csam?
No, because that increases demand for child abuse. Those pictures are created by abuse of children, and getting access to them encourages more child abuse to produce more content.
Possession itself isn’t the problem, the problem is how they’re produced.
I feel similarly about recreational drugs. Buying from dealers is bad because it encourages smuggling and everything related to it. I have no problem with weed or whatever; I have problems with the cartels. At least with drugs there’s a simple solution: legalize it. I likewise want a legal avenue for these people who would otherwise participate in child abuse to not abuse children. Them looking at creepy AI content generated from pictures of my child doesn’t hurt my child; just don’t share those images or otherwise let me know about it.
PotatoKat@lemmy.world
on 27 May 2024 08:37
collapse
It’s the next logical step for the pearl clutchers and amounts to “thought crime.”
I seriously doubt they would create any more surveillance for that than there already is for real CSAM.
The geek squad worker could still report these people, and it would be the prosecution’s job to prove that they were acquired or created in an illegal way.
That would just make it harder to prosecute people for CSAM, since they will all claim their material was just AI. That would just end up helping child abusers get away with it.
Possession itself isn’t the problem, the problem is how they’re produced.
I think the production of generated CSAM is unethical because it still involves photos of children without their consent
No, because that increases demand for child abuse. Those pictures are created by abuse of children, and having getting access to them encourages for child abuse to produce more content.
There is evidence to suggest that viewing CSAM increases child-seeking behavior. So them viewing generated CSAM would most likely have the same, if not a similar, result. That would mean that even just having access to the materials would increase the likelihood of child abuse.
The survey was self reported so the reality is probably higher than the 42% cited from the study
I likewise want a legal avenue for these people who would otherwise participate in child abuse to not abuse children.
The best legal avenue for non-offending pedophiles to take is for them to find a psychologist that can help them work through their desires. Not to give them a thing that will make them want to offend even more.
sugar_in_your_tea@sh.itjust.works
on 27 May 2024 16:33
collapse
That would just make it harder to prosecute people for CSAM
That’s true, and an unfortunate part of preserving freedoms. That said, if someone is actually abusing children on the regular, police have a way of tracking that individual to catch them: investigations.
I wish police had to do them more often instead of leaving that job to the prosecution. If that means we need to pull officers away from other important duties like arresting black men for possessing a joint or pulling people over for speeding on an empty highway, I guess that’s what we have to do.
it still involves photos of children without their consent
It involves legally acquired images and is protected under “fair use” laws. You don’t need my permission to exercise your fair use rights, even if I think your use is disgusting. It’s not my business. But if you make it my business (i.e. you tell me), I may choose to assault you and hope the courts will side with me that they constitute “fighting words.”
Just because something is disgusting doesn’t make it illegal.
As for that article:
“This is really significant. We now have a peer-reviewed study to prove that watching [CSAM] can increase the risk of contact.”
It doesn’t prove anything, what it does is draw a correlation between people who search for CSAM on the dark web and are willing to answer a survey (a pretty niche group) and self-reported inclination to contact children. Correlation isn’t proof, it’s correlation.
That said, I don’t know if a better study could or should be conducted. Maybe survey people caught contacting children (sting operations) and those caught just distributing CSAM w/o child contact. We need to know the difference between those who progress to contact and those who don’t, and I don’t think this survey provides that.
find a psychologist that can help them work through their desire
I agree, and I think that should be widely accessible.
That said, I don’t think giving people a criminal record helps. If they need to be locked up to protect the public (i.e. there are actual victims), then let’s lock them up. But otherwise, we absolutely shouldn’t. Let’s make help available and push people toward getting that help.
deathbird@mander.xyz
on 22 May 2024 06:08
nextcollapse
the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.
First of all, not every image of a naked child is CSAM. This has actually been kind of a problem, with automated CSAM detection systems triggering false positives on non-sexual images and getting innocent people into trouble.
But also, AI systems can blend multiple elements together. They don’t need CSAM training material to create CSAM, just the individual elements crafted into a prompt sufficient to create the image while avoiding any safeguards.
PotatoKat@lemmy.world
on 22 May 2024 08:55
collapse
You ignored the second part of their post. Even if it didn’t use any CSAM, is it right to use pictures of real children to generate CSAM? I really don’t think it is.
deathbird@mander.xyz
on 24 May 2024 03:14
collapse
There are probably safeguards in place to prevent the creation of CSAM, just like there are for other illegal and offensive things, but determined people work around them.
ICastFist@programming.dev
on 22 May 2024 13:30
collapse
It has to somehow know what a naked minor looks like.
Not necessarily
You need to feed it CSAM
You don’t. You just need lists of other things, properly tagged. If you feed an AI a bunch of clothed adults and a bunch of naked adults, it will, in theory, “understand” the difference between being clothed and naked and create any of its clothed adults, naked.
With that initial set above, you feed it a bunch of clothed children. When you ask for a naked child, it will either produce a child head with naked adult body, or a “weird” naked child. It “understands” that adult and child are different things, that clothed and naked are different things, and tries to infer what “naked child” looks like from what it “knows”.
So is it right to be using images of real children to train these AI?
This is the real question and one I don’t know the answer to, because it will boil down to consent to being part of a training model, whether your own as an adult, or a child’s parent, much like how it works for stock photos and videos.
“I consent to having my likeness used for AI training models, except for any use that involves NSFW content” - Fair enough. Good luck enforcing that.
I think the challenge with Generative AI CSAM is the question of where the training data originated. There has to be some questionable data there.
scoobford@lemmy.zip
on 22 May 2024 04:29
nextcollapse
That would mean you need to enforce the law for whoever built the model. If the original creator has 100TB of cheese pizza, then they should be the one who gets arrested.
Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.
There is also the issue of determining whether a given image is real or AI. If AI were legal, prosecution would need to prove images are real and not AI, with the risk of letting real offenders go.
The need to ban AI CSAM is even clearer than cartoon CSAM.
Madison420@lemmy.world
on 22 May 2024 12:43
collapse
And in the process force non-abusers to seek their thrill with actual abuse. Good job, I’m sure the next generation of children will appreciate your prudish, factually inept effort. We’ve tried this with so much shit; prohibition doesn’t stop anything, it just creates a black market and an abusive power system to go with it.
ZILtoid1991@lemmy.world
on 22 May 2024 17:47
collapse
My main issue with generation is the ability of making it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, and it muddies the water. “I swear officer, I thought it was AI” would become the new “I swear officer, she said she was 18”.
Madison420@lemmy.world
on 22 May 2024 22:22
collapse
That is not an end user issue, that’s a dev issue. Can’t train on CSAM if it isn’t available, and as such it’s tacit admission of actual possession.
surewhynotlem@lemmy.world
on 21 May 2024 23:09
nextcollapse
Would Lisa Simpson be 8 years old, or 43 because the Simpsons started in 1989?
zbyte64@awful.systems
on 21 May 2024 23:27
collapse
Big brain PDF tells the judge it is okay because the person in the picture is now an adult.
You can say pedophile… that “pdf file” stuff is so corny and childish. Hey guys lets talk about a serious topic by calling it things like “pdf files” and “Graping”. Jfc
Why do people say “graping?” I’ve never heard that.
Please tell me it doesn’t have to do with “The Grapist” video that came out on early YouTube.
okiloki@feddit.de
on 22 May 2024 04:48
nextcollapse
To avoid censorship filters in social media, same with PDF files.
ICastFist@programming.dev
on 22 May 2024 12:21
collapse
Tiktok and Instagram are the main culprits, they’ll shadowban, or outright delist, any content that uses no-no words. Sex, rape, assault, drugs, die, suicide, it’s a rather big list
surewhynotlem@lemmy.world
on 22 May 2024 01:15
collapse
That’s the issue though. As far as I know it hasn’t been tested in court and it’s quite possible the law is useless and has no teeth.
With AI porn you can point to real victims whose unconsented pictures were used to train the models, and say that’s abuse. But when it’s just a drawing, who is the victim? Is it just a thought crime? Can we prosecute those?
dev_null@lemmy.ml
on 22 May 2024 11:22
nextcollapse
The discussion will never be resolved in your favour, if you shut down the discussion.
Maggoty@lemmy.world
on 22 May 2024 16:44
nextcollapse
Sure, and then some judge starts making subjective decisions on drawn/painted art that didn’t hurt anyone and suddenly people are getting hurt.
The justice system is supposed to protect society, not hurt people you don’t like.
ZILtoid1991@lemmy.world
on 22 May 2024 18:55
collapse
While I do think realistic stuff should be illegal, no question, with the loli/shota/whatever, you’re just opening a can of worms that could be applied to other things too, and some already did.
Regulators used the very same “normalizing certain sexual acts” argument to try and censor more extreme forms of porn and/or the sexual acts themselves, and partly succeeded in the UK. Sure, scat is gross; many like it exactly for that reason. One could even talk about the health risks too. Same with fisting, which is too extreme for many, supposedly extremely painful because many people’s only exposure to it was from Requiem for a Dream, and has some associated health risks. However, a lot of that is misrepresentation of the truth: scat isn’t that big of a health risk if you have a good immune system (the rest can be mitigated with precautions and moderation), and fisting isn’t inherently painful (source: me).
And the same is true about loli/shota. The terms aren’t just applied to actual underage characters, but also to the “short adults” common within the VTubing scene, many of whom are also shorter in real life (obligatory “of course not all”). Some of those other characters are also adults that have exaggerated, almost child-like physiques. Most of it, however, is still just some depiction of children, and otherwise I can understand why some want to abstain from even the “adult loli/shota” stuff. I remember when pubic hair removal was becoming mainstream, and many, like radical feminists, feared it would normalize pedophilia; I even got called a pedo by a pubic hair connoisseur for not really liking it. I also don’t really want to talk over victims of CSA, many of whom want it banned, many of whom want it legal.
As for normalizing: the greatest normalization is done by pedos getting into the fandom to recruit others and entertain the idea of a lower age of consent. For a long time, we threw these motherfuckers out of our community. But then 4chan happened, and suddenly these very same people just started screaming “it’s just an edgy joke bro”, so at one point the people trying to keep these creeps out of the anime community in general became vilified, and with gamergate and the culture wars hitting the scene, “gatekeeping the normies” became the priority, so these sick fucks became a feature, which created in the anime community
a nazi/pedo/weird gatekeeping free space,
and a space that doesn’t moralize about loli/shota.
I had a lot of connections to victims of CSA, most of them teens. None were groomed by loli/shota (everyone’s mileage will vary on it, likely different in the age of the internet), but by either some non-pornographic work featuring a teen girl and an older man (usually in a historic setting), or by the perpetrator likening a 25+ year old guy (often they lied that they were way younger) going out with a 14 year old girl to her parents’ age gap (I’m in Hungary, where that’s technically legal🤮). Usually a simple “that big an age gap isn’t okay at your age” talk did wonders, unless the only way for the girl to eat that day was to go out with that guy.
I thought cartoons/illustrations of that nature were only illegal in the UK (Coroners and Justices Act 2008) and Switzerland. TIL about the PROTECT Act.
ICastFist@programming.dev
on 22 May 2024 12:10
nextcollapse
Several countries prohibit any fictional depictions of child porn, whether drawn, written or otherwise. Wikipedia has an interesting list on that - en.wikipedia.org/…/Legality_of_child_pornography
Rayspekt@lemmy.world
on 22 May 2024 14:05
collapse
I wonder if there is significant migration happening into those countries where CSAM is legal.
ICastFist@programming.dev
on 22 May 2024 15:32
nextcollapse
Unlikely. Tourism, on the other hand…
ZILtoid1991@lemmy.world
on 22 May 2024 17:38
collapse
Most people instead take a trip to a place where underage sex workers are common; one can just have an external hard drive and/or a USB stick for that material, which they hide. “An”caps are actively trying to form their own countries, partly to legalize “recordings of crimes”, as they like to call them, if not outright to legalize child rape and child sex trafficking.
ZILtoid1991@lemmy.world
on 22 May 2024 17:33
collapse
The thing about the PROTECT Act is that it relies on the Miller test, which has obvious holes and largely depends on who is reviewing it. I have heard even the UK law has holes which can be exploited.
Deceptichum@sh.itjust.works
on 21 May 2024 08:12
nextcollapse
What an oddly written article.
Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”
They make it sound like the prompts are important and/or more important than the 13,000 images…
ricecake@sh.itjust.works
on 21 May 2024 11:57
nextcollapse
In many ways they are. The image generated from a prompt isn’t unique, and is actually semi random. It’s not entirely in the users control. The person could argue “I described what I like but I wasn’t asking it for children, and I didn’t think they were fake images of children” and based purely on the image it could be difficult to argue that the image is not only “child-like” but actually depicts a child.
The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
Glass0448@lemmy.today
on 21 May 2024 22:01
collapse
And also it’s an AI.
13k images before AI involved a human with Photoshop or a child doing fucked up shit.
13k images after AI is just forgetting to turn off the CSAM auto-generate button.
Having an AI generate 13,000 images does not even take 24 hours (depending on hardware and settings, of course).
mightyfoolish@lemmy.world
on 22 May 2024 05:19
nextcollapse
Does this mean the AI was trained on CP material? How else would it know how to do this?
deathbird@mander.xyz
on 22 May 2024 05:53
nextcollapse
It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.
AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.
deraceituno@lemmynsfw.com
on 22 May 2024 06:51
nextcollapse
Training is how it knows it…
fidodo@lemmy.world
on 22 May 2024 08:52
nextcollapse
You can ask it to make an image of a man made of pizza. That doesn’t mean it was trained on images of that.
dustyData@lemmy.world
on 22 May 2024 14:16
collapse
But it means that it was trained on people and on pizza. If it can produce CSAM, it means it had access to pictures of naked minors. Even if it wasn’t in a sexual context.
Minors are people. It knows what clothed people of all ages look like. It also knows what naked adults look like. The whole point of AI is that it can fill in the gaps and create something it wasn’t trained on. Naked + child is just a simple equation for it to solve
herrvogel@lemmy.world
on 22 May 2024 10:12
nextcollapse
The whole point of those generative models that they are very good at blending different styles and concepts together to create coherent images. They’re also really good at editing images to add or remove entire objects.
MeanEYE@lemmy.world
on 22 May 2024 11:30
nextcollapse
You can always tell when someone has no clue about AI but has read online about it.
mightyfoolish@lemmy.world
on 22 May 2024 17:37
collapse
I think what @deathbird@mander.xyz meant was that the AI could be trained on what sex is and what children are at different points. Then a user request could put those two concepts together.
But as the replies I got show, there were multiple ways this could have been accomplished. All I know is AI needs to go to jail.
desktop_user@lemmy.blahaj.zone
on 23 May 2024 07:01
collapse
AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.
Local model go brrrrrr
joel_feila@lemmy.world
on 22 May 2024 16:50
nextcollapse
Well, some LLMs have been caught with CP in their training data.
ZILtoid1991@lemmy.world
on 22 May 2024 17:29
collapse
Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of them all likely get one offline model, then add their collection of CSAM to it.
TheObviousSolution@lemm.ee
on 22 May 2024 15:57
nextcollapse
He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”
I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people’s take on the matter.
Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
Maggoty@lemmy.world
on 22 May 2024 16:38
nextcollapse
Wait do you think all Hentai is CSAM?
And sending the images to a 15 year old crosses the line no matter how he got the images.
BangCrash@lemmy.world
on 23 May 2024 04:36
collapse
Hentai is obviously not CSAM. But having a hentai image on an article about CSAM and child grooming is pretty poorly thought out
You know, looking at it again, I think it’s an ad.
BangCrash@lemmy.world
on 23 May 2024 09:29
collapse
It’s an ad for another article on that site.
Saledovil@sh.itjust.works
on 22 May 2024 18:46
collapse
Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
The image depicts mature women, not children.
BangCrash@lemmy.world
on 23 May 2024 09:31
collapse
Correct. And OP’s not saying it is.
But to place that sort of image on an article about CSAM is very poorly thought out
StaySquared@lemmy.world
on 22 May 2024 16:15
nextcollapse
I wonder if cartoonized animals in CSAM theme is also illegal… guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.
Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy’s moderation quality is shit, and I think I’m starting to figure out where I lean on the success of my experimental stay with Lemmy.
Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you’re actually scrolling through all of the ads at the bottom of a bullshit clickbait article
far_university1990@feddit.de
on 22 May 2024 20:04
nextcollapse
Go to an instance that moderates the way you like.
FiniteBanjo@lemmy.today
on 22 May 2024 22:05
nextcollapse
Lemmy as a whole does not have moderation. Moderators on Lemmy.today cannot moderate Lemmy.world or Lemmy.ml; they can only remove problematic posts as they come and as they see fit, or block entire instances, which is rare.
If you want stricter content rules than any of the available federated instances then you’ll have to either:
Use a centralized platform like Reddit but they’re going to sell you out for data profits and you’ll still have to occasionally deal with shit like “The Donald.”
Start your own instance with a self hosted server and create your own code of conduct and hire moderators to enforce it.
Yeah, I know, that’s why I’m finding Lemmy not for me. This new rage bait every week is tiring and not adding anything to my life except stress, and once I started looking at who the moderators were when Lemmy’d find a new thing to rave about, I found that often there were 1-3 actual moderators, which, fuck that. With Reddit, the shit subs were the exception; here it feels like they ALL (FEEL being a key word here) have a tendency to dive face first into rage bait.
Edit: Most of the Reddit migration happened because Reddit fucked over their moderators. A lot of us were happy with well-moderated discussions, and if we didn’t care to have moderators, we could have just stayed with Reddit after the moderators were pushed away.
You can go to an instance that follows your views closer and start blocking instances that post low quality content to you. Lemmy is a protocol, it’s not a single community. So the moderation and post quality is going to be determined by the instance you’re on and the community you’re with.
ArmokGoB@lemmy.dbzer0.com
on 23 May 2024 07:26
collapse
This is throwing a blanket over the problem. When the mods of a news community allow bait articles to stay up because they (presumably) further their views, it should be called out as a problem.
Fuckin good job
Mhm I have mixed feelings about this. I know that this entire thing is fucked up but isn’t it better to have generated stuff than having actual stuff that involved actual children?
You know what’s better? Having none of this shit
Yeah as I also said.
Better for whom and why?
Nirvana fallacy
Yeah, would be nice. Unfortunately it isn’t so, and it never will be. Chasing after people generating distasteful AI pictures is not making the world a better place.
Better only means less worse in this case, I guess
It feeds and evolves a disorder which in turn increases risks of real life abuse.
But if AI generated content is to be considered illegal, so should all fictional content.
Or, more likely, it feeds and satisfies a disorder which in turn decreases risk of real life abuse.
Making it illegal so far helped nothing, just like with drugs
That’s not how these addictive disorders work… they’re never satisfied and always need more.
Two things:
Alternative perspective is to think that does watching normal porn make heterosexual men more likely to rape women? If not then why would it be different in this case?
The vast majority of pedophiles never offend. Most people in jail for child abuse are just plain old rapists with no special interest towards minors; they’re just an easy target. Pedophilia just describes what they’re attracted to. It’s not a synonym for child rapist. It usually needs to coincide with psychopathy to create the monster that most people think about when hearing that word.
That’s a bit of a difference in comparison.
A better comparison would be “does watching common heterosexual porn make common heterosexual men more interested in performing common heterosexual sexual acts?” or "does viewing pornography long term satiate a mans sex drive?” or “does consumption of nonconsensual pornography correlate to an increase in nonconsensual sex acts?”
Comparing “viewing child sexual content might lead it engaging in sexual acts with children” to “viewing sexual activity with women might lead to rape” is disingenuous and apples to oranges.
wchh.onlinelibrary.wiley.com/doi/full/…/tre.791
As for child porn, it’s not a given that there’s no relationship between consumption and abusing children. There are studies that indicate both outcomes, and the question is made much more complicated by one or both activities being extremely illegal and socially stigmatized, making accurate tracking difficult.
It’s difficult to justify the notion that “most pedophiles never offend” when it can be difficult to identify both pedophiles and abuse.
pubmed.ncbi.nlm.nih.gov/21088873/ for example. It looks at people arrested for possession of child pornography. Within six years, 6% were charged with a child contact crime. Likewise, you can find research with a differing conclusion
Point being, you can’t just hand wave the potential for a link away on the grounds that porn doesn’t cause rape amongst typical heterosexual men. There’s too many factors making the statistics difficult to gather.
A problem that I see getting brought up is that generated AI images make it harder to notice photos of actual victims, making it harder to locate and save them
And doesn’t the AI learn from real images?
True, but by their very nature their generations tend to create anonymous identities, and the sheer amount of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of crime location).
It does learn from real images, but it doesn’t need real images of what it’s generating to produce related content.
As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children. Without training on nudity, it’s unlikely to produce good results there as well.
However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of “bicycle” and that of “Neptune” and can readily enough imagine “Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat”.
Under the hood, this type of AI is effectively a very sophisticated “error correction” system. It changes pixels in the image to try to “fix it” to matching the prompt, usually starting from a smear of random colors (static noise).
That’s how it’s able to combine different concepts from a wide range of images to create things it’s never seen.
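The iterative correction loop described above can be sketched as a toy (a deliberate, hypothetical oversimplification in Python, not a real diffusion model; `toy_denoise`, the target values, and the correction rate are all made up for illustration):

```python
import random

# Toy sketch of the "error correction" idea: start from a smear of random
# noise and repeatedly nudge each "pixel" a fraction of the way toward what
# the model predicts the prompt should look like. In a real diffusion model
# the prediction comes from a trained neural network, not a fixed target.
def toy_denoise(target, steps=50, rate=0.2, seed=0):
    rng = random.Random(seed)
    img = [rng.random() for _ in target]  # pure noise to begin with
    for _ in range(steps):
        # each step corrects a fraction of the remaining error
        img = [p + rate * (t - p) for p, t in zip(img, target)]
    return img

target = [0.0, 0.5, 1.0]      # stand-in for "what the prompt describes"
result = toy_denoise(target)  # after many small corrections, near the target
```

The point of the sketch is only that the final image emerges gradually from noise through many small corrections, which is why the output is semi-random and not fully in the user’s control.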
Basically if I want to create … (I’ll use a different example for obvious reasons, but I’m sure you could apply it to the topic)
… “an image of a miniature denim airjet with Taylor Swift’s face on the side of it”, the AI generators can, despite no such thing existing in the training data. It may take multiple attempts and effort with the text prompt to get exactly what you’re looking for, but you could eventually get a convincing image.
AI takes loads of preexisting data on airplanes, T.Swift, and denim to combine it all into something new.
Well that, and the idea of cathartic relief is increasingly being dispelled. Behaviour once thought to act as a pressure relief for harmful impulsive behaviour is more than likely just a pattern of escalation.
Source? From what I’ve heard, recent studies are showing the opposite.
Source
But there are a lot more studies that have essentially said the same thing. The cathartic hypothesis is mainly a byproduct of the Freudian era of psychology, where a hypothesis mainly just had to sound good to someone on too much cocaine.
Do you have a source of studies showing the opposite?
Yes, but I’m too lazy to sauce everything again. If it’s not in my saved comments someone else will have to.
E: couldn’t find it on my reddit either. I have too many saved comments lol.
your source is exclusively about aggressive behavior…
it uses the term “arousal”, which is not referring to sexual arousal, but rather a state of heightened agitation.
provide an actual source in support of your claim, or stop spreading misinformation.
Lol, my source is about the cathartic hypothesis. So your theory is that it doesn’t work with anger, but does work for sexual deviancy?
Do you have a source that supports that?
you made the claim that the cathartic hypothesis is poorly supported by evidence, which your source supports, but is not relevant to the topic at hand.
your other claim is that sexual release follows the same patterns as aggression. that’s a pretty big claim! i’d like to see a source that supports that claim.
otherwise you’ve just provided a source that provides sound evidence, but is also entirely off-topic…
The belief that indulging in AI-created child porn relieves the sexual deviant behaviour of being attracted to actual minors utilizes the cathartic theory. The cathartic theory is typically understood to relate to an array of emotions, not just anger: “Further, the catharsis hypothesis maintains that aggressive or sexual urges are relieved by ‘releasing’ aggressive or sexual energy, usually through action or fantasy.”
That’s not a claim I make, it’s a claim that cathartic theory states. As I said the cathartic hypothesis is a byproduct of Freudian psychology, which has largely been debunked.
Your issue is with the theory in and of itself, which my claim is already stating to be problematic.
No, you are just conflating colloquial understanding of catharsis with the psychological theory.
and your source measured the effects of one single area that cathartic theory is supposed to apply to, not all of them.
your source does in no way support the claim that the observed effects apply to anything other than aggressive behavior.
i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn’t seem to hold up.
so either A: the theory is wrong, and so the association between aggression and sexuality needs to be called into question also;
or B: the theory isn’t wrong after all.
you are now claiming that the theory is wrong, but at the same time, the theory is totally correct! (when it’s convenient to you, that is)
so which is it now? is the theory correct? then your source must be irrelevant. or is the theory wrong? then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.
you can’t have it both ways, but you’re sure trying to.
My original claim was that cathartic theory in and of itself is not founded on evidence based research.
When did I claim it was ever correct?
I think you are misconstruing my original claim with the claims made by the cathartic theory itself.
I don’t claim that cathartic theory is beneficial in any way, you are the one claiming that Cathartic theory is correct for sexual aggression, but not for violence.
Do you have a source that claims cathartic theory is beneficial for satiation deviant sexual impulses?
You are wanting me to provide an evidence based claim between the two when I’ve already said the overarching theory is not based on evidence?
The primary principle to establish is the theory of cathartic relief, not whether it works for one emotion or another. You have not provided any evidence to support that claim; I have provided evidence that disputes it.
Let’s see here, listen to my therapist who has decades of real experience or a study from over 20 years ago?
Sorry bud, I know who I’m going with on this and it ain’t your academic.
Your therapist is still utilizing Freudian psychoanalysis?
Well, if age is a factor in your opinion about the validity of the care you receive, I have some bad news for you…
You’re still using 5,000 year old Armenian shoes?
Of course not. Stop being reductive.
Lol, you were the one who first dismissed evidence because it was 20 years old…
The point is you can reduce anything to its origin. That does not mean it’s still the same thing.
Okay, but how does the modern version of cathartic theory differ from what freud postulated?
I agree you can’t reduce things based on their origin alone, which is why I included a scientific source as evidence…
I don’t know, that’s why I have a therapist, I’m not educated in psychology. But I do recognize a logical fallacy when I see one.
I doubt that, so far your argument has been based on the anecdotal fallacy mixed with a bit of the appeal to authority fallacy.
Lmao. Says the guy who tried to use a study on aggression to address sexual urges.
Reading comprehension is still hard for you? My argument was about cathartic theory, which covers several emotions, including sexual urges… It is a theory from Freud; of course it covers sexual urges.
You and the other guy just have no idea what you’re talking about. How about providing any kind of source instead of talking out of your ass?
Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.
I didn’t know that, my bad.
Fair but depressing, it seems like it barely registered in the news cycle.
IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images weren’t actually available in the dataset because they had already been removed from the internet.
Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.
What % do you think was used to generate the CSAM, though? Like, if 1% of the images were cups it’s probably drawing on some of that to generate images of cups.
And yes, you could technically do this with no CSAM training material, but we don’t know if that’s what the AI is doing because the image sources used to train it were mass scraped from the internet. They’re using massive amounts of data without filtering it and are unable to say with certainty whether or not there is CSAM in the training material.
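For what it’s worth, the hash-based filtering the researchers used can be sketched in a few lines. This is a toy illustration with made-up data: real pipelines use curated hash lists (e.g. from NCMEC/IWF) and perceptual hashes that survive resizing and re-encoding, not exact SHA-256 matches.

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes of known-bad images.
# Exact hashes are a toy stand-in; production systems use perceptual
# hashing so trivially altered copies still match.
BLOCKLIST = {hashlib.sha256(b"known-bad-image").hexdigest()}

def filter_dataset(images: list[bytes]) -> list[bytes]:
    """Drop any image whose exact hash appears on the blocklist."""
    return [img for img in images
            if hashlib.sha256(img).hexdigest() not in BLOCKLIST]

dataset = [b"known-bad-image", b"harmless-cat-photo"]
clean = filter_dataset(dataset)
print(len(clean))  # → 1
```

The catch the comments above point at: this only catches material that is already known and hashed, so it can never prove a scraped dataset is clean.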
The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.
As a society we should never allow the normalization of sexualizing children.
Interesting. What do you think about drawn images? Is there a limit to how good the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.
If it was photoreal and difficult to distinguish from real photos? Yes, it’s exactly the same.
And even if it’s not photo real, communities that form around drawn child porn are toxic and dangerous as well. Sexualizing children is something I am 100% against.
It feels like driving these people into the dark corners of the internet is worse than allowing them to collect in clearnet spaces where drawn csam is allowed.
I’m in favor of specific legislation criminalizing drawn CSAM. It’s definitely less severe than photographic CSAM, and it’s definitely harmful.
Is this proven or a common sense claim you’re making?
I wouldn’t be surprised if it’s a mixture of the two. It’s kind of like if you surround yourself with criminals regularly, you’re more likely to become one yourself. Not to say it’s a 100% given, just more probable.
So... it's just a claim they're making and you're hoping it has actual backing.
I’m not hoping anything, haha wtf? The comment above me asked if it was a proven statement or common sense and I said I wouldn’t be surprised if it’s both. I felt confident that if I googled it, there would more than likely be studies backing up a common sense statement like that, as I’ve read in the past how sending innocent people or people who committed minor misdemeanors to prison has influenced them negatively to commit crimes they might not have otherwise.
And look at that, there are academic articles that do back it up:
waldenu.edu/…/what-influences-criminal-behavior
www.law.ac.uk/resources/…/is-prison-effective/
Etc, etc.
Turns out that your dominant social group and environment influences your behavior, what a shocking statement.
But you didn't say you had proof with your comment, you said it was probable. Basically saying it's common sense that it's proven.
Why are you getting aggressive about actually having to provide proof for something when saying it's obvious?
Also, that seems to imply that locking up people for AI offenses would then encourage truly reprehensible behavior by linking them with those who already engage in it.
Almost like lumping people together as one big group, instead of having levels of grey area, means people are more likely to just go all in instead of sticking to something more morally defensible.
Because it’s a casual discussion, I think it’s obnoxious when people constantly demand sources to be cited in online comments section when they could easily look it up themselves. This isn’t some academic or formal setting.
And I disagree, only the second source mentioned prisons explicitly. The first source mentions social environments as well. So it’s a damned if you do, damned if you don’t situation. Additionally, even if you consider the second source, that source mentions punishment reforms to prevent that undesirable side effect from occurring.
I find it ironic that you criticized me for not citing sources and then didn’t read the sources. But, whatever. Typical social media comments section moment.
People request sources because people state their opinions as fact. If that’s how it’s presented then asking for a source is ok. It's either ask for a source or completely dismiss the comment.
Again, in casual conversation where no one was really debating, it’s obnoxious. When you’re talking to friends in real life and they say something, do you request sources from them? No, because it’d be rude and annoying. If you were debating them in earnest and you both disagreed on something, sure, that would be expected.
But that wasn’t the case here, the initial statement was common sense: If pedophiles are allowed to meet up and trade AI generated child sex abuse material, would that cause some of them to be more likely to commit crimes against real kids? And I think the answer is pretty obvious. The more you hang around people who agree with you, the more an echo chamber is cultivated. It’s like an alcoholic going into a bar without anyone there to support them in staying sober.
Anyway, it’s your opinion to think asking for sources from strangers in casual conversation is okay, and it’s mine to say it can be annoying in a lot of circumstances. We all have the Internet at our fingertips, look it up in the future if you’re unsure of someone’s assertion.
The far right in France normalized its discourses and they are now at the top of the votes.
Also in France, people talked about pedophilia on TV in the 70s, 80s, and at the beginning of the 90s. It was not just once in a while. It was frequent and open, without any trouble. Writers would casually speak about sexual relationships with minors.
The normalization will blur the limits between AI and reality for the worse. It will also make it more popular.
The other point is also that people will always end up going for the original. Again, politics is a good example. Conservatives try to mimic the far right to gain votes, but in the end people vote for the far right…
And, someone has a daughter. A pedophile takes a picture of her without asking and asks an AI to produce CP based on her. I don’t want to see things like this.
Actually, that’s not quite as clear.
The conventional wisdom used to be, (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they’ve become more and more convinced that (normal) porn availability in fact reduces sexual assault.
I don’t see an obvious reason why it should be different in case of CP, now that it can be generated.
It should be different because people cannot have it. It is disgusting, makes them feel icky, and that's just why it has to be bad. Conventional wisdom sometimes really is just conventional idiocracy.
I wonder if religiosity is correlated.
Is “better than” the same as totally cool and legal?
No?
Is everything completely black and white for you?
The system isn't perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?
You don't have to accept it as morally acceptable, but you don't have to treat them as completely equivalent either.
There are gradations of questionable activity. Especially when there are no real victims involved. Treating everything exactly the same is, frankly speaking, insane. It's like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you're getting the needle.
Wow. I didn’t say any of that, cool story though.
Go read what I said again and try replying to that instead of whatever this rant is on about
I got your back here.
Ironically, You ask if everything is completely black and white for someone without accepting that there’s nuance to the very issue you’re calling out. And assuming that “everything”- a very black and white term, is not very nuanced, is it?
No, not EVERYTHING, but some things. And this is one of those things. Both forms should be illegal. Period. No nuance, no argument, NO grey area.
This does not mean that nuance doesn’t exist. It just means that some believe that it SHOULDN’T exist within the paradigm of child porn.
I have trouble with this because it’s like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?
I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material “feels” wrong, and that’s no way to handle criminal misdeeds.
If it’s not trained on CSAM or inpainted from a real photo but fully generated, I can’t really think of any other real legal arguments against it except for: “this could be real”. Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM has very different victims and abuse, so it needs different sentencing.
Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.
Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky so many people still think it should be illegal.
There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Just a point of clarity, an AI model capable of generating csam doesn’t necessarily have to be trained on csam.
That honestly brings up more questions than it answers.
Why is that? The whole point of generative AI is that it can combine concepts.
You train it on the concept of a chair using only red chairs. You train it on the color red, and the color blue. With this info and some repetition, you can have it output a blue chair.
The same applies to any other concepts. Larger, smaller, older, younger. Man, boy, woman, girl, clothed, nude, etc. You can train them each individually, gradually, and generate things that then combine these concepts.
Obviously this is harder than just using training data of what you want. It’s slower, it takes more effort, and results are inconsistent, but they are results. And then, you curate the most viable of the images created this way to train a new and refined model.
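The concept-combination idea above can be caricatured in a few lines. This is a toy vector-arithmetic analogy, not a real diffusion model, and the three axes are entirely made up for illustration:

```python
# Toy analogy: concepts as vectors in a made-up 3-axis embedding
# space [red, blue, chair]. Real models learn thousands of dimensions,
# but "combine concepts you never saw together" is the same intuition.
red_chair = [1.0, 0.0, 1.0]
red = [1.0, 0.0, 0.0]
blue = [0.0, 1.0, 0.0]

# "red chair" - "red" + "blue" ≈ "blue chair", even if the training
# data never contained a single blue chair.
blue_chair = [rc - r + b for rc, r, b in zip(red_chair, red, blue)]
print(blue_chair)  # → [0.0, 1.0, 1.0]
```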
Yeah, there are photorealistic furry photo models, and I have yet to meet an anthropomorphic dragon IRL.
It is illegal. thefederalcriminalattorneys.com/possession-of-lol…
I wasn’t arguing about current laws. I was simply arguing about public perception, and whether the average person believes it should be illegal. There’s a difference between legality and ethicality. Something unethical can be legal, and something illegal can be ethical.
Weed is illegal, but public perception says it shouldn’t be.
Alcohol is worse than weed, yet alcohol is not banned.
The mitochondria is the powerhouse of the cell.
Why are you assuming everyone lives in the US? Your article even admits that it is legal elsewhere (Japan).
This is a better one: en.wikipedia.org/…/Legal_status_of_fictional_porn…
Gospel of the Jesus
I think one of the many problems with AI generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what was AI generated and what isn’t.
Banning all of it means authorities don’t have to sift through images trying to decipher between the two. If one image is declared to be AI generated and it’s not…well… that doesn’t help the victims or create less victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there because it can hide amongst all the AI generated stuff. Meaning authorities will have to go through far more images before finding ones with real victims in it. All of it being illegal prevents those sorts of problems.
And that’s a good point! Luckily it’s still (usually) fairly easy to identify AI generated images. But as they get more advanced, that will likely become harder and harder to do.
Maybe some sort of required digital signatures for AI art would help; Something like a public encryption key in the metadata, that can’t be falsified after the fact. Anything without that known and trusted AI signature would by default be treated as the real deal.
But this would likely require large scale rewrites of existing image formats, if they could even support it at all. It’s the type of thing that would require people way smarter than myself. But even that feels like a bodged solution to a problem that only exists because people suck. And if it required registration with a certificate authority (like an HTTPS certificate does) then it would be a hurdle for local AI instances to jump through. Because they would need to get a trusted certificate before they could sign their images.
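A minimal sketch of the signing idea, using a shared secret with Python's stdlib `hmac` for brevity. All names here are hypothetical, and a real scheme would use asymmetric keys plus a certificate chain as described above, so anyone could verify without holding the secret:

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI service. A real deployment
# would use an asymmetric key pair (e.g. Ed25519) issued via a CA.
SECRET_KEY = b"example-generator-key"

def sign_image(image_bytes: bytes, metadata: dict) -> str:
    """Bind the image bytes and its metadata together with one signature."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """True only if neither the image nor the metadata was altered."""
    return hmac.compare_digest(sign_image(image_bytes, metadata), signature)

img = b"\x89PNG...fake-image-bytes"
meta = {"generator": "example-model", "ai_generated": True}
sig = sign_image(img, meta)
print(verify_image(img, meta, sig))                # → True
print(verify_image(img + b"tamper", meta, sig))    # → False
```

Note the scheme can only prove an image *is* signed as AI-generated; it can't force anyone to sign, which is the local-instance problem mentioned above.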
Imo, not the best framework for creating laws. Essentially, it’s an appeal to emotion.
Yes, but the perp showed the images to a minor.
Apparently he sent some to an actual minor.
The generated stuff is as illegal as the real stuff. thefederalcriminalattorneys.com/possession-of-lol… en.wikipedia.org/wiki/PROTECT_Act_of_2003
I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.
It’s better to have neither.
This mentality smells of “just say no” for drugs or “just don’t have sex” for abortions. This is not the ideal world and we have to find actual plans/solutions to deal with the situation. We can’t just cover our ears and hope people will stop
It reminds me of the story of the young man who realized he had an attraction to underage children and didn’t want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.
I see this fake cp as only a positive for those people. That it might make it difficult to find real offenders is a terrible reason against.
And the Stable diffusion team get no backlash from this for allowing it in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
Not everything exists on the cloud (someone else’s computer)
You can run the SD model offline, so on what service would that User be flagged?
my main question is: how much csam was fed into the model for training so that it could recreate more
i think it’d be worth investigating the training data used for the model
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren’t actually available in the dataset because they had already been removed from the internet.
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.
That’s not how any of this works
Because what prompts people enter on their own computer isn't their responsibility? Should pencil makers flag people writing bad words?
Stable Diffusion has been distancing themselves from this. The model that allows for this was leaked from a different company.
.
Bad title.
They caught him not simply for creating pics, but also for trading such pics etc.
That’s sickening to know there are bastards out there who will get away with it since they are only creating it.
I’m not sure. Let us assume that you generate it on your own PC at home (not using a public service) and don’t brag about it and never give it to anybody - what harm is done?
Even if the AI didn’t train itself on actual CSAM that is something that feels inherently wrong. Your mind is not right to think that’s acceptable IMO.
Laws shouldn't be about feelings, though, and we shouldn't prosecute people for victimless thought crimes. How often did you think something violent when someone really pissed you off? Should you have been prosecuted for that thought too?
This goes way further than a thought
Who are the victims of someone generating such images privately then? It's on the same level as all the various fan fiction shit that was created manually over all the past decades.
And do we apply this to other depictions of criminalized things too? Would we ban the depiction of violence & sexual violence on TV, in books, and in video games too?
Society is not ok with the idea of someone cranking to CSAM, then just walking around town. It gives people wolf-in-sheep-clothing vibes.
So the notion of there being “ok” CSAM-style ai content is a non starter for a huge fraction of people because it still suggests appeasing a predator.
I’m definitely one of those people that simply can’t accept any version of it.
You can get away with a lot of heinous crimes by simply not telling people and sharing the results.
You consider it a heinous crime to draw a picture and keep it to yourself?
Read the article. He was arrested for sending the pictures to at least one minor.
Re-read RickyRigatoni’s comment.
Creating the pics is a crime by itself. thefederalcriminalattorneys.com/possession-of-lol…
Article title is a bit misleading. Just glancing through I see he texted at least one minor in regards to this and distributed those generated pics in a few places. Putting it all together, yeah, arrest is kind of a no-brainer. Ethics of generating csam is the same as drawing it pretty much. Not much we can do about it aside from education.
Lemmy really needs to stop justifying CP. We can absolutely do more than “eDuCaTiOn”. AI is created by humans, the training data is gathered by humans, it needs regulation like any other industry.
It’s absolutely insane to me how laissez-faire some people are about AI, it’s like a cult.
.
The fuck are you talking about? No one’s “enslaving” you because they’re trying to stop you from generating child porn.
Fucking libertarians dude.
.
Ah yes, we need child porn because it’s a slippery slope.
While I agree with your attitude, the whole ‘laissez-faire’ thing is probably a misunderstanding:
There is nothing we can do to stop the AI.
Nothing.
The genie is out of the bottle, the Pandora’s box has been opened, everything is out and it won’t ever return. The world will never be the same, and it’s irrelevant what people think.
That’s why we need to better understand the post-AI world we created, and figure out what do to now.
Also, to hell with CP. (feels weird to use the word ‘fuck’ here)
Thats not the question, the question is not “can we stop AI entirely” it’s about regulating its development and yes, we can make efforts to do that.
This attitude of “it’s inevitable, can’t do anything about it” is eerily similar to the logic used in climate denial and other right-wing efforts. It’s a really poor attitude to have, especially about something as consequential as AI.
We have the best opportunity right now to create rules about its uses and development. The answer is not “do nothing” as if it’s some force of nature, as opposed to a tool created by humans.
I hear you, and I don’t necessarily disagree with you, I just know that’s not how anything works.
Regulations work for big companies, but there isn’t a big company behind this specific case. And those small-time users have run away and you can’t stop them.
It’s like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It’s not that I would not like that, it’s just not possible.
This argument could be applied to anything though. A lot of people get away with murder; we should still try and do what we can to stop it from happening.
You can’t sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.
Physical things are much easier to regulate than software, much less serverless.
We already regulate certain images, and it matters very little.
The bigger payoff will be from educating the public and accepting that we can’t win every war.
So accept defeat from the start, that’s really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, code bases are hosted on hardware and on specific outlets.
It really does sound like you’re just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it’s not possible to try.
Dude the amount of open source, untrackable, distributed ai models is off the charts. This isn’t just about the models offered by subscription from the big players.
This is still one of the weaker arguments. There is a lot of malware out there too, people are still prosecuted when they’re caught developing and distributing it, we don’t just throw up our hands and pretend there’s nothing that can be done.
Like, yeah, some pedophile who also happens to be tech saavy might build his own AI model to make CP, that’s not some self-evident argument against attempting to stop them.
No, like, the tools to do these things are common and readily available. It’s not malware, it’s generalized ai tools, completely embroiled with non image ai work.
Pandora’s box is wide open. All of this work can be done trivially, completely offline with a basic PC. Anyone motivated can be offline and up and running in a weekend
You’re asking to outlaw something like a spreadsheet.
You download a general purpose image ai model, then train and prompt it completely offline
The models used are not trained on CP. The model weights are distributed freely and anybody can train a LoRA on their own computer. It's already too late to ban open-weight models.
One of two classic excuses, virtue signalling to hijack control of our devices, our computing, an attack on libre software (they don’t care about CP). Next, they’ll be banning more math, encryption, again.
It says gullible at the start of this page, scroll up and see.
You don't need CSAM training data to create CSAM images. If your model knows what children look like and what naked human bodies look like, then it can create naked children. That's simply how generative models like this work and has absolutely nothing to do with models specifically trained on actual CSAM material.
So while I disagree with him that lack of education is the cause of CSAM or pedophilia... I'd say it could help with the general hysteria about LLMs, like the ones coming from you, who just let their emotions run wild when these topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.
Legally, a sufficiently detailed image depicting csam is csam, regardless of how it was produced. Sharing it is why he got caught, inevitably, but it’s still illegal even if he never brought a minor into it.
Making the CSAM is illegal by itself thefederalcriminalattorneys.com/possession-of-lol…
Title is pretty accurate.
No no no guys.
It’s perfectly okay to do this as this is art, not child porn as I was repeatedly told and down voted when I stated the fucking obvious
So if it’s art, we have to allow it under the constitution, right? It’s “free speech”, right?
Well yeah. Just because something makes you really uncomfortable doesn’t make it a crime. A crime has a victim.
Also, the vast majority of children are victimized because of the US’ culture of authoritarianism and religious fundamentalism. That’s why far and away children are victimized by either a relative or in a church. But y’all ain’t ready to have that conversation.
That thing over there being wrong doesn’t mean we can’t discuss this thing over here also being wrong.
So perhaps pipe down with your dumb whataboutism.
It’s not whataboutism, he’s being persecuted because of the idea that he’s hurting children, all while law enforcement refuses to truly prosecute actual institutions victimizing children and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church’s crimes for generations.
How is this whataboutism.
Because it’s two different things.
We should absolutely go after the Catholic church for the crimes committed.
But here we are talking about the creation of child porn.
If you cannot understand this very simple premise, then we have nothing else to discuss.
They’re not two different things. They’re both supposedly acts of pedophilia except one would take actual courage to prosecute (churches) and the other which doesn’t have any actual victims is easy and is a PR get because certain people find it really icky.
I guess we’re done here then.
Yes, case closed. You were wrong. Sucks to suck.
Very mature response. Well done.
Just to be clear here, he's not actually persecuted for generating such imagery like the headline implies.
He might be persecuted, but he’s not prosecuted for it.
Fair enough. lol
It’s not ok to do this. thefederalcriminalattorneys.com/possession-of-lol…
First of all, it’s absolutely crazy to link to a 6-month-old thread just to complain that you got downvoted in it. You’re pretty clearly letting this site get under your skin if you’re still hanging onto these downvotes.
Second, none of your 6 responses in that thread are logical, rational responses. You basically just assert that things that you find offensive enough should be illegal, and then just type in all caps at everyone who explains to you that this isn’t good logic.
The only way we can consider child porn prohibition constitutional is to interpret it as a protection of victims. Since both the production and distribution of child porn hurt the children forced into it, we ban it outright, not because it is obscene, but because it does real damage. This fits the logic of many other forms of non-protected speech, such as the classic “shouting ‘fire’ in a crowded theatre” example, where those hurt in the inevitable panic are victims.
Expanding the definition of child porn to include fully fictitious depictions, such as lolicon or AI porn, betrays this logic because there are no actual victims. This prohibition is rooted entirely in the perceived obscenity of the material, which is completely unconstitutional. We should never ban something because it is offensive, we should only ban it when it does real harm to actual victims.
I would argue that rape and snuff films should be illegal for the same reason.
The reason people disagree with you so strongly isn’t because they think AI generated pedo content is “art” in the sense that we appreciate it and defend it. We just strongly oppose your insistence that we should enforce obscenity laws. This logic is the same logic used as a cudgel against many other issues, including LGBTQ rights, as it basically argues that sexually disagreeable ideas should be treated as a criminal issue.
I think we all agree that AI pedo content is gross, and the people who make it and consume it are sick. But nobody is with you on the idea that drawings and computer renderings should land anyone in prison.
No, I just… Remembered the thread? Wasn’t difficult to remember it. Took me a minute to find it.
This may surprise you but CP isn’t something I discuss very often.
I don’t lose sleep over people defending CP as “art”, nor did it get under my skin. I just think these people are fucking idiots who, for some baffling reason, are trying to defend the indefensible, and I go about my day. I’m not going to do anything about it, but I’m sure glad I don’t have such dumb comments linked to a public account with my IP address logged somewhere…
I just raised it to make my point.
I didn’t bother reading the rest of your essay. It’s pretty clear from the first paragraph where you’re going to land.
This is tough, the goal should be to reduce child abuse. It’s unknown if AI generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children while for others it may satisfy their urges so they don’t abuse children. Like everything else AI, we won’t know the real impact for many years.
How do you think they train models to generate CSAM?
Some of y’all need to look up what a LoRA is
Lol you don’t need to train it ON CSAM to generate CSAM. Get a clue.
It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of people creating AI-generated child pornography?
The use of CSAM in training generative AI models is an issue no matter how these models are being used.
The training doesn’t use CSAM; there’s a 0% chance big tech would use that in their dataset. The models are somewhat able to link concepts like “red” and “car”, even if they had never seen a red car before.
Well, with models like SD at least, the datasets are large enough and the employees are few enough that it is impossible to have a human filter every image. They scrape them from the web and try to filter with AI, but there is still a chance of bad images getting through. This is why most companies install filters after the model as well as in the training process.
You make it sound like it is so easy to even find such content on the www. The point is, they do not need to be trained on such material. They are trained on regular kids, so they know their sizes, faces, etc. They’re trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don’t need to specifically train a model on nude children to generate nude children.
purl.stanford.edu/kh752sm9123
I don’t know if we can say for certain it needs to be in the dataset, but I do wonder how many of the other models used to create CSAM are also trained on CSAM.
I suggest you actually download stable diffusion and try for yourself because it's clear that you don't have any clue what you're talking about. You can already make tiny people, shaved, genitals, flat chests, child like faces, etc. etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.
I had an idea when these first AI image generators started gaining traction: flood the CSAM market with AI-generated images (good enough that you can’t tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.
Most people down vote the idea on their gut reaction tho.
Looks like they might do it on their own.
My concern is why would it put them out of business? If we just look at legal porn there is already beyond huge amounts already created, and the market is still there for new content to be created constantly. AI porn hasn’t noticeably decreased the amount produced.
Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.
The market is slightly different tho. Most CSAM is images; with porn there’s a lot of video as well as images.
It’s also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to a point that it isn’t worth it.
It's such an emotional topic that people lose all rationale.
I remember the Reddit arguments in the comment sections about pedos, already equating the term with actual child rapists, while others would argue to differentiate, because the former didn’t do anything wrong and shouldn’t be stigmatized for what’s going on in their heads but rather offered help to cope with it. The replies are typically accusations of those people making excuses for actual sexual abusers.
I always had the standpoint that I do not really care about people's fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy & getting their kicks from fictional stuff then I see that as better than using actual real life material, or even getting some hands on experiences, which all would involve actual real victims.
And I think that should be generally the goal here, no? Be it pedos, sadists, sociopaths, whatever. In the end it should be not about them, but saving potential victims. But people rather throw around accusations and become all hysterical to paint themselves sitting on their moral high horse (ironically typically also calling for things like executions or castrations).
Yeah, exact same feelings here. If there is no victim then who exactly is harmed?
It would be illegal in the United States. Artistic depictions of CSAM are illegal under the PROTECT Act of 2003.
And yet it’s out there in droves on mainstream sites, completely without issue. Drawings and animations are pretty unpoliced.
Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.
America has some of the most militant anti-pedophile culture in the world, but it also far and away has the highest rates of child sexual assault.
I think AI is going to reveal how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won’t find a 1/10th of the righteous indignation against organized religion, where there is just as much evidence it is happening, as you will regarding one person producing images that don’t actually hurt anyone.
It’s pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims as weaponized politicization (it’s next to impossible to convincingly fight off pedo accusations if you’re being mobbed) and aren’t actually interested in fighting pedophilia.
Most states will let grown men marry children as young as 14. There is a special carve out for Christian pedophiles.
Fortunately most instances are in the category of a 17 year old to an 18 year old, and require parental consent and some manner of judicial approval, but the rates of “not that” are still much higher than one would want.
~300k in a 20 year window total, 74% of the older partner being 20 or younger, and 95% of the younger partner being 16 or 17, with only 14% accounting for both partners being under 18.
There’s still no reason for it in any case, and I’m glad to live in one of the states that said “nah, never needed.”
These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
That’s it actually; all sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content and will remove it.
It comes back to distribution for me. If they are generating the stuff for themselves, gross, but I don’t see how it can really be illegal. But if you’re distributing them, how do we know they’re not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators’ mental health… I don’t know. I really don’t have an answer; I don’t know how they make it illegal, but it really feels like distribution should be.
Cartoon CSAM is illegal in the United States
thefederalcriminalattorneys.com/possession-of-lol…
en.wikipedia.org/wiki/PROTECT_Act_of_2003
for some reason the US seems to hold a weird position on this one. I don’t really understand it.
It’s written to be illegal, but if you look at prosecution cases, I think there have been only a handful of charged cases, the prominent ones also including relevant previous offenses, or worse.
It’s also interesting when you consider that there are almost definitely large image boards hosted in the US that host what could be construed as “cartoon CSAM”, notably e621. I’d have to verify their hosting location, but I believe they’re in the US, and so far I don’t believe they’ve ever had any issues with it. And I’m sure there are other good examples as well.
I suppose you could argue they’re exempt under the publisher rules. But these sites generally don’t moderate against these images, and I feel like this would be the rare exception where that wouldn’t be applicable.
The law is fucking weird, dude. There is a massive disconnect between what we should be seeing and what we are seeing. I assume it’s because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, which is a proxy offense.
It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.
ah that could be a possibility as well. Just ensuring reasonable flexibility in prosecution so you can be sure of what you get.
Well thought-out and articulated opinion, thanks for sharing.
If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.
When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either, I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.
Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.
To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.
It feels incredibly gross to just say “generated CSAM is a-ok, grab your hog and go nuts”, but I can’t really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that’s a slippery slope that leads to “video games cause school shootings” type of logic.
I don’t know, it’s a very tough thing to untangle. I guess I’d just want to know if someone was doing that so I could stay far, far away from them.
It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut and dry, “guy used stable diffusion wrong” case. He was distributing it and grooming a kid.
The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, “artistic” styles, but they can generate semi-realistic images.
Now, let’s say a criminal group uses novelai to produce CSAM of real people via the inpainting tools. Let’s say the FBI cast a wide net and begins surveillance of novelai’s userbase.
Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW?” Or “young man, no facial body hair, naked, NSFW?”
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.
I’ll throw that baby out with the bathwater to be honest.
.
Simulated crimes aren’t crimes. Would you arrest every couple that finds healthy ways to simulate rape fetishes? Would you arrest every person who watches Fast and The Furious or The Godfather?
If no one is being hurt, if no real CSAM is being fed into the model, and if no pornographic images are being sent to minors, it shouldn’t be a crime. Just because it makes you uncomfortable doesn’t make it immoral.
Or, ya know, everyone who ever wanted to decapitate those stupid fucking Skyrim children. Crime requires damaged parties, and with this (idealized case, not the specific one in the article) there is none.
Those were demon children from hell (with like 2 exceptions maybe). It was a crime by Bethesda to make them invulnerable / protected by default.
If they were, anyone who’s played games is fucked. I’m confident everyone who has played went on a total rampage murdering the townsfolk, pillaging their houses and blowing everything up… in Minecraft.
Artistic CSAM is definitely a crime in the United States. PROTECT act of 2003.
People have only gotten in trouble for that when they’re already in trouble for real CSAM. I’m not terribly interested in sticking up for actual CSAM scum.
.
If no real child is involved in any way, who is hurt?
For now, if you read the article, it states that he shared the pictures to form like-minded groups where they got emboldened, could support each other, and legitimized/normalized their perverted thoughts. How about no, thanks.
Wrong comment chain. People weren’t talking about the criminal shithead the article is about, but about the scenario of someone using (non-CSAM-trained) models to create questionable content (thus it is implied that there would be no victim). We all know that there are bad actors out there, just like there are rapists and murderers. Still, we don’t condemn true-crime lovers or rape fetishists until they commit a crime. We could do the same with pedos, but somehow we believe hating them into the shadows will stop them from doing criminal stuff?
And I’m using the article as an example of that it doesn’t just stop at “victimless” images, because they are not fucking normal people. They are mentally sick, they are sexually turned on by the abuse of a minor, not by the minor but by abusing the minor, sexually.
In what world would a person like that stop at looking at images, they actively search for victims, create groups where they share and discuss abusing minors.
Yes dude, they are fucking dangerous, bro. Life is not fair. You wouldn’t say the same shit if someone close to you was a victim.
Maybe you should focus your energy on normalized things that actually affect kids, like banning full-contact sports that cause CTE.
What do you mean, “focus your energy”? How much energy do you think I spend discussing perverts? And why should I spend my time discussing contact sports? It sounds like you are deflecting.
Pedophiles get turned on by abusing minors; they are mentally sick. It’s not like it’s a normal sexual desire; they will never stop at watching “victimless” images. Fuck pedophiles, they don’t deserve shit, and I hope they eat shit the rest of their lives.
How is that different from any other dangerous fetish? Should we be arresting adult couples that do Age Play? All the BDSM communities? Do we even want to bring up the Vore art communities? Victimless is victimless.
No, because it’s two consenting adults; otherwise it’s illegal. Wtf is vore art? Not going to google that. How do you know it’s victimless? Like I said, they are turned on by abusing minors, and I don’t know how else I can put it; I can’t be more clear.
Let me ask you this, do you think pedophiles care about their victims? If yes, then I want to hear why you think that. If no, why are we even having this argument?
Your ultimatum is flawed. Do you believe humans have impulse control? Yes or No.
I haven’t given you an ultimatum, I asked you a question, and you can answer it any way you want. Do or can pedophiles feel remorse for their victims? Are there pedophiles who feel remorse for their victims but still abuse children?
But let me say this again: pedophiles have no remorse towards their victims, they get turned on by it. I’m trying to tell you it’s not just a sexual desire. They like the abuse part of it, abusing someone helpless; that is why they are turned on by abusing children.
Bro, I can’t continue this. You’re not willing to understand that it’s not about the kid, it is about abusing the kid; that is what they want. And if you can’t register that it’s not just a sexual desire, then we can agree to disagree.
You’re correct, pedophilia is a mental illness. A very tragic one since there is no hope and no cure. They can’t even ask for help because everyone will automatically assume they are also child molesters. Which is what you’re describing, but not all child molesters are pedophiles, and most pedophiles will never become child molesters… Like you said, some people just get off on exploiting the power dynamic and aren’t necessarily sexually attracted to children. Those people are the real danger.
Real children are in the training data regardless of whether there is CSAM in the data or not (and there is a high chance there is, considering how they get their training data), so real children are involved.
I’ve already stated that I do not support using images of real children in the models. Even if the images are safe/legal, it’s a violation of privacy.
Nobody is arguing that it’s moral. That’s not the line for government intervention. If it was then the entire private banking system would be in prison.
They would, though. We know they would, because conservatives already did the whole “laws about how you can have sex in private” thing.
www.ic3.gov/Media/Y2024/PSA240329 justice.gov/…/citizens-guide-us-federal-law-child…
They’ve actually issued warnings and guidance, and the law itself is pretty concise regarding what’s allowed.
…
uscode.house.gov/view.xhtml?hl=false&edition=prel…
If you’re going to be doing grey area things you should do more than the five minutes of searching I did to find those honestly.
It was basically born out of a Supreme Court case in the early 2000s regarding an earlier version of the law that went much further and banned anything that “appeared to be” or “was presented as” sexual content involving minors, regardless of context, and could have plausibly been used against young-looking adult models, artistically significant paintings, or things like Romeo and Juliet, which is neither explicit nor vulgar but could be presented as involving child sexual activity. (Juliet’s 14, and it’s clearly labeled as a love story.)
After the relevant provisions were struck down, a new law was passed that factored in the justices’ rationale and commentary about what would be acceptable, giving us our current system of “it has to have some redeeming value, or not involve actual children and plausibly not look like it involves actual children”.
The PROTECT Act of 2003 means that any artistic depiction of CSAM is illegal. The guidance is pretty clear: the FBI is gonna raid your house… eventually. We still haven’t properly funded the anti-CSAM departments.
Breaking news: Paint made illegal, cause some moron painted something stupid.
Some places do lock up spray paint due to its use in graffiti, so that’s not without precedent.
They lock it up because it’s frequently stolen. (Because of its use in graffiti, but still.)
I’d usually agree with you, but it seems he sent them to an actual minor for “reasons”.
Asked whether more funding will be provided for the anti-paint enforcement divisions: it’s such a big backlog, we’d rather just wait for somebody to piss off a politician before we focus our resources.
The headline/title needs to be extended to include the rest of the sentence
"and then sent them to a minor"
Yes, this sicko needs to be punished. Any attempt to make him the victim of “the big bad government” is manipulative at best.
Edit: made the quote bigger for better visibility.
That’s a very important distinction. While the first part is, to put it lightly, bad, I don’t really care what people do on their own. Getting real people involved, and minor at that? Big no-no.
All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.
Sure is. I report the ones I come across as clickbait or misleading titles, explaining the parts left out… such as this one, where those 7 words change the story completely.
Whoever made that headline should feel ashamed for victimizing a groomer.
Cartoon CSAM is illegal in the United States. Pretty sure the judges will throw his images under the same ruling.
en.wikipedia.org/wiki/PROTECT_Act_of_2003
thefederalcriminalattorneys.com/possession-of-lol…
It won’t. They’ll get them for the actual crime not the thought crime that’s been nerfed to oblivion.
Based on the blacklists that one has to fire up before browsing just about any large anime/erotica site, I am guessing that these “laws” are not enforced, because they are flimsy laws to begin with. Reading the stipulations for what constitutes a crime is just a hotbed for getting an entire case tossed out of court. I doubt any prosecutors would lean hard on possession of art unless it was being used in another crime.
I’d be torn on the idea of AI-generated CP if it were only that. On one hand, if it helps them calm the urges while no one is getting hurt, all the better. But on the other hand, it might cause them not to seek help; then again, the problem is already stigmatized severely enough that they are most likely not seeking help anyway.
But sending that stuff to a minor. Big problem.
OMG. Every other post is saying they’re disgusted by the images part but that it’s a grey area, and that he’s definitely in trouble for contacting a minor.
Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.
thefederalcriminalattorneys.com/possession-of-lol…
en.wikipedia.org/wiki/PROTECT_Act_of_2003
Yeah, that’s toothless. They decided there is no particular way to age a cartoon; the characters could be from another planet whose people simply seem younger but are in actuality older.
It’s bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I’d rather they stay active doing that than get active actually abusing children.
Outlaw shibari and I guarantee you’d have multiple serial killers btk-ing some unlucky souls.
Exactly. If you can’t name a victim, it shouldn’t be illegal.
The problem with AI CSAM generation is that the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.
So is it right to be using images of real children to train these AI? You’d be hard-pressed to find someone who thinks that’s okay.
You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.
Are there any guarantees that harmful images weren’t used in these AI models? Based on how image generation works now, it’s very likely that harmful images were used in the training data.
And if a person is using a model based on harmful training data, they should be held responsible.
However, the AI owner/trainer has even more responsibility in perpetuating harm to children and should be prosecuted appropriately.
I will have to disagree with you for several reasons.
.
If everywhere you go, everyone is abnormal, I have news for you
.
The topic that you’re choosing to focus on is really interesting. What are your values?
My values are none of your business. Try attacking my arguments instead of looking for something about me to attack.
At the root of it, beliefs aren’t based on logic; they’re based on your value system. So why dance around the actual topic?
The difference between the things you’re listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least good came out of them beyond just giving people sexual gratification directly from the harm of others.
Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.
There’s no guarantee that most regular porn sites don’t contain csam or other exploitative imagery and video (sex trafficking victims). There’s absolutely zero chance that there’s any kind of guarantee.
If the images were generated from CSAM, then there’s a victim. If they weren’t, there’s no victim.
I hate the no victim argument.
Why? Can you elaborate?
The images were created using photos of real children, even if said photos weren’t CSAM (which can’t be guaranteed). So the victims are the children used to generate CSAM.
Sure, but isn’t the the perpetrator the company that trained the model without their permission? If a doctor saves someone’s life using knowledge based on nazi medical experiments, then surely the doctor isn’t responsible for the crimes?
So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?
Your analogy doesn’t match the premise. (Again, assuming there is no CSAM in the training data, which is unlikely.) The training data is not the problem; it is how the data is used. Using those same pictures to generate photos of medieval kids eating ice cream with their family is fine. Using them to make CSAM is not.
It would be more like the doctor using the nazi experiments to do some other fucked up experiments.
(Also you posted your response like 5 times)
Sorry, my app glitched out and posted my comment multiple times, and got me banned for spamming… Now that I got unbanned I can reply.
In this scenario, no, because the crime was in how someone used the car, not in the creation of the car. The guy in this story did commit a crime, but for other reasons. I’m just saying that if you are claiming that children in the training data are victims of some crime, then that crime was committed when the model was trained. They obviously didn’t agree to their photos being used that way, and most likely didn’t agree to their photos being used for AI training at all. So by the time this guy came around, they were already victims, and would still be victims even if he had never come along.
I would argue that the person using the model for that purpose is further victimizing the children. Kinda like how with revenge porn the worst perpetrator is the person who uploaded the content, but every person viewing it from there is furthering the victimization. It is mentally damaging for the victim of revenge porn to know that their intimate videos are being seen/sought out.
Let’s do a thought experiment, and I’d like you to tell me at what point a victim was introduced:
Or with AI:
At what point did my actions victimize someone?
If I distributed those images and those images resemble a real person, then that real person is potentially a victim.
I will say someone who does this is creepy and I don’t want them anywhere near children (especially mine, and yes, I have kids), but I don’t think it should be illegal, provided the source material is legal. But as soon as I distribute it, there absolutely could be a victim. Being creepy shouldn’t be a crime.
I think it should be illegal to make porn of a person without their permission, regardless of whether it was shared or not. Imagine the person it is based on finds out someone is doing that. That causes mental strain on the person, just like how revenge porn doesn’t actively harm a person physically but causes mental strain (both the initial upload and continued use of it). For scenario 1 it would be at step 2, when the porn of the person is made. For scenario 2 it would be a mix between steps 3 and 4.
Thanks for sharing! I’m going to disagree with pretty much everything, so please stop reading here if you’re not interested.
Sure, and there are plenty of things that can cause mental strain, but that doesn’t make those things illegal. For example:
And so on. Those things aren’t illegal, but someone could experience mental strain from them. Experiencing that doesn’t make you a victim, it just means you experience it.
Revenge porn damages someone’s reputation, at the very least, which is a large part of why it’s illegal.
Someone keeping those images for private use doesn’t cause harm, therefore it shouldn’t be illegal.
Someone doing something creepy for their own use should never be illegal.
I’m not one to stop because of disagreement. You’re in good faith and that’s all that matters imo
I believe consent is a larger factor. The person who made it consented to have their photos/videos seen by that person but did not consent to them sharing it.
That’s why it’s not illegal to call someone a slut (even though that also damages reputation)
What if the recording was made without the person’s consent. Say someone records their one night stand without the other person’s knowledge but they don’t share it with anyone. Should that be illegal?
Consent is certainly important, but they don’t need your consent if the image was obtained legally and thus subject to fair use, or if you gave them permission in the past.
It can be, if that constitutes defamation or libel. A passing statement wouldn’t, but a post on a popular website absolutely could. It all comes down to the damages that (false) statement caused.
That depends on whether there was a reasonable expectation of privacy. If it’s in public, there’s no reasonable expectation of privacy.
In general, I’d say intimacy likely occurs somewhere with a reasonable expectation of privacy, at which point it would come down to consent (whether implied or explicit).
If the person is a slut, it wouldn’t be libel, but it would still damage their reputation. The person being a slut is true, but calling them one still damages their reputation. If you release a homemade video of a porn star, it would still be illegal even though it’s not something that would damage their reputation.
The reason for the illegality is the lack of consent not the reputation damage.
Even in a 1 party consent state recording someone while you are having intercourse with them is illegal without their consent, because we make exceptions for especially sensitive subjects such as sex.
To go along with that I also believe that people who uploaded photos of themselves/their children did not consent to having their photos used to make sexual content. If they did it would be another matter to me entirely.
Edit: I also would like to say (and I really am sorry for bringing them into this), but from what you said, you think it would be okay (not socially acceptable, but okay/fine) for someone to take pictures of your kids while they’re at the park and use them to make porn. Really think about that. Is that something you think should be allowed? Imagine someone taking pictures of them at Walmart, and you ask what they’re doing, and they straight up tell you “I like how they look, I’m going to add them to my training data to make porn, don’t worry though, I’m not sharing it with anyone,” and you could do jack shit about it without facing legal consequences yourself. You think that is okay?
Sure, in which case the person wouldn’t legally be a victim. It’s completely legal to tell the truth.
But that strays a bit from the point. Making fake porn of someone is a false representation of that person’s character, and thus illegal, but only if it actually causes damage to their reputation (i.e. you distribute it). Or at least that’s the line of argument I think someone would use in states where “revenge porn” isn’t explicitly illegal.
Even if the person is a porn star, the damage is that the porn is coming from somewhere other than the approved channels. Or maybe it’s lost sales. Regardless, there are actual, articulable damages.
Maybe in states where it’s expressly illegal. I’m talking more from a theoretical standpoint where there isn’t an explicit law against it.
If there’s no explicit law, the standard is defamation/libel or violation of a reasonable expectation of privacy.
That’s the reasonable expectation of privacy standard (that applies inside houses when in bedrooms, bathrooms, etc, even if it’s not your house). If you’re doing it in public, there’s no reasonable expectation of privacy, so I think a court would consider filming in that context to be legal.
Then again, this could certainly vary by jurisdiction.
They don’t need to consent for any use, if it’s made available for personal use, then any individual can use it for personal use, even if that’s sexual content. As long as they don’t distribute it, they’re fine to use it as they please.
If you want control over how your content is used, don’t make it available for personal use.
Yes. I certainly don’t want them to do that, but I really don’t want to live in a society with the surveillance necessary to prosecute such a law. Someone being creepy with pictures of my kids is disgusting, but it honestly doesn’t hurt me or my kids in any way, provided they don’t share those images with anyone.
So yes, I think it’s a necessary evil to have the kinds of privacy protections I think are valuable to have in a free society. Freedom means letting people do creepy things that don’t hurt anyone else.
The damages would be the mental harm done to the victim. Most porn stars have content available for free, so that wouldn’t be a reason for damages.
The expectation of privacy doesn’t apply in one-party-consent states, but they still can’t record someone’s sexual activities without their consent.
I don’t think people who uploaded pictures to Facebook consider that making them available for personal use.
Did I say anything about surveillance? Just because something is made illegal doesn’t mean it’s actively pursued; it just means that if someone gets caught or reported doing it, they can be stopped. Like you’d be able to stop the person from doing that to your children. Or if someone gets their house raided for something else, they can be charged for it. Not every person who has real CSAM creates it or shares it; many times they just get caught on another charge and then it gets found. Or the Geek Squad worker sees it on their computer and reports them.
It would give people avenues to stop others from using photos of their children in such a way. You wouldn’t need any extra surveillance
Do you think it’s okay for someone to have real csam? Let’s say the person who made it was properly prosecuted and the person who has the images/videos don’t share it, they just have it to use. Do you think that’s okay?
Then they shouldn’t have uploaded it to Facebook and made it publicly accessible.
It’s the next logical step for the pearl clutchers and amounts to “thought crime.”
These people aren’t doing anything to my children, they’re making their own images from images they have a right to use. It’s super creepy and I’d probably pick a fight with them if I found out, but I don’t think it should be illegal if there’s no victim.
The geek squad worker could still report these people, and it would be the prosecution’s job to prove that they were acquired or created in an illegal way.
No, because that increases demand for child abuse. Those pictures are created by abusing children, and getting access to them encourages more child abuse to produce more content.
Possession itself isn’t the problem, the problem is how they’re produced.
I feel similarly about recreational drugs. Buying from dealers is bad because it encourages smuggling and everything related to it. I have no problem with weed or whatever; I have problems with the cartels. At least with drugs there’s a simple solution: legalize it. I likewise want a legal avenue for these people who would otherwise participate in child abuse to not abuse children. Them looking at creepy AI content generated from pictures of my child doesn’t hurt my child; just don’t share those images or otherwise let me know about it.
I seriously doubt they would create any more surveillance for that than there already is for real CSAM.
That would just make it harder to prosecute people for CSAM since they will all claim their material was just ai. That would just end up helping child abusers get away with it.
I think the production of generated CSAM is unethical because it still involves photos of children without their consent
There is evidence to suggest that viewing CSAM increases child-seeking behavior, so viewing generated CSAM would most likely have the same, or at least a similar, result. That would mean that even just having access to the material would increase the likelihood of child abuse.
theguardian.com/…/online-sexual-abuse-viewers-con…
The survey was self reported so the reality is probably higher than the 42% cited from the study
The best legal avenue for non-offending pedophiles to take is for them to find a psychologist that can help them work through their desires. Not to give them a thing that will make them want to offend even more.
That’s true, and an unfortunate part of preserving freedoms. That said, if someone is actually abusing children on the regular, police have a way of tracking that individual to catch them: investigations.
I wish police had to do them more often instead of leaving that job to the prosecution. If that means we need to pull officers away from other important duties like arresting black men for possessing a joint or pulling people over for speeding on an empty highway, I guess that’s what we have to do.
It involves legally acquired images and is protected under “fair use” laws. You don’t need my permission to exercise your fair use rights, even if I think your use is disgusting. It’s not my business. But if you make it my business (i.e. you tell me), I may choose to assault you and hope the courts will side with me that they constitute “fighting words.”
Just because something is disgusting doesn’t make it illegal.
As for that article:
It doesn’t prove anything, what it does is draw a correlation between people who search for CSAM on the dark web and are willing to answer a survey (a pretty niche group) and self-reported inclination to contact children. Correlation isn’t proof, it’s correlation.
That said, I don’t know if a better study could or should be conducted. Maybe survey people caught contacting children (sting operations) and those caught just distributing CSAM without child contact. We need to know the difference between those who progress to contact and those who don’t, and I don’t think this survey provides that.
I agree, and I think that should be widely accessible.
That said, I don’t think giving people a criminal record helps. If they need to be locked up to protect the public (i.e. there are actual victims), then let’s lock them up. But otherwise, we absolutely shouldn’t. Let’s make help available and push people toward getting that help.
First of all, not every image of a naked child is CSAM. This has actually been kind of a problem with automated CSAM detection systems, which trigger false positives on non-sexual images and get innocent people into trouble.
But also, AI systems can blend multiple elements together. They don’t need CSAM training material to create CSAM, just the individual elements crafted into a prompt sufficient to create the image while avoiding any safeguards.
You ignored the second part of their post. Even if it didn’t use any CSAM, is it right to use pictures of real children to generate CSAM? I really don’t think it is.
There are probably safeguards in place to prevent the creation of CSAM, just like there are for other illegal and offensive things, but determined people work around them.
Not necessarily
You don’t. You just need lists of other things, properly tagged. If you feed an AI a bunch of clothed adults and a bunch of naked adults, it will, in theory, “understand” the difference between being clothed and naked and create any of its clothed adults, naked.
With that initial set above, you feed it a bunch of clothed children. When you ask for a naked child, it will either produce a child head with naked adult body, or a “weird” naked child. It “understands” that adult and child are different things, that clothed and naked are different things, and tries to infer what “naked child” looks like from what it “knows”.
This is the real question and one I don’t know the answer to, because it will boil down to consent to being part of a training model, whether your own as an adult, or a child’s parent, much like how it works for stock photos and videos.
“I consent to having my likeness used for AI training models, except for any use that involves NSFW content” - Fair enough. Good luck enforcing that.
I think the challenge with Generative AI CSAM is the question of where did training data originate? There has to be some questionable data there.
That would mean you need to enforce the law for whoever built the model. If the original creator has 100TB of cheese pizza, then they should be the one who gets arrested.
Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.
There is also the issue of determining whether a given image is real or AI-generated. If AI were legal, prosecutors would need to prove that images are real and not AI, with the risk of letting real offenders go.
The need to ban AI CSAM is even clearer than cartoon CSAM.
And in the process force non-abusers to seek their thrill with actual abuse. Good job; I’m sure the next generation of children will appreciate your prudish, factually inept effort. We’ve tried this with so much stuff: prohibition doesn’t stop anything, it just creates a black market and an abusive power system to go with it.
My main issue with generation is the ability of making it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, and it muddies the water. “I swear officer, I thought it was AI” would become the new “I swear officer, she said she was 18”.
That is not an end-user issue, that’s a dev issue. You can’t train on CSAM if it isn’t available, so doing it is a tacit admission of actual possession.
Would Lisa Simpson be 8 years old, or 43 because the Simpsons started in 1989?
Big brain PDF tells the judge it is okay because the person in the picture is now an adult.
You can say pedophile… that “pdf file” stuff is so corny and childish. Hey guys lets talk about a serious topic by calling it things like “pdf files” and “Graping”. Jfc
Why do people say “graping?” I’ve never heard that.
Please tell me it doesn’t have to do with “The Grapist” video that came out on early YouTube.
To avoid censorship filters in social media, same with PDF files.
Tiktok and Instagram are the main culprits, they’ll shadowban, or outright delist, any content that uses no-no words. Sex, rape, assault, drugs, die, suicide, it’s a rather big list
That’s the issue though. As far as I know it hasn’t been tested in court and it’s quite possible the law is useless and has no teeth.
With AI porn you can point to real victims whose unconsented pictures were used to train the models, and say that’s abuse. But when it’s just a drawing, who is the victim? Is it just a thought crime? Can we prosecute those?
.
the fuck was that spam supposed to do?
The discussion will never be resolved in your favour, if you shut down the discussion.
Sure, and then some judge starts making subjective decisions on drawn/painted art that didn’t hurt anyone and suddenly people are getting hurt.
The justice system is supposed to protect society, not hurt people you don’t like.
While I do think realistic stuff should be illegal, no question, with the loli/shota/whatever you’re just opening a can of worms that could be applied to other things too, as some already have.
Regulators used the very same “normalizing certain sexual acts” argument to try to censor more extreme forms of porn and/or the sexual acts themselves, and partly succeeded in the UK. Sure, scat is gross; many like it exactly because of that. One could even talk about the health risks too. Same with fisting, which is too extreme for many, supposedly extremely painful because many people’s only exposure to it was Requiem for a Dream, and has some associated health risks. However, a lot of that is a misrepresentation of the truth: scat isn’t that big of a health risk if you have a good immune system (the rest can be mitigated with precautions and moderation), and fisting isn’t inherently painful (source: me).
And the same is true of loli/shota. The terms aren’t just applied to actual underage characters, but also to the “short adults” common within the VTubing scene, many of whom are also shorter in real life (obligatory “of course not all”). Some of those other characters are also adults that have exaggerated, almost child-like physiques. Most of it, however, is still just a depiction of children, and otherwise I can understand why some want to abstain even from the “adult loli/shota” stuff. I remember when pubic hair removal was becoming mainstream and many, like radical feminists, feared it would normalize pedophilia; I even got called a pedo by a pubic hair connoisseur for not really liking it. I also don’t really want to talk over victims of CSA, many of whom want it banned and many of whom want it legal.
As for normalizing: the greatest normalization is done by pedos getting into the fandom to recruit others and entertain the idea of a lower age of consent. For a long time we threw these motherfuckers out of our community. But then 4chan happened, and suddenly these very same people just started screaming “it’s just an edgy joke, bro,” so at one point the people trying to keep these creeps out of the anime community in general became vilified, and with Gamergate and the culture wars hitting the scene, “gatekeeping the normies” became the priority, so these sick fucks became a feature of the anime community.
I had a lot of connections to victims of CSA; most of them were teens. None were groomed by loli/shota (everyone’s mileage will vary on it, likely different in the age of the internet), but rather by either some non-pornographic work featuring a teen girl and an older man (usually in a historical setting), or by the perpetrator (often a 25+ year old guy who lied about being way younger) likening his going out with a 14-year-old girl to her parents’ age gap (I’m in Hungary, where that’s technically legal 🤮). Usually a simple “that big an age gap isn’t okay at your age” talk did wonders, unless the only way for the girl to eat that day was to go out with that guy.
I thought cartoons/illustrations of that nature were only illegal in the UK (Coroners and Justice Act 2009) and Switzerland. TIL about the PROTECT Act.
Several countries prohibit any fictional depictions of child porn, whether drawn, written or otherwise. Wikipedia has an interesting list on that - en.wikipedia.org/…/Legality_of_child_pornography
I wonder if there is significant migration happening into those countries where CSAM is legal.
Unlikely. Tourism, on the other hand…
Most people instead take a trip to a place where underage sex workers are common; one can just have an external hard drive and/or a USB stick for that material, which they hide. "An"caps are actively trying to form their own countries, partly to legalize “recordings of crimes,” as they like to call them, if not outright to legalize child rape and child sex trafficking.
The thing about the PROTECT Act is that it relies on the Miller test, which has obvious holes and largely depends on who is reviewing the material. I have heard even the UK law has holes which can be exploited.
What an oddly written article.
They make it sound like the prompts are important and/or more important than the 13,000 images…
In many ways they are. The image generated from a prompt isn’t unique and is actually semi-random; it’s not entirely in the user’s control. The person could argue “I described what I like, but I wasn’t asking it for children, and I didn’t think they were fake images of children,” and based purely on the image it could be difficult to argue that the image is not only “child-like” but actually depicts a child.
The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
And also it’s an AI.
13k images before AI involved a human with Photoshop or a child doing fucked up shit.
13k images after AI is just forgetting to turn off the CSAM auto-generate button.
Having an AI generate 13,000 images does not even take 24 hours (depending on hardware and settings, of course).
Does this mean the AI was trained on CP material? How else would it know how to do this?
It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.
AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.
Training is how it knows it…
You can ask it to make an image of a man made of pizza. That doesn’t mean it was trained on images of that.
But it means that it was trained on people and on pizza. If it can produce CSAM, it means it had access to pictures of naked minors. Even if it wasn’t in a sexual context.
Minors are people. It knows what clothed people of all ages look like. It also knows what naked adults look like. The whole point of AI is that it can fill in the gaps and create something it wasn’t trained on. Naked + child is just a simple equation for it to solve
The whole point of those generative models that they are very good at blending different styles and concepts together to create coherent images. They’re also really good at editing images to add or remove entire objects.
You can always tell when someone has no clue about AI but has read online about it.
I think what @deathbird@mander.xyz meant was that the AI could be trained on what sex is and what children are at different points, and then a user request could put those two concepts together.
But as the replies I got show, there were multiple ways this could have got accomplished. All I know is AI needs to go to jail.
Local model go brrrrrr
Well, some LLMs have been caught with CP in their training data.
Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of them all likely get an offline model, then add their collection of CSAM to it.
I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people’s take on the matter.
Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
Wait do you think all Hentai is CSAM?
And sending the images to a 15 year old crosses the line no matter how he got the images.
Hentai is obviously not CSAM. But having a hentai image on an article about CSAM and child grooming is pretty poorly thought out
You know, looking at it again, I think it’s an ad.
It’s an ad for another article on that site.
The image depicts mature women, not children.
Correct. And OP’s not saying it is.
But to place that sort of image on an article about CSAM is very poorly thought out
I wonder if cartoonized animals in CSAM theme is also illegal… guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.
Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, does anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy’s moderation quality is shit, and I think I’m starting to figure out where I land on the success of my experimental stay with Lemmy.
Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you’re actually scrolling through all of the ads at the bottom of a bullshit clickbait article.
Go to an instance that moderates the way you like.
Lemmy as a whole does not have moderation. Moderators on Lemmy.today cannot moderate Lemmy.world or Lemmy.ml; they can only remove problematic posts as they come and as they see fit, or block entire instances, which is rare.
If you want stricter content rules than any of the available federated instances then you’ll have to either:
Use a centralized platform like Reddit but they’re going to sell you out for data profits and you’ll still have to occasionally deal with shit like “The Donald.”
Start your own instance with a self hosted server and create your own code of conduct and hire moderators to enforce it.
Yeah, I know; that’s why I’m finding Lemmy not for me. This new rage bait every week is tiring and not adding anything to my life except stress, and once I started looking at who the moderators were whenever Lemmy found a new thing to rave about, I found that often there were 1-3 actual moderators, which, fuck that. With Reddit, the shit subs were the exception; here it feels like they ALL (FEEL being a key word here) have a tendency to dive face first into rage bait.
Edit: Most of the Reddit migration happened because Reddit fucked over their moderators. A lot of us were happy with well-moderated discussions, and if we didn’t care to have moderators, we could have just stayed with Reddit after the moderators were pushed away.
You can go to an instance that follows your views closer and start blocking instances that post low quality content to you. Lemmy is a protocol, it’s not a single community. So the moderation and post quality is going to be determined by the instance you’re on and the community you’re with.
This is throwing a blanket over the problem. When the mods of a news community allow bait articles to stay up because they (presumably) further their views, it should be called out as a problem.
you are too optimistic about the internet
I fail to see what part of my comment is optimistic? xD
That anywhere else is better.