Alexstarfire@lemmy.world
on 23 Aug 2024 04:58
nextcollapse
Awful title.
xavier666@lemm.ee
on 23 Aug 2024 08:34
nextcollapse
Clickbait 101
NOT_RICK@lemmy.world
on 23 Aug 2024 13:25
collapse
It’s The Verge, after all. Nobody should read their slop
RageAgainstTheRich@lemmy.world
on 23 Aug 2024 05:13
nextcollapse
Even a few months ago it was hard for people with the knowledge to use AI on photos. I don’t like the idea of this, but it’s unavoidable. There is already so much misinformation, and this will make it so much worse.
gandalf_der_12te@lemmy.blahaj.zone
on 24 Aug 2024 03:38
collapse
I don’t believe there’s misinformation because we fail to discern the truth though. Misinformation exists because people believe what they want to believe.
Th4tGuyII@fedia.io
on 23 Aug 2024 05:42
nextcollapse
Image manipulation has always been a thing, and there are ways to counter it...
But we already know that a shocking amount of people will simply take what they see at face value, even if it does look suspicious.
The volume of AI generated misinformation online is already too damn high, without it getting new strings in its bow.
Governments don't seem to be anywhere near on top of keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will be long done already.
Badeendje@lemmy.world
on 23 Aug 2024 08:05
nextcollapse
On our vacation 2 weeks ago my wife took an awesome picture, just with one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.
gravitas_deficiency@sh.itjust.works
on 23 Aug 2024 16:43
nextcollapse
But it’s never been this absolutely trivial to generate and distribute completely synthetic media. THAT is the real problem here.
Yep, this is a problem of volume of misinformation: the truth can just get buried by one single person generating thousands of fake photos. It’s really easy to lie, and really time-consuming to fact-check.
gravitas_deficiency@sh.itjust.works
on 24 Aug 2024 04:52
collapse
That’s precisely what I mean.
The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD. And more than a bit scary, when you really start to consider the potential malicious implications. Which you can see being employed all over the place today.
probableprotogen@lemmy.dbzer0.com
on 24 Aug 2024 02:39
nextcollapse
Honestly, yeah, I agree. Many mainstream social media platforms are infested with shitty generated content to the point of insanity.
Drewelite@lemmynsfw.com
on 24 Aug 2024 13:02
collapse
You said “but” like it invalidated what I said, when it’s really a true statement and a non sequitur.
You aren’t wrong, and I don’t think that changes what I said either.
kernelle@lemmy.world
on 23 Aug 2024 15:22
collapse
Lmao, “but” means your statement can be true and irrelevant at the same time. From the day Photoshop could fool people, lawyers have been trying to dismiss any image as faked, misplaced, or out of context.
When you just now realise it’s an issue, that’s your problem. People can’t stop these tools from existing, so like, go yell at a cloud or something.
I am not disagreeing with you, but it’s intellectually dishonest not to acknowledge the context of the reality we live in: it used to require genuine talent and skill to use a paid tool to fake images, and now it’s as easy as entering text into a free app on your phone, just describing what you want to see.
This is an exponential escalation of existing problems and technologies.
I never said I was just now worried about fake images. To say it myself: I’m worried about the now non-existent barrier that bad actors no longer need to clear to do whatever they want to do here.
kernelle@lemmy.world
on 23 Aug 2024 17:22
collapse
Let me be clear so you don’t misunderstand me. When it comes down to proving an image is genuine, you haven’t been able to say “look at this picture, it’s real for sure” for almost 30 years. When you want to use a picture to prove something, you have to provide many more details about where/how/when/why it was taken; access to these tools won’t change the fact that a picture in a vacuum has no meaning.
Like I said, old-man-yelling-at-cloud energy.
ggppjj@lemmy.world
on 23 Aug 2024 17:37
nextcollapse
I am no longer interested in continuing a conversation with you, as you’ve convinced me that you’re not interested in engaging with what I am saying. Thank you for your time and perspective to this point.
chirping@infosec.pub
on 24 Aug 2024 13:10
collapse
As an “outside observer”, I think maybe you’re not seeing (what I believe is) the other guy’s viewpoint: what you are bringing up (that Photoshop fakery has long been possible) is a core part of what he said from the start, and his point builds on top of that.
So obviously he already knows it, and arguing about it disregards that his line of argumentation builds upon a basis we all agreed to be true, until you brought it up as… contrarian? To his point.
Doesn’t seem like “old man yells at cloud” energy, more like “Uhm, achtually”
sorghum@sh.itjust.works
on 23 Aug 2024 18:20
collapse
Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned with computer forensics experts no longer being able to detect a fake, whereas Photoshop edits have always been detectable.
kernelle@lemmy.world
on 23 Aug 2024 21:53
collapse
As the cat and mouse game continues, we ask ourselves, is water still wet?
sorghum@sh.itjust.works
on 24 Aug 2024 00:37
collapse
Just wait, image manipulation will happen at image creation and there will be no “original”. Proving an image is unmanipulated will be a landmark legal precedent and set the standard for being able to introduce photographic evidence. It is already a problem for audio recordings and will be eventually for video.
BastingChemina@slrpnk.net
on 23 Aug 2024 18:27
collapse
I really don’t have much knowledge of it, but it sounds like it would be an actually good application of blockchain.
Couldn’t a blockchain be used to certify that pictures are original and have not been tampered with?
On the other hand, if it were possible, I’m certain someone would have already started it; it’s the perfect investor magnet: “Using blockchain to counter AI”
I am being serious: I work in IT and can’t see how that would work in any realistic way.
And even if we had a working system to track all changes made to a photo, it would only work if the author submitted the original image before any change had been made. But how would you verify that the original copy of a photo submitted to the system has not been tampered with?
Sure, you could be required to submit the raw file from the camera, but it is only a matter of time until AI can perfectly simulate an optical sensor and produce a simulated raw of a simulated scene.
Nope, we simply have to fall back on building trust with photojournalists, and trust digital signatures to tell us when we are seeing a photograph modified outside of the journalist’s agency.
BastingChemina@slrpnk.net
on 23 Aug 2024 19:25
collapse
Yep, I think pictures are becoming as valuable as text, and it is fine; we just need to get used to it.
Before photography became mainstream, the only source of information was the written word. It is extremely simple to make up a fake story, so people had to rely on trusted sources. Then, for a short period of history, photography became a (kinda) reliable source of information by itself, and this trust system lost its importance.
In most cases seeing a photo meant that we were seeing a true reflection of what happened, especially if we were seeing multiple photos of the same event.
Now we are arriving at the end of this period: we cannot trust a photo by itself anymore, and tampering with a photo is becoming as easy as writing a fake story. This is a great opportunity for journalists, I believe.
There has never been a time when photography was not manipulated in some way; even something as simple as picking a subject and framing it in a specific way can completely change the story.
I really enjoy photography as a hobby; however, I find it a bit embarrassing and intrusive to take photos of other people, so my photos tend to look empty of people.
I will always frame a picture to have no people in it, or as few as possible.
In general I don’t edit my photos on the computer; I just let them speak for themselves, even if that story is a half-truth.
We have never been able to trust photographs completely, though you make a good point about truth in numbers; that won’t go away just because of AI.
The big issue now is how easy it is to make a completely believable faked photo out of an existing photo. We have been able to do this for decades, but it has been way, way harder to do.
As for the blockchain making photos valuable: we tried that. NFTs as a concept are dumb and have failed, and I don’t believe NFTs will be the future of ownership.
Ilovethebomb@lemm.ee
on 23 Aug 2024 06:07
nextcollapse
Meh, those edited photos could have been created in Photoshop as well.
This makes editing and retouching photos easier, and that’s a concern, but it’s not new.
Something I heard in the Photoshop vs. AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount that one person or a group could produce, much like how the printing press made the production of books so much faster (if you’re into history).
I’m too tired to take a stance so I’m just sharing some arguments I’ve heard
Ilovethebomb@lemm.ee
on 23 Aug 2024 07:32
nextcollapse
Making it even easier to create fake images definitely isn’t great, I agree with you there, but it’s nothing that couldn’t already be done with Photoshop.
I definitely don’t like the idea you can do this on your phone.
Bimbleby@lemmy.world
on 23 Aug 2024 08:16
collapse
Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.
It is basically how it has been forever with the written press: just as everyone now has the capability to manipulate a picture, everyone can write that we are being invaded by aliens, but whether we should believe it is another thing.
It might take some time for the general public to learn this, but it should be a focus area of general schooling within the area of source criticism.
gandalf_der_12te@lemmy.blahaj.zone
on 24 Aug 2024 03:34
collapse
almost how a printing press made the production of books so much faster
… and we all know that led to 30 years of bloody war, btw
zecg@lemmy.world
on 23 Aug 2024 06:31
nextcollapse
It’s a shitty toy that’ll make some people sorry when they don’t have any photos from their night out without tiny godzilla dancing on their table. It won’t have the staying power Google wishes it to, since it’s useless except for gags.
But, please, Verge,
It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph.
get fucked
frengo_@lemmy.world
on 23 Aug 2024 06:41
nextcollapse
I wish tools to detect whether an image is real would become as easy to use and as good as this AI tool bullshit.
MagicShel@programming.dev
on 23 Aug 2024 06:57
nextcollapse
We need to bring back people who can identify shops from some of the pixels and having seen quite a few shops in their time.
I’ve just tried uploading the picture of the girl with fake drugs on the floor to an AI detection tool, and it told me it was 0.2% likely to contain AI-generated content. This does not look good.
palordrolap@fedia.io
on 23 Aug 2024 17:45
collapse
Any tool someone invents will be used to train an AI to circumvent that tool.
In fact that's how a lot of AI training is done in the first place.
Treedrake@fedia.io
on 23 Aug 2024 07:49
nextcollapse
This reaffirms my wish to go back to monkey.
hperrin@lemmy.world
on 23 Aug 2024 07:51
nextcollapse
We literally lived for thousands of years without photos. And we’ve lived for 30 years with Photoshop.
Squizzy@lemmy.world
on 23 Aug 2024 09:19
nextcollapse
The article takes a doomed tone for sure but the reality is we know how dangerous and prolific misinformation is.
echodot@feddit.uk
on 23 Aug 2024 11:54
nextcollapse
The Nazis based their entire philosophy on misinformation, and they did this in a world that predated computers. I don’t actually think there’s going to be a problem here; all of the issues people are claiming exist have always been possible, and not only possible but actually exploited in many cases.
AI is just the tool by which misinformation will now be spread but if AI didn’t exist the misinformation would just find another path.
Sineljora@sh.itjust.works
on 23 Aug 2024 18:37
collapse
I disagree with your point that it wouldn’t get worse. The Nazi example was in fact much worse for its time because of a new tool they called the “eighth great power”.
Goebbels used radio, which was new at the time, and subsidized radios for German citizens.
AI is new, faster and more compelling than radio, not limited to a specific media type, and everyone already has receivers.
Drewelite@lemmynsfw.com
on 24 Aug 2024 04:22
collapse
So shouldn’t the evaporation of the pretense that images are sources of truth be a good thing?
ZILtoid1991@lemmy.world
on 23 Aug 2024 09:31
collapse
Except it was way harder to do.
Now call me an “ableist, technophobic luddite” who wants to ruin other people’s chance of making GTA-like VRMMORPGs from a single prompt!
AVincentInSpace@pawb.social
on 24 Aug 2024 18:28
collapse
have you considered just not listening to AI bros and not letting their opinions upset you
cley_faye@lemmy.world
on 23 Aug 2024 08:08
nextcollapse
This is only a threat to people who take random pictures at face value, which should not have been a thing for a long while, generative AI or not.
The source of a piece of information or a picture, as well as how it was checked, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits does not change that.
gandalf_der_12te@lemmy.blahaj.zone
on 24 Aug 2024 03:32
collapse
Your comment somehow just made me realize something:
When we see/read news, we have to trust the one who’s telling them to us. Since we weren’t there in person to see it with our own eyes. Therefore, it’s always about a “chain of trust”.
This is true no matter whether photos can be manipulated or not. People have been able to lie since humanity exists. Nothing has really changed. Photography, just like globalization, has only brought everything closer together, making it easier to have a more direct, straightforward relationship to other people and events. With the beginning of AI, this distance between you and an event is going to increase a bit, but the core mechanics are still similar.
I kind of wonder: how do we really know that something is true? Do atoms actually exist? What if we’re being lied to by our authorities? You say “of course not”, but what if? I mean, if we blindly trust authorities, we end up like the Republicans, who believe everything Fox News tells them. How, then, do we discern truth?
How, then, do we discern truth?
I guess we have to give “proof” for everything, in the sense of mathematical proof. Something that everybody can follow, using only their fundamental assumptions and deduction. I guess that is what science is all about.
Nope, it must be real, because everyone knows fake photographs only became possible in 2022 with AI; otherwise all these articles would be stupid.
mctoasterson@reddthat.com
on 23 Aug 2024 11:29
collapse
Lots of obviously fake tipoffs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the persons thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.
Ilovethebomb@lemm.ee
on 23 Aug 2024 11:33
nextcollapse
All the details you just mentioned are also present in the unaltered photo though. Only the “drugs” are edited in.
Didn’t read the article, did you?
echodot@feddit.uk
on 23 Aug 2024 11:52
nextcollapse
Em, what? The drug powder is literally what has been added in by the AI. What are you talking about?
Ledivin@lemmy.world
on 23 Aug 2024 14:19
nextcollapse
Lots of obviously fake tipoffs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the persons thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.
This is an AI-edited photo, and literally every “artifact” you pointed out is present in the original except for the wine bottle. You’re not nearly as good at spotting fakes as you think you are; nobody is.
Maybe it’s time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.
At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.
We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built in LIDAR (to make sure they’re not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
ricdeh@lemmy.world
on 23 Aug 2024 10:37
nextcollapse
I am very opposed to this. It means surrendering all trust in pictures to Big Tech. If at some time only photos signed by Sony, Samsung, etc. are considered genuine, then photos taken with other equipment, e.g., independently manufactured cameras or image sensors, will be dismissed out of hand. If, however, you were to accept photos signed by the operating system on those devices regardless of who is the vendor, that would invalidate the entire purpose because everyone could just self-sign their pictures. This means that the only way to effectively enforce your approach is to surrender user freedom, and that runs contrary to the Free Software Movement and the many people around the world aligned with it. It would be a very dystopian world.
Blackmist@feddit.uk
on 23 Aug 2024 12:18
nextcollapse
It would also involve trusting those corporations not to fudge evidence themselves.
I mean, not everything photo related would have to be like this.
But if you wanted your photo to be able to document things, to provide evidence that could send people to prison or get them executed…
The other choice is that we no longer accept photographic, audio or video evidence in court at all. If it can no longer be trusted and even a complete novice can convincingly fake things, I don’t see how it can be used.
There’s no need to make these things Big Tech, so if that’s why you are opposed to it, reconsider what you are actually opposed to. This could be implemented in a FOSS way or an open standard.
So you don’t trust HTTPS because you’d have to trust big tech? Microsoft, Google, and others sign the certificates you rely on to trust that you are sending your password to your bank and not to a phisher. Just as any browser can see and validate certificates, any camera could have a validation or certificate system in place to prove that the data came straight from an unmodified, validated camera sensor.
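The certificate analogy can be sketched in a few lines. This is a toy illustration only: it uses a symmetric HMAC where a real system (e.g. the C2PA standard) would use public-key signatures from a private key held in the camera’s secure element, and the key and function names here are made up for the sketch.

```python
import hashlib
import hmac

# Toy sketch: the camera holds a secret key (in real designs, a private key
# in a secure element) and signs the raw sensor data at capture time.
# Anyone holding the verification key can then check that the bytes are
# untouched. HMAC is symmetric, so this is only an illustration of the idea.
CAMERA_KEY = b"secret-key-burned-into-secure-element"  # hypothetical

def sign_capture(image_bytes: bytes) -> str:
    """Produce a signature over the image at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Check that the image bytes still match the capture-time signature."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

raw = b"\x89PNG...raw sensor data..."
sig = sign_capture(raw)

print(verify_capture(raw, sig))                     # True: untouched image verifies
print(verify_capture(raw + b"edited pixels", sig))  # False: any edit breaks it
```

The hard part isn’t the cryptography, which is exactly the mature machinery behind HTTPS; it’s keeping the signing key inside the sensor hardware so that an attacker can’t just sign a doctored image themselves.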
PrimeMinisterKeyes@lemmy.world
on 23 Aug 2024 16:08
nextcollapse
What in the world is going on with Elsie’s hand in the “second of the five photographs”?
feedum_sneedson@lemmy.world
on 24 Aug 2024 06:09
collapse
I was thinking about those pictures! The garden is magic enough without there needing to be fairies at the bottom of it. I’m not sure if the saying is linked to these forgeries, but I always kind of thought it was.
AnUnusualRelic@lemmy.world
on 23 Aug 2024 09:55
nextcollapse
There are even actual statues of completely made up stuff.
JackGreenEarth@lemm.ee
on 23 Aug 2024 10:54
nextcollapse
People can write things that aren’t true! Oh no, now we can’t trust trustworthy texts such as scientific papers that have undergone peer review!
echodot@feddit.uk
on 23 Aug 2024 11:49
nextcollapse
The Verge are well versed in writing things that are untrue
BalooWasWahoo@links.hackliberty.org
on 24 Aug 2024 12:24
collapse
I mean… have you seen the scathing reports on scientific papers, psychology especially? Peer review doesn’t catch liars. It catches bad experimental design, and it sometimes screens out people the reviewers don’t like. Replication can catch liars sometimes, but even in the sciences that are ‘hard’ it is rare to see replication because that doesn’t bring the grant money in.
echodot@feddit.uk
on 23 Aug 2024 11:41
nextcollapse
Okay, so it’s The Verge, so I’m not exactly expecting much, but seriously?
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus
People have been faking photographs basically since day one, with techniques like double exposure. Also even more sophisticated photo manipulation has been possible with Photoshop which has existed for decades.
There’s a photo of me taken in the ’90s on Thunder Mountain at Disneyland which has been edited to look like I’m actually on a mountainside rather than in a theme park. I think we can deal with fakeable photographs; the only difference here is that the process is automatable, which honestly doesn’t make the blindest bit of difference. It’s quicker, but so what?
TheFriar@lemm.ee
on 23 Aug 2024 13:10
nextcollapse
It used to take professionals or serious hobbyists to make something fake look believable. Now it’s at the tip of everyone’s fingers. Fake photos were already a smaller issue, but this very well could become a tidal wave of fakes trying to grab attention.
Think about how many scammers there are. Think about how many horny boys there are. Think about how much online political fuckery goes around these days. When believable photographs of whatever you want people to believe are at the tips of anyone’s fingers, it’s very, very easy to start a wildfire of misinformation. And think about the young girls being tormented in middle school and high school. And all the scammable old people. And all the fascists willing to use any tool at their disposal to sow discord and hatred.
It’s not a nothing problem. It could very well become a torrent of lies.
AwesomeLowlander@sh.itjust.works
on 23 Aug 2024 23:35
nextcollapse
It used to take professionals or serious hobbyists to make something fake look believable.
I mean…we can all see those are inanimate, right? But that doesn’t even change my point. If anything, it kinda helps prove my point. People are gullible as hell. What’s that saying? “A lie will get halfway around the world before the truth has a chance to pull its boots on.”
A torrent of believable fakes will call into question photographic evidence. I mean, we’ve all seen it happening already. Some kinda strange or interesting picture shows up and everyone is claiming it was AI generated. That’s the other half of the problem.
Photographic evidence is now called into question readily. That happened with Photoshop too, but like I said, throw enough shit against the wall, with millions and millions of other people also throwing shit at the wall, and some is bound to stick. The probability is skyrocketing now that it’s in everyone’s hands and actual AI-generated pictures are becoming indecipherable from photo evidence.
That low effort fairy hoax made a bunch of people believe there were 8in. fairies just existing in the world, regardless of how silly that was. Now, stick something entirely believable into a photograph that only barely blurs the lines of reality and it can be like wildfire. Have you seen those stupid Facebook AI pages? Like shrimp Jesus, the kids in Africa building cars out of garlic cloves, etc. People are falling for that dumbass shit. Now put Kamala Harris doing something shady and release it in late October. I would honestly be surprised if we’re not hit with at least one situation like that in a few months.
rottingleaf@lemmy.world
on 24 Aug 2024 12:55
collapse
Come on, science fiction has had similar technologies for faking things since the ’40s. The writing was on the wall.
It didn’t really work outside of authors’ and readers’ imagination, but the only reason we’re scared is that we’re forced into centralized hierarchical systems in which it’s harder to defend.
I mean, sure, deception as a concept has always been around. But let me just put it this way:
How many more scam emails, scam texts, how many more data leaks, conspiracy theories are going around these days? All of these things always existed. The Nigerian prince scam. That one’s been around forever. The door-to-door salesman, that one’s been around forever. The snake oil charlatan. Scams and lies have been around since we could communicate, probably. But never before have we been bombarded with them like we are today. Before, it took a guy with a rotary phone and a phone book a full day to try to scam 100 people. Now 100 calls go out all at once with a different fake phone number for each, spoofed to be as close to the recipient’s number as possible.
The effort input needed for these things have dropped significantly with new tech, and their prevalence skyrocketed. It’s not a new story. In fact, it’s a very old story. It’s just more common and much easier, so it’s taken up by more people because it’s more lucrative. Why spend all of your time trying to hack a campaign’s email (which is also still happening), when you can make one suspicious picture and get all of your bots to get it trending so your company gets billions in tax breaks? All at the click of a button. Then send your spam bots to call millions of people a day to spread the information about the picture, and your email bots to spam the picture to every Facebook conspiracy theorist. All in a matter of seconds.
This isn’t a matter of “what if.” This is kind of just the law of scams. It will be used for evil, no question. And it does have an effect. You can’t have random numbers call you anymore without immediately expecting spam. Soon, you won’t be able to get photo evidence without immediately thinking it might be fake. Water flows downhill, and new tech gets used for scams. It’s like a law of nature at this point.
rottingleaf@lemmy.world
on 25 Aug 2024 13:53
collapse
Wise people still teach their children (and remind themselves) not to talk to strangers, say “no” if not sure, mind their own business because their attention and energy are not infinite, and trust only family.
You can’t have random numbers call you anymore without you immediately expecting their spam.
You’d be wary of people who are not your neighbors in the Middle Ages. Were you a nobleman, you’d still mostly talk to people you knew since childhood, yours or theirs, and the rare new faces would be people you’ve heard about since childhood, yours or theirs.
It’s not a new danger. Even qualitatively - the change for a villager coming to a big city during the industrial revolution was much more radical.
It’s not a new story. In fact, it’s a very old story.
And you just kinda proved my point. As time has gone on, the reach of deception has grown with new technology. This is just the latest iteration, and every new one has expanded the danger exponentially.
rottingleaf@lemmy.world
on 26 Aug 2024 07:13
collapse
What I really meant is that humanity is a self-regulating system. This disturbance will be regulated just as well as those other ones.
The unpleasant thing is that the example I’ve given involved lots of new power being created, while our disturbance is the opposite: people/forces already having power desperately trying to preserve their relative weight, at the cost of preventing new power from being created.
But we will see if they’ll succeed. After all, the very reason they are doing this is because they can’t create power, and that is because their institutional understanding is lacking, and this in turn means that they are not in fact doing what they think they are. And by forcing those who can create power to the fringe, they are accelerating the tendencies for relief.
I don’t think this is the power redistribution you’re implying it is. I’m not actually sure what you mean by that. The power to create truths? To spread propaganda? I can’t think of any other power this tech would redistribute. Would you mind explaining?
rottingleaf@lemmy.world
on 26 Aug 2024 14:18
collapse
I don’t mean anything by that, because I didn’t say anything about any redistribution.
If your question is what this has to do with the ability to easily generate fakes: the power created would be in killing the untrusted web (as opposed to webs of trust and f2f). It’s a good thing.
Halcyon@discuss.tchncs.de
on 24 Aug 2024 05:36
collapse
The new technique distorts reality in a much larger way. That hasn’t been there before. When everybody has this in their smartphones, we will look at manipulated pics on an hourly basis. That’s unprecedented.
conciselyverbose@sh.itjust.works
on 23 Aug 2024 12:08
nextcollapse
I think this is a good thing.
Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.
Putting it in the hands of everyone brings awareness that pictures aren’t evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn’t and hasn’t been reality for a while.
reksas@sopuli.xyz
on 23 Aug 2024 14:09
nextcollapse
While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now, and at that point it will be too late to demand anything be done about it.
AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, if it’s even possible.
conciselyverbose@sh.itjust.works
on 23 Aug 2024 14:27
collapse
You already can’t. You can’t close Pandora’s box.
Adding labels just creates a false sense of security.
It wouldn’t be a label; that wouldn’t do anything, since it could just be erased. It should be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
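The “invisible pixels” idea can be illustrated with the crudest possible version: a least-significant-bit watermark. Real AI watermarks (Google’s SynthID, for instance) are statistical and designed to survive compression; this stdlib-only sketch, with made-up names and a made-up tag pattern, only shows the principle of a mark that is imperceptible to the eye but machine-detectable.

```python
# Toy LSB watermark: hide a bit pattern in the least significant bits of
# pixel values. Changing an LSB shifts brightness by at most 1/255, so the
# mark is invisible, but a detector that knows the pattern can find it.
MARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit "AI generated" tag

def embed_mark(pixels: list[int]) -> list[int]:
    """Overwrite the LSB of the first len(MARK) pixels with the tag bits."""
    out = list(pixels)
    for i, bit in enumerate(MARK):
        out[i] = (out[i] & ~1) | bit
    return out

def has_mark(pixels: list[int]) -> bool:
    """Check whether the tag bits are present in the LSBs."""
    return [p & 1 for p in pixels[:len(MARK)]] == MARK

image = [200, 113, 90, 47, 255, 8, 33, 120, 64]  # toy grayscale pixel values
tagged = embed_mark(image)

print(has_mark(tagged))  # True: the mark is detectable
print(has_mark(image))   # False for these pixels: no mark present
```

Which also shows the objection raised above: cropping, re-encoding, or simply re-saving as JPEG scrambles a naive LSB mark, which is why production watermarks spread the signal redundantly across the whole image.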
conciselyverbose@sh.itjust.works
on 23 Aug 2024 22:26
collapse
But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.
People just recognizing that a picture is not evidence of anything is better.
Yes, but the reason people don’t already consider pictures irrelevant is that it takes time and effort to manipulate a picture. With AI, not only is it fast, it can be automated. Of course you shouldn’t accept something so unreliable as legal evidence, but this will spill over into everything else too.
conciselyverbose@sh.itjust.works
on 24 Aug 2024 10:57
collapse
It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.
It doesn’t matter if you can’t trust anything you see? What if you couldn’t be sure whether you were talking to a bot right now?
conciselyverbose@sh.itjust.works
on 24 Aug 2024 11:46
collapse
Photos/video from unknown sources have already been completely worthless as evidence for a solid decade. If you used a random picture online to prove a point 5 years ago, you were wrong. This does not change that reality in any way.
The only thing changing is your awareness that they’re not credible.
What about reliable sources becoming less reliable? Knowing something is not credible doesn’t help if I can’t know what is credible.
conciselyverbose@sh.itjust.works
on 24 Aug 2024 13:30
collapse
They are not reliable sources. You cannot become less reliable than “not at all”, and that has been the state of pictures and videos for many years already. There is absolutely no change to the evidentiary value of pictures/video.
Making the information more readily available does not change the reality that pictures aren’t evidence.
I’m not talking about evidence; I’m talking about fundamentally being able to trust anything digital at all, in any context. What if you couldn’t be sure whether a phone call from your friend was actually from your friend, or whether any picture shown to you depicted something real?
Things you need to be able to trust in daily life don’t have to be court-level evidence. That is what abuse of AI will take from us.
conciselyverbose@sh.itjust.works
on 24 Aug 2024 14:07
collapse
It’s the exact same thing. You’re drawing a distinction between two identical things.
Pictures have not been credible for a long time. You shouldn’t have “trusted” a picture for anything 5 years ago.
The only thing that’s in any way different is that now you know you can’t trust it.
I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and extorting them. “That was AI, I wasn’t even at that party 🤷”
I can’t wait for childhood and teenage life to be a bit more free and a bit less constantly recorded.
gandalf_der_12te@lemmy.blahaj.zone
on 24 Aug 2024 03:21
collapse
yeah, every time you go to a party, and fun happens, somebody pulls out their smartphone and starts filming. it’s really bad. people can only relax when there’s privacy, and smartphones have stolen privacy from society for over 10 years now. we need to either ban filming in general (which is not doable) or discredit photographs - which we’re doing right now.
What do you mean explain away? I pointed out that they always stop the footage in a way that implies he dies- when he clearly doesn’t. Having an article about how AI photos can be used to manipulate our perception of reality cite an instance of careful propaganda manipulating the perception of what happened was just a little on the nose.
Seriously, posting about a massacre from over 30 years ago where a few hundred people were killed fighting the cops like it’s supposed to carry water today? Just compare that to the massacre that’s happening right now in Gaza: way more actual evidence of heinous crimes, and it’s way more of a concern to me because it’s my government funding it.
Melvin_Ferd@lemmy.world
on 23 Aug 2024 13:49
nextcollapse
Relevant XKCD. Humans have always been able to lie. Having a single form of irrefutable proof is the historical exception, not the rule.
samus12345@lemmy.world
on 23 Aug 2024 16:16
nextcollapse
Regarding that last panel, why would multiple people go through the trouble of carving lies about Ea-Nasir’s shitty copper? And even if they did, why would he keep them? No, his copper definitely sucked.
The obvious conjecture is that they were trying to commit fraud and get free copper
gandalf_der_12te@lemmy.blahaj.zone
on 24 Aug 2024 03:07
collapse
interesting thought. we haven’t had photos in history, and people didn’t need them. also, we’ve been able to produce text deepfakes all throughout history (and people actually did that - a lot) and somehow, humanity still survived and made progress. maybe we should question our assumptions whether we really need a medium to communicate absolute truth.
Drewelite@lemmynsfw.com
on 24 Aug 2024 04:13
nextcollapse
If you’re getting your truth from somewhere you don’t trust, you’ve already lost the plot. Having a medium to convey absolute truth is NOT the exception, because it never existed. Not with first hand accounts, not with photos, not with videos. Anything, from its inception, has been able to be faked by someone motivated enough.
What we need is an industry of independent ethically driven individuals to investigate and be a trusted source of truth on the world’s important events. Then they can release journals about their findings. We can call them journalers or something, I don’t know, I don’t have all the answers. Too bad nothing like that exists when we need it most 🥲
rottingleaf@lemmy.world
on 24 Aug 2024 12:51
collapse
What we need is distribution of power. Power acts upon information. There was that weird idea that with solid information there’s no need to distribute power. When people say “due process”, they usually mean that. This wasn’t true anyway.
Information is still fine, people lie and have always lied, humanity has always relied upon chains and webs of trust.
The issue is centralized power forcing you to walk their paths.
ulterno@lemmy.kde.social
on 24 Aug 2024 06:29
collapse
humanity still survived and made progress
Humanity never needed truth, for all of that.
Only a good enough illusion.
It’s just that, most of the times the illusions are not good enough and the truth comes out.
WoahWoah@lemmy.world
on 23 Aug 2024 17:24
nextcollapse
This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.
It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, can have a demonstrable social effect.
jaggedrobotpubes@lemmy.world
on 23 Aug 2024 18:47
collapse
It seems like the only defense against this would be something along the lines of FUTO’s Harbor, or maybe Ghost Keys. I’m not gonna pretend to know enough about them technically or practically, but a system that can anonymously prove that you’re you across websites could potentially de-fuel that fire.
rottingleaf@lemmy.world
on 24 Aug 2024 12:48
collapse
Them - and F2F.
LucidNightmare@lemm.ee
on 23 Aug 2024 17:49
nextcollapse
There was actually a user on Lemmy that asked if the original photo for the massacre was AI. It hadn’t occurred to me that people who never heard of the 1989 Tiananmen Square protests and massacre would find the image and question if it was real or not.
A very sad sight, a very sad future.
HiddenLychee@lemmy.world
on 23 Aug 2024 23:33
nextcollapse
Photoshop has existed for years. It’s no different than a student in 2010 being shocked at the horrors of man and trying to figure out how it could be faked with a computer. People have denied the Holocaust for generations!
bluemite@lemmy.world
on 24 Aug 2024 01:21
nextcollapse
It is different. The old Photoshop process took a lot of time. Now an image can be manipulated incredibly quickly and spread almost as fast before anyone has time to do anything about it.
This argument keeps missing that it is not only the quality but mainly the quantity of fakes which is going to be the problem. The complete undermining of trust in photographic evidence is seen as a good thing for so many nefarious vested interests, that this is an aim they will actively strive for.
zero_spelled_with_an_ecks@programming.dev
on 24 Aug 2024 04:59
nextcollapse
Were they from the .ml instances?
turkalino@lemmy.yachts
on 24 Aug 2024 06:56
collapse
How is it sad? If they’re young and/or don’t have the best schooling, it’s not their fault they haven’t heard of it. And then they encounter an absurd picture and approach it with skepticism? That’s not sad at all. Healthy skepticism is good, especially with the influx of AI generated content
helenslunch@feddit.nl
on 23 Aug 2024 21:21
nextcollapse
As if photo manipulation hasn’t been around in better forms for decades…?
cordlesslamp@lemmy.today
on 24 Aug 2024 02:10
nextcollapse
Not as easy and accessible as now.
Before, I didn’t even know how to erase a pimple on my selfies. Now I can easily generate a picture of a photorealistic catgirl riding a bike naked in Times Square that could fool any elder in my neighborhood.
helenslunch@feddit.nl
on 24 Aug 2024 02:42
collapse
Accessibility makes it the opposite of convincing.
And a skillfully modified photo is going to convince just about anyone.
WolfLink@sh.itjust.works
on 24 Aug 2024 04:00
collapse
There are some really subtle details experts can look at to detect Photoshop work, such as patterns in the JPEG artifacts that can indicate a photo was recompressed multiple times in some areas but not others.
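That technique is often called error level analysis: regions that already went through a lossy save change less when saved again than freshly edited regions do. A minimal sketch of the idea, with a toy quantizer standing in for a real JPEG codec (all names here are illustrative, not a forensic tool):

```python
# Error-level-analysis style sketch: pixels already on the quantization
# grid (i.e. already saved once) barely change under one more round of
# quantization, while freshly edited pixels shift noticeably.

def quantize(pixels, step=16):
    """Round each value to the nearest multiple of `step` (toy 'JPEG save')."""
    return [round(p / step) * step for p in pixels]

def error_level(pixels, step=16):
    """Mean absolute change introduced by one more round of quantization."""
    requantized = quantize(pixels, step)
    return sum(abs(a - b) for a, b in zip(pixels, requantized)) / len(pixels)

original = [37, 120, 201, 66, 90, 154, 33, 248]
saved = quantize(original)          # region that went through one 'save'
edited = [p + 5 for p in saved]     # freshly pasted/edited region

print(error_level(saved))   # 0.0: already on the quantization grid
print(error_level(edited))  # > 0: the edit stands out under re-quantization
```

Real ELA works on 8x8 JPEG blocks and DCT coefficients rather than raw values, but the comparison principle is the same.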
helenslunch@feddit.nl
on 24 Aug 2024 09:52
collapse
They can do the same with these. Only it’s a lot less subtle.
It is the quantity of fakes, because of the easy process, which is going to be the problem. Fake pictures will very soon outnumber real ones, and the amount of them will still keep growing exponentially even after that.
PenisDuckCuck9001@lemmynsfw.com
on 23 Aug 2024 22:43
nextcollapse
The world’s billionaires probably know there’s lots of photographic evidence of stuff they did at Epstein island floating around out there. This is why they’re trying to make AI produce art so realistic that photographs are no longer considered evidence, so they can just claim it’s AI-generated if any of that stuff ever gets out.
yamanii@lemmy.world
on 23 Aug 2024 23:47
nextcollapse
These photoshop comments are missing the point: just like art, a good edit that can fool everyone needs someone who has practiced a lot and has lots of experience. Now even the lazy asses on the right can fake it easily.
Drewelite@lemmynsfw.com
on 24 Aug 2024 03:57
nextcollapse
I think this comment misses the point that even one doctored photo created by a team of highly skilled individuals can change the course of history. And when that’s what it takes, it’s easier to sell it to the public.
What matters is the source. What we’re being forced to reckon with now is: the assumption that photos capture indisputable reality has never and will never be true. That’s why we invented journalism. Ethically driven people to investigate and be impartial sources of truth on what’s happening in the world. But we’ve neglected and abused the profession so much that it’s a shell of what we need it to be.
The thing is that in the future the mere quantity of fakes will make the careful vetting process you describe physically impossible. You will be bombarded with high-quality fakes to such an extent that you will simply have to give up trying to keep up, so it will be a choice of either dropping the vetting process or dropping pictures altogether. For profit-driven, corporate-owned media outlets, the choice unfortunately will be obvious.
Drewelite@lemmynsfw.com
on 24 Aug 2024 12:59
collapse
I’m not talking about vetting pictures. I’m talking about journalists who investigate issues THEMSELVES and uncover the truth. They take their OWN pictures and post them on their website and accounts putting their credibility as collateral. We trust them, not because it’s a picture, but because of who took it.
This already happened with text, people learned “Don’t believe everything you read!” And invented the press to figure out the truth. It used to be a core part of our society. But people were tricked into thinking pictures and video were somehow mediums of empirical truth, just because it’s HARD to fake. But never impossible. Which is worse, actually. So we neglected the press and let it collapse into a shit show because we thought we could do it ourselves.
Yeah, it is going to be mainly a quantity issue rather than a quality one. The quality of faked photos has already been high since Photoshop. Now a constantly growing avalanche of high-quality fakes (produced by all sorts of different vested interests with their own particular purposes) is going to barrage us on a daily basis, simply because it is cheap and easy.
Hackworth@lemmy.world
on 24 Aug 2024 03:11
nextcollapse
This is one of the required steps on the way to holodecks. I’ve been ready for it for 30 years.
forgotmylastusername@lemmy.ml
on 24 Aug 2024 05:07
nextcollapse
It’s going to be used prolifically for something much more boring: embellished product listings and fake reviews. If online shopping is frustrating now, it’s probably going to get a lot worse trying to weed out good quality things to buy, as photographs are no longer reliable.
rottingleaf@lemmy.world
on 24 Aug 2024 11:56
collapse
Well, one may hope for a “worse is better” scenario. As in Star Wars EU, where people generally do shopping as they still do in less developed areas of our planet - asking people they trust, which ask other people they trust, and so on.
This is going to make centralized media a hellscape of fakery.
It’s like with viruses - if a virus kills people too fast, it’ll kill itself.
Maybe cypherpunk-style “public web” technologies will finally become mainstream, because the rest simply won’t be usable.
FinishingDutch@lemmy.world
on 24 Aug 2024 05:56
nextcollapse
I work at a newspaper as both a writer and photographer. I deal with images all day.
Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.
If someone was to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed that same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.
Tamo240@programming.dev
on 24 Aug 2024 06:15
nextcollapse
If a bunch of people photographed that same incident from different angles, clearly it’s real
Interesting that this is the threshold because it might need to be raised. In the past it was definitely true that perspective was a hard problem to solve, so multiple angles would increase the likelihood of veracity. Now with AI tools and even just the proliferation and access to 3D effects packages it might no longer be the case.
FinishingDutch@lemmy.world
on 24 Aug 2024 06:27
collapse
Well again, multiple, independent sources that each have a level of trust go pretty far.
From my personal experience with AI though… I found it difficult to get it to generate consistent images. So if I’d ask it for different angles of the same thing, details on it would change. Can it be done? Sure. With good systems and a bit of photoshopping you could likely fake multiple angles of it.
But for the images we run? It wouldn’t really be worth the effort I imagine. We’re not talking iconic shots like the ones mentioned in the article.
helopigs@lemmy.world
on 24 Aug 2024 06:20
nextcollapse
oddly enough, there are models trained to generate different angles of a given scene!
you’re right about the importance of trust. leveraging and scaling interpersonal trust is the key to consensus.
uienia@lemmy.world
on 24 Aug 2024 07:14
nextcollapse
Personally I think this kind of response shows how not ready we are, because it is grounded in the antiquated assumption that it is just more of the same old instead of a complete revolution in both the quality and quantity of fakery going to happen.
I disagree, they are not talking about the online low trust sources that will indeed undergo massive changes, they’re talking about organisations with chains of trust, and they make a compelling case that they won’t be affected as much.
Not that you’re wrong either, but your points don’t really apply to their scenario. People who built their career in photography will have more to lose, and more opportunity to be discovered, so they really don’t want to play silly games when a single proven fake would end their career for good. It’ll happen no doubt, but it’ll be rare and big news, a great embarrassment for everyone involved.
Online discourse, random photos from events, anything without that chain of trust (or where the “chain of trust” is built by people who don’t actually care), that’s where this is a game changer.
rottingleaf@lemmy.world
on 24 Aug 2024 11:49
nextcollapse
So politicians and other scum have gotten themselves a technology to put the jinn back into the bottle.
FunnyUsername@lemmy.world
on 24 Aug 2024 14:24
nextcollapse
Sounds like the photographic equivalent of doping
FinishingDutch@lemmy.world
on 24 Aug 2024 17:21
collapse
Exactly. I can’t control where other people find news, and if they choose poor sources, well, that’s on them. All I can do is be the best, most reliable source for them if they choose to read our news.
Our newspaper community is smaller than you might think. People frequently move around from company to company. I’ve worked in radio, TV news as well as newspapers for the past 20 years. I have a lot of former colleagues who work at other companies within our regional media. And us journalists are a gossipy bunch, as you can imagine. If someone actively tries to undermine my trust, they wouldn’t just be blackballed from the dozen or so regional newspapers that we publish, but also the larger national conglomerate that runs about 40. We take pride in good sources. Undermine that, and you’re not working for us.
Knock_Knock_Lemmy_In@lemmy.world
on 24 Aug 2024 09:05
nextcollapse
If a bunch of people photographed that same incident from different angles, clearly it’s real.
I don’t think you can assume this anymore.
Jiggle_Physics@lemmy.world
on 24 Aug 2024 09:43
collapse
Yeah, photo editing software and AI can be used to create images from different points of view, mimicking different styles and qualities of different equipment, and to make adjustments for continuity from perspective to perspective. Unless we have a way for something like AI to identify fabricated images, using some sort of encoded fingerprint or something, it won’t be long until they are completely indiscernible from the genuine article. You would have to be able to prove a negative, that the person who claims to have taken the photo could not have, in order to do so. This, as we know, is far more difficult than current discretionary methods.
FinishingDutch@lemmy.world
on 24 Aug 2024 17:11
collapse
The point I’m making isn’t really about the ability to fake specific angles or the tech side of it. It’s about levels of trust and independent sources.
It’s certainly possible for people to put up some fake accounts and tweet some fake images of separate angles. But I’m not trusting random accounts on Twitter for that. We look at sources like AP, Reuters, AFP… if they all have the same news images from different angles, it’s trustworthy enough for me. On a smaller scale, we look at people and sources we trust and have vetted personally. People with longstanding relationships. It really does boil down to a ‘circle of trust’: if I don’t know a particular photographer, I’ll talk to someone who can vouch for them based on past experiences.
And if all else fails and it’s just too juicy not to run? We’d slap a big 'ole ‘this image has not been verified’ on it. Which we’ve never had to do so far, because we’re careful with our sources.
Jiggle_Physics@lemmy.world
on 24 Aug 2024 18:16
collapse
Sorry, but if traditional news media loses much more ground to “alternative fact” land, and other reasons for decline vs the new media, I have zero faith they won’t just give in and go with it. I mean, if they are gonna fail anyway, why not at least see if they can get themselves a slice of that pie.
Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
I actually think it isn’t the AI photo or video manipulation part that makes it a bigger issue nowadays (at least not primarily), but the way in which they are consumed. AI making things easier is just another puzzle piece in this trend.
Information volume and speed have increased dramatically, resulting in an overflow that significantly shortens the timespan dedicated to each piece of content. If I slowly read my Sunday newspaper during breakfast, then I’ll give it much more attention compared to scrolling through my social media feed. That lack of engagement makes it much easier for misinformation to have the desired effect.
There’s also the increased complexity of the world. Things can on the surface seem reasonable and true, but have knock-on consequences that aren’t immediately apparent, or only hold true within a narrow picture but fall apart once viewed from a wider perspective. This just gets worse combined with the point above.
Then there’s the downfall of high-profile leading news outlets in relevance and the increased fragmentation of the information landscape. Instead of carefully curated and verified content, immediacy and clickbait take priority. And this imo also has a negative effect on those more classical outlets, which have to compete with it.
You also have increased populism, especially in politics, and many more trends, all compounding on the same issue of misinformation.
And even if caught and corrected, usually the damage is done and the correction reaches far fewer people.
nasi_goreng@lemmy.zip
on 24 Aug 2024 14:30
nextcollapse
Except that’s not what happens.
Just take a look at Facebook. Tons of AI-generated slop with tens or even hundreds of thousands of likes, and people actually believing it. I live in Indonesia, and people often share fake things just for monetisation engagement, and ordinary people have no skill to discern them.
You and I, or even every person here, belong to the rare people actually able to discern information properly. Most people are just doom-scrolling the internet and believing random things that appear to be realistic. Especially where tech education and literacy are not widespread.
ikidd@lemmy.world
on 24 Aug 2024 15:08
nextcollapse
Unfortunately, newspapers and news sources like it that verify information reasonably well aren’t where most people get their info from anymore, and IMO, are unlikely to be around in a decade. It’s become pretty easy to get known misinformation widely distributed and refuting it does virtually nothing to change popular opinion on these stories anymore. This is only going to get worse with tools like this.
FinishingDutch@lemmy.world
on 24 Aug 2024 16:58
collapse
I can’t control where people find their information, that’s a fact. If people choose to find their news on unreliable, fake, agenda-driven, bot-infested social media, there’s very little I can do to stop that.
All I can do is be the best possible source for people who choose to find their news with us.
The ‘death of newspapers’ has been a theme throughout the decades. Radio is faster, it’s going to kill papers. TV is faster, it’s going to kill papers. The internet is faster, it’s going to kill newspapers… and yet, there’s still newspapers. And we’re evolving too. We’re not just a printed product, we also ARE an internet news source. The printed medium isn’t as fast, sure, but that’s also something that our actual readers like. The ability to sit down and read a properly sourced, well written story at a time and place of their choosing. A lot of them still prefer to read their paper saturday morning over a nice breakfast. Like any business, we adapt to the changing needs of consumers. Printed papers might not be as big as they once were, but they won’t be dying out any time soon.
I don’t dispute the usefulness of proper reporting, but at the rate I see newspapers dropping all around us, I’ll be astounded if there’s more than a very few around in a decade. But maybe I’m wrong and people will surprise me and start looking for quality reporting. Doubt it, but maybe.
mipadaitu@lemmy.world
on 24 Aug 2024 18:46
collapse
Thank you. This was a well thought out and logical response.
schnurrito@discuss.tchncs.de
on 24 Aug 2024 06:18
nextcollapse
Sam_Bass@lemmy.world
on 24 Aug 2024 06:35
nextcollapse
No sweat, since I am eschewing most things Google-related.
BallsandBayonets@lemmings.world
on 24 Aug 2024 07:08
collapse
The majority of others aren’t. The technology also isn’t exclusive to Google, or won’t be for long. Forget placing drugs on a person to have an excuse to arrest them, there will be photographic evidence, completely fake, of anyone counter to the system doing whatever crime they want to pin on us.
rottingleaf@lemmy.world
on 24 Aug 2024 12:01
collapse
Look at the good side of this - now nobody has any reason to trust central authorities or any kind of official organization.
Previously it required enormous power to do such things. Now it’s a given that if there’s no chain of trust from the object to the spectator, any information is noise.
It all looks dark everywhere, but what if we will finally have that anarchist future, made by the hands of our enemies?
TypicalHog@lemm.ee
on 24 Aug 2024 08:38
nextcollapse
Damn, those are pretty damn good!
endofline@lemmy.ca
on 24 Aug 2024 12:58
nextcollapse
Davidjjdj@lemmy.world
on 24 Aug 2024 14:45
collapse
Great point. But tools that make it so a 10 year old can manipulate photos even better than your example in several minutes, are in fact fairly new.
Hell they can generate photos that fool 70% of people on Facebook, though now that I say that, maybe that bar isn’t too high…
KillingTimeItself@lemmy.dbzer0.com
on 24 Aug 2024 19:06
collapse
we’ve been able to do this kinda shit since the days of film, it wasn’t hard, just required some clever stitching and blending.
It’s “more accessible” I’m more concerned about shit like AI generated videos though. Those are spooky. Or also just the general accessibility of “natural bot nets” now.
That’s precisely what I mean.
The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD. And more than a bit scary, when you really start to consider the potential malicious implications. Which you can see being employed all over the place today.
Honestly yeah I agree. Many mainstream social media platforms are infested with shitty generated content to the point of being insanity.
All hail the nail and gear 😉
TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.
Some serious old-man-yelling-at-cloud energy
It’ll sink in for you when photographic evidence is no longer admissible in court
Photoshop has existed for a bit now. So incredibly shocking it was only going to get better and easier to do, move along with the times oldtimer.
Photoshop requires time and talent to make a believable image.
This requires neither.
But it has been possible, for more than a decade
You said “but” like it invalidated what I said, instead of being a true statement and a non sequitur.
You aren’t wrong, and I don’t think that changes what I said either.
Lmao, “but” means your statement can be true and irrelevant at the same time. From the day photoshop could fool people lawyers have been trying to mark any image as faked, misplaced or out of context.
When you just now realise it’s an issue, that’s your problem. People can’t stop these tools from existing, so like, go yell at a cloud or something.
You are misunderstanding me.
I am not disagreeing with you, but it’s intellectually dishonest to not acknowledge the context of the reality we live in: it used to require genuine talent and skill to use a paid tool to fake images, and now is as easy as entering text on your phone in a free app just describing what you want to see.
This is an exponential escalation of existing problems and technologies.
I never said I was just now worried about fake images. To say it myself: I’m worried about the now non-existent barrier that bad actors no longer need to clear to do whatever they want to do here.
Let me be clear so you don’t misunderstand me. When it comes down to prove an image is genuine you haven’t been able to say “look at this picture, it’s real for sure” for almost 30 years. When you want to use a picture to prove something you have to provide much more details about where/how/when/why it was taken, access to those tools won’t change the fact a picture in a vacuum has no meaning.
Like I said, old-man-yelling-at-cloud energy.
I am no longer interested in continuing a conversation with you, as you’ve convinced me that you’re not interested in engaging with what I am saying. Thank you for your time and perspective to this point.
As an “outside observer”, I think maybe you’re not seeing (what I believe is) the other guys viewpoint: What you are bringing up (photoshop has been possible already) is a core part of what he said from the start, and his point builds on top of that. So obviously he already knows it, and arguing about it disregards that his line of argumentation builds upon the basis we all agreed upon to be true until you brought it up as … contrarian? To his point. doesn’t seem like “old man yells at cloud” energy, more like “Uhm, achtually”
Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned with computer forensics experts not being able to detect a fake, whereas Photoshop work has always been detectable.
As the cat and mouse game continues, we ask ourselves, is water still wet?
Just wait, image manipulation will happen at image creation and there will be no “original”. Proving an image is unmanipulated will be a landmark legal precedent and set the standard for being able to introduce photographic evidence. It is already a problem for audio recordings and will be eventually for video.
I really don’t have much knowledge on it, but it sounds like it would be an actually good application of blockchain.
Couldn’t a blockchain be used to certify that pictures are original and have not been tampered with?
On the other hand, if it were possible, I’m certain someone would have already started it; it is the perfect investor magnet: “Using blockchain to counter AI”.
How would that work?
I am being serious, I am an IT and can’t see how that would work in any realistic way.
And even if we had a working system to track all changes made to a photo, it would only work if the author submitted the original image before any change haf been made, but how would you verify that the original copy of a photo submitted to the system has not been tempered with?
Sure, you could be required to submit the raw file from the camera, but it is only a matter of time until AI can perfectly simulate an optical sensor to take a simulated raw of a simulated scene.
Nope, we simply have to fall back on building trust with photojournalists, and trust digital signatures to tell us when we are seeing a photograph modified outside of the journalist’s agency.
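To illustrate what a digital signature buys you here, a toy sketch, using an HMAC as a stand-in for a real public-key signature (a real scheme would use something like Ed25519, so anyone could verify with the journalist’s public key without holding a secret; the key and the fake "photo bytes" below are made up for illustration):

```python
import hmac
import hashlib

def sign_photo(photo_bytes: bytes, secret_key: bytes) -> str:
    # The journalist (or their camera) signs the exact bytes of the image file.
    return hmac.new(secret_key, photo_bytes, hashlib.sha256).hexdigest()

def verify_photo(photo_bytes: bytes, signature: str, secret_key: bytes) -> bool:
    # Any later edit to the image changes the digest, so verification fails.
    expected = hmac.new(secret_key, photo_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = b"journalists-signing-key"         # hypothetical signing key
original = b"\xff\xd8 raw jpeg bytes"    # stand-in for a photo file
sig = sign_photo(original, key)

assert verify_photo(original, sig, key)             # untouched photo checks out
assert not verify_photo(original + b"x", sig, key)  # any modification is detected
```

Note what this does and doesn’t give you: it proves the photo wasn’t altered after signing, but it says nothing about whether the scene itself was staged, which is exactly why the trust in the journalist still matters.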
Yep, I think pictures are becoming as valuable as text, and that is fine; we just need to get used to it.
Before photography became mainstream, the only source of information was written. It is extremely simple to make up a fake story, so people had to rely on trusted sources. Then, for a short period of history, photography became a (kinda) reliable source of information by itself, and this trust system lost its importance.
In most cases, seeing a photo meant that we were seeing a true reflection of what happened, especially if we were seeing multiple photos of the same event.
Now we are arriving at the end of this period; we cannot trust a photo by itself anymore, and tampering with a photo is becoming as easy as writing a fake story. This is a great opportunity for journalists, I believe.
There has never been a time when photography was not manipulated in some way; even something as simple as picking a subject and framing it in a specific way can completely change the story.
I really enjoy photography as a hobby, however I find it a bit embarrassing and intrusive to take photos of other people, so my photos tend to look empty of people.
I will always frame a picture to have no people, or as few people as possible, in it.
In general I don’t edit my photos on the computer, I just let them speak for themselves, even if that story is a half truth.
We have never been able to trust photographs completely, though you make a good point about truth in numbers; that won’t go away just because of AI.
The big issue now is how easy it is to make a completely believable fake out of an existing photo. We have been able to do this for decades, but it has been way, way harder to do.
As for the blockchain making photos valuable, we tried that; NFTs as a concept are dumb and have failed. I don’t believe NFTs will be the future of ownership.
Meh, those edited photos could have been created in Photoshop as well.
This makes editing and retouching photos easier, and that’s a concern, but it’s not new.
Something I heard in the Photoshop vs. AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount that one person or a group could make, much like how the printing press sped up the production of books (if you’re into history).
I’m too tired to take a stance so I’m just sharing some arguments I’ve heard
Making creating fake images even easier definitely isn’t great, I agree with you there, but it’s nothing that couldn’t already be done with Photoshop.
I definitely don’t like the idea you can do this on your phone.
Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.
It is basically how it has been forever with the written press: just like everyone now has the capability to manipulate a picture, everyone can write that we are being invaded by aliens, but whether we should believe it is another thing.
It might take some time for the general public to learn this, but it should be a focus area of general schooling within the area of source criticism.
… and we all know that lead to 30 years of bloody war, btw
It’s a shitty toy that’ll make some people sorry when they don’t have any photos from their night out without tiny godzilla dancing on their table. It won’t have the staying power Google wishes it to, since it’s useless except for gags.
But, please, Verge,
get fucked
I wish tools to detect whether an image is real would become as easy to use and as good as these bullshit AI tools.
We need to bring back people who can identify shops from some of the pixels and having seen quite a few shops in their time.
Captain Disillusion vs. The Artificer
It’s fundamentally not possible.
At some point fakes will be pixel perfect indistinguishable.
I’ve just uploaded the picture of the girl with fake drugs on the floor to an AI detection tool, and it told me it was 0.2% likely to contain AI-generated content. This does not look good.
Any tool someone invents will be used to train an AI to circumvent that tool.
In fact that's how a lot of AI training is done in the first place.
This reaffirms my wish to go back to monkey.
We literally lived for thousands of years without photos. And we’ve lived for 30 years with Photoshop.
The article takes a doomed tone for sure but the reality is we know how dangerous and prolific misinformation is.
The Nazis based their entire philosophy on misinformation, and they did this in a world that predated computers. I don’t actually think there’s going to be a problem here; all of the issues that people are claiming exist have always been possible, and not only possible but actually exploited in many cases.
AI is just the tool by which misinformation will now be spread but if AI didn’t exist the misinformation would just find another path.
I disagree with your point that it wouldn’t get worse. The Nazi example was in fact much worse for its time, because of a new tool they called the “eighth great power”.
Goebbels used radio, which was new at the time, and subsidized radios for German citizens. AI is new, faster and more compelling than radio, not limited to a specific media type, and everyone already has receivers.
So, shouldn’t the evaporation of the pretense that images are sources of truth be a good thing?
Except it was way harder to do.
Now call me a “ableist, technophobic, luddite”, that wants to ruin the chance of other people making GTA-like VRMMORPGs from a single line of prompt!
You know that’s not possible right?
if I as an anti-AI person said that, I’d be called out for posting FUD…
What are you talking about lol
have you considered just not listening to AI bros and not letting their opinions upset you
This is only a threat to people who took random pictures at face value, which should not have been a thing for a long while, generative AI or not.
The source of a piece of information or a picture, as well as how it was checked, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits does not change that.
Your comment somehow just made me realize something: When we see/read news, we have to trust the one who’s telling them to us. Since we weren’t there in person to see it with our own eyes. Therefore, it’s always about a “chain of trust”.
This is true no matter whether photos can be manipulated or not. People have been able to lie since humanity exists. Nothing has really changed. Photography, just like globalization, has only brought everything closer together, making it easier to have a more direct, straightforward relationship to other people and events. With the beginning of AI, this distance between you and an event is going to increase a bit, but the core mechanics are still similar.
I kind of wonder, how do we really know that something is true? Do atoms actually exist? What if we’re being lied to by our authorities. You say “of course not”. But what if? I mean, if we blindly trust authorities, we end up like the republicans, who believe everything that fox news tells them. How, then, do we discern truth?
How, then, do we discern truth? I guess we have to give “proof” for everything, in the sense of mathematical proof. Something that everybody can follow, using only their fundamental assumptions and deduction. I guess that is what science is all about.
It’s always been about context and provenance. Who took the image? Are there supporting accounts?
But also, it has always been about the knowledge that no one… Absolutely no one… Does lines of coke from a woven mat floor covering.
<img alt="don’t do drugs kids. " src="https://lemmy.world/pictrs/image/ac28040d-e62a-47c0-b3ef-98a3592ef802.jpeg">
Here is a famous faked photo of fairies from 1917 -> en.m.wikipedia.org/wiki/Cottingley_Fairies
Nope it must be real because everyone knows fake photographs only became possible in 2022 with AI otherwise all these articles would be stupid.
Lots of obviously fake tipoffs in this one: the overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the person’s thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.
All the details you just mentioned are also present in the unaltered photo though. Only the “drugs” are edited in.
Didn’t read the article, did you?
Um, what? The drug powder is what has been added in by the AI; what are you talking about?
This is an AI-edited photo, and literally every “artifact” you pointed out is present in the original except for the wine bottle. You’re not nearly as good at spotting fakes as you think you are; nobody is.
This comment is pure gold, you are already fooled but think you have a discerning eye, you are not immune to propaganda.
We’ve had fake photos for over 100 years at this point.
en.wikipedia.org/wiki/Cottingley_Fairies
Maybe it’s time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.
At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.
We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built in LIDAR (to make sure they’re not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
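The “periodic external timestamps” part could work like a tiny hash chain: each published checkpoint commits to everything registered before it, so back-dating or swapping an image later would break every checkpoint after it. A toy sketch, assuming some trusted party publishes each checkpoint digest (all the names here are made up for illustration):

```python
import hashlib

def checkpoint(prev_hash: str, image_hashes: list[str]) -> str:
    # Each published checkpoint commits to the previous checkpoint plus all
    # image hashes registered since then; changing any old image afterwards
    # changes this digest and every checkpoint that follows it.
    h = hashlib.sha256(prev_hash.encode())
    for img in sorted(image_hashes):
        h.update(img.encode())
    return h.hexdigest()

img_a = hashlib.sha256(b"photo A bytes").hexdigest()
img_b = hashlib.sha256(b"photo B bytes").hexdigest()

c1 = checkpoint("genesis", [img_a])
c2 = checkpoint(c1, [img_b])

# Tampering with photo A after the fact yields a different chain:
forged = hashlib.sha256(b"photo A, edited").hexdigest()
assert checkpoint(checkpoint("genesis", [forged]), [img_b]) != c2
```

This only proves an image existed unmodified before a given date; it doesn’t prove the image was real when it was taken, which is the harder half of the problem.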
I am very opposed to this. It means surrendering all trust in pictures to Big Tech. If at some time only photos signed by Sony, Samsung, etc. are considered genuine, then photos taken with other equipment, e.g., independently manufactured cameras or image sensors, will be dismissed out of hand. If, however, you were to accept photos signed by the operating system on those devices regardless of who is the vendor, that would invalidate the entire purpose because everyone could just self-sign their pictures. This means that the only way to effectively enforce your approach is to surrender user freedom, and that runs contrary to the Free Software Movement and the many people around the world aligned with it. It would be a very dystopian world.
It would also involve trusting those corporations not to fudge evidence themselves.
I mean, not everything photo related would have to be like this.
But if you wanted your photo to be able to document things, to provide evidence that could send people to prison or get them executed…
The other choice is that we no longer accept photographic, audio or video evidence in court at all. If it can no longer be trusted and even a complete novice can convincingly fake things, I don’t see how it can be used.
There’s no need to make these things Big Tech, so if that’s why you are opposed to it, reconsider what you are actually opposed to. This could be implemented in a FOSS way or an open standard.
So do you not trust HTTPS because you’d have to trust big tech? Microsoft, Google, and others sign the certificates you use to trust that you are sending your password to your bank and not a phisher. Just like any browser can see and validate certificates, any camera could have a validation or certificate system in place to prove that the data is straight from an unmodified, validated camera sensor.
What in the world is going on with Elsie’s hand in the “second of the five photographs?”
I was thinking about those pictures! The garden is magic enough without there needing to be fairies at the bottom of it. I’m not sure if the saying is linked to these forgeries, but I always kind of thought it was.
There are even actual statues of completely made up stuff.
People can write things that aren’t true! Oh no, now we can’t trust trustworthy texts such as scientific papers that have undergone peer review!
The Verge are well versed on writing things that are untrue
I mean… have you seen the scathing reports on scientific papers, psychology especially? Peer review doesn’t catch liars. It catches bad experimental design, and it sometimes screens out people the reviewers don’t like. Replication can catch liars sometimes, but even in the sciences that are ‘hard’ it is rare to see replication because that doesn’t bring the grant money in.
Okay so it’s the verge so I’m not exactly expecting much but seriously?
People have been faking photographs basically since day one, with techniques like double exposure. Also even more sophisticated photo manipulation has been possible with Photoshop which has existed for decades.
There’s a photo of me taken in the '90s on Thunder Mountain at Disneyland which has been edited to look like I’m actually on a mountainside rather than in a theme park. I think we can deal with fakeable photographs; the only difference here is the process is automatable, which honestly doesn’t make even the blindest bit of difference. It’s quicker, but so what.
It used to take professionals or serious hobbyists to make something fake look believable. Now it’s at the tip of everyone’s fingers. Fake photos were already a smaller issue, but this very well could become a tidal wave of fakes trying to grab attention.
Think about how many scammers there are. Think about how many horny boys there are. Think about how much online political fuckery goes around these days. When believable photographs of whatever you want people to believe are at the tips of anyone’s fingers, it’s very, very easy to start a wildfire of misinformation. And think about the young girls being tormented in middle school and high school. And all the scammable old people. And all the fascists willing to use any tool at their disposal to sow discord and hatred.
It’s not a nothing problem. It could very well become a torrent of lies.
en.wikipedia.org/wiki/Cottingley_Fairies
Your point being…?
I mean…we can all see those are inanimate, right? But that doesn’t even change my point. If anything, it kinda helps prove my point. People are gullible as hell. What’s that saying? “A lie will get halfway around the world before the truth has a chance to pull its boots on.”
A torrent of believable fakes will call into question photographic evidence. I mean, we’ve all seen it happening already. Some kinda strange or interesting picture shows up and everyone is claiming it was AI generated. That’s the other half of the problem.
Photographic evidence is now called into question readily. That happened with Photoshop too, but like I said, throw enough shit against the wall, with millions and millions of other people also throwing shit at the wall, and some is bound to stick. The probability is skyrocketing now that it’s in everyone’s hands and the actual AI-generated pictures are becoming indistinguishable from photo evidence.
That low-effort fairy hoax made a bunch of people believe there were 8-inch fairies just existing in the world, regardless of how silly that was. Now, stick something entirely believable into a photograph that only barely blurs the lines of reality and it can be like wildfire. Have you seen those stupid Facebook AI pages? Like shrimp Jesus, the kids in Africa building cars out of garlic cloves, etc. People are falling for that dumbass shit. Now put Kamala Harris doing something shady and release it in late October. I would honestly be surprised if we’re not hit with at least one situation like that in a few months.
Come on, science fiction has had similar technologies for faking things since the '40s. The writing was on the wall.
It didn’t really work outside of authors’ and readers’ imagination, but the only reason we’re scared is that we’re forced into centralized hierarchical systems in which it’s harder to defend.
I mean, sure, deception as a concept has always been around. But let me just put it this way:
How many more scam emails, scam texts, how many more data leaks, conspiracy theories are going around these days? All of these things always existed. The Nigerian prince scam. That one’s been around forever. The door-to-door salesman, that one’s been around forever. The snake oil charlatan. Scams and lies have been around since we could communicate, probably. But never before have we been bombarded with them like we are today. Before, it took a guy with a rotary phone and a phone book a full day to try to scam 100 people. Now 100 calls go out all at once with a different fake phone number for each, spoofed to be as close to the recipient’s number as possible.
The effort needed for these things has dropped significantly with new tech, and their prevalence has skyrocketed. It’s not a new story; in fact, it’s a very old story. It’s just more common and much easier, so it’s taken up by more people because it’s more lucrative. Why spend all of your time trying to hack a campaign’s email (which is also still happening), when you can make one suspicious picture and get all of your bots to get it trending so your company gets billions in tax breaks? All at the click of a button. Then send your spam bots to call millions of people a day to spread the information about the picture, and your email bots to spam the picture to every Facebook conspiracy theorist. All in a matter of seconds.
This isn’t a matter of “what if.” This is kind of just the law of scams. It will be used for evil, no question. And it does have an effect. You can’t have a random number call you anymore without immediately expecting spam. Soon, you won’t be able to get photo evidence without immediately thinking it might be fake. Water flows downhill; new tech gets used for scams. It’s like a law of nature at this point.
Wise people still teach their children (and remind themselves) not to talk to strangers, say “no” if not sure, mind their own business because their attention and energy are not infinite, and trust only family.
You’d be wary of people who are not your neighbors in the Middle Ages. Were you a nobleman, you’d still mostly talk to people you knew since childhood, yours or theirs, and the rare new faces would be people you’ve heard about since childhood, yours or theirs.
It’s not a new danger. Even qualitatively - the change for a villager coming to a big city during the industrial revolution was much more radical.
That’s exactly what I meant when I said:
And you just kinda proved my point. As time has gone on, the reach of deception has grown with new technology. This is just the latest iteration, and every new one has expanded the chances/danger exponentially.
What I really meant is that humanity is a self-regulating system. This disturbance will be regulated just as well as those other ones.
The unpleasant thing is that the example I’ve given involved lots of new power being created, while our disturbance is the opposite: people/forces already holding power desperately trying to preserve their relative weight, at the cost of preventing new power from being created.
But we will see if they’ll succeed. After all, the very reason they are doing this is because they can’t create power, and that is because their institutional understanding is lacking, and this in turn means that they are not in fact doing what they think they are. And by forcing those who can create power to the fringe, they are accelerating the tendencies for relief.
I don’t think this is the power redistribution you’re implying it is. I’m not actually sure what you mean by that. The power to create truths? To spread propaganda? I can’t think of any other power this tech would redistribute. Would you mind explaining?
I don’t mean anything by that, because I didn’t say anything about any redistribution.
If your question is what this has to do with the ability to easily generate fakes: the power created would be in killing the untrusted web (as opposed to webs of trust and F2F). It’s a good thing.
Mind explaining what you meant then? I guess I misunderstood your point
Yes, I meant the second sentence)
Oh I thought we were having a friendly discussion. I didn’t know we were arguing. My bad, I’ll change my tone
We were, I’ve just answered your question before you asking it, so referred to that
The new technique distorts reality in a much larger way; that wasn’t possible before. When everybody has this on their smartphone, we will look at manipulated pics on an hourly basis. That’s unprecedented.
I think this is a good thing.
Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.
Putting it in the hands of everyone brings awareness that pictures aren’t evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn’t and hasn’t been reality for a while.
While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now, and at that point it will be too late to demand anything be done about it.
AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI generated, even if only part of it is used. No idea how that would work, though, if it’s even possible.
You already can’t. You can’t close Pandora’s box.
Adding labels just creates a false sense of security.
It wouldn’t be a label, that wouldn’t do anything since it could just be erased. It should be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
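For what it’s worth, the usual version of the “invisible pixels” idea is a least-significant-bit watermark: hide a marker in the lowest bit of each pixel value, invisible to the eye but machine-readable. A toy sketch on a list of grayscale values, which also shows why it’s fragile, since any lossy re-encode wipes it (the values and marker are made up for illustration):

```python
def embed(pixels: list[int], marker_bits: list[int]) -> list[int]:
    # Overwrite the least significant bit of each pixel with a marker bit;
    # this changes each value by at most 1, so the eye can't see it.
    return [(p & ~1) | b for p, b in zip(pixels, marker_bits)]

def extract(pixels: list[int], n: int) -> list[int]:
    return [p & 1 for p in pixels[:n]]

pixels = [200, 201, 130, 57]   # toy grayscale values
marker = [1, 0, 1, 1]          # "this is AI generated"

tagged = embed(pixels, marker)
assert extract(tagged, 4) == marker                           # survives a straight copy
assert max(abs(a - b) for a, b in zip(pixels, tagged)) <= 1   # visually invisible

# But any lossy re-encode (here, crude requantization) destroys it:
recompressed = [(p // 4) * 4 for p in tagged]
assert extract(recompressed, 4) != marker
```

That last assert is the whole problem: a screenshot, a JPEG save, or a resize strips the mark, which is why labels of this kind can always be erased.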
But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.
People just recognizing that a picture is not evidence of anything is better.
Yes, but the reason people don’t already consider pictures irrelevant is that it takes time and effort to manipulate a picture. With AI, not only is it fast, it can be automated. Of course you shouldn’t accept something so unreliable as legal evidence, but this will spill over into everything else too.
It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.
It doesn’t matter if you can’t trust anything you see? What if you couldn’t be sure whether you were talking to a bot right now?
Photos/video from unknown sources have already been completely worthless as evidence for a solid decade. If you used a random picture online to prove a point 5 years ago, you were wrong. This does not change that reality in any way.
The only thing changing is your awareness that they’re not credible.
What about reliable sources becoming less reliable? Knowing something is not credible doesn’t help if I can’t know what is credible.
They are not reliable sources. You cannot become less reliable than “not at all”, and that has been the state of pictures and videos for many years already. There is absolutely no change to the evidentiary value of pictures/video.
Making the information more readily available does not change the reality that pictures aren’t evidence.
I’m not talking about evidence; I’m talking about fundamentally being able to trust anything digital at all, in any context. What if you couldn’t be sure a phone call from your friend was actually from your friend, or whether any picture shown to you was actually of some real thing?
Things you need to be able to trust in daily life don’t have to be court-level evidence. That is what abuse of AI will take from us.
It’s the exact same thing. You’re drawing a distinction between two identical things.
Pictures have not been credible for a long time. You shouldn’t have “trusted” a picture for anything 5 years ago.
The only thing that’s in any way different is that now you know you can’t trust it.
I suppose the conclusion is that we need better ways to verify things
The conclusion is to learn to be comfortable with uncertainty.
The world is inherently uncertain, all the way down to the possibility of measuring subatomic particles.
I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and extorting them. “That was AI, I wasn’t even at that party 🤷”
I can’t wait for childhood and teenage life to being a bit more free and a bit less constantly recorded.
yeah, every time you go to a party and fun happens, somebody pulls out their smartphone and starts filming. it’s really bad. people can only relax when there’s privacy, and smartphones have stolen privacy from society for over 10 years now. we need to either ban filming in general (which is not doable) or discredit photographs, which is what we’re doing right now.
There was film of that exact event. The guy didn’t get run over by the tank, he got on the hood and berated the driver.
Cops in America would run you over for less
Well, luckily all just had a talk and some tea about it and nobody died
youtu.be/vSbx352cn8A
Explain away all these other photos then
What do you mean, explain away? I pointed out that they always stop the footage in a way that implies he dies, when he clearly doesn’t. Having an article about how AI photos can be used to manipulate our perception of reality cite an instance of careful propaganda manipulating the perception of what happened was just a little on the nose.
Seriously, posting about a massacre from over 30 years ago, where a few hundred people were killed fighting the cops, like it’s supposed to carry water today? Just compare that to the massacre happening right now in Gaza: way more actual evidence of heinous crimes, and it’s way more of a concern to me because it’s my government funding it.
TAKING OUR JOBS
AI Is Already Taking Jobs in the Video Game Industry - Wired
Democrats push bill to hire illegal immigrants - FOX News
HARASSING WOMEN AND CHILDREN
Boys are taking images of female classmates and using AI to deepfake nude photos - Fortune
Female Fox journalist harassed and chased by migrants while reporting outside shelter - Dailymail
A THREAT TO OUR WAY OF LIFE
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn - NYTimes
Poll: Americans Fear their way of life is under threat - Fox News
THEY’RE SHITTING ON THE BEACHES
REWRITING HISTORY BY DOCTORING PHOTOS WITH NEVER-SEEN-BEFORE PHOTO MANIPULATIONS
Sorry everyone I keep forgetting which zeitgeist that media is currently using to make us hate and fear something.
…did you just post 6 completely random articles as if there was some sort of point other than “news sites report lots of different news?”
No, I mean there are headings and groupings to assist with the inference.
There might be a point. I see an association. If others do as well that’s good. If others don’t that is also ok.
To spell it out directly: I think it’s weird that the media is recycling Republican headlines about immigration for AI.
Often I cannot see the forest for the trees but sometimes I feel the presence of it even when I’m in it.
Relevant XKCD. Humans have always been able to lie. Having a single form of irrefutable proof is the historical exception, not the rule.
Regarding that last panel, why would multiple people go through the trouble of carving lies about Ea-Nasir’s shitty copper? And even if they did, why would he keep them? No, his copper definitely sucked.
The obvious conjecture is that they were trying to commit fraud and get free copper
interesting thought. we didn’t have photos for most of history, and people didn’t need them. also, we’ve been able to produce text deepfakes all throughout history (and people actually did that - a lot) and somehow, humanity still survived and made progress. maybe we should question our assumption that we really need a medium to communicate absolute truth.
If you’re getting your truth from somewhere you don’t trust, you’ve already lost the plot. Having a medium to convey absolute truth is NOT the exception, because it never existed. Not with first hand accounts, not with photos, not with videos. Anything, from its inception, has been able to be faked by someone motivated enough.
What we need is an industry of independent ethically driven individuals to investigate and be a trusted source of truth on the world’s important events. Then they can release journals about their findings. We can call them journalers or something, I don’t know, I don’t have all the answers. Too bad nothing like that exists when we need it most 🥲
What we need is distribution of power. Power acts upon information. There was that weird idea that with solid information there’s no need to distribute power. When people say “due process”, they usually mean that. This wasn’t true anyway.
Information is still fine, people lie and have always lied, humanity has always relied upon chains and webs of trust.
The issue is centralized power forcing you to walk their paths.
Humanity never needed truth, for all of that. Only a good enough illusion.
It’s just that, most of the time, the illusions are not good enough and the truth comes out.
This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.
It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, can have a demonstrable social effect.
It seems like the only defense against this would be something along the lines of FUTO’s Harbor, or maybe Ghost Keys. I’m not gonna pretend to know enough about them technically or practically, but a system that can anonymously prove that you’re you across websites could potentially de-fuel that fire.
Them - and F2F.
There was actually a user on Lemmy that asked if the original photo for the massacre was AI. It hadn’t occurred to me that people who never heard of the 1989 Tiananmen Square protests and massacre would find the image and question if it was real or not.
A very sad sight, a very sad future.
Photoshop has existed for years. It’s no different than a student in 2010 being shocked at the horrors of man and trying to figure out how it could be faked with a computer. People have denied the Holocaust for generations!
It is different. The old Photoshop process took a lot of time. Now an image can be manipulated incredibly quickly and spread almost as fast before anyone has time to do anything about it.
This argument keeps missing that it is not only the quality but mainly the quantity of fakes that is going to be the problem. The complete undermining of trust in photographic evidence is seen as a good thing by so many nefarious vested interests that this is an aim they will actively strive for.
Were they from the .ml instances?
How is it sad? If they’re young and/or don’t have the best schooling, it’s not their fault they haven’t heard of it. And then they encounter an absurd picture and approach it with skepticism? That’s not sad at all. Healthy skepticism is good, especially with the influx of AI generated content
As if photo manipulation hasn’t been around in better forms for decades…?
Not as easy and accessible as now.
Before, I didn’t even know how to erase a pimple in my selfies. Now I can easily generate a picture of a photorealistic cat girl riding a bike naked through Times Square that could fool any elder in my neighborhood.
Accessibility makes it the opposite of convincing.
And a skillfully modified photo is going to convince just about anyone.
There are some really subtle details experts can look at to detect Photoshop work, such as patterns in the JPEG artifacts that can indicate a photo was recompressed multiple times in some areas but not others.
They can do the same with these. Only its a lot less subtle.
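The recompression signature mentioned above can be sketched with a toy model. This is only an illustration of the underlying idea (double quantization leaves a periodic pattern in coefficient histograms), assuming uniform stand-in DCT coefficients and made-up quantization steps of 5 and 7 rather than real JPEG tables:

```python
import random
from collections import Counter

def quantize(values, step):
    """JPEG-style quantization: snap each value to the nearest multiple of step."""
    return [round(v / step) * step for v in values]

random.seed(42)
# Stand-in for DCT coefficients of one image region (uniform for a clear demo)
coeffs = [random.uniform(-100, 100) for _ in range(70_000)]

# A freshly pasted/edited region is only compressed once, at the final save;
# the untouched background carries traces of the earlier save as well.
single = quantize(coeffs, 7)               # edited region: one compression
double = quantize(quantize(coeffs, 5), 7)  # untouched region: saved, then re-saved

# The doubly quantized histogram shows a periodic tall/short pattern
# (bins fed by two step-5 values vs. one), while the singly quantized
# one is flat. Forensic tools hunt exactly this kind of inconsistency.
h1 = Counter(single)
h2 = Counter(double)
inner = [h2[7 * m] for m in range(-10, 11)]
print("single (flat):   ", [h1[7 * m] for m in range(-3, 4)])
print("double (periodic):", inner)
```

Running it shows the doubly compressed bins alternating between roughly single and double height, which is the periodicity a forensic analyst would flag.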
It is the quantity of fakes, because of how easy the process is, that is going to be the problem. Fake pictures will very soon outnumber real ones, and the number of them will keep growing exponentially even after that.
The world’s billionaires probably know there’s lots of photographic evidence of stuff they did at Epstein island floating around out there. This is why they’re trying to make AI produce art so realistic that photographs are no longer considered evidence, so they can just claim it’s AI generated if any of that stuff ever gets out.
Won’t work against any good digital forensics.
These Photoshop comments are missing the point that it’s just like art: a good edit that can fool everyone needs someone who has practiced a lot and has lots of experience. Now even the lazy asses on the right can fake it easily.
I think this comment misses the point that even one doctored photo created by a team of highly skilled individuals can change the course of history. And when that’s what it takes, it’s easier to sell it to the public.
What matters is the source. What we’re being forced to reckon with now is: the assumption that photos capture indisputable reality has never and will never be true. That’s why we invented journalism. Ethically driven people to investigate and be impartial sources of truth on what’s happening in the world. But we’ve neglected and abused the profession so much that it’s a shell of what we need it to be.
The thing is that in the future the sheer quantity of fakes will make the careful vetting process you describe physically impossible. You will be bombarded with high-quality fakes to such an extent that you will simply have to give up trying to keep up, so it will be a choice of either dropping the vetting process or dropping pictures altogether. For profit-driven, corporate-owned media outlets, the choice unfortunately will be obvious.
I’m not talking about vetting pictures. I’m talking about journalists who investigate issues THEMSELVES and uncover the truth. They take their OWN pictures and post them on their website and accounts putting their credibility as collateral. We trust them, not because it’s a picture, but because of who took it.
This already happened with text, people learned “Don’t believe everything you read!” And invented the press to figure out the truth. It used to be a core part of our society. But people were tricked into thinking pictures and video were somehow mediums of empirical truth, just because it’s HARD to fake. But never impossible. Which is worse, actually. So we neglected the press and let it collapse into a shit show because we thought we could do it ourselves.
Yeah, it is going to be mainly a quantity issue rather than a quality one. The quality of faked photos has already been high since Photoshop. Now a constantly growing avalanche of high-quality fakes (produced by all sorts of different vested interests, each with their own particular purposes) is going to barrage us on a daily basis, simply because it is cheap and easy.
This is one of the required steps on the way to holodecks. I’ve been ready for it for 30 years.
It’s going to be used prolifically for something much more boring: embellished product listings and fake reviews. If online shopping is frustrating now, it’s probably going to get a lot worse trying to weed out good-quality things to buy once photographs are no longer reliable.
Well, one may hope for a “worse is better” scenario. As in Star Wars EU, where people generally do shopping as they still do in less developed areas of our planet - asking people they trust, which ask other people they trust, and so on.
This is going to make centralized media a hellscape of fakery.
It’s like with viruses - if a virus kills people too fast, it’ll kill itself.
Maybe cypherpunk-style “public web” technologies will finally become mainstream, because the rest simply won’t be usable.
I work at a newspaper as both a writer and photographer. I deal with images all day.
Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When Photoshop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.
If someone was to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed that same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.
Interesting that this is the threshold, because it might need to be raised. In the past it was definitely true that perspective was a hard problem to solve, so multiple angles would increase the likelihood of veracity. Now, with AI tools and even just the proliferation of and access to 3D effects packages, that might no longer be the case.
Well again, multiple, independent sources that each have a level of trust go pretty far.
From my personal experience with AI though… I found it difficult to get it to generate consistent images. So if I’d ask it for different angles of the same thing, details on it would change. Can it be done? Sure. With good systems and a bit of photoshopping you could likely fake multiple angles of it.
But for the images we run? It wouldn’t really be worth the effort I imagine. We’re not talking iconic shots like the ones mentioned in the article.
oddly enough, there are models trained to generate different angles of a given scene!
you’re right about the importance of trust. leveraging and scaling interpersonal trust is the key to consensus.
Personally I think this kind of response shows how unready we are, because it is grounded in the antiquated assumption that this is just more of the same old thing, instead of the complete revolution in both the quality and quantity of fakery that is about to happen.
I disagree, they are not talking about the online low trust sources that will indeed undergo massive changes, they’re talking about organisations with chains of trust, and they make a compelling case that they won’t be affected as much.
Not that you’re wrong either, but your points don’t really apply to their scenario. People who built their career in photography have more to lose, and more opportunity to be discovered, so they really don’t want to play silly games when a single proven fake would end their career for good. It’ll happen no doubt, but it’ll be rare and big news, a great embarrassment for everyone involved.
Online discourse, random photos from events, anything without that chain of trust (or where the “chain of trust” is built by people who don’t actually care), that’s where this is a game changer.
So politicians and other scum have gotten themselves a technology to put the jinn back into the bottle.
Sounds like the photographic equivalent of doping
Exactly. I can’t control where other people find news, and if they choose poor sources, well, that’s on them. All I can do is be the best, most reliable source for them if they choose to read our news.
Our newspaper community is smaller than you might think. People frequently move around from company to company. I’ve worked in radio, TV news as well as newspapers for the past 20 years. I have a lot of former colleagues who work at other companies within our regional media. And us journalists are a gossipy bunch, as you can imagine. If someone actively tries to undermine my trust, they wouldn’t just be blackballed from the dozen or so regional newspapers that we publish, but also the larger national conglomerate that runs about 40. We take pride in good sources. Undermine that, and you’re not working for us.
I don’t think you can assume this anymore.
Yeah, photo editing software and AI can be used to create images from different points of view, mimicking the different styles and qualities of different equipment, and to make adjustments for continuity from perspective to perspective. Unless we have a way for something like AI to identify fabricated images, using some sort of encoded fingerprint or something, it won’t be long before they are completely indiscernible from the genuine article. You would have to prove a negative, that the person who claims to have taken the photo could not have done so. This, as we know, is far more difficult than current discretionary methods.
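A minimal sketch of the “encoded fingerprint” idea: the capture device attaches a cryptographic tag over the pixel bytes, so any later edit breaks verification. Real provenance schemes (e.g. C2PA “Content Credentials”) use public-key signatures embedded at capture time; HMAC with a shared key is only a stdlib stand-in here, and the key and pixel bytes are made up for the example:

```python
import hashlib
import hmac

# Hypothetical key baked into a trusted camera. A real scheme would use a
# public/private key pair so verifiers never hold a signing secret.
DEVICE_KEY = b"hypothetical-camera-device-key"

def sign_photo(pixel_bytes: bytes) -> bytes:
    """Tag the camera would attach to each capture."""
    return hmac.new(DEVICE_KEY, pixel_bytes, hashlib.sha256).digest()

def verify_photo(pixel_bytes: bytes, tag: bytes) -> bool:
    """Check that the pixels are byte-for-byte untouched since capture."""
    return hmac.compare_digest(sign_photo(pixel_bytes), tag)

original = bytes(range(64))            # stand-in for the raw pixel data
tag = sign_photo(original)

tampered = original[:-1] + b"\x00"     # flip a single byte, i.e. an "edit"

print(verify_photo(original, tag))     # True
print(verify_photo(tampered, tag))     # False
```

The catch, as the comment says, is adoption: it proves an image is untouched since signing, but says nothing about images that never carried a tag.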
The point I’m making isn’t really about the ability to fake specific angles or the tech side of it. It’s about levels of trust and independent sources.
It’s certainly possible for people to put up some fake accounts and tweet some fake images of separate angles. But I’m not trusting random accounts on Twitter for that. We look at sources like AP, Reuters, AFP… if they all have the same news images from different angles, it’s trustworthy enough for me. On a smaller scale, we look at people and sources we trust and have vetted personally. People with longstanding relationships. It really does boil down to a ‘circle of trust’: if I don’t know a particular photographer, I’ll talk to someone who can vouch for them based on past experiences.
And if all else fails and it’s just too juicy not to run? We’d slap a big 'ole ‘this image has not been verified’ on it. Which we’ve never had to do so far, because we’re careful with our sources.
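That vouching process is essentially a search for a path through a trust graph: I trust the people I’ve worked with, and transitively the photographers they can vouch for. A minimal sketch, with invented names standing in for real colleagues:

```python
from collections import deque

# Who vouches for whom. Every name here is made up for the example.
vouches = {
    "me": ["editor_ap", "colleague_radio"],
    "editor_ap": ["stringer_kyiv"],
    "colleague_radio": ["freelancer_x"],
}

def trust_path(start, target):
    """Return the shortest chain of vouches from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in vouches.get(path[-1], []):
            if nxt == target:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of trust: treat the source as unverified

print(trust_path("me", "stringer_kyiv"))           # ['me', 'editor_ap', 'stringer_kyiv']
print(trust_path("me", "random_twitter_account"))  # None
```

The point isn’t the code, of course; it’s that a source with no path into the graph gets the ‘this image has not been verified’ treatment, no matter how juicy the shot.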
Sorry, but if traditional news media loses much more ground to ‘alternative facts’ land and the new media, I have zero faith they won’t just give in and go with it. I mean, if they’re going to fail anyway, why not at least see if they can grab themselves a slice of that pie?
I actually think it isn’t the AI photo or video manipulation part that makes it a bigger issue nowadays (at least not primarily), but the way in which they are consumed. AI making things easier is just another puzzle piece in this trend.
Information volume and speed have increased dramatically, resulting in an overflow that significantly shortens the timespan dedicated to each piece of content. If I slowly read my Sunday newspaper over breakfast, I’ll give it much more attention than when scrolling through my social media feed. That lack of engagement makes it much easier for misinformation to have the desired effect.
There’s also the increased complexity of the world. Things can seem reasonable and true on the surface, but have knock-on consequences that aren’t immediately apparent, or only hold true within a narrow picture and fall apart once viewed from a wider perspective. This just gets worse combined with the point above.
Then there’s the decline in relevance of high-profile leading news outlets and the increased fragmentation of the information landscape. Instead of carefully curated and verified content, immediacy and clickbait take priority. And this, imo, also has a negative effect on those more classical outlets, which have to compete with it.
You also have increased populism, especially in politics, and many more trends, all compounding the same issue of misinformation.
And even if caught and corrected, usually the damage is done and the correction reaches far fewer people.
Except that’s not what happens.
Just take a look at Facebook. Tons of AI-generated slop with tens or even hundreds of thousands of likes, and people actually believing it. I live in Indonesia, and people often share fake things just to monetise engagement, and ordinary people have no skill to discern them.
You and I, or even everyone here, belong to the rare group of people actually able to discern information properly. Most people just doomscroll the internet and believe random things that appear realistic, especially where tech education and literacy are not widespread.
Unfortunately, newspapers and news sources like it that verify information reasonably well aren’t where most people get their info from anymore, and IMO, are unlikely to be around in a decade. It’s become pretty easy to get known misinformation widely distributed and refuting it does virtually nothing to change popular opinion on these stories anymore. This is only going to get worse with tools like this.
I can’t control where people find their information, that’s a fact. If people choose to find their news on unreliable, fake, agenda-driven, bot-infested social media, there’s very little I can do to stop that.
All I can do is be the best possible source for people who choose to find their news with us.
The ‘death of newspapers’ has been a theme throughout the decades. Radio is faster, it’s going to kill papers. TV is faster, it’s going to kill papers. The internet is faster, it’s going to kill newspapers… and yet, there’s still newspapers. And we’re evolving too. We’re not just a printed product, we also ARE an internet news source. The printed medium isn’t as fast, sure, but that’s also something that our actual readers like. The ability to sit down and read a properly sourced, well written story at a time and place of their choosing. A lot of them still prefer to read their paper Saturday morning over a nice breakfast. Like any business, we adapt to the changing needs of consumers. Printed papers might not be as big as they once were, but they won’t be dying out any time soon.
I don’t dispute the usefulness of proper reporting, but at the rate I see newspapers dropping all around us, I’ll be astounded if there’s more than a very few around in a decade. But maybe I’m wrong and people will surprise me and start looking for quality reporting. Doubt it, but maybe.
Thank you. This was a well thought out and logical response.
I think astralcodexten.com/…/mostly-skeptical-thoughts-on… mostly applies to this too
No sweat, since I am eschewing most things Google-related.
The majority of others aren’t. The technology also isn’t exclusive to Google, or won’t be for long. Forget planting drugs on a person to have an excuse to arrest them; there will be photographic evidence, completely fake, of anyone who runs counter to the system committing whatever crime they want to pin on them.
Look at the good side of this - now nobody has any reason to trust central authorities or any kind of official organization.
Previously it required enormous power to do such things. Now it’s a given that if there’s no chain of trust from the object to the spectator, any information is noise.
It all looks dark everywhere, but what if we will finally have that anarchist future, made by the hands of our enemies?
Damn, those are pretty damn good!
Photo manipulation has existed almost since the invention of photography. It was just much harder; see this famous example of photo retouching: history.com/…/josef-stalin-great-purge-photo-reto…
Great point. But tools that let a 10-year-old manipulate photos even better than your example, in a matter of minutes, are in fact fairly new.
Hell they can generate photos that fool 70% of people on Facebook, though now that I say that, maybe that bar isn’t too high…
we’ve been able to do this kinda shit since the days of film, it wasn’t hard, just required some clever stitching and blending.
It’s “more accessible”. I’m more concerned about shit like AI-generated videos though. Those are spooky. Or also just the general accessibility of “natural bot nets” now.