Top AI expert 'completely terrified' of 2024 election, shaping up to be 'tsunami of misinformation' (fortune.com)
from L4s@lemmy.world to technology@lemmy.world on 30 Dec 2023 16:00
https://lemmy.world/post/10130106

Top AI expert ‘completely terrified’ of 2024 election, shaping up to be ‘tsunami of misinformation’::“I can’t prove that,” says Oren Etzioni, professor emeritus at the University of Washington. “I hope to be proven wrong. But the ingredients are there.”

#technology

threaded - newest

autotldr@lemmings.world on 30 Dec 2023 16:00 next collapse

This is the best summary I could come up with:


That could look like persuasive text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government ones in your area, experts said.

Faced with content that is made to look and sound real, “everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality,” said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.

But pro-democracy advocates argue the takeover has shifted what once was a flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.

Twitter used to be one of the “most responsible” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group.

“I’m worried that in 2024, we’re going to see similar recycled, ingrained false narratives but more sophisticated tactics,” said Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas.

In Colorado, Secretary of State Jena Griswold said informative paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.


The original article contains 1,764 words, the summary contains 218 words. Saved 88%. I’m a bot and I’m open source!

Siegfried@lemmy.world on 30 Dec 2023 16:45 next collapse

Had this in Argentina last year, and I’d bet even the “debate” was highly AI-scripted.

Edit: funny thing, it was not cheap… they estimate 15 kM US$

ripcord@lemmy.world on 30 Dec 2023 16:54 next collapse

What wasn’t cheap? The election…?

Siegfried@lemmy.world on 30 Dec 2023 17:32 collapse

Massa’s campaign was paid for with money from the state. Here, during elections, the state pays part of every party’s campaign, but they went way above that number with the candidate of the ruling party.

WhiteHawk@lemmy.world on 30 Dec 2023 21:59 collapse

15 thousand million million?

= 15 quadrillion USD?

Corkyskog@sh.itjust.works on 31 Dec 2023 00:18 next collapse

No wonder they have inflation problems.

Siegfried@lemmy.world on 31 Dec 2023 00:57 collapse

Once you start dividing by zero, your limit is the stars ✨️

Siegfried@lemmy.world on 31 Dec 2023 00:56 collapse

15 billion, sorry… I got horribly lost in translation. In Spanish, a billion is a million millions, and plurals in acronyms are marked with doubled letters, like EEUU for Estados Unidos (United States).

crsu@lemmy.world on 30 Dec 2023 17:05 next collapse

If anything, it’s highlighting problems that already existed, like the rollback of the Fairness Doctrine. If it were still in effect, you would have more legal ground to stand on. Now they can lie, shrug, and say ‘we’re just entertainers,’ and they do it, from Maddow to Carlson.

FarFarAway@startrek.website on 30 Dec 2023 19:52 next collapse

Maybe an unpopular opinion, but I feel like anything produced by AI should be somehow watermarked at the source. At this point there are only a handful of companies; it wouldn’t be too hard to have them all insert something easily identifiable into the final product. Something like a microscopic signature in a corner, with model info and the date produced… idk. Not anything that ruins the image, but something that can be seen by anyone who looks for it.

If nothing else, there should be a large push to inform the public of telltale features to look for (e.g., too many appendages) to help them determine whether something was created by AI. While not foolproof, if it can discount even a portion of the misinformation, IMO it’s worth the effort.

To me, it seems irresponsible of the companies running the AI to just unleash it upon the world without training us humans to understand what we’re looking at. Letting us see how realistic everything is, while letting us know it’s been produced by AI, at least helps us comprehend the scope of the matter and adapt to the situation at hand. Especially those who don’t fully grasp what AI can and cannot do.

wahming@monyet.cc on 30 Dec 2023 21:00 next collapse

The technology is open source. Anybody can run it themselves and disable the watermarking.

ugjka@lemmy.world on 30 Dec 2023 21:04 next collapse

It is just math, and most of it is public. If you can buy a $100K datacenter GPU, you can have your own ChatGPT; heck, you can even do shit with regular consumer GPUs. It’s like trying to stop encryption.

randon31415@lemmy.world on 30 Dec 2023 21:34 next collapse

Llamafile can run on a normal computer without a GPU. It can look at a folder full of pictures and rename them based on what each picture looks like:

hackaday.com/…/using-local-ai-on-the-command-line…

FarFarAway@startrek.website on 30 Dec 2023 21:37 collapse

Well, shit. This explains a lot. But also, what chucklehead thought that was a good idea?

I know, now that you mention it, I vaguely remember something about how they didn’t think it should be kept only by some corporations or something. Which is commendable, but at the same time, ugh.

I have no problem with everyone being able to use it, but there should have been an introductory period, if nothing else. Jeeze.

Whelp, fake everything here we…are.

ugjka@lemmy.world on 30 Dec 2023 22:26 next collapse

Social media corps just need to use AI to cross-check what is in videos/photos against various established news organizations. Pretty sure that will be the solution: moderating AI content with AI.

deranger@sh.itjust.works on 30 Dec 2023 23:08 collapse

Feels like people probably said the same thing about the printing press when it came around. Imagine the sheer increase in volume of printed lies after its invention.

just_change_it@lemmy.world on 30 Dec 2023 21:53 next collapse

I pinky promise to watermark my ai works!!!

Come on. If I use an AI tool to generate something and incorporate it into a released product… how is that any different from googling for an idea and incorporating it into my released product? Why is a search aggregator (a thing that takes all the information you allow it to off of a site and presents it to the public) ANY different? You’re using an algorithm to get an output you desire based on an input.

daltotron@lemmy.world on 30 Dec 2023 22:22 next collapse

They already do that; it’s just invisible to the naked eye and only identifiable to other AIs, which can pretty easily distinguish real from fake. Adversarial networks.

The distinction between what’s real and what’s fake, as always, will just end up coming down to who has the most resources and who has the luxury of constructing their own reality. It’s an arms race: both sides’ algorithms need active maintenance in order to supersede each other.
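A toy sketch of the kind of invisible mark discussed in this thread, using plain least-significant-bit steganography (a classic textbook technique, NOT any vendor’s actual scheme): the tag is imperceptible to the eye because no pixel value changes by more than 1, a detector reads it back trivially, and, as other commenters point out, it is just as trivially stripped.

```python
# Hypothetical LSB watermark on a flat list of 8-bit pixel values.
# Illustrates invisibility + machine-readability + easy removal.

def embed(pixels: list[int], tag: bytes) -> list[int]:
    """Hide `tag`, bit by bit, in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # each value changes by at most 1
    return out

def extract(pixels: list[int], length: int) -> bytes:
    """Read `length` bytes back out of the low bits."""
    return bytes(
        sum((pixels[b * 8 + i] & 1) << i for i in range(8))
        for b in range(length)
    )

def strip(pixels: list[int]) -> list[int]:
    """One line defeats the scheme: zero every low bit."""
    return [p & ~1 for p in pixels]

image = [137, 42, 200, 13, 88, 91, 250, 7] * 16   # fake 128-value "image"
marked = embed(image, b"AI:v1")                   # hypothetical tag string

assert extract(marked, 5) == b"AI:v1"                        # detector finds it
assert max(abs(a - b) for a, b in zip(image, marked)) <= 1   # eye cannot
assert extract(strip(marked), 5) != b"AI:v1"                 # trivially removed
```

The `strip` function is the whole counter-argument from the replies below: since the weights and the math are public, nothing stops someone from running the model, or a one-line filter, without the mark.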

grayman@lemmy.world on 30 Dec 2023 22:42 next collapse

I bet you’re also the kind of person that thinks putting up “no guns” signs keeps bad people from shooting innocent people.

FarFarAway@startrek.website on 31 Dec 2023 18:38 collapse

Not at all. Back to that educating the public bit.

I’m not a tech wiz, but I do know my way around the basic functions of a computer. If I have no idea how it works or what it’s capable of, how are people who know next to nothing supposed to figure it out?

FlaminGoku@reddthat.com on 30 Dec 2023 20:07 next collapse

I appreciate this take and think it’s a great idea. You have everything written to an immutable distributed ledger (dare I say, a blockchain) so that no matter what is created and shared, it can be traced back.

You still allow its capabilities to evolve, but you will always be able to confirm with a check.

It will be similar to the pictures of diseased lungs and hearts on cigarette packs. People will still “buy” the “news” even though it’s fake.

At this point, though, you can run a deepfake off a laptop, so there would need to be a complete fork of existing code, with heavy regulation.
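A minimal sketch of the ledger idea above, assuming a plain SHA-256 hash chain rather than an actual distributed blockchain (all record fields here are hypothetical): each entry commits to the hash of the previous one, so editing any old record breaks every later hash, which is what makes the history tamper-evident.

```python
# Toy append-only provenance log: a hash chain, not a real blockchain.
import hashlib
import json

def append_record(chain: list[dict], payload: dict) -> dict:
    """Append `payload`, chaining it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "payload": payload}
    body["hash"] = hashlib.sha256(
        json.dumps({"prev": prev, "payload": payload}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edit anywhere makes this fail."""
    prev = "0" * 64
    for rec in chain:
        digest = hashlib.sha256(
            json.dumps({"prev": rec["prev"], "payload": rec["payload"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"model": "example-gen-1", "date": "2024-01-01"})
append_record(chain, {"model": "example-gen-1", "date": "2024-01-02"})
assert verify(chain)                         # untouched history checks out

chain[0]["payload"]["date"] = "1999-01-01"   # tamper with an old record
assert not verify(chain)                     # the whole chain now fails
```

This only gives you tamper-evidence for content that was logged in the first place; as the replies note, nothing forces a laptop-run deepfake to ever touch the ledger.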

Human@lemmy.dbzer0.com on 31 Dec 2023 02:46 next collapse

I initially thought this was the way to go too, but IMO there’s a problem: the only people who could produce high-quality unwatermarked content would be those with access to GPU clusters (state actors and corpos), who would undoubtedly use it to manipulate the masses that had been trained to trust the watermark.

I think in the best-case scenario, we’re just going to have to ride out a couple of very strange years while people adjust to a new reality. Shit’s gonna get weird.

FarFarAway@startrek.website on 31 Dec 2023 18:47 collapse

You have a point. I sometimes definitely forget to consider the flip side of the coin.

nutsack@lemmy.world on 31 Dec 2023 19:04 next collapse

A lot of the bad actors here would probably not comply with such a policy. There is no way to enforce it.

tsonfeir@lemm.ee on 01 Jan 2024 02:20 next collapse

“Should”? Maybe. “Is”? No.

silvercove@lemdro.id on 03 Jan 2024 08:36 collapse

This is technically not possible.

DonPiano@lemmy.ca on 31 Dec 2023 03:50 next collapse

The word they’re looking for is “shitnami”.

thejml@lemm.ee on 30 Dec 2023 17:28 next collapse

This was going to be true, AI or not. There’s no reason for them NOT to try to dupe you into voting for them… or at least against the other person. That’s been the way of it since the beginning. It’s definitely been ramping up over the last 30-60 years, and tech will 100% be leveraged to those ends wherever it can be, because they’d be dumb not to. They want to be in office. Whatever gaslighting they can do, they’ll do.

Without some sort of monitoring or accountability, it’s just going to get worse. Even with fines for misinformation, they’ll just do some math to find out whether the fine is worth it. If they put out an ad that smears the other side, drives voters to them, and gets caught, the voters likely won’t see a retraction. Those voters’ views likely won’t change back, so the fine does nothing but increase the cost of the ad, and it may still be worth it.

BoneALisa@lemm.ee on 30 Dec 2023 21:22 next collapse

I believe this wholeheartedly. I work for an MCSP, and we have a client who runs a chain of “news” sites. They are buying a bunch of AI server equipment for their racks, and we are almost 100% certain it’s to pump out garbage for the election.

silvercove@lemdro.id on 03 Jan 2024 08:35 next collapse

America has interfered with elections all over the world. I’m not too sad that they are getting a taste of their own medicine.

serial_crusher@lemmy.basedcount.com on 30 Dec 2023 18:56 collapse

The volume of actual AI misinformation is going to pale in comparison to the volume of people using AI misinformation as a boogeyman to scare you into voting a certain way.