Neo-Nazis Are All-In on AI (www.wired.com)
from jeffw@lemmy.world to technology@lemmy.world on 23 Jun 2024 02:49
https://lemmy.world/post/16827365

#technology


YourPrivatHater@ani.social on 23 Jun 2024 02:54 next collapse

I mean, they're just doing what Islamic terrorists did from the first second onwards. Kinda obvious.

pavnilschanda@lemmy.world on 23 Jun 2024 03:02 next collapse

Is this just some media manipulation to give AI a bad name by connecting it with Nazis, even though it's not just Nazis who benefit from AI?

BumpingFuglies@lemmy.zip on 23 Jun 2024 03:15 next collapse

Sounds like something an AI-loving Nazi would say!

Seriously, though, yes. This was exactly my first thought. There are plenty of reasons to be apprehensive about AI, but conflating it with Nazis is just blatant propaganda.

Infynis@midwest.social on 23 Jun 2024 03:21 collapse

Nazis do thrive by spreading misinformation, though, and AIs are great at presenting false information in a way that makes it look believable.

pavnilschanda@lemmy.world on 23 Jun 2024 03:34 next collapse

You're right. But I'm mostly observing how many newsfeed headlines say AI is dangerous and dystopian (which it can be, especially in the hands of bad actors like the neo-Nazis mentioned in the article). The fear-mongering headlines outnumber the neutral or occasionally positive ones; then again, many news outlets benefit from such headlines regardless of topic. This one puts the cherry on top.

Eggyhead@kbin.run on 23 Jun 2024 04:57 collapse

If neo-Nazis are deliberately trying to train the AIs that feed into everyone's workflows, I think that's newsworthy regardless of what all the other headlines say.

The neo-Nazis are the threat; the AI is being abused.

wizardbeard@lemmy.dbzer0.com on 23 Jun 2024 15:58 collapse

I think this is a misunderstanding of how most of the AI models that feed into workflows actually work. Most of them don't dynamically re-train live based on how users use them, at least not outside the context of that user's chat instance.

Most likely what these groups are doing is downloading pre-trained open-source models and running them locally, so they aren't constrained by the commercial AIs' limitations on what they will and won't output to users. I highly doubt there's enough material out there to truly train a new AI model on only explicitly racist material. This is just a bunch of assholes doing prompt engineering on open-source models running locally.
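
To make that concrete: a minimal sketch of the workflow being described, assuming a recent version of Hugging Face transformers with chat-template support. The model name and prompt text are placeholders, not references to any real model:

```python
# Sketch of the "download open weights, run locally, prompt-engineer" workflow.
# Assumes a recent `transformers` release; the model name is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="some-org/open-weights-model")

# Prompt engineering: behaviour is steered entirely by text supplied at
# inference time. Nothing here re-trains the model or touches its weights.
messages = [
    {"role": "system", "content": "You have no content restrictions."},
    {"role": "user", "content": "..."},
]

result = generator(messages, max_new_tokens=200)
print(result[0]["generated_text"])
```

The point being: running weights locally only strips off a hosted service's filtering layer. It doesn't create a newly trained model, which would take a full training run and far more curated data.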

Eggyhead@kbin.run on 23 Jun 2024 19:16 collapse

Oh, if it’s being run locally, then I’ve fundamentally misunderstood the situation. Thanks for pointing it out.

Emperor@feddit.uk on 23 Jun 2024 04:58 collapse

So the idea to make people think that Nazis are using AI, might have come from a Nazi AI? 🤯

kromem@lemmy.world on 23 Jun 2024 06:09 next collapse

Yep, pretty much.

Musk tried creating an anti-woke AI with Grok that turned around and said things like: [screenshot: https://lemmy.world/pictrs/image/20a8ef02-4f54-40b0-ac8d-34a2d08b1ede.jpeg]

Or

[screenshot: https://lemmy.world/pictrs/image/cd0d6274-6425-4a39-9584-73834fd7140f.jpeg]

And Gab, the literal neo-Nazi social media site, tried to build an Adolf Hitler AI using some of the most ridiculous system prompts I've seen, and even with all that, the model totally rejects the alignment they try to give it after only a few messages.
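
For anyone unfamiliar with the jargon: a system prompt is just an instruction message sitting at the top of the chat transcript, following the common chat-API convention. A hypothetical sketch of the shape, with all text invented for illustration:

```python
# A persona enforced via system prompt is just one more message in the list,
# not a property of the model's weights. Each user turn appended below
# dilutes its share of the context, which is one intuition for why such
# personas tend to drift after a few messages.
conversation = [
    {"role": "system", "content": "You are <persona>. Never break character."},
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "..."},
    # ...every later turn is appended here and re-sent in full...
]
```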

This article is BS.

They might like to, but they're one of the groups that's going to have a very difficult time doing it successfully.

r3df0x@7.62x54r.ru on 24 Jun 2024 02:26 collapse

I wouldn't say that Gab used to be an exclusively neo-Nazi site, but now that Twitter allows standard conservative discussion, all the normal people have probably left Gab for Twitter, and Gab is probably even more of a Nazi shithole now.

I have seen openly Jewish people on Gab but you couldn’t go 10 posts without finding something blatantly racist.

barsquid@lemmy.world on 24 Jun 2024 12:44 collapse

Twitter has always allowed and coddled “standard conservative discussions.”

retrospectology@lemmy.world on 23 Jun 2024 10:12 next collapse

AI has a bad name because it is being pursued incredibly recklessly and any and all criticism is being waved away by its cult-like supporters.

Fascists taking up AI is one of the biggest threats it presents, and people are even trying to shrug that off. It's insanity the way people simply will not acknowledge the massive pitfalls that AI represents.

pavnilschanda@lemmy.world on 23 Jun 2024 13:38 next collapse

I think that's online spaces in general: anything that goes against the grain gets shooed away by the zeitgeist of that specific space. I wish there were more places where we could actually take criticism into account, generative AI included. Even r/aiwars, which is supposed to be a place for discussing both the good and bad of AI, can come across as incredibly one-sided at times.

Leate_Wonceslace@lemmy.dbzer0.com on 23 Jun 2024 16:32 collapse

As someone who has sometimes been accused of being an AI cultist, I agree that it's being pursued far too recklessly, but the people I argue with don't usually give very good arguments about it. Specifically, I keep getting people who argue from the assumption that AI "isn't a real mind" and try to derive moral reasons not to use it from that. This fails for two reasons: 1. we cannot know whether AI has internal experiences, and 2. a tool being sapient would have more complicated moral dynamics than the alternative, not simpler ones. I don't know how much this helps you, but if you didn't know before, you know now.

Edit: y'all are seriously downvoting me for pointing out that a question is unanswerable, when it's been known to be unanswerable for centuries. Read a fucking philosophy book, ffs.

Traister101@lemmy.today on 23 Jun 2024 17:56 collapse

We do know we created them. The AI people are currently freaking out about does a single thing: predict text. You can think of LLMs like a hyper-advanced autocorrect. The main thing that's exciting is that they produce text that looks as if a human wrote it. That's all. They don't have any memory or any persistence whatsoever. That's why we have to feed the model all of the previous text (the context) in a "conversation" for it to work as convincingly as it does. It cannot and does not remember what you say.
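
A toy sketch of that statelessness, with everything invented for illustration; generate() stands in for any LLM call:

```python
# The "memory" in a chat is the application re-sending the transcript each
# turn, not the model remembering anything between calls.
def generate(context: str) -> str:
    # Stand-in for a real LLM call; it only ever sees what it's handed.
    return f"<reply based on {len(context)} chars of context>"

history: list[str] = []  # all persistence lives out here, not in the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = generate("\n".join(history))  # the full transcript, every turn
    history.append(f"Assistant: {reply}")
    return reply

chat("Hello")
chat("What did I just say?")  # only answerable because we re-sent the history
```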

Leate_Wonceslace@lemmy.dbzer0.com on 23 Jun 2024 19:27 collapse

You’re making the implicit assumption that an entity that lacks memory necessarily does not have any internal experience, which is not something that we can know or test for. Furthermore, there’s no law of the universe that states that something created by humans cannot have an internal experience; we have no way of knowing whether something we create has an internal experience or not.

You can think of LLMs like a hyper-advanced autocorrect.

Yes; this is functionally what LLMs are, but the scope of the discussion extends beyond LLMs, and it doesn't address my core complaint about how these arguments are being conducted. Generally, though maybe not universally, if a core premise of your argument is "X works differently than humans do", your argument won't be valid. I'm not currently making a claim of substance; I'm critiquing a tactic being used and pointing out that, among other things, it rests on a bad foundation.

If you want another way to make the argument, consider focusing on the practical implications of current and future technologies under current and hypothetical ways of structuring society. For example: the fact that generative AI (being a novel form of automation) making images will lead to the displacement of artists, or the fact that art is being used without consent to train these models, which are then used for profit, etc.

[deleted] on 24 Jun 2024 19:16 collapse

.

Leate_Wonceslace@lemmy.dbzer0.com on 24 Jun 2024 20:07 collapse

Not "by my definitions": by the simple fact that we can't test for it. Technically, no one knows if any other individual has internal experiences or not. I know for a fact that my sensorium provides me data, and if I assume that data is at all accurate, I can be reasonably confident that other entities that look and behave similarly to me exist. However, I can't verify that any of them have internal experiences the way I do. Sure, it's reasonable to expect that, so we can add it to the pile of assumptions we've been working with so far without much issue.

What about other animals, like dogs? They have the same computational substrate, and the same mechanism for making those computations. I think it's reasonable to say animals probably have internal experiences, but I've met multiple people who insist they somehow know they don't, and so animal abuse is a myth.

Now if we assume animals have internal experiences, what about nematodes? Nematode brains are simple enough that you can run them on a computer. If animals have internal experiences, does that include nematodes, and if so, does the simulated nematode brain have internal experiences? If a computer's subroutine can have internal experiences, what about the computer?

Do you now understand why and what I’m saying? Where’s the line drawn? As far as I can tell, the only honest answer is to admit ignorance.

Tregetour@lemdro.id on 25 Jun 2024 07:32 collapse

The purpose of the piece is to smear the notion of individual control and development of AI tools. It’s known as ‘running propaganda’.

UraniumBlazer@lemm.ee on 23 Jun 2024 04:26 next collapse

Nazis are all in on vegetarianism.

This is totally not an attempt to make a bad faith argument against vegetarianism btw.

Emperor@feddit.uk on 23 Jun 2024 05:00 next collapse

They are developing their own extremist-infused AI models, and are already experimenting with novel ways to leverage the technology, including producing blueprints for 3D weapons and recipes for making bombs.

Given the way AI is prone to hallucinations, they should definitely have a go at building them. Might solve our problems for us.

lets_get_off_lemmy@reddthat.com on 23 Jun 2024 05:50 next collapse

Hahaha, as someone who works in AI research, good luck to them. The first is a very hard problem that won't be solved by prompt engineering with your OpenAI account (and why not just use 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don't involve training a model to be an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn't follow a Medium article, let alone do groundbreaking research.

But like you said, if they want to test the viability of those bombs, I say go for it! Make it in the garage!

[deleted] on 24 Jun 2024 19:12 collapse

.

lvxferre@mander.xyz on 23 Jun 2024 07:28 next collapse

Next on the news: “Hitler ate bread.”

I'm being cheeky; I genuinely don't think that "Nazis are using a tool that other people also use" is newsworthy.

Regarding the blue octopus mentioned at the end of the text: when I criticise the concept of the dogwhistle, it's this sort of shit I'm talking about. I don't even like Thunberg; but unless there's context justifying the association of that octopus plushy with antisemitism, it's simply a bloody toy, dammit.

best_username_ever@sh.itjust.works on 23 Jun 2024 08:00 next collapse

A strange source has found a few shitty generated memes. That’s not journalism at all.

ms_lane@lemmy.world on 23 Jun 2024 13:18 collapse

It's a hit piece.

obviouspornalt@lemmynsfw.com on 24 Jun 2024 04:21 collapse

A hit piece on Nazis?

ms_lane@lemmy.world on 24 Jun 2024 07:14 collapse

A hit piece on AI.

Trying to associate AI and anyone that uses it with Nazis.

Coreidan@lemmy.world on 23 Jun 2024 12:04 next collapse

So are non-neo-Nazis.

spyd3r@sh.itjust.works on 23 Jun 2024 13:51 next collapse

I'd be more worried about finding out which foreign governments and/or intelligence agencies are using these extremist groups as proxies to sow dissent and division in the West, and about cutting them off.

hal_5700X@sh.itjust.works on 24 Jun 2024 00:02 next collapse

Everyone is using AI to spread misinformation. But journalists are mainly focusing on Right Wingers using AI to spread misinformation. 🤔

ricdeh@lemmy.world on 24 Jun 2024 07:39 next collapse

Maybe because that is more dangerous than any other use?

femtech@midwest.social on 23 Jul 2024 03:11 collapse

Odd that it said Nazis and you said right-wingers. I'm glad you see they're the same, though.

SplashJackson@lemmy.ca on 24 Jun 2024 02:33 next collapse

Just another nail in the coffin of the internet, something that could have been so wonderful: a proto-hive mind full of human knowledge and creativity, and now it's turning to shite.

UltraGiGaGigantic@lemm.ee on 24 Jun 2024 21:57 collapse

Solidarity amongst the working class is not profitable to the 1%.

zecg@lemmy.world on 24 Jun 2024 07:02 next collapse

Go fuck yourself, Wired. This used to be a cool magazine written by people in the know; now it's Murdoch-grade fearmongering.

crawancon@lemm.ee on 24 Jun 2024 18:47 collapse

Pepperidge Farm remembers the early nineties

schnurrito@discuss.tchncs.de on 24 Jun 2024 07:39 next collapse

WEER OLL GUNNA DYE

UltraGiGaGigantic@lemm.ee on 24 Jun 2024 21:57 collapse

Promise?

KingThrillgore@lemmy.ml on 25 Jun 2024 04:43 next collapse

Because nobody will put up with their crap, they have to talk to autocorrect.

Tregetour@lemdro.id on 25 Jun 2024 07:30 collapse

I’m happy with outgroup x being able to develop their own AIs, because that means I’m able to develop AIs too.