Threat Actors Exploring Large Language Models for Cyberattacks, Microsoft and OpenAI Report (www.microsoft.com)
from Squire1039@lemm.ee to technology@lemmy.world on 19 Feb 2024 21:29
https://lemm.ee/post/24259081

Summary

This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.

Key findings:

Specific examples:

Additional points:

#technology


Pantherina@feddit.de on 19 Feb 2024 22:51 next collapse

And that's why you don't produce tools that are not needed and cause harm, MicroShit

FaceDeer@kbin.social on 20 Feb 2024 00:39 collapse

I am baffled that you appear to be attacking Microsoft over this. They're doing research to counter bad actors here.

Pantherina@feddit.de on 20 Feb 2024 01:05 next collapse

They are funding that tool and forcefully pushing it onto Windows. And now they want to “protect” against “threat actors”.

Don't believe a word that comes out of Big Tech PR departments.

FaceDeer@kbin.social on 20 Feb 2024 01:06 collapse

You think Microsoft is the only organization capable of producing these tools? They weren't even the first.

Pantherina@feddit.de on 20 Feb 2024 01:44 collapse

That is true. Still, huge big tech companies are the biggest threat actors.

demonsword@lemmy.world on 20 Feb 2024 15:52 collapse

They’re doing research to counter bad actors here

“Bad actors” as defined by the US gov’t, of course. Home of the “brave” that bombs the shit out of everyone they dislike using unmanned drones, and currently supports an ongoing genocide in the Middle East. Literally the paradise of freedom and justice on Earth.

AbouBenAdhem@lemmy.world on 19 Feb 2024 23:15 next collapse

I assume they mean threat actors besides Microsoft and OpenAI?

[deleted] on 19 Feb 2024 23:33 collapse

.

FunderPants@lemmy.ca on 20 Feb 2024 02:50 collapse

I mean, yeah, okay, but most of those use cases are exactly what everyone else is using them for so far.