Microsoft sues service for creating illicit content with its AI platform (arstechnica.com)
from kid@sh.itjust.works to cybersecurity@sh.itjust.works on 13 Jan 11:35
https://sh.itjust.works/post/31027612

#cybersecurity


Alphane_Moon@lemmy.world on 13 Jan 11:52

I read this article earlier; it wasn’t very clear to me what the focus of this illicit gen AI content was.

Very sneaky approach, I have to say.

RagnarokOnline@programming.dev on 13 Jan 12:16

This article is hilarious to me for some reason…

All 10 defendants were named John Doe because Microsoft doesn’t know their identity.

So Microsoft doesn’t know who the people are.

Microsoft didn’t say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored.

The compromised accounts were likely taken over because the account owners put API credentials directly in the code they published.
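
For the curious, the scanning the article alludes to is trivially easy. Here’s a minimal sketch of what such a credential scanner might look like, assuming a locally cloned repo and a couple of illustrative key patterns; real tools (gitleaks, truffleHog, etc.) use far larger rule sets plus entropy checks, and the specific regexes here are just examples:

```python
import os
import re

# Illustrative patterns only -- not an exhaustive or authoritative rule set.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret key prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID format
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # generic "api_key = '...'" assignment
]

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and flag lines that look like leaked credentials."""
    for dirpath, _, filenames in os.walk(root):
        if ".git" in dirpath:
            continue  # skip git internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if any(p.search(line) for p in KEY_PATTERNS):
                            print(f"{path}:{lineno}: possible credential: {line.strip()}")
            except OSError:
                continue  # unreadable file, move on

if __name__ == "__main__":
    scan_repo(".")  # point at a cloned repository
```

Point something like that at every public repo you can clone and you get a steady stream of working keys, which is exactly why stripping credentials before publishing matters.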

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Microsoft won’t explain how their system is busted.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”

Whatever the hackers generated sure did piss Microsoft off.

rtxn@lemmy.world on 13 Jan 12:20

to bypass the guardrails the company had created

What a delightful way to say that those guardrails were worth, in effect, fuck all.

bitwolf@sh.itjust.works on 14 Jan 06:10

It gets even better.

These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers and others by malicious threat actors.

Yet their public statement is:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels.

Sounds like they preferred to keep the service live and race to mitigate, but the holes stayed open.
Yet they’re really going after these people: suing defendants they can’t even identify and rattling off every violation they can hope to make stick.

It’s irresponsible.