OpenAI whistleblower Suchir Balaji found dead in San Francisco apartment (www.mercurynews.com)
from silence7@slrpnk.net to technology@lemmy.world on 14 Dec 00:14
https://slrpnk.net/post/16133425

#technology


pagenotfound@lemmy.world on 14 Dec 00:31 next collapse

We’re truly in a dystopian future when big tech nerds are doing mafia hits. Reminds me of that guy in Better Call Saul that hired Mike as a bodyguard.

Ganbat@lemmy.dbzer0.com on 14 Dec 00:33 next collapse

Police say it appears to be a suicide. Probably true, honestly, but that doesn’t mean he wasn’t driven to it.

WorldsDumbestMan@lemmy.today on 14 Dec 02:43 next collapse

That’s even worse!

mosiacmango@lemm.ee on 14 Dec 04:19 next collapse

I wouldn’t believe the cops without some evidence either way.

NeoNachtwaechter@lemmy.world on 14 Dec 06:09 next collapse

Police say it appears to be a suicide.

Let me guess: they found fewer than 30 stab wounds in his back?

mriguy@lemmy.world on 14 Dec 13:18 collapse

“He blew the whistle on a multibillion dollar company - obviously he knew they’d kill him! Suicide.”

SplashJackson@lemmy.ca on 14 Dec 03:28 next collapse

What whistle did he blow?

whostosay@lemmy.world on 14 Dec 05:27 next collapse

I didn’t even read the article. I just barely skimmed it and guess what I found within 2 seconds.

“Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.”

Grimy@lemmy.world on 14 Dec 05:51 next collapse

It was more of an opinion piece. They were already being sued, and from what I understand he didn’t bring any new info forward.

Jimmycakes@lemmy.world on 14 Dec 07:46 next collapse

He had hard proof that ChatGPT was trained on copyrighted work, opening OpenAI up to lawsuits from the copyright holders and potentially collapsing the whole company.

phoneymouse@lemmy.world on 14 Dec 08:39 collapse

You don’t even need “hard” proof. The mere fact that ChatGPT “knows” about certain things indicates that it ingested certain copyrighted works. There are countless examples. Can it quote a book you like? Does it know the plot details? There is no other way for it to have gotten that information.
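For what it’s worth, that kind of probe is easy to sketch. The snippet below is a rough illustration, not anyone’s actual methodology: prompt a model with the opening of a passage, ask it to continue, and measure how much of its continuation matches the original verbatim. `query_model` is a hypothetical stand-in for whichever chat API you’re testing.

```python
from difflib import SequenceMatcher


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under test
    (e.g. a chat-completions API). Returns the model's continuation."""
    raise NotImplementedError("wire this up to the model you want to probe")


def verbatim_overlap(reference: str, prompt_len: int = 200) -> float:
    """Prompt the model with the start of a passage and report what
    fraction of its continuation matches the original text verbatim."""
    prompt = reference[:prompt_len]
    expected = reference[prompt_len:]
    completion = query_model(f"Continue this passage exactly:\n\n{prompt}")
    match = SequenceMatcher(None, expected, completion).find_longest_match(
        0, len(expected), 0, len(completion)
    )
    return match.size / max(len(completion), 1)


# A high overlap ratio on text that exists nowhere else suggests the passage
# was in the training data; a low ratio proves nothing either way.
```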

zqps@sh.itjust.works on 14 Dec 14:21 next collapse

The issue is proving that it ingested the original copyrighted work, and not some hypothetical public copyleft essay.

sean@lemmy.wtf on 19 Dec 11:29 collapse

Facts aren’t protected by copyright. Regurgitating facts about a thing is in no way illegal, even if it’s done by an AI trained on ingested copyrighted material. I can legally make a website dedicated to stating only facts about Disney products (all other things being equal) when prompted by questions from my users.

phoneymouse@lemmy.world on 19 Dec 11:39 collapse

I think you’re missing the point. We are talking about whether it is fair use under the law for an AI model to even ingest copyrighted works, and for those works to be used as the basis for the model’s output, without the permission of the copyright holders. This is an unsettled legal question that is being litigated right now.

Also, in some cases the models do produce verbatim quotes of original works. So it’s not just a question of whether the AI model stated some “facts”; we’re also asking whether an AI model can reproduce an actual copyrighted work verbatim. It’s settled law that humans cannot do that except in limited circumstances.

sean@lemmy.wtf on 19 Dec 11:42 collapse

The mere fact that ChatGPT “knows” about certain things indicate that it ingested certain copyrighted works.

This is the bit I’m responding to. That “mere fact,” per the facts I’ve stated, is not copyright infringement. I’m not making claims about any of your other statements.

Verbatim reproduction may be copyright infringement, but that wasn’t your original claim that I quoted and am responding to (I didn’t make that clear earlier, that’s on me).

“Apologies” for my autistic way of communicating (I’m autistic)

phoneymouse@lemmy.world on 19 Dec 11:56 collapse

I think you’re using the word fact in two senses here.

I am arguing that ChatGPT and other AI models were created using copyrighted works, and my “proof” is the “fact” that they can reproduce those works verbatim, or state facts about them that could come from nowhere but the original copyrighted work (or a derivative work that used the original under fair use).

Now, the question is: is it fair use under copyright law for AI models to be built with copyrighted materials?

If it is considered fair use, I’d guess it would have a chilling effect on human creativity: no creator can count on making a living if their style of work can be reproduced so cheaply, without them, once an AI has been trained on their works. It would then become necessary to revisit copyright law and redefine fair use so that creators aren’t discouraged. AI can only really “remix” what it has seen before, and if nothing new is being created because AI has killed the incentive to make new things, it will stagnate and degrade.

GasMaskedLunatic@lemmy.dbzer0.com on 14 Dec 05:14 next collapse

His AI GF must’ve convinced him to shoot himself in the back of the head with a shotgun twice.

NeoNachtwaechter@lemmy.world on 14 Dec 06:10 collapse

Twice is better. Reliable.

Subverb@lemmy.world on 14 Dec 14:45 collapse

If Zombieland taught us anything, it’s the double-tap.

macattack@lemmy.world on 14 Dec 05:20 next collapse

RIP. Hope that his whistleblowing doesn’t end up falling on deaf ears

fmstrat@lemmy.nowsci.com on 14 Dec 13:40 collapse

There are 11 others involved in the case.

calcopiritus@lemmy.world on 14 Dec 09:44 next collapse

When someone from the working class kills a CEO, the FBI posts a reward and they’re found within a week. When a company does it, the world is silent.

elucubra@sopuli.xyz on 14 Dec 12:56 next collapse

Whistleblower deaths should get all of a company’s directors investigated by default. It may be that 99% are innocent, but just one or two, seeing their massively valuable stock and options in danger, may be driven to such actions on their own.

JustJack23@slrpnk.net on 14 Dec 15:42 collapse

The police are too busy shooting people’s pets and being scared of acorns to investigate something like this.

JustJack23@slrpnk.net on 14 Dec 15:43 collapse

Oh, and shooting people for avoiding the $3.50 New York subway fare…

werefreeatlast@lemmy.world on 14 Dec 14:20 collapse

Obviously suicide, because it happened at his house/apartment. Who else would suicide himself in his own apartment, right? I wouldn’t bother trying to figure out how it happened, like fingerprinting surfaces or sniffing dogs or checking cameras. Why would sniffing a dog even help?