A New York Times copyright lawsuit could kill OpenAI (www.vox.com)
from L4s@lemmy.world to technology@lemmy.world on 19 Jan 2024 20:00
https://lemmy.world/post/10922209

A New York Times copyright lawsuit could kill OpenAI: A list of authors and entertainers is also suing the tech company for damages that could total in the billions.

#technology


autotldr@lemmings.world on 19 Jan 2024 20:00 next collapse

This is the best summary I could come up with:


Late last year, the New York Times sued OpenAI and Microsoft, alleging that the companies are stealing its copyrighted content to train their large language models and then profiting off of it.

Meanwhile, the Senate Judiciary Subcommittee on Privacy, Technology, and Law held a hearing in which news executives implored lawmakers to force AI companies to pay publishers for using their content.

In its rebuttal, OpenAI said that regurgitation is a “rare bug” that the company is “working to drive to zero.” It also claims that the Times “intentionally manipulated prompts” to get this to happen and “cherry-picked their examples from many attempts.”

A growing list of authors and entertainers have been filing lawsuits since ChatGPT made its splashy debut in the fall of 2022, accusing these companies of copying their works in order to train their models.

Developers have sued OpenAI and Microsoft for allegedly stealing software code, while Getty Images is embroiled in a lawsuit against Stability AI, the maker of the image-generating model Stable Diffusion, over Getty’s copyrighted photos.

In the 2013 Google Books decision, Judge Chin said the technology “advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders.” And a 2023 economics study of the effects of Google Books found that “digitization significantly boosts the demand for physical versions” and “allows independent publishers to introduce new editions for existing books, further increasing sales.” So consider that another point in favor of giving tech platforms room to innovate.


The original article contains 1,628 words, the summary contains 259 words. Saved 84%. I’m a bot and I’m open source!

kjPhfeYsEkWyhoxaxjGgRfnj@lemmy.world on 19 Jan 2024 20:18 next collapse

I doubt it. It would likely kill any AI companies not backed by giant tech, though.

Microsoft has armies of lawyers and cash to pay. It would make life a lot harder, but they’d survive

db2@lemmy.world on 19 Jan 2024 20:21 next collapse

Oh no, how terrible. What ever will we do without Shenanigans Inc. 🙄

sin_free_for_00_days@sopuli.xyz on 19 Jan 2024 20:28 next collapse

Oh no. Anyways.

kaitco@lemmy.world on 19 Jan 2024 20:45 next collapse

I never thought that the AI-driven apocalypse could be impeded by a simple lawsuit. And, yet, here we are.

maynarkh@feddit.nl on 19 Jan 2024 20:54 collapse

One has to wonder why in Star Trek the Federation did not simply sue the Borg.

BearOfaTime@lemm.ee on 19 Jan 2024 21:29 next collapse

Hahahahahahaha hahahahahahaha omg, thank you for the very real, actual laugh-out-loud moment.

Now I’m envisioning Picard on one side, Borq Borg (wtf autocorrect?) Queen on the other, and what, Q as judge, looking older by the minute, just hating life.

kaitco@lemmy.world on 19 Jan 2024 22:40 collapse

Well, that comes down to the particular venue. Who’s going to rule? The Kardassians??

SatanicNotMessianic@lemmy.ml on 19 Jan 2024 21:05 next collapse

The NYT has a market cap of about $8B. MSFT has a market cap of about $3T. MSFT could take a controlling interest in the Times for the change it finds in the couch cushions. I’m betting a good chunk of the c-suites of the interested parties have higher personal net worths than the NYT has in market cap.
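For scale, here’s a rough back-of-envelope sketch of those numbers in Python. The market caps are the approximations cited above, not live market data, and “controlling interest” is assumed to mean just over half the equity:

```python
# Rough scale comparison using the approximate market caps cited above.
# All figures are the comment's approximations, not live market data.

NYT_MARKET_CAP = 8e9      # ~$8 billion
MSFT_MARKET_CAP = 3e12    # ~$3 trillion

# Assume a controlling interest means just over 50% of NYT equity.
controlling_stake = 0.5 * NYT_MARKET_CAP

print(f"Controlling stake in the NYT: ${controlling_stake / 1e9:.0f}B")
print(f"Share of MSFT's market cap: {controlling_stake / MSFT_MARKET_CAP:.3%}")
# -> about $4B, i.e. roughly 0.13% of Microsoft's market cap
```

Under those assumptions, the stake really is couch-cushion money relative to Microsoft’s size.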

I have mixed feelings about how generative models are built and used. I have mixed feelings about IP laws. I think there needs to be a distinction between academic research and for-profit applications. I don’t know how to bring the laws into alignment on all of those things.

But I do know that the interested parties who are developing generative models for commercial use, in addition to making their models available for academics and non-commercial applications, could well afford to properly compensate companies for their training data.

[deleted] on 19 Jan 2024 21:54 collapse

.

ripcord@lemmy.world on 20 Jan 2024 00:07 next collapse

Or Musk when he decided he didn’t like what people were saying on Twitter.

SatanicNotMessianic@lemmy.ml on 20 Jan 2024 00:16 collapse

I completely agree. I don’t want them to buy out the NYT, and I would rather move back to the laws that prevented over-consolidation of the media. I think that Sinclair and the consolidated talk radio networks represent a very real source of danger to democracy. I think we should legally restrict the number of markets a particular broadcast company can be in, and I also believe that we can and should come up with an argument that’s the equivalent of the Fairness Doctrine that doesn’t rest on something as physical and mundane as the public airwaves.

Grimy@lemmy.world on 19 Jan 2024 21:16 next collapse

This would raise the cost of entry for making a model and nothing more. OpenAI will buy the data if they have to, and so will Google. The money will only go to the owners of the New York Times and its shareholders; none of the journalists who will be let go in the coming years will see a dime.

We must keep the cost of entry into the AI game as low as possible, or the only two players will be Microsoft and Google. And as our economy becomes increasingly AI-driven, this will cement their ownership of it.

Pragmatism or slavery, these are the two options.

[deleted] on 19 Jan 2024 21:32 next collapse

.

Even_Adder@lemmy.dbzer0.com on 19 Jan 2024 21:51 collapse

He’s not arguing for OpenAI, but for the rest of us. AI is a public technology, but we’re on the verge of losing our ability to participate due to things like this and the megacorps’ attempts at regulatory capture. Which they might just get. Their campaign against AI is a lot like governments’ attempts to destroy encryption. Support open source development; it’s our only chance. Their AI will never work for us. John Carmack put it best.

<img alt="" src="https://cdn.discordapp.com/attachments/114846470686900232/1163524791114932294/image.png?ex=653fe3e7&is=652d6ee7&hm=a671e192d0b070b6a5ac36cf51ffd50ef27cca9b68ff120140f6f0fcfd31b446&">

Fuck "Open"AI, fuck Microsoft. Pragmatism or slavery.

[deleted] on 19 Jan 2024 22:01 collapse

.

Grimy@lemmy.world on 19 Jan 2024 22:36 collapse

If you want to know my personal political stance, I think every company with more than 50 or so employees should be owned by the state. I’m for the dismantling of the stock market and the owner caste. I’m also a realist and understand those things won’t come to pass anytime soon. OpenAI will remain and they will happily eat all the fines if it guarantees them a monopoly.

I wasn’t playing devil’s advocate. My point is that legislation like this only helps companies like OpenAI while bringing no benefit whatsoever to any of us.

There are also ways to hold giant megacorporations to a different set of standards than independent developers.

Yes, but that isn’t what is currently being proposed, is it?

[deleted] on 19 Jan 2024 22:47 next collapse

.

Grimy@lemmy.world on 19 Jan 2024 23:02 next collapse

I never claimed to be a copyright lawyer, and there is literally no copyright discussion here except the ones pertaining to AI. I touched on my ideals because you were implying I was pro big business.

I always try to have a reasonable discussion with you, but you always end up writing these kinds of comments while never addressing my actual arguments. Have a good day bro.

[deleted] on 19 Jan 2024 23:17 collapse

.

Grimy@lemmy.world on 19 Jan 2024 23:45 collapse

You edited your comment after I responded. This is what you originally posted:

“That’s a pretty good trick, trying to conflate regulation of OpenAI with other impossible ideals you claim to hold, and drawing a hard line between that and your own suggestion: to let OpenAI win.

I feel sorry for your clients.

(By the way, Grimy claims to be a copyright lawyer, but for some reason he only crawls out of the woodwork when OpenAI is discussed. Sam Altman himself seems like a less biased source for how AI should be treated.)”

[deleted] on 19 Jan 2024 23:51 collapse

.

[deleted] on 19 Jan 2024 23:50 collapse

.

[deleted] on 19 Jan 2024 23:28 next collapse

.

Even_Adder@lemmy.dbzer0.com on 19 Jan 2024 23:43 collapse

Did you delete your last comment that people replied to and repost it without the replies? Link to the last thread.

<img alt="" src="https://i.imgur.com/ZgXYJSX.png">

Grimy@lemmy.world on 19 Jan 2024 23:56 collapse

He deleted that one too lmao

Even_Adder@lemmy.dbzer0.com on 19 Jan 2024 23:58 collapse

This person SUCKS. This has to be the shittiest behavior I’ve seen on Lemmy.

[deleted] on 19 Jan 2024 23:48 next collapse

.

Even_Adder@lemmy.dbzer0.com on 19 Jan 2024 23:56 collapse

LWD@lemm.ee is deleting their comments and reposting the same comment to dodge replies. Link to the last thread.

<img alt="" src="https://i.imgur.com/ZgXYJSX.png">

charonn0@startrek.website on 19 Jan 2024 21:43 next collapse

If OpenAI owns a copyright on the output of their LLMs, then I side with the NYT.

If the output is public domain (that is, if you or I could use it commercially without OpenAI’s permission), then I side with OpenAI.

Sort of like how a spell checker works: the dictionary is copyrighted, the spell-check software is copyrighted, but using it on your document doesn’t grant the spell-check vendor any copyright over it.

I think this strikes a reasonable balance between creators’ IP rights, AI companies’ interest in expansion, and the public interest in having these tools at our disposal. So, in my scheme, either creators get a royalty, or the LLM company doesn’t get to copyright the outputs. I could even see different AI companies going down different paths and offering different kinds of service based on that distinction.

Grimy@lemmy.world on 19 Jan 2024 22:58 next collapse

I think copyright currently resides with the one doing the generating, not OpenAI itself. Officially it is a bit unclear.

Hopefully, all generations become copyleft, if only because AIs tend to repeat themselves. Specific faces will pop up quite often in image generation, for example.

tabular@lemmy.world on 19 Jan 2024 23:58 next collapse

I want people to take my code if they share their changes (GPL). Taking and not giving back is just free labor.

gram_cracker@lemmynsfw.com on 20 Jan 2024 16:59 collapse

If LLMs like ChatGPT are allowed to produce non-copyrighted work after being trained on copyrighted work, you can effectively use them to launder copyright, which would be equivalent to abolishing it at the limit.

A much more elegant and equitable solution would be to just abolish copyright outright. It’s the natural direction of a country that chooses to invest in LLMs anyways.

mashbooq@infosec.pub on 19 Jan 2024 21:44 next collapse

good

tonytins@pawb.social on 19 Jan 2024 21:58 next collapse

The problem with copyright is that everything is automatically copyrighted. The copyright symbol is purely symbolic at this point. Both sides are technically right, even though the courts have ruled that anything an AI outputs is actually in the public domain.

Even_Adder@lemmy.dbzer0.com on 20 Jan 2024 03:19 collapse

Works involving the use of AI are copyrightable. Also, the Copyright Office’s guidance isn’t law. Their guidance reflects only the office’s interpretation based on its experience; it isn’t binding on the courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. They are the lowest rung on the ladder for deciding what the law means.

tonytins@pawb.social on 20 Jan 2024 14:49 collapse

I wasn’t talking about Copyright Office. I was talking about the courts.

Even_Adder@lemmy.dbzer0.com on 20 Jan 2024 16:00 collapse

This ruling is about something else entirely. He tried to argue that the AI itself was the author and that copyright should pass to him as he hired it.

An excerpt from your article:

In 2018, Dr. Thaler sought to register “Recent Entrance” with the U.S. Copyright Office, listing the Creativity Machine as its author. He claimed that ownership had been transferred to him under the work-for-hire doctrine, which allows the employer of the creator of a given work or the commissioner of the work to be considered its legal author. However, in 2019, the Copyright Office denied copyright registration for “Recent Entrance,” ruling that the work lacked the requisite human authorship. Dr. Thaler requested a review of his application, but the Copyright Office once more refused registration, restating the requirement that a human have created the work.

Copyright is afforded to humans; you can’t register an AI as an author, just as a monkey can’t hold copyright.

wikibot@lemmy.world on 20 Jan 2024 16:01 next collapse

Here’s the summary for the wikipedia article you mentioned in your comment:

Between 2011 and 2018, a series of disputes took place about the copyright status of selfies taken by Celebes crested macaques using equipment belonging to the British wildlife photographer David J. Slater. The disputes involved Wikimedia Commons and the blog Techdirt, which have hosted the images following their publication in newspapers in July 2011 over Slater’s objections that he holds the copyright, and People for the Ethical Treatment of Animals (PETA), who have argued that the copyright should be assigned to the macaque. Slater has argued that he has a valid copyright claim because he engineered the situation that resulted in the pictures by travelling to Indonesia, befriending a group of wild macaques, and setting up his camera equipment in such a way that a selfie might come about. The Wikimedia Foundation’s 2014 refusal to remove the pictures from its Wikimedia Commons image library was based on the understanding that copyright is held by the creator, that a non-human creator (not being a legal person) cannot hold copyright, and that the images are thus in the public domain.

to opt out, pm me ‘optout’. article | about

tonytins@pawb.social on 21 Jan 2024 02:59 collapse

Yes. I know. That’s what I’ve been saying this whole time.

Even_Adder@lemmy.dbzer0.com on 21 Jan 2024 03:09 collapse

Then you should amend your comment to:

even though the courts have ruled that anything attributed to an AI as an author is actually in the public domain.

Because as typed, it is wrong.

tonytins@pawb.social on 21 Jan 2024 03:42 collapse

You must be a blast at parties.

800XL@lemmy.world on 19 Jan 2024 23:08 next collapse

YES! AI is cool, I guess, but the massive AI circlejerk is so irritating.

If OpenAI can infringe upon all the copyrighted material on the net then the internet can use everything of theirs all for free too.

sugarfree@lemmy.world on 20 Jan 2024 00:05 next collapse

We hold ourselves back for no reason. This stuff doesn’t matter, AI is the future and however we get there is totally fine with me.

Zaderade@lemmy.world on 20 Jan 2024 00:09 collapse

AI without proper regulation could be the downfall of humanity. Many pros, but the cons may outweigh them. Opinion.

sugarfree@lemmy.world on 20 Jan 2024 00:25 collapse

AI development will not be hamstrung by regulations. If governments want to “regulate” (aka kill) AI, then AI development in their jurisdiction will move elsewhere.

TheFriar@lemm.ee on 20 Jan 2024 00:34 collapse

Yeah, like all those pre-80s regulations in the US. Nothing got done due to all those pesky, pesky regulations.

makyo@lemmy.world on 20 Jan 2024 00:33 next collapse

I always say this when the topic comes up, because I really believe it’s the right solution: any generative AI built with unlicensed and/or public works should then be free for the public to use.

If they want to charge for access, that’s fine, but they should have to go about securing legal rights first. If that’s impossible, they should worry about profits some other way, like maybe add-ons such as internet-connected AI and so forth.

Drewelite@lemmynsfw.com on 20 Jan 2024 01:28 next collapse

A very compelling solution! It allows a model of free use while providing an avenue for businesses to spend time developing it.

dasgoat@lemmy.world on 20 Jan 2024 01:56 next collapse

Running AI isn’t free, and AI calculations pollute like a motherfucker

This isn’t me saying you’re wrong from an ethical or judicial standpoint, because on those I agree. It’s just that, on a practical level, considerations have to be made.

For me, those considerations alone (and a ton of other considerations such as digital slavery, child porn etc) make me just want to pull the plug already.

AI was fun. It’s a dumb idea for dumb, buzzword-spewing Silicon Valley ghouls. Pull the plug and be done with it.

seliaste@lemmy.blahaj.zone on 20 Jan 2024 15:54 collapse

The thing is that those models aren’t even open source. If they were, you could argue that OpenAI’s business model is renting processing power. Since they’re not, their business model is effectively selling models trained on copyrighted data.

dasgoat@lemmy.world on 20 Jan 2024 16:54 collapse

Plus, they built the whole thing on the basis of “research purposes,” when in reality they intended from the very start to use it as a business above all else. But tax benefits, copyright leniency, etc. were claimed liberally because “it’s just research.”

And then keeping it closed source. The whole thing is a typical Silicon Valley scam where they will use whatever they can get their grubby little hands on, and when the product is finally here, they make sure to throw it into the world with such force that legislators can’t even respond adequately. That’s how they make sure there will be no legislation on whether the whole thing is even legal or ethical to begin with, merely legislation to keep it contained. From then on, they can just keep everything in courts indefinitely while the product festers like a cancer.

It’s the same thing with blockchains basically.

Also, again, digital slavery is being used to ‘train’ models, and child porn gets used to train them too, because the web scrapers they used can’t and won’t discern whatever shit they rake up into the garbled pile of other people’s works.

miridius@lemmy.world on 20 Jan 2024 10:46 next collapse

Nice idea but how do you propose they pay for the billions of dollars it costs to train and then run said model?

nexusband@lemmy.world on 20 Jan 2024 14:54 next collapse

Then don’t do it. Simple as that.

miridius@lemmy.world on 20 Jan 2024 15:19 collapse

This is why we can’t have nice things

nickwitha_k@lemmy.sdf.org on 21 Jan 2024 03:43 next collapse

If we didn’t live under an economic system where creatives need to sell their works to make a living or even just survive, there wouldn’t be an issue. What OpenAI is doing is little different from any other worker exploitation, however. They are taking the fruits of the labor of others, without compensation of any kind, then using it to effectively destroy their livelihoods.

Few, if any, of the benefits of technological innovation related to LLMs or similar tech are improving things for anyone but the already ultra-wealthy. That is the actual reason that we can’t have nice things: the greedy being obsessed with taking and taking while giving less than nothing back in return.

Just like no one is entitled to own a business that can’t afford to pay a living wage, OpenAI is not entitled to run a business aimed at building tools to destroy the livelihoods of countless thousands, if not millions, of creatives by building their tools out of stolen works.

I say this as one who is in support of trying to create actual AGI and potentially “uplift” other species, making humanity less lonely. I think OpenAI doesn’t have what it takes and is nothing more than another scam to rob workers of the value of their labor.

General_Effort@lemmy.world on 21 Jan 2024 16:23 collapse

This is the wrong way around. The NYT wants money for the use of its “intellectual property”. This is about money for property owners. When building rents go up, you wouldn’t expect construction workers to benefit, right?

In fact, more money for property owners means that workers lose out, because where else is the money going to come from? (well, “money”)

AI, like all previous forms of automation, allows us to produce more and better goods and services with the same amount of labor. On average, society becomes richer. Whether these gains should go to the rich, or be more evenly distributed, is a choice that we, as a society, make. It’s a matter of law, not technology.

The NYT lawsuit is about sending these gains to the rich. The NYT has already made its money from its articles. The authors were paid, in full, and will not get any more money. Giving money to these property owners will not make society any richer. It just moves wealth to property owners for being property owners. It’s about more money for the rich.

If OpenAI has to pay these property owners for no additional labor, then it will eventually have to increase subscription fees to balance the cash flow. People, who pay a subscription, probably feel that it benefits them, whether they use it for creative writing, programming, or entertainment. They must feel that the benefit is worth, at least, that much in terms of money.

So, the subscription fees represent a part of the gains to society. If a part of these subscription fees is paid to property owners, who did not contribute anything, then that means that this part of the social gains is funneled to property owners, i.e. mainly the ultra-rich, simply for being owners/ultra-rich.

nickwitha_k@lemmy.sdf.org on 21 Jan 2024 22:28 collapse

This is the wrong way around. The NYT wants money for the use of its “intellectual property”. This is about money for property owners. When building rents go up, you wouldn’t expect construction workers to benefit, right?

I do not find that to be an apt analogy. This is more like someone setting up shop in the NYT’s lobby, stealing issues, and cutting them up to make their own newspaper that they sell from said lobby, without permission or compensation. OpenAI just refined a technology to parasitize off of others’ labor and is using it to seek rent on intellectual property that they don’t own or have rights to use.

So, the subscription fees represent a part of the gains to society. If a part of these subscription fees is paid to property owners, who did not contribute anything, then that means that this part of the social gains is funneled to property owners, i.e. mainly the ultra-rich, simply for being owners/ultra-rich.

I’m going to have to strongly disagree with you here. The subscription fees are only going to the ultra-wealthy who are using LLMs to parasitize off of labor. The NYT is not who I’m worried about having their livelihoods destroyed; it’s the individual artists, actors, and creatives, as well as those whose jobs are being replaced with terrible chatbots that cannot actually do the work but are implemented anyway to drive lay-offs and boost stock prices. The NYT and other suits are merely a proxy, because the wealth gap makes it nearly impossible for those most impacted to successfully use the courts to remedy their situation.

General_Effort@lemmy.world on 22 Jan 2024 00:20 collapse

I do not find that to be an apt analogy.

The point is that the people who create some property don’t get a cut when the property rises in value. You keep calling the intellectual property of the NYT labor. I think there’s something there you seriously misunderstand.

This is more like someone setting up shop in the NYT’s lobby, stealing issues, and cutting them up to make their own newspaper that they sell from said lobby, without permission or compensation.

That’s an analogy for a normal practice in journalism, like when other news media (other websites, TV, radio, …) report on what the NYT reports. I’m sure you have seen articles that said something like “The NYT reported that…”.

That’s not what the case is mainly about. I’m not sure if anything like that is even mentioned.

nickwitha_k@lemmy.sdf.org on 22 Jan 2024 04:58 collapse

I think that you are overlooking the part about aggressively competing against the original creation with something that is impossible without the existence of the initial creation. Also the part where the NYT isn’t really the one most impacted by the current push for adoption of LLMs and similar tech. It’s not a case of punchcard computer operators becoming obsolete. It’s a case of using technology to deny the ability to make a living to those involved in creating and evolving culture, along with those in the few remaining jobs that allow one to get by.

Humanity as a whole isn’t benefiting, only the ultra-wealthy who are using and refining these tools for no other purpose but to further bludgeon and dehumanize workers, grow the number of people on the precipice of total ruin, and increase the wealth gap further. So, the NYT is merely playing the role of “the enemy of my enemy”.

If the tools WEREN’T being used primarily to skim even more wealth and push more people into poverty, there would be no problem (especially if the result were reform of the currently awful IP laws). But we currently live in a world where billionaires are writing to profitable tech companies demanding mass lay-offs and deep salary cuts to increase stock prices, voice actors are thrown under the bus by their own union, and eating disorder helpline workers are fired en masse for unionizing, only to be replaced with chatbots that cause measurable harm to vulnerable people. OpenAI deserves to be shut down for the harm that it is enabling and profiting from.

General_Effort@lemmy.world on 22 Jan 2024 17:08 collapse

Earlier, I laid out how a win for the NYT will benefit mainly the wealthy. It will increase the wealth gap. Clearly you don’t agree.

Could you please explain where you see an error in my reasoning/where it was not clear enough?

nickwitha_k@lemmy.sdf.org on 22 Jan 2024 19:24 collapse

I think that the error in your reasoning is that OpenAI’s tools are themselves greatly accelerating the expansion of the wealth gap. They have been greedily reckless, though, and pissed off other wealthy groups. The NYT doesn’t care about the rest of us, but a victory for them might help establish precedents that do.

To put it another way, both orgs are terrible, but by negative impact on humanity, OpenAI is measurably magnitudes worse in multiple ways (harming workers, and driving up demand for compute, thus accelerating fossil fuel use and global warming).

Would you be able to clarify your reasoning for thinking that OpenAI is less harmful?

General_Effort@lemmy.world on 22 Jan 2024 21:00 collapse

It’s not about who is less harmful. It is about what the effects of the precedent would be.

The precedent would be that the NYT can charge money for AI training on its archive. OpenAI is not likely to disappear if it loses, but even if it did, the NYT would just find some other buyer. That’s pure profit for the owners of the NYT. There is no reason why they should pay the authors again (or the printers, assistants, janitors, and anyone else involved in the making of the articles).

Whatever harm you see coming from OpenAI is not going away if the NYT wins. All a NYT win would mean is that owners of intellectual property get money without having to do work.

nickwitha_k@lemmy.sdf.org on 23 Jan 2024 00:17 collapse

Whatever harm you see coming from OpenAI is not going away if the NYT wins. All a NYT win would mean is that owners of intellectual property get money without having to do work.

I don’t think that we are entirely on the same page about the impact of the precedent. If the ruling is not comically constrained, it would lay out the path forward for other creators and IP owners, the vast majority of whom are individuals, who are the ones most harmed.

Honestly, the best possible (though unlikely in the current political climate) outcome would be an overhaul of IP laws to make them in any way sane, plus legally codified mechanisms to protect workers from career loss due to automation: taxes on AI and automation tools that replace humans, to fund re-skilling programs, and, ideally, publicly funded organizations that pay cultural workers and remove the possibility of the extinction of the working artist, like Ireland’s Arts Council programs (I say this as one who works primarily in automation). Without such outcomes, or with a narrow ruling, yeah, it would effectively just lead to the NYT getting more money.

ETA: While we don’t currently seem to be on the same page, I do want to say thank you for the civil conversation and good points, and my apologies for my ADHD habit of getting a bit verbose.

General_Effort@lemmy.world on 24 Jan 2024 13:20 collapse

[I]t would lay out the path forward for other creators and IP owners, the vast majority of whom are individuals, who are the ones most harmed.

Everyone owns some property, but a tiny percentage of rich people own most of it. E.g., a quick google says that a daily newspaper contains 50,000 to 200,000 words. That’s about as much as a novel. Seems about right. So in even a few months, a single newspaper exceeds the lifetime output of any single author.

It gets worse. Something like ChatGPT is trained on several hundred billion words. So even the NYT couldn’t negotiate for more than a tiny part of the training data. For a single individual, the share would be so small that the bureaucracy of handling the payments would eat a chunk. They’d have to go through middlemen, like established publishers or stock photo sites. So a part of any money for “creators” would still be redirected to the rich. You can google how much Shutterstock contributors get for the image AI trained on that data.
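To make the scale concrete, here is a rough back-of-envelope sketch of that claim in Python. The corpus size, words per issue, and archive depth are assumptions extrapolated from the figures above, not official statistics:

```python
# Back-of-envelope estimate of how small a share of an LLM's training
# corpus a single newspaper archive represents. All figures are rough
# assumptions drawn from the comment above, not official statistics.

TRAINING_CORPUS_WORDS = 300e9    # "several hundred billion words"
WORDS_PER_DAILY_ISSUE = 125_000  # midpoint of the 50,000-200,000 estimate
ISSUES_PER_YEAR = 365
ARCHIVE_YEARS = 100              # assume a full century of digitized archives

archive_words = WORDS_PER_DAILY_ISSUE * ISSUES_PER_YEAR * ARCHIVE_YEARS
share = archive_words / TRAINING_CORPUS_WORDS

print(f"Archive size: {archive_words / 1e9:.1f}B words")  # ~4.6B words
print(f"Share of training corpus: {share:.2%}")           # ~1.52%
```

Under these assumptions, even a century of archives is on the order of one percent of the corpus, and a single author’s lifetime output (one novel’s worth, per the estimate above) would be well under a millionth of it.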

Mind that companies like Meta and Apple have their fingers on a lot of user data. They can use their TOS to train on that data. In the long run, it would only get worse because AI companies have their fingers on the creations.

Eventually, our system relies on the fact that people have to do something for others to get money. Even people like Zuckerberg or Bezos built companies that provided services that people wanted. The problem with Big Tech, with “enshittification”, is that these companies are now in a position where they don’t have to do that anymore. Anything that makes it possible to extract money just for being the owner of some property will make everything worse.

In Japan, the law is that you can train AI on anything, unless it is a dataset made specifically for AI training (IIRC). You don’t exactly need this provision: you would not put such a dataset on the open internet if you didn’t want it to be free. But it might come in handy if a dataset gets leaked.


I am certain that human artists will not disappear any time soon. It’s true that skill at manual drawing or painting is made less important. Digital artists, so far, have not been able to emulate these looks well, certainly not without drawing manually on a graphics tablet. But digital artists are still working artists. I would think that the digital part of the skill set was already, pre-AI, the commercially more lucrative part.

I can see that genAI disrupts particular income streams, such as stock photography or small commissions to draw role-playing characters. However, that does not mean that there will be a net loss of artist jobs or income. Say, pre-AI you could spend your limited money either on commissioning a single drawing or on a night out. Post-AI you might get ten or a hundred images for the same money. You might be more inclined to take the images over the night out. In that case, waiters and bartenders would take the hit and would have to become digital artists.

It’s impossible to predict the eventual outcome. It depends on what people chose, what people value; on fashion. I am certain of one thing, though. As we need less and less labor to satisfy our basic needs, we will invest more into leisure, entertainment and such. Why is barista a thing? The luxury of a freshly made coffee, prepared with skill, has become fashionable.

Incidentally, the same worries (and mean-spirited attacks) also existed when photography was invented.

I agree on the social programs but I must point out that the issue is not new (and I’m not talking about photography or digital art). Which is why such programs exist, to some degree. We accept that this happens continuously for blue collar jobs and not just because of automation. We’re not just making more and better robots, we’re also switching from fossil fuels to renewables, from combustion engines to electric, and so on.

You may have heard of the loss of factory jobs over the last couple decades as a social issue. Automation has been blamed for the so-called hollowing out of the middle class. These former middle-class jobs have disappeared, freeing up workers to, e.g., deliver food. So you’re left with jobs that require a university degree at the top and a driving license at the bottom.

What’s new is that AI is coming for white collar jobs. I think that explains a lot of the reaction. It’s the people who think and write in newspapers and on the internet who have reason to worry about their cushy careers. The threat is to people like them and the people they know; their class, if you will. I believe (and hope) that AI will benefit society by putting everyone into the same boat again. If (or when) an apprentice electrician can do the same task as an electrical engineer with a Master’s degree, then the middle is no more hollow. Sure, the engineer loses out, but people on average come out ahead.

nexusband@lemmy.world on 21 Jan 2024 12:10 collapse

Yeah, anarchy is such a good thing to teach AI, kids, and others…

Smoogs@lemmy.world on 20 Jan 2024 15:03 collapse

Defending scamming as a business model is not a business model.

Pacmanlives@lemmy.world on 20 Jan 2024 15:49 next collapse

Not really how it works these days. Look at Uber and Lime/Bird scooters. They basically would just show up to a city and say “the hell with the law, we’re starting our business here.” We just call it “disruptive technology.”

makyo@lemmy.world on 21 Jan 2024 01:21 collapse

Unfortunately true, and the long arm of the law, at least in the business world, isn’t really that long. Would love to see some monopoly busting to scare a few of these big companies into shape.

canihasaccount@lemmy.world on 20 Jan 2024 16:01 next collapse

Would you, after devoting full years of your adult life to the unpaid work of learning the requisite advanced math and computer science needed to develop such a model, like to spend years more of your life to develop a generative AI model without compensation? Within the US, it is legal to use public-domain text for commercial purposes without any need to obtain permission. Developers of such models deserve to be paid, just like any other workers, and that doesn’t happen unless either we make AI a utility (or something similar) and funnel tax dollars into it, or the company charges for the product so it can pay its employees.

I wholeheartedly agree that AI shouldn’t be trained on copyrighted, private, or any other works outside of the public domain. I think that OpenAI’s use of nonpublic material was illegal and unethical, and that they should be legally obligated to scrap their entire model and train another one from legal material. But developers deserve to be paid for their labor and time, and that requires the company that employs them to make money somehow.

adrian783@lemmy.world on 20 Jan 2024 20:29 next collapse

then openai should close its doors

thecrotch@sh.itjust.works on 20 Jan 2024 22:49 collapse

Would you, after devoting full years of your adult life to the unpaid work of learning the requisite advanced math and computer science needed to develop such a model, like to spend years more of your life to develop a generative AI model without compensation?

No. I wouldn’t want to write a kernel from scratch for free either. But Linus Torvalds did. He even found a way to monetize it without breaking any laws.

ExLisper@linux.community on 20 Jan 2024 16:14 next collapse

Also anything produced with solar power should be free.

Prok@lemmy.world on 20 Jan 2024 16:23 collapse

Yes, good point, resource collection is nearly identical to content generation

poopkins@lemmy.world on 20 Jan 2024 16:33 next collapse

What is unlicensed work? Copyrighted content will not have a licence agreement but this doesn’t mean you can freely infringe on copyright law.

h3rm17@sh.itjust.works on 20 Jan 2024 16:59 next collapse

Stuff like public domain books, I guess, like Alice in Wonderland, and CC0 content.

poopkins@lemmy.world on 20 Jan 2024 19:59 collapse

Right: public works are content in the public domain where the copyright has expired and Creative Commons licenced content is, well, licenced.

makyo@lemmy.world on 21 Jan 2024 01:20 collapse

By unlicensed I mean works that haven’t been licensed, i.e. anything being used without permission or some other right.

fidodo@lemmy.world on 21 Jan 2024 02:05 next collapse

There’s plenty of money to be made providing infrastructure. Lots of companies make a ton of money providing infrastructure for open source projects.

On another note, why is open AI even called “open”?

ItsMeSpez@lemmy.world on 21 Jan 2024 03:28 collapse

On another note, why is open AI even called “open”?

It’s because of the implication…

asdfasdfasdf@lemmy.world on 22 Jan 2024 03:51 collapse

That goes against the fundamental idea of something being unlicensed, meaning there are no repercussions from using the content.

I think what you mean already exists: open source licenses. Some open source licenses stipulate that the material is free, can be modified, etc. and you can do whatever you want with it, but only on the condition that whatever you create is under the same open source license.

makyo@lemmy.world on 22 Jan 2024 10:25 collapse

Ugh, I see what you mean. No, I mean unlicensed as in ‘they didn’t bother to license copyrighted works’ and public as in ‘stuff they scraped from Reddit, Twitter, etc. without permission from anyone’.

asdfasdfasdf@lemmy.world on 22 Jan 2024 12:53 collapse

Ah gotcha.

milkjug@lemmy.wildfyre.dev on 20 Jan 2024 03:11 next collapse

Don’t threaten me with a good time!

GilgameshCatBeard@lemmy.ca on 20 Jan 2024 04:16 next collapse

Fingers crossed!

Daxtron2@startrek.website on 20 Jan 2024 04:24 next collapse

Oh great, more Lemmy anti-technology circlejerking.

[deleted] on 20 Jan 2024 16:07 next collapse

.

airportline@lemmy.ml on 20 Jan 2024 16:09 next collapse

inshallah

BeautifulMind@lemmy.world on 21 Jan 2024 02:31 next collapse

Is there a possible way that both the NYT and OpenAI could lose?

dankm@lemmy.ca on 21 Jan 2024 06:26 next collapse

Not without a bunch of lawyers winning.

VonCesaw@lemmy.world on 21 Jan 2024 06:47 next collapse

Honestly, I’d rather OpenAI lose this one, and NYT lose later on in a much more embarrassing manner that cuts all the golden parachutes

webghost0101@sopuli.xyz on 21 Jan 2024 14:57 collapse

NYT loses even if they win.

While I’d love to see OpenAI forced to take a step back, AI isn’t going away.

Journalism will have to adapt or it will get replaced, just like so many jobs, including my own.

[deleted] on 21 Jan 2024 12:41 collapse

.