Judge backs AI firm over use of copyrighted books (www.bbc.com)
from Davriellelouna@lemmy.world to technology@lemmy.world on 25 Jun 13:23
https://lemmy.world/post/31969005

#technology

Grimy@lemmy.world on 25 Jun 13:27

80% of the book market is owned by 5 publishing houses.

They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.

OmegaMouse@pawb.social on 25 Jun 13:34

What? How is this a win? Three authors just lost a lawsuit against an AI firm that used their works.

Grimy@lemmy.world on 25 Jun 16:59

The lawsuit would not have benefited their fellow authors, but rather their publishing houses and the big AI companies.

ShittyBeatlesFCPres@lemmy.world on 25 Jun 19:52

It would harm the A.I. industry if Anthropic loses the next part of the trial on whether they pirated books — from what I’ve read, Anthropic and Meta are suspected of getting a lot off torrent sites and the like.

It’s possible they all did some piracy in their mad dash to find training material, but Amazon and Google have bookstores, and Google even has a book text search engine (Google Books) and probably everything else already in its data centers. So I’m not sure why they’d have to resort to piracy.

sentient_loom@sh.itjust.works on 25 Jun 13:35

How exactly does this benefit “us”?

gaylord_fartmaster@lemmy.world on 25 Jun 14:42

Because books are used to train both commercial and open source language models?

sentient_loom@sh.itjust.works on 25 Jun 19:36

used to train both commercial

commercial training is, in this case, stealing people’s work for commercial gain

and open source language models

so, uh, let us train open-source models on open-source text. There’s so much of it that there’s no need to steal.

?

I’m not sure why you added a question mark at the end of your statement.

gaylord_fartmaster@lemmy.world on 25 Jun 22:43

I’m not sure why you added a question mark at the end of your statement.

I was questioning whether or not you would see that as a benefit. Clearly you don’t.

Are you also against libraries letting people borrow books since those are also lost sales for the authors, or are you just a luddite?

sentient_loom@sh.itjust.works on 25 Jun 22:48

libraries letting people borrow books

This is so far from analogous that it’s almost a non sequitur.

are you just a luddite?

No, and you don’t even believe such nonsense. You’re grasping, ineffectively.

hendrik@palaver.p3x.de on 25 Jun 13:38

Keep in mind this isn't about open-weight vs other AI models at all. This is about how training data can be collected and used.

bob_omb_battlefield@sh.itjust.works on 25 Jun 13:53

If you aren’t allowed to freely use data for training without a license, then the fear is that only large companies will own enough works or be able to afford licenses to train models.

Nomad_Scry@lemmy.sdf.org on 25 Jun 14:01

If they can just steal a creator’s work, how do they suppose creators will be able to afford to keep creating?

Right. They think we already have enough original works and the machines can just generate any new creations from them.

😠

bob_omb_battlefield@sh.itjust.works on 25 Jun 14:11

Yeah, I guess the debate is which is the lesser evil. I didn’t make the original comment but I think this is what they were getting at.

Nomad_Scry@lemmy.sdf.org on 25 Jun 14:17

Absolutely. The current copyright system is terrible but an AI replacement of creators is worse.

Grimy@lemmy.world on 25 Jun 17:11

Yes, precisely.

I don’t see a situation where the actual content creators get paid.

We either get open-source AI, or we get closed AI where the big AI companies and the copyright companies make bank.

I think people are having huge knee-jerk reactions and ending up supporting companies like Disney, Universal Music, and Google.

MudMan@fedia.io on 25 Jun 14:17

It is entirely possible that the entire construct of copyright just isn't fit to regulate this, and that a "right to train" (or to avoid being trained on) needs to be formulated separately.

The maximalist, knee-jerk assumption that all AI training is copying is feeding into the interests of, ironically, a bunch of AI companies. That doesn't mean that actual authors and artists don't have an interest in regulating this space.

The big takeaway, in my book, is that copyright is finally broken beyond all usability. Let's scrap it and start over with the media landscape we actually have, not the eighteenth-century version of it.

hendrik@palaver.p3x.de on 25 Jun 14:55

I'm fairly certain this is the correct answer here. Also, there is a separation between the judiciary and the legislature. It's the former that's involved here, but it's the latter we really need to bother. That's the only way, unless we want to use 18th-century tools on the current situation.

Grimy@lemmy.world on 25 Jun 17:18

Companies like the record labels, who already own all the copyrights, aren’t going to pay creators for something they already own.

All the data has already been signed away. People are really optimistic about an industry that has consistently fucked over everyone it interacts with for money.

hendrik@palaver.p3x.de on 25 Jun 15:03

Yes. But then do something about it. Regulate the market, or pass laws which address this. I don't really see why we should do something like this, then; it still kind of contributes to the problem, since free rein still advantages the big companies.

(And we can write whatever we like into law. It doesn't need to be a stupid and simplistic solution. If you're concerned about big companies, just write that they have to pay a lot and small companies don't. Or force everyone to open their models. Those are all options which can be formulated as new rules, and they would address the issue at hand.)

Grimy@lemmy.world on 25 Jun 17:14

Because of the vast amount of data needed, there will be no competitive, viable open-source solution if half the data is kept in a walled garden.

This is about open weights vs closed weights.

JcbAzPx@lemmy.world on 25 Jun 19:44

They haven’t dewalled the garden yet. The copyright infringement part of the case will continue.

hendrik@palaver.p3x.de on 25 Jun 20:50

I agree that we need open source and to emancipate ourselves. The main issue I see is: the entire approach doesn't work. I'd like to give the internet as an example. It's meant to be very open, to connect everyone and enable them to share information freely. It is set up to be a level playing field... Now look what that leads to: trillion-dollar mega-corporations, privacy issues everywhere, and big data silos. That's what the approach promotes. I agree with the goal, but in my opinion the approach will turn out to lead to less open source and more control by rich companies. And that's not what we want.

Plus, nobody even opens the walled gardens. Last time I looked, Reddit wanted money for its data. Other big platforms aren't open either. And there's kind of a small war going on between the scrapers and crawlers and the countermeasures. So it's not as if any of this is open as of now.

Grimy@lemmy.world on 25 Jun 22:05

A lot of our laws are indeed obsolete. I think the best solution would be to force copyleft licenses on anything trained on publicly created data.

But I’ll take the wild west we have now, with no walls, over any kind of copyright dystopia. Reddit did successfully sell its data to Google for $60 million. Right now, you can legally scrape anything you want off Reddit; it is an open garden in every sense of the word (even if they don’t like it). That’s a lot more legal than using pirated books, but Google still bet $60 million that copyright laws would swing broadly in their favor.

I think it’s very foolhardy to even hint at a pro-copyright stance right now. There is a very real chance of AI getting monopolized, and this is how they will do it.

hendrik@palaver.p3x.de on 25 Jun 22:24

I agree a copyright dystopia wouldn't be any good. Just mind that the wild west, or the law of the jungle, is the "right of the strongest". You're advantaging big companies and disadvantaging smaller players, people with ethics, and those who are more open/transparent.

And I don't think legality is the biggest issue with web scraping. Sure, in theory I could do it. But I'm occasionally doing some weird stuff, and most services have countermeasures in place. In reality I just can't scrape Reddit. Lots of bots and crawlers just don't work any more. I'm getting rate-limited left and right by all the big platforms. Lots of things require an account these days, and services are quick to ban me for "suspicious activity". It's barely possible to download YouTube videos these days. So, no. I can't. While Google can just pay for it and have the data.

Also, Reddit isn't really the benevolent underdog here. They're a big company as well. And they're not selling their data... they're selling their users' data. They're mainly monetizing other people's creations.

SonOfAntenora@lemmy.world on 25 Jun 13:55

Cool, then try to do some torrenting out there without hiding it. Tell us how it goes.

The rules don’t change. This just means the AI overlords can do it, not that you can do it too.

OfCourseNot@fedia.io on 25 Jun 15:21

I've been pirating since Napster and have never hidden shit. It's usually not a crime (except in America, it seems) to download content, or even to share it freely. What is a crime is making a business of distributing pirated content.

SonOfAntenora@lemmy.world on 25 Jun 15:29

I know, but you see what they’re doing with AI: a small server used for piracy and sharing is punished, in some cases, worse than theft, while AI businesses are making bank (or are they? there is still no clear path to profitability) on troves of pirated content. This is not going to change the situation for small guys like us. For instance, if we used the same dataset to train some AI in a garage, with no business or investors behind it, things would be different. We’re at a stage where AI is quite literally too important to fail for somebody out there. I’d argue that AI is, in fact, going to be shielded for this reason regardless of previous legal outcomes.

hendrik@palaver.p3x.de on 25 Jun 15:51

Agreed. And even if it did, it's always like this: Anthropic is a big company, and they likely have millions available for good lawyers, while the small guy doesn't. So they're more able to just do stuff and shrug off some legal restrictions, or pay a fine that's pocket change for them. Big companies always have more options than the small guy.

hendrik@palaver.p3x.de on 25 Jun 13:30

Previous discussion from yesterday about the same topic: https://lemmy.world/post/31923154

gedaliyah@lemmy.world on 25 Jun 13:34

I’m not pirating. I’m building my model.

QuadratureSurfer@lemmy.world on 25 Jun 14:39

To anyone who is reading this comment without reading through the article: this ruling doesn’t mean that it’s okay to pirate books to build a model. Anthropic will still need to stand trial for that:

But he rejected Anthropic’s request to dismiss the case, ruling the firm would have to stand trial over its use of pirated copies to build its library of material.

Artisian@lemmy.world on 25 Jun 18:34

I also read through the judgment, and I think it’s better for Anthropic than you describe. He distinguishes three issues:

A) Use any written material they get their hands on to train the model (and the resulting model doesn’t just reproduce the works).

B) Buy a single copy of a print book, scan it, and retain the digital copy for a company library (for all sorts of future purposes).

C) Pirate a book and retain that copy for a company library (for all sorts of future purposes).

A and B were found to be fair use on summary judgment, meaning this judge thinks it’s clear-cut in Anthropic’s favor. C will go to trial.

xthexder@l.sw0.com on 25 Jun 19:32

C could still bankrupt the company, depending on how the trial goes. They pirated a lot of books.

Artisian@lemmy.world on 25 Jun 19:56

As a civil matter, the publishing houses are more likely to get the full money if Anthropic stays in business (and does well). So it might be bad, but I’m really skeptical about bankruptcy (and I’m not hearing anyone seriously floating it?).

xthexder@l.sw0.com on 26 Jun 03:12

Depending on the type of bankruptcy, the business can still operate; all their profits would just go towards paying off their debts.

Xerxos@lemmy.ml on 26 Jun 18:50

It might not be that bad. Most ‘damage’ (as publishers see it) comes from distribution, not the download itself. Depending on how they acquired the books, it might not be much of a problem.

the_q@lemmy.zip on 25 Jun 13:44

An 80-year-old judge on their best day couldn’t be trusted to make an informed decision. This guy was either bought or confused into his decision. Old people gotta go.

FaceDeer@fedia.io on 25 Jun 14:43

Did you read the actual order? The detailed conclusions begin on page 9. What specific bits did he get wrong?

ViatorOmnium@piefed.social on 25 Jun 16:35

I'm on page 12 and I already saw a false equivalence between human learning and AI training.

FaceDeer@fedia.io on 25 Jun 16:53

Is it this?

First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16).

That's the judge addressing an argument that the Authors made. If anyone made a "false equivalence" here, it's the plaintiffs; the judge is simply saying "okay, let's assume their claim is true," as is usual for a preliminary judgment like this.

ag10n@lemmy.world on 25 Jun 17:41

On page 6 the judge writes that the LLM “memorized” the content and could “recite” it.

Neither is true in the training or the use of LLMs.

Artisian@lemmy.world on 25 Jun 18:37

Depends on the content and the method. There are tons of ways to encrypt data, and under relevant law they may still count as copies. There are certainly weaker NN models where we can extract a lot of the training data from the model parameters, even if it’s not easy (and even if we can’t find a prompt that gets the model to regurgitate it).
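
As a toy sketch of what “training data living in the parameters” can mean (my own illustration, not anything from the ruling or from a real LLM): even a trivial n-gram lookup table is technically a “model”, and its parameters can encode the training text so completely that it’s recoverable verbatim.

```python
# Toy sketch: an 8-gram lookup "model" whose parameters encode the
# training text so completely that decoding reproduces it verbatim.
N = 8
training_text = "It was a bright cold day in April."  # stand-in for a copyrighted work

# "Training": record which character follows each N-character context.
params = {}
for i in range(len(training_text) - N):
    params[training_text[i:i + N]] = training_text[i + N]

# "Extraction": seed with the first N characters, then decode greedily.
out = training_text[:N]
while out[-N:] in params:
    out += params[out[-N:]]

print(out == training_text)  # True: the text is recoverable from the parameters
```

A real neural network encodes its training data far more diffusely than this lookup table does, which is exactly what the memorization dispute here is about.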

FaceDeer@fedia.io on 25 Jun 18:43

The judge writes that the Authors told him that LLMs memorized the content and could recite it. He then said "for purposes of argument I'll assume that's true," and even so he went ahead and ruled that LLM training does not violate copyright.

It was perhaps a bit daring of Anthropic not to contest what the Authors claimed in that case, but as it turns out the result is an even stronger ruling. The judge gave the Authors every benefit of the doubt and still found that they had no case when it came to training.

MeaanBeaan@lemmy.world on 26 Jun 00:49

Wait, the authors argued that? Why? That’s literally the opposite of the thing they needed to argue.

AwesomeLowlander@sh.itjust.works on 27 Jun 02:09

Funny, there are a lot of people on Lemmy itself (especially around dbzer0) who would agree with the judge wholeheartedly.

AbouBenAdhem@lemmy.world on 25 Jun 14:19

IMO the focus should have always been on the potential for AI to produce copyright-violating output, not on the method of training.

SculptusPoe@lemmy.world on 25 Jun 14:33

If you try to sell “The New Adventures of Doctor Strange, Jonathan Strange, and Magic Man,” existing copyright laws are sufficient and will stop it. Really, training should be regulated by the same laws as reading. If they can get the material through legitimate means, it should be fine; but pulling data that is not freely accessible should be theft, as it already is.

devfuuu@lemmy.world on 25 Jun 14:36

That “freely” there really does a lot of hard work.

SculptusPoe@lemmy.world on 25 Jun 17:47

It means what it means; “freely” pulls its own weight. I didn’t say “readily” accessible. Torrents could be viewed as “readily” accessible, but not as “freely” accessible, because at the very least you bear the guilt of theft. Library books are “freely” accessible, and if somehow the training involved checking out books and returning them digitally, it should be fine. If it is free to read into neurons, it is free to read into neural systems. If payment for reading is expected, then it isn’t free.

Womble@lemmy.world on 25 Jun 19:24

Civil cases of copyright infringement are not theft, no matter what the MPAA has trained you to believe.

JcbAzPx@lemmy.world on 25 Jun 19:41

But they are copyright infringement, which costs more than theft.

Imgonnatrythis@sh.itjust.works on 25 Jun 21:28

I have a freely accessible document under a CC license that states it is not for commercial use. This is commercial use. Your policy would still allow that document to be used, though, since it is accessible. This kind of policy discourages me from sharing my works easily, as others profit from my efforts, and my works are more likely to be attributed to a corporate beast I want nothing to do with than to me.

I’m all for copyright reform and simpler copyright law, but these companies need to be held to standard copyright rules, not just made-up modifications. I’m convinced a perfectly decent LLM could be built without violating copyrights.

I’d also be OK with sharing works with a not-for-profit, open-source LLM, and I think others might be as well.

kate@lemmy.uhhoh.com on 25 Jun 22:35

as it is already

Copies of copyrighted works cannot be regarded as “stolen property” for the purposes of a prosecution under the National Stolen Property Act of 1934.

https://en.m.wikipedia.org/wiki/Dowling_v._United_States_(1985)

Artisian@lemmy.world on 25 Jun 18:38

Plaintiffs made that argument, and the judge shoots it down pretty hard: that kind of competition isn’t what copyright protects against. He makes an analogy with teachers teaching children to write fiction: they are using existing fantasy to create MANY more competitors on the fiction market. Could an author use copyright to challenge that use?

Would love to hear your thoughts on the ruling itself (it’s linked by reuters).

Cort@lemmy.world on 26 Jun 23:03

Orcs and dwarves (with a v) are creations of Tolkien; if the fantasy stories include them, it’s a violation of copyright the same as including Mickey Mouse.

My argument would have been to ask the AI for the bass line to Queen & David Bowie’s Under Pressure, then refer to that as a reproduction of copyrighted material. But then again, AI companies probably have better lawyers than Vanilla Ice.

MyOpinion@lemmy.today on 25 Jun 17:36

I hate AI with a fire that keeps me warm at night. That is all.

BlameTheAntifa@lemmy.world on 25 Jun 20:17

![Anakin and Padmé meme](https://lemmy.world/pictrs/image/e31961fa-0318-4c78-9e7e-86626b557709.jpeg)

Anakin: “Judge backs AI firm over use of copyrighted books”
Padme: “But they’ll be held accountable when they reproduce parts of those works or compete with the work they were trained on, right?”
Anakin: “…”
Padme: “Right?”

Fingolfinz@lemmy.world on 25 Jun 20:53

Pirate everything!