ericisshort@lemmy.world
on 21 Oct 2023 00:33
I’m very interested in this case and am curious to see where the courts draw the line here.
Beware of an incoming hot take: I don’t see training an AI on published works as much different from a human learning from published works, as long as both go on to make their own original works. I have definitely seen AIs straight-up plagiarize before, but that seems like an entirely different issue from producing similar works. I think allowing plagiarism is a problem with the constraints of the training rather than a fundamental problem with the entire concept of AI training.
Armok_the_bunny@lemmy.world
on 21 Oct 2023 01:09
A standard I could see being applied, and one I think has some precedent: if the work the output is supposedly similar to appears anywhere in the training set, then it’s a copyright violation. One of the valid defenses against copyright claims in court is that the defendant could reasonably have been unaware of the original work, and this seems like a reasonable equivalent.
ericisshort@lemmy.world
on 21 Oct 2023 02:17
But humans make works that are similar to other works all the time. I just hope we set the same standards for AI violating copyright as we do for humans. There is a big difference between derivative works and works that violate copyright.
Doesn’t this argument assume that AIs are human? That’s a pretty huge reach if you ask me. It’s not even clear that LLMs are AI, never mind giving them human rights.
Saganastic@kbin.social
on 21 Oct 2023 07:30
Machine learning falls under the category of AI. I agree that works produced by LLMs should count as derivative works, as long as they're not too similar.
nybble41@programming.dev
on 21 Oct 2023 23:02
Not every work produced by an LLM should count as a derivative work, just the ones that embody unique, identifiable creative elements from specific works in the training set. We don’t consider every work produced by a human to be a derivative work of everything they were trained on; work produced by (a human using) an AI should be no different.
ericisshort@lemmy.world
on 21 Oct 2023 11:16
No, I’m not assuming that. It’s not about concluding that AIs are human; it’s about having concrete standards on which to base laws. Setting a lower standard for copyright violation by LLMs would be like setting a lower speed limit for a self-driving car, and I don’t think it makes any logical sense. To me that would be a disappointingly protectionist and Luddite perspective to apply to this new technology.
If LLMs are software, then they can’t commit copyright violations; the onus for breaking the law falls on the people who use them. And until someone proves otherwise in a court of law, they are software.
ericisshort@lemmy.world
on 21 Oct 2023 20:21
No one is saying we charge a piece of software with a crime. Corporations aren’t human, but they can absolutely be charged with copyright violations, so being human isn’t a requirement for this at all.
Depending on the situation, you would charge the user of the software (if they directed the software to violate copyright), the company that makes it (if they negligently released an LLM that has been proven to produce results that violate copyright), or both.
p03locke@lemmy.dbzer0.com
on 21 Oct 2023 20:16
You can’t copyright a style.
p03locke@lemmy.dbzer0.com
on 21 Oct 2023 20:16
Beware of an incoming hot take: I don’t see training an AI on published works as much different from a human learning from published works, as long as both go on to make their own original works.
The fact that this is considered a “hot take” is depressing.
ericisshort@lemmy.world
on 21 Oct 2023 23:09
It’s much less of a hot take for people in the tech community, but it is for many artists and creatives who feel threatened by AI’s potential to devalue what they’ve dedicated their lives to.
p03locke@lemmy.dbzer0.com
on 22 Oct 2023 16:16
They should have felt threatened by the sheer weight of an incredibly oversaturated industry, one that sabotages itself with a system that rewards the lucky and punishes 99.99% of the people who try to get into it. Everybody else who “made it” is practicing survivorship bias to justify their career choices.
Leaps in AI technology were just another barbell added to the pile.
ericisshort@lemmy.world
on 22 Oct 2023 17:46
Agreed
thepianistfroggollum@lemmynsfw.com
on 21 Oct 2023 02:17
These dumb fucks should go ahead and sue Google, then, if searching and providing song lyrics is considered copyright infringement.