‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says (www.theguardian.com)
from L4s@lemmy.world to technology@lemmy.world on 09 Jan 2024 10:00
https://lemmy.world/post/10488934

‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products

#technology

threaded - newest

autotldr@lemmings.world on 09 Jan 2024 10:00 next collapse

This is the best summary I could come up with:


The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material, as pressure grows on artificial intelligence firms over the content used to train their products.

Chatbots such as ChatGPT and image generators like Stable Diffusion are “trained” on a vast trove of data taken from the internet, with much of it covered by copyright – a legal protection against someone’s work being used without permission.

AI companies’ defence of using copyrighted material tends to lean on the legal doctrine of “fair use”, which allows use of content in certain circumstances without seeking the owner’s permission.

John Grisham, Jodi Picoult and George RR Martin were among 17 authors who sued OpenAI in September alleging “systematic theft on a mass scale”.

Getty Images, which owns one of the largest photo libraries in the world, is suing the creator of Stable Diffusion, Stability AI, in the US and in England and Wales for alleged copyright breaches.

The submission said it backed “red-teaming” of AI systems, where third-party researchers test the safety of a product by emulating the behaviour of rogue actors.


The original article contains 530 words, the summary contains 190 words. Saved 64%. I’m a bot and I’m open source!

hellothere@sh.itjust.works on 09 Jan 2024 10:07 next collapse

OK, so pay for it.

Pretty simple really.

bjoern_tantau@swg-empire.de on 09 Jan 2024 10:39 next collapse

Or let’s use this opportunity to make copyright much less draconian.

hellothere@sh.itjust.works on 09 Jan 2024 10:59 next collapse

I’m no fan of the current copyright law - the Statute of Anne was much better - but let’s not kid ourselves that some of the richest companies in the world have any desire whatsoever to change it.

Gutless2615@ttrpg.network on 09 Jan 2024 11:41 collapse

My brother in Christ I’m begging you to look just a little bit into the history of copyright expansion.

hellothere@sh.itjust.works on 09 Jan 2024 11:47 next collapse

I am well aware.

[deleted] on 09 Jan 2024 12:43 collapse

.

Gutless2615@ttrpg.network on 09 Jan 2024 14:30 collapse

I only discuss copyright on posts about AI copyright issues. Yes, brilliant observation. I also talk about privacy issues on privacy-relevant posts, labor issues on worker-rights articles, and environmental justice on global warming pieces. Truly a brilliant and skewering observation. You’re a true internet private eye.

Fair use and pushing back against (corporate serving) copyright maximalism is an issue I am passionate about and engage in. Is that a problem for you?

[deleted] on 09 Jan 2024 14:38 collapse

.

[deleted] on 09 Jan 2024 14:53 collapse

.

[deleted] on 09 Jan 2024 15:08 collapse

.

[deleted] on 09 Jan 2024 15:22 collapse

.

[deleted] on 09 Jan 2024 15:32 collapse

.

Gutless2615@ttrpg.network on 09 Jan 2024 15:48 collapse

Not legal advice, not your lawyer, etc etc. But I would likely never suggest someone aggressively pursue individual piracy. You write contracts for your partners. You fight businesses when they breach. You make great work and price it appropriately. You make your wins there and you do everything you can to stay out of a courtroom or arbitration if you can avoid it. You’re not winning any friends and you’re not saving yourself any trouble by raging against torrents. Especially for small creators, the calculus never (imo) works out in their favor. More often than not, small artists and creators need to be much more concerned about, and need help with, defending themselves against spurious accusations of infringement by larger corporate IP rent seekers and more-or-less automated systems (again: cyberpunk dystopia).

Speaking personally, I find the equivocation of “copyright infringement” and “theft” ridiculous. One download ≠ one “stolen” sale, and it never has been. Theft requires depriving the owner of the property, being able to exercise exclusive control over it. Conceptually, that has always broken down when talking about digital goods.

[deleted] on 10 Jan 2024 22:53 collapse

.

Gutless2615@ttrpg.network on 11 Jan 2024 01:32 collapse

I’m incredibly worried about AI deepfakes and voice cloning for a whole host of reasons. It’s one of the things I think we are collectively least prepared to deal with. The privacy concerns, national security, cyber security - to say nothing of disinformation and yeah, labor impacts — we are fucked and not at all ready for this.

Name and likeness rights, rights of publicity, and privacy rights, though, don’t stem from copyright and don’t require an expansion of copyright to protect. There’s case law already preventing a business from cloning someone without their permission, and everyone will be paying very close attention to those parts of contracts moving forward, I’d wager. As to wholesale replacing actors and talent with generated content — yeah, I’m very worried that a lot of artists and creative people are as fucked as the lawyers and the accountants and writers and everyone else when it comes to job displacement.

Again, despite your really aggressive tone, I’m telling you: we almost certainly agree more than we disagree. It is ghoulish watching studios rush to replace extras and voice actors and resurrect dead actors. True cyberpunk dystopia necromancer shit. I’m hoping that we see more victories won in this genuinely encouraging resurgence of labor (today’s SAG-AFTRA deal notwithstanding) and legislation directly addressing the labor impacts of AI more broadly. Different kinds of guard rails and safety nets. I just don’t think copyright is the answer you think it is to the horrors that we both agree are coming.

Fisk400@feddit.nu on 09 Jan 2024 11:07 next collapse

As long as capitalism exists in society, just being able to go “yoink” and take everyone’s art will never be a practical rule set.

dhork@lemmy.world on 09 Jan 2024 11:53 collapse

¿Por qué no los dos?

I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there’s still a ton of recent works that AI are using without any compensation to their creators.

I think it’s because people are taking this “intelligence” metaphor a bit too far and think if we restrict how the AI uses copyrighted works, that would restrict how humans use them too. But AI isn’t human, it’s just a glorified search engine. At least all standard search engines do is return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don’t care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.

deweydecibel@lemmy.world on 09 Jan 2024 15:08 next collapse

It shouldn’t be cheap to absorb and regurgitate the works of humans the world over in an effort to replace those humans and subsequently enrich a handful of silicon valley people.

Like, I don’t care what you think about copyright law and how corporations abuse it, AI itself is corporate abuse.

And unlike copyright, which does serve its intended purpose of helping small time creators as much as it helps Disney, the true benefits of AI are overwhelmingly for corporations and investors. If our draconian copyright system is the best tool we have to combat that, good. It’s absolutely the lesser of the two evils.

lolcatnip@reddthat.com on 09 Jan 2024 17:02 collapse

Do you believe it’s reasonable, in general, to develop technology that has the potential to replace some human labor?

Do you believe compensating copyright holders would benefit the individuals whose livelihood is at risk?

the true benefits of AI are overwhelmingly for corporations and investors

“True” is doing a lot of work here, I think. From my perspective the main beneficiaries of technology like LLMs and Stable Diffusion are people trying to do their work more efficiently, people playing around, and small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc. Maybe you’re talking about something different, like deepfakes? The downside of using a vague term like “AI” is that it’s too easy to accidentally conflate things that have little in common.

EldritchFeminity@lemmy.blahaj.zone on 10 Jan 2024 00:13 collapse

There are two general groups when it comes to AI in my mind: those whose work would benefit from the increased efficiency AI in various forms can bring, and those who want the rewards of work without putting in the effort of working.

The former include people like artists who could do stuff like creating iterations of concept sketches before choosing one to use for a piece to make that part of their job easier/faster.

Much of the opposition to AI comes from people worrying about/who have been harmed by the latter group. And it all comes down to the way that the data sets are sourced.

<img alt="" src="https://64.media.tumblr.com/1b3d5df1b26e9e86235b0b04d519dd80/58959199633e4619-52/s1280x1920/b5f726cd9b99024521bc55d5d4fae65bda736cde.jpg">

These are people who want to use the hard work of others for their own benefit, without giving them compensation; and the corporations fall pretty squarely into this group. As does your comment about “small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc.” Before AI, they were free to hire an artist to do that for them. MidJourney, for example, falls into this same category - the developers were caught discussing various artists that they “launder through a fine tuned Codex” (their words, not mine, here for source) for prompts. If these sorts of generators were using opt-in data sets, paying licensing fees to the creators, or some other way to get permission to use their work, this tech could have tons of wonderful uses, like for those small-time creators. This is how music works. There are entire businesses that run on licensing copyright free music out to small-time creators for their videos and stuff, but they don’t go out recording bands and then splicing their songs up to create synthesizers to sell. They pay musicians to create those songs.

Instead of doing what the guy behind IKEA did when he thought “people besides the rich deserve to be able to have furniture”, they’re cutting up Bob Ross paintings to sell as part of their collages to people who want to make art without having to actually learn how to make it or pay somebody to turn their idea into reality. Artists already struggle in a world that devalues creativity (I could make an entire rant on that, but the short is that the starving artist stereotype exists for a reason), and the way companies want to use AI like this is to turn the act of creating art into a commodity even more; to further divest the inherently human part of art from it. They don’t want to give people more time to create and think and enjoy life; they merely want to wring even more value out of them more efficiently. They want to take the writings of their journalists and use them to train the AI that they’re going to replace them with, like a video game journalism company did last fall with all of the writers they had on staff in their subsidiary companies. They think, “why keep 20 writers on staff when we can have a computer churn out articles for our 10 subsidiaries?” Last year, some guy took a screenshot of a piece of art that one of the artists for Genshin Impact was working on while livestreaming, ran it through some form of image generator, and then came back threatening to sue the artist for stealing his work.

Copyright laws don’t favor the small guy, but they do help them protect their work as a byproduct of working for corporate interests. In the case of the Genshin artist, the fact that they were livestreaming their work and had undeniable, recorded proof that the work was theirs and not some rando in their stream meant that copyright law would’ve been on their side if it had actually gone anywhere rather than some asshole just being an asshole. Trademark isn’t quite the same, but I always love telling the story of the time my dad got a cease and desist letter from a company in another state for the name of a product his small business made. So he did some research, found out that they didn’t have the trademark for it in that state, got the trademark himself, and then sent them back their own letter with the names cut out and pasted in the opposite spots. He never heard from them again!

<img alt="" src="https://64.media.tumblr.com/63d5c6a819922f275080fe8beb13bd03/884bf56921a9a09b-d2/s1280x1920/dafee1633c6c966b0ff1fe9df1fe5dea3391ae96.jpg">

AnneBonny@lemmy.dbzer0.com on 09 Jan 2024 15:44 next collapse

I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

Would you characterize projects like wikipedia or the internet archive as “sucking up all human knowledge”?

dhork@lemmy.world on 09 Jan 2024 16:04 next collapse

In Wikipedia’s case, the text is (well, at least so far), written by actual humans. And no matter what you think about the ethics of Wikipedia editors, they are humans also. Human oversight is required for Wikipedia to function properly. If Wikipedia were to go to a model where some AI crawls the web for knowledge and writes articles based on that with limited human involvement, then it would be similar. But that’s not what they are doing.

The Internet Archive is on a bit less steady legal ground (see the recent legal actions), but in its favor it is only storing information for archival and lending purposes, not using that information to generate derivative works which it then sells. (And it is the lending that is getting it into trouble right now, not the archiving.)

phillaholic@lemm.ee on 10 Jan 2024 17:28 next collapse

The Internet Archive has no ground to stand on at all. It would be one thing if they only allowed downloading of orphaned or unavailable works, but that’s not the case.

randon31415@lemmy.world on 10 Jan 2024 18:22 collapse

Wikipedia has had bots writing articles since the 2000 census information was first published. The 2000 census article writing bot was actually the impetus for Wikipedia to make the WP:bot policies.

MBM@lemmings.world on 09 Jan 2024 17:35 next collapse

Does Wikipedia ever have issues with copyright? If you don’t cite your sources or use a copyrighted image, it will get removed

assassin_aragorn@lemmy.world on 09 Jan 2024 18:09 next collapse

Wikipedia is free to the public. OpenAI is more than welcome to use whatever they want if they become free to the public too.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:40 collapse

It is free. They have a paid model with more stuff, but the baseline model is more than enough for most things.

assassin_aragorn@lemmy.world on 10 Jan 2024 15:59 collapse

There should be no paid model if they aren’t going to pay for training material.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:24 collapse

There also shouldn’t be goalpost moving in Lemmy threads, yet here we are. Can you move the goalposts back into position for me?

assassin_aragorn@lemmy.world on 10 Jan 2024 18:18 collapse

My position has always been that OpenAI can either pay for training materials or make money solely on advertisements. Having a paid version is completely unacceptable if they aren’t paying for training.

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:43 collapse

OpenAI is more than welcome to use whatever they want if they become free to the public too.

My position has always been

Left the goalposts and went on to gaslighting

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:39 collapse

The copyright shills in this thread would shutdown Wikipedia

lolcatnip@reddthat.com on 09 Jan 2024 16:50 next collapse

I don’t understand why people are defending AI companies

Because it’s not just big companies that are affected; it’s the technology itself. People saying you can’t train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.

dhork@lemmy.world on 09 Jan 2024 17:36 next collapse

I am not saying you can’t train on copyrighted works at all, I am saying you can’t train on copyrighted works without permission. There are fair use exemptions to copyright, but training AI shouldn’t qualify. AI companies will have to acknowledge this and get permission (probably by paying money) before incorporating content into their models. They’ll be able to afford it.

lolcatnip@reddthat.com on 09 Jan 2024 18:26 collapse

What if I do it myself? Do I still need to get permission? And if so, why should I?

I don’t believe the legality of doing something should depend on who’s doing it.

BURN@lemmy.world on 09 Jan 2024 19:55 collapse

Yes you would need permission. Just because you’re a hobbyist doesn’t mean you’re exempt from needing to follow the rules.

As soon as it goes beyond a completely offline, personal, non-replicable project, it should be subject to the same copyright laws.

If you purely create a data agnostic AI model and share the code, there’s no problem, as you’re not profiting off of the training data. If you create an AI model that’s available for others to use, then you’d need to have the licensing rights to all of the training data.

BURN@lemmy.world on 09 Jan 2024 17:58 next collapse

You can make these models just fine using licensed data. So can any hobbyist.

You just can’t steal other people’s creations to make your models.

lolcatnip@reddthat.com on 09 Jan 2024 18:25 collapse

Of course it sounds bad when you use the word “steal”, but I’m far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.

BURN@lemmy.world on 09 Jan 2024 19:48 collapse

Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.

brain_in_a_box@lemmy.ml on 09 Jan 2024 22:20 next collapse

Copyright violation isn’t theft in the first place

BURN@lemmy.world on 09 Jan 2024 22:32 collapse

Yes it is. Moralize it all you want, but it’s still theft

brain_in_a_box@lemmy.ml on 09 Jan 2024 22:48 collapse

I’m not the one moralising. Theft is theft, copyright violation is copyright violation.

lolcatnip@reddthat.com on 09 Jan 2024 22:38 collapse

Training is theft imo.

Then it appears we have nothing to discuss.

assassin_aragorn@lemmy.world on 09 Jan 2024 18:08 collapse

This is frankly very simple.

  • If the AI is trained on copyrighted material and doesn’t pay for it, then the model should be freely available for everyone to use.

  • If the AI is trained on copyrighted material and pays a license for it, then the company can charge people for using the model.

If information should be free and copyright is stifling, then OpenAI shouldn’t be able to charge for access. If information is valuable and should be paid for, then OpenAI should have paid for the training material.

OpenAI is trying to have it both ways. They don’t want to pay for information, but they want to charge for information. They can’t have one without the other.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:38 collapse

recent works that AI are using without any compensation to their creators.

Name the creator.

dhork@lemmy.world on 10 Jan 2024 16:05 collapse

Um… Sure?

authorsguild.org/…/sign-our-open-letter-to-genera…

readwrite.com/midjourney-ai-art-program-faces-law…

thehill.com/…/4392624-new-york-times-chatgpt-laws…

These are all writers and artists who have found their works wholly sucked into these generative AI applications and made into derivative works, without any compensation at all. This isn’t an abstract argument; content creators are actively discovering this, and their only recourse right now is to file lawsuits.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:47 collapse

One name, not a fucking clickbait article. I want one single name of the artist who is now on food stamps because OpenAI trained their model on their art.

dhork@lemmy.world on 10 Jan 2024 16:56 collapse

That first link to the Authors Guild is to an open letter with over 15,000 names on it, but you didn’t bother clicking on it, did you?

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:14 collapse

Having problems reading and following instructions.

Give me the name of an artist who had a nice successful career and is now poor because AI copied their works. 1 name. Not clickbait, not a slacktivism open letter. 1 name.

No victim = No crime

dhork@lemmy.world on 10 Jan 2024 17:53 collapse

That’s not how copyright works, though. You don’t need to make someone destitute before it matters.

afraid_of_zombies@lemmy.world on 10 Jan 2024 18:05 collapse

Fine. Show me the name of the person who was rich before AI copied their work and now only has a middle-class income. Meeting you halfway here.

[deleted] on 10 Jan 2024 18:21 next collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:41 next collapse

Am I making a claim?

[deleted] on 10 Jan 2024 20:16 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 20:23 collapse

Which is it am I making a claim or not? I can’t with wishy-washy fence sitting vagueness. What claim did I explicitly state and where did I state it?

Strawmen are fun to fight aren’t they?

[deleted] on 10 Jan 2024 22:00 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 23:52 collapse

Totally unable to distinguish between a citation and a random link.

Not your fault. The concept of citation of claims is owned by Disney

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:41 collapse

My fault for trying to be nice. Congrats on showing that not one person can name a victim for this supposed crime

dhork@lemmy.world on 10 Jan 2024 18:25 collapse

Hey, what did you do with your goalposts? I thought I saw them right here, but now they’re all the way over there…

S410@lemmy.ml on 09 Jan 2024 10:42 collapse

Every work is protected by copyright, unless stated otherwise by the author.
If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.

[deleted] on 09 Jan 2024 10:54 next collapse

.

hellothere@sh.itjust.works on 09 Jan 2024 11:01 next collapse

I never said it was going to be easy - and clearly that is why OpenAI didn’t bother.

If they want to advocate for changes to copyright law then I’m all ears, but let’s not pretend they actually have any interest in that.

Fisk400@feddit.nu on 09 Jan 2024 11:09 next collapse

Sounds like an OpenAI problem and not an us problem.

Exatron@lemmy.world on 09 Jan 2024 11:35 next collapse

How hard it is doesn’t matter. If you can’t compensate people for using their work, or exclude work people don’t want used, you just don’t get that data.

There’s plenty of stuff in the public domain.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:43 collapse

And artists are being compensated now fairly?

Exatron@lemmy.world on 11 Jan 2024 17:52 collapse

Previous wrongs don’t make this instance right.

afraid_of_zombies@lemmy.world on 11 Jan 2024 19:03 collapse

now

deweydecibel@lemmy.world on 09 Jan 2024 15:28 next collapse

I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.

You make this sound like a bad thing.

BURN@lemmy.world on 09 Jan 2024 17:59 collapse

And why is that a bad thing?

Why are you entitled to other peoples work, just because “it’s hard to find data”?

S410@lemmy.ml on 10 Jan 2024 02:51 collapse

Why are you entitled to other peoples work?

Do you really think you’ve never consumed data that was not intended for you? Never used copyrighted works or their elements in your own works?

Re-purposing other people’s work is literally what humanity has been doing for far longer than the term “license” has existed.

If the original inventor of the fire drill didn’t want others to use it and barred them from creating a fire bow, arguing it’s “plagiarism” and “a tool that’s intended to replace me”, we wouldn’t have a civilization.

If artists could bar other artists from creating music or art based on theirs, we wouldn’t have such a thing as “genres”. There are genres of music that are almost entirely based around sampling and many, many popular samples were never explicitly allowed or licensed to anyone. Listen to a hundred most popular tracks of the last 50 years, and I guarantee you, a dozen or more would contain the amen break, for example.

Whatever it is you do with data, whether you consume and use it yourself or train a machine learning model with it, you’re either disregarding a large number of copyright restrictions and using all of it, or existing in an informational vacuum.

BURN@lemmy.world on 10 Jan 2024 03:10 collapse

People do not consume and process data the same way an AI model does, so it doesn’t matter how humans learn, because AIs don’t learn. This isn’t repurposing work; it’s using work in a way the copyright holder doesn’t allow, just as copyright holders are allowed to prohibit commercial use.

S410@lemmy.ml on 10 Jan 2024 05:13 collapse

It’s called “machine learning”, not “AI”, and it’s called that for a reason.

“AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe and create solvers for ourselves, due to their complexity.

For example, a calculator for pure numbers is a pretty simple device all the logic of which can be designed by a human directly. For the device to be useful, however, the creator will have to analyze mathematical works of other people (to figure out how math works to begin with) and to test their creation against them. That is, they’d run formulas derived and solved by other people to verify that the results are correct.

With “AI”, instead of designing all the logic manually, we create a system which can end up in a finite, yet still nearly infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance we (ideally) end up with a solver for some incredibly complex system, such as language or images.

If we were training a regular calculator this way, we might feed it things like “2+2=4”, “3x3=9”, “10/5=2”, etc.

If, after we’re done, the model can only solve those three expressions - we have failed. The model didn’t learn the mathematical system, it just memorized the examples. That’s called overfitting and that’s what every single “AI” company in the world is trying to avoid. (And to do so, they need a lot of diverse data)

Of course, if instead of those expressions the training set consisted of Portrait of Dora Maar, Mona Lisa, and Girl with a Pearl Earring, the model would only generate those three paintings.

However, if the training was successful, we can ask the model to solve 3x10/5+2 - an expression it has never seen before - and it’d give us the correct result: 8. Or, in the case of paintings, if we ask for a “Portrait of Mona Lisa with a Pearl Earring”, it would give us a brand new image that contains elements and styles of the three paintings from the training set merged into a new one.
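The “trained calculator” idea above can be sketched in a few lines. This is a purely hypothetical illustration, not any real training pipeline: the rule y = 3x, the learning rate, and the step count are all made up. We tune a single parameter from a handful of examples, then check the model on an input it never saw during training.

```python
# Toy "trained calculator": instead of hard-coding multiplication by 3,
# tune one parameter w from examples, then test on unseen input.

def train(examples, lr=0.01, steps=1000):
    w = 0.0  # model: f(x) = w * x
    for _ in range(steps):
        for x, y in examples:
            pred = w * x
            # gradient of squared error (pred - y)^2 with respect to w
            w -= lr * 2 * (pred - y) * x
    return w

# Training set: a few input/output pairs of the (unknown to the model) rule y = 3x
examples = [(1, 3), (2, 6), (4, 12)]
w = train(examples)

print(round(w, 3))    # learned parameter, converges to 3.0
print(round(w * 10))  # prediction for x=10, never seen in training: 30
```

If training had merely memorized the three pairs (overfitting), the model would be useless at x=10; because it recovered the underlying rule, it generalizes to inputs outside the training set.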

Of course the architecture of a machine learning model and the architecture of the human brain don’t match, but the things both can do are quite similar. Creating new works based on existing ones is not, by any means, a new invention. Here’s a picture that merges elements of “Fear and Loathing in Las Vegas” and “My Little Pony”, for example.

The major difference is that the skills and knowledge of individual humans necessary to do things like that cannot be transferred or lent to other people. Machine learning models can be. This tech is probably the closest we’ll ever be to being able to share skills and knowledge “telepathically”, so to say.

BURN@lemmy.world on 11 Jan 2024 00:35 collapse

I’m well aware of how machine learning works. I did 90% of the work for a degree in exactly it. I’ve written semi-basic neural networks from scratch, and am familiar with terminology around training and how the process works.

Humans learn, process, and most importantly, transform data in a different manner than machines. The sum totality of the human existence each individual goes through means there is a transformation based on that existence that can’t be replicated by machines.

A human can replicate other styles, as you show with your example, but that doesn’t mean that is the total extent of new creation. It’s been proven in many cases that civilizations create art in isolation, not needing to draw from any previous art to create new ideas. That’s the human element that can’t be replicated in anything less than true General AI with real intelligence.

Machine Learning models such as the LLMs/GenerativeAI of today are statistically based on what it has seen before. While it doesn’t store the data, it does often replicate it in its outputs. That shows that the models that exist now are not creating new ideas, rather mixing up what they already have.

CosmoNova@lemmy.world on 09 Jan 2024 10:09 next collapse

Let’s wait until everyone is laid off and it’s ‘impossible’ to get by without mass looting then, shall we?

S410@lemmy.ml on 09 Jan 2024 10:33 next collapse

They’re not wrong, though?

Almost all information that currently exists has been created in the last century or so. Only a fraction of all that information is available to be legally acquired for use and only a fraction of that already small fraction has been explicitly licensed using permissive licenses.

Things that we don’t even think about as “protected works” are in fact just that. Doesn’t matter what it is: napkin doodles, writings on bathroom stall walls, letters written to friends and family. All of those things are protected, unless stated otherwise. And, I don’t know about you, but I’ve never seen a license notice attached to a napkin doodle.

Now, imagine trying to raise a child while avoiding every piece of information like that; information that you aren’t licensed to use. You wouldn’t end up with a person well suited to exist in the world. They’d lack education regarding science, technology, they’d lack understanding of pop-culture, they’d know no brand names, etc.

Machine learning models are similar. You can train them that way, sure, but they’d be basically useless for real-world applications.

AntY@lemmy.world on 09 Jan 2024 11:06 next collapse

The main difference between the two in your analogy, that has great bearing on this particular problem, is that the machine learning model is a product that is to be monetized.

S410@lemmy.ml on 09 Jan 2024 11:35 next collapse

Not necessarily. There’s plenty that are open source and available for free to anyone willing to provide their own computational power.
In cases where you pay for a service, it could be argued that you aren’t paying for the access to the model or its results, but the convenience and computational power necessary to run the model.

GentlemanLoser@ttrpg.network on 09 Jan 2024 12:14 next collapse

Naive

MBM@lemmings.world on 09 Jan 2024 17:39 collapse

Sounds like a solution would be to force any AI to either share its source code or prove that it’s not trained on copyrighted data

testfactor@lemmy.world on 09 Jan 2024 13:53 next collapse

And real children aren’t in a capitalist society?

[deleted] on 09 Jan 2024 14:53 next collapse

.

deweydecibel@lemmy.world on 09 Jan 2024 15:29 next collapse

And ultimately replace the humans it learned from.

Zoboomafoo@slrpnk.net on 09 Jan 2024 20:14 next collapse

Good, I want AI to do all my work for me

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:13 collapse

Yes, clearly 90 years plus the death of the artist is acceptable

BURN@lemmy.world on 09 Jan 2024 18:02 collapse

Also an “AI” is not human, and should not be regulated as such

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:16 collapse

Neither is a corporation and yet they claim first amendment rights.

BURN@lemmy.world on 10 Jan 2024 16:27 collapse

That’s an entirely separate problem, but is certainly a problem

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:49 collapse

I don’t think it is. We keep awarding all this non-human stuff more rights than we have. You can’t put a corporation in jail, but you can put me in jail. I don’t have freedom from religion, but a corporation does.

BURN@lemmy.world on 10 Jan 2024 16:55 collapse

Corporations are not people, and should not be treated as such.

If a company does something illegal, the penalty should be spread to the board. It’d make them think twice about breaking the law.

We should not be awarding human rights to non-human, non-sentient creations. LLMs and any kind of Generative AI are not human and should not in any case be treated as such.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:15 collapse

Corporations are not people, and should not be treated as such.

Understand. Please tell Disney that they no longer own Mickey Mouse.

BURN@lemmy.world on 10 Jan 2024 17:16 collapse

Again, I literally already said that it’s a problem.

IP law is also different than granting rights to corporations. Corporations SHOULD be allowed to own IP, provided they’ve compensated the creator.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:56 collapse

For 90 years after the creator’s death?

BURN@lemmy.world on 10 Jan 2024 18:00 collapse

Honestly, yes. I’m ok with that. People are not entitled to be able to do anything they want with someone else’s IP. 90 years is almost reasonable. Cut it in half and I’d also consider it fairly reasonable.

I’m all for expanding copyright for individuals and small companies (small media companies, photographers who are incorporated, artists who make money based on commissions, etc) and reducing it for mega corps, but there’s an extremely fine line around that.

afraid_of_zombies@lemmy.world on 10 Jan 2024 18:04 collapse

Well I am not. If the goal is to promote artistic creation it should not follow inheritance. Heck it shouldn’t even be 45 years. No one at Disney was alive when Mickey was made therefore it should be public domain.

Once you fix that let me know.

Exatron@lemmy.world on 09 Jan 2024 11:37 collapse

The difference here is that a child can’t absorb and suddenly use massive amounts of data.

S410@lemmy.ml on 09 Jan 2024 11:47 collapse

The act of learning is absorbing and using massive amounts of data. Almost any child can, for example, re-create copyrighted cartoon characters in their drawing or whistle copyrighted tunes.

If you look at, pretty much, any and all human created works, you will be able to trace elements of those works to many different sources. We, usually, call that “sources of inspiration”. Of course, in case of human created works, it’s not a big deal. Generally, it’s considered transformative and a fair use.

hellothere@sh.itjust.works on 09 Jan 2024 11:53 next collapse

It’s a question of scale. A single child cannot replace literally all artists, for example.

Barbarian@sh.itjust.works on 09 Jan 2024 12:46 next collapse

I really don’t understand this whole “learning” thing that everybody claims these models are doing.

A Markov chain algorithm fed different text inputs and outputting the next predicted word isn’t colloquially called “learning”, yet it’s fundamentally the same process, just less sophisticated.

They take input, apply a statistical model to it, generate output derived from the input. Humans have creativity, lateral thinking and the ability to understand context and meaning. Most importantly, with art and creative writing, they’re trying to express something.

“AI” has none of these things, just a probability for which token goes next considering which tokens are there already.
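To make the comparison concrete, here is a toy bigram Markov chain next-word sampler, a minimal sketch of the kind of statistical process described above (the corpus and function names are made up for illustration):

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def next_word(counts, word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(next_word(model, "cat"))  # "sat" or "ate", in proportion to the counts
```

An LLM does the same input-to-probability-to-next-token step, just with a learned neural model over token contexts instead of a raw frequency table.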

testfactor@lemmy.world on 09 Jan 2024 14:05 next collapse

Out of curiosity, how far do you extend this logic?

Let’s say I’m an artist who does fractal art, and I do a line of images where I take JPEGs of copyright-protected art and use the data as a seed for my fractal generation function.

Have I then, in that instance, taken a copyrighted work and simply applied some static algorithm to it and passed it off as my own work, or have I done something truly transformative?

The final image I’m displaying as my own art has no meaningful visual cues to the original image, as it’s just lines and colors generated using the image as a seed, but I’ve also not applied any “human artistry” to it, as I’ve just run it through an algorithm.

Should I have to pay the original copyright holder?
If so, what makes that fundamentally different from me looking at the copyrighted image and drawing something that it inspired me to draw?
If not, what makes that fundamentally different from AI images?

[deleted] on 09 Jan 2024 14:57 collapse

.

testfactor@lemmy.world on 09 Jan 2024 16:19 collapse

I feel like you latched on to one sentence in my post and didn’t engage with the rest of it at all.

That sentence, in your defense, was my most poorly articulated, but I feel like you responded devoid of any context.

Am I to take it, from your response, that you think a fractal image that uses a copyrighted image as a seed for its random number generator would be copyright infringement?

If so, how much do I, as the creator, have to “transform” that base binary string to make it “fair use” in your mind? Are random bit flips sufficient?
If so, how is me doing that different from having the machine do it as a tool? If not, how is that different from me editing the bits using a graphical tool?

[deleted] on 09 Jan 2024 16:40 collapse

.

testfactor@lemmy.world on 09 Jan 2024 19:43 collapse

Fair on all counts. I guess my counter then would be, what is AI art other than running a bunch of pieces of other art through a computer system, then adding some “stuff you did” (to use your phrase) via a prompt, and then submitting the output as your own art.

That’s nearly identical to my fractal example, which I think you’re saying would actually be fair use?

[deleted] on 09 Jan 2024 20:32 collapse

.

PipedLinkBot@feddit.rocks on 09 Jan 2024 20:32 collapse

Here is an alternative Piped link(s):

a relatively short (to me) video

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

agamemnonymous@sh.itjust.works on 09 Jan 2024 14:19 next collapse

Humans have creativity, lateral thinking and the ability to understand context and meaning

What evidence do you have that those aren’t just sophisticated, recursive versions of the same statistical process?

Barbarian@sh.itjust.works on 10 Jan 2024 17:16 collapse

I think the best counter to this is to consider the zero-training state. A language model or art model without any training data at all will output static: basically random noise.

A group of humans socially isolated from the rest of the world will independently create art and music. It has happened an uncountable number of times. It seems to be a fairly automatic emergent property of human societies.

With that being the case, we can safely say that however creativity works, it’s not merely compositing things we’ve seen or heard before.

agamemnonymous@sh.itjust.works on 10 Jan 2024 18:33 collapse

I disagree with this analysis. Socially isolated humans aren’t isolated, they still have nature to imitate. There’s no such thing as a human with no training data. We gather training data our whole life, possibly from the womb. Even in an isolated group, we still have others of the group to imitate, who in turn have ancestors, and again animals and natural phenomena. I would argue that all creativity is precisely compositing things we’ve seen or heard before.

sus@programming.dev on 09 Jan 2024 14:39 collapse

I don’t think “learning” is a word reserved only for high-minded creativeness. Just rote memorization and repetition is sometimes called learning. And there are many intermediate states between them.

Exatron@lemmy.world on 11 Jan 2024 17:51 collapse

The problem is that a human doesn’t absorb exact copies of what it learns from, and fair use doesn’t include taking entire works, shoving them in a box, and shaking it until something you want comes out.

S410@lemmy.ml on 12 Jan 2024 06:28 collapse

Except for all the cases when humans do exactly that.

A lot of learning is, really, little more than memorization: spelling of words, mathematical formulas, physical constants, etc. But, of course, those are pretty small, so they don’t count?

Then there’s things like sayings, which are entire phrases that only really work if they’re repeated verbatim. You sure can deliver the same idea using different words, but it’s not the same saying at that point.

To make a cover of a song, for example, you have to memorize the lyrics and melody of the original, exactly, to be able to re-create it. If you want to make that cover in the style of some other artist, you, obviously, have to learn their style: that is, analyze and memorize what makes that style unique. (e.g. C418 - Haggstrom, but it’s composed by John Williams)

Sometimes the artists don’t even realize they’re doing exactly that, so we end up with “subconscious plagiarism” cases, e.g. Bright Tunes Music v. Harrisongs Music.

Some people, like Stephen Wiltshire, are very good at memorizing and replicating certain things; way better than you, I, or even current machine learning systems. And for that they’re praised.

PipedLinkBot@feddit.rocks on 12 Jan 2024 06:29 next collapse

Here is an alternative Piped link(s):

C418 - Haggstrom, but it’s composed by John Williams

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

Exatron@lemmy.world on 12 Jan 2024 10:25 collapse

Except they literally don’t. Human memory doesn’t retain an exact copy of things. Very good isn’t the same as exactly. And human beings can’t grab everything they see and instantly use it.

S410@lemmy.ml on 12 Jan 2024 11:01 collapse

Machine learning doesn’t retain an exact copy either. How on earth could a model trained on terabytes of data be only a few gigabytes in size, yet contain “exact copies” of everything? If “AI” could function as a compression algorithm, it’d definitely be used as one. But it can’t, so it isn’t.
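A back-of-the-envelope calculation makes the point (illustrative figures, roughly in line with publicly reported numbers for image models like Stable Diffusion, not exact specs):

```python
# Rough, illustrative figures: a model trained on ~2 billion images,
# with a checkpoint of ~4 GB on disk.
training_images = 2_000_000_000
checkpoint_bytes = 4 * 1024**3  # 4 GiB

bytes_per_image = checkpoint_bytes / training_images
print(f"{bytes_per_image:.1f} bytes per training image")  # ~2.1 bytes

# Even a small 512x512 JPEG is on the order of 100 KB, so at ~2 bytes
# per image the weights cannot hold copies; they encode statistics.
```

Whatever the exact numbers, the ratio is off by four or five orders of magnitude from what verbatim storage would require.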

Machine learning can definitely re-create certain things really closely, but to do it well, it generally requires a lot of repeats in the training set. Which, granted, is a big problem that exists right now, and which people are trying to solve. But even right now, if you want an “exact” re-creation of something, cherry picking is almost always necessary, since (unsurprisingly) ML systems have a tendency to create things that have not been seen before.

Here’s an image from an article claiming that machine learning image generators plagiarize things.

However, if you take a second to look at the image, you’ll see that the prompters literally ask for screencaps of specific movies with specific actors, etc. and even then the resulting images aren’t one-to-one copies. It doesn’t take long to spot differences, like different lighting, slightly different poses, different backgrounds, etc.

If you got ahold of a human artist specializing in photoreal drawings and asked them to re-create a specific part of a movie they’ve seen a couple dozen or hundred times, they’d most likely produce something remarkably similar in accuracy. Very similar to what machine learning images generators are capable of at the moment.

positiveWHAT@lemmy.world on 09 Jan 2024 11:17 next collapse

Is this the point where we start UBI and start restructuring society for the future of AI?

LouNeko@lemmy.world on 09 Jan 2024 11:34 next collapse

Copyright protection only exists in the context of generating profit from someone else’s work. If you were to figure out cold fusion and I looked at your research and said “That’s cool, but I’m going to go do some woodworking,” I wouldn’t be infringing any copyright. It’s only ever an issue if the financial incentive to trace the profits back to their copyrighted source outweighs the cost of doing so. That’s why China has had free rein to steal any Western technology; fighting them in their courts is not worth it. But with AI it’s way easier to trace the output back to its source (especially for art), so the incentive is there.

The main issue is the extraction of value from the original data. If I were to steal some bricks from your infinite brick pile and build a house out of them, do you have a right to my house? Technically I never stole a house from you.

hellothere@sh.itjust.works on 09 Jan 2024 11:46 next collapse

You’re conflating copyright and patents.

LouNeko@lemmy.world on 10 Jan 2024 13:47 collapse

Shit, you’re right, I am.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:11 collapse

Also conflating theft vs copying

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:11 collapse

You stole bricks. How rich I am does not impact what you did. Copying is not theft. You can keep stretching any shady analogy you want but you can’t change the fundamentals.

ChrislyBear@lemmy.world on 09 Jan 2024 12:11 next collapse

So if I look at a painting, study it, and then emulate the original painter’s art style, am I in breach of their copyright?

Or if I read a lot of fantasy like GRRM or JK Rowling, and I also write a fantasy book and say that they were my inspiration, am I breaching their copyright?

That’s not how it works, and if it is, it shouldn’t be!

Sure, if I start reproducing work, i.e. plagiarizing the work of others, then I’m doing something wrong.

And to spin this further: If I raise a child on children’s books by a specific author, am I breaching copyright, when my child enters the workforce and starts to earn money??? Stupid, yes! But so are the copyright claims against LLMs, in my opinion.

TwilightVulpine@lemmy.world on 09 Jan 2024 13:34 next collapse

I don’t think it’s accurate to call the work of AI the same as the human brain, but most importantly, the difference is that humans and tools have and should have different rights. Someone can’t simply point a camera at a picture and say “I can look at it with my eye and keep it in my memory, so why can’t the camera?”

Because we ensure the right of learning for people. That doesn’t mean it’s a free pass to technologically process works however one sees fit.

Never mind that the more people have prodded AIs, the more they have found that the reproductions go well beyond vaguely replicating a style. People have managed to get whole sentences from books and obvious copies of real artwork, copyrighted characters and celebrities by prompting AI in specific ways.

testfactor@lemmy.world on 09 Jan 2024 13:52 next collapse

To be fair, I think your analogy falls apart a bit because you can in fact take a picture of pretty much any art you want to, legally speaking.

You can’t go sell it or anything, but you are definitely not in breach of copyright just by taking the picture.

TwilightVulpine@lemmy.world on 09 Jan 2024 14:11 collapse

That’s a rebuttal on the level of “if a tree falls in the forest and nobody is there to hear it”. Legally, theoretically, you should need permission just as much, but nobody is going to sue you over something nobody else sees.

Copyright addresses reproduction and distribution, paid or not, including derivative works. There are exemptions for journalism and education, AI advanced a lot by using copyrighted materials under the reasoning that it was technological research, but as it spun off into commercial use, its reliance on copyrighted materials for training has become much more questionable.

lolcatnip@reddthat.com on 09 Jan 2024 16:45 collapse

Copyright law only works because most violations are not feasible to prosecute. A world where copyright laws are fully enforced would be an authoritarian dystopia where all art and science is owned by wealthy corporations.

Copyright law is inherently authoritarian. The conversation we should have been having for the last 100 years isn’t about how much we’ll tolerate technical violations of copyright law; it’s how much we’ll tolerate the chilling effect of copyright law on sharing for the sake of promoting new creative works.

TwilightVulpine@lemmy.world on 09 Jan 2024 17:04 collapse

Absolutely and I’m with you on that. I think Copyright is excessively long and overly restrictive.

But that is another conversation.

The conversation we are having now is how to protect and compensate human creators that need their livelihoods to keep creating in our society as it is, when these new AI tools, trained on their works, are used to deliberately replace them.

There are many issues with copyright as it is right now, but it is literally the only resort that artists have left in this situation. It’s not a given that opposing copyright hinders corporations. In this particular case there are many corporations salivating at the opportunity to replace human creators with AI, to get faster work, cheaper, to appropriate distinctive styles without needing to hire the people who developed them.

There is a chilling effect of its own happening here. There are writers and artists today who are seeing their jobs handed to AI and deciding that creative work is not a feasible career anymore. Not only is this tragic by virtue of human interest alone; since AI relies on human creators to be trained, it’s very possible that models will spiral into recursive derivativeness and become increasingly stale, devoid of fresh ideas and styles.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:06 collapse

Oh now I won’t get a 19th Transformers movie or a 24th Fast and the Furious movie. How about this: you fix copyright law such that Disney doesn’t get an infinite ownership of Mickey Mouse for all time and then we will talk about a chatbot.

TwilightVulpine@lemmy.world on 10 Jan 2024 17:20 collapse

That is just avoiding the issues with some tenuously related outrage. AI will not cause or prevent a 24th Fast and Furious movie from being made, it’s an established brand with plenty of investor backing. If AIs require a massive library of owned IP to train, Universal can use it. If it doesn’t, they still can use it. If the suggestion here is that some upcoming AI creator is going to take down Fast and Furious and soulless corporate media… I don’t see any reason whatsoever why this might happen.

But many small independent artists with a couple thousand followers or upcoming artists improving their skills working for media companies will have their opportunities cut short if AI is used as a substitute for their work. It’s not Mickey that is going to suffer, it’s small creators who have true passion.

But ignoring that because “Copyright Bad”? That ain’t it chief. The world is not quite so simple.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:56 collapse

The copyright system we have now is not good and I am surprised people are defending something not worth defending. As for the mythical struggling artist killed by AI please show me them.

TwilightVulpine@lemmy.world on 10 Jan 2024 18:15 collapse

Mythical? Way to spell out that you don’t keep up with one single artist or the struggles they face.

Like I said before, I agree that the Copyright system is severely flawed and it needs a complete overhaul. Because the law that is intended to protect and support creators should do just that instead of being a tool for corporate control and profits. Even creators of derivative works ought to have better protections than they have now. It should enable them to maintain a livelihood and continue creating, which benefits our whole culture by the introduction of new ideas and aesthetics.

It shouldn’t, though, enable their replacement.

But hey, you couldn’t make any more clearer that you don’t give a single fuck about any of that. What, do you just hate Copyright because you want free shit?

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:44 collapse

Name the artist. Name the artist that has gone broke solely because chatgpt copied their work. One single name.

TwilightVulpine@lemmy.world on 10 Jan 2024 20:12 collapse

You think you got a gotcha there, don’t you? No need to discuss it in a measured way if you can take a single debatable point and cling to it like your life depends on it.

By definition I can’t know whether an artist truly quit, or why, if they aren’t posting anymore. They might simply disappear. Also, commercial use of AI is just starting to ramp up, so expecting the impacts to be immediately observable, or else it isn’t happening, is just not a reasonable way to evaluate the situation.

The point of AI is to make creation of text, images and audio faster and easier without any human effort. It’s only logical to expect it to substitute people’s work.

I have heard people saying things about how AI is already being used for roles that would employ people. If you want names and proof of full bankruptcy you can go look into it yourself. But I’m not going to make a full investigative essay for someone who’s making bad faith single sentence responses.

Though thinking of it, wasn’t this news article posted in this very community? Duolingo lays off staff as language learning app shifts toward AI Well, the way you are acting you might just say “oh but that’s not an artist”, to try to ‘win’ the argument on a technicality, despite proof of AI usage leading to job losses.

afraid_of_zombies@lemmy.world on 10 Jan 2024 20:25 collapse

Cool. You can’t name a single artist that lost their job because AI stole their work. I am glad you agree that this “crime” has no victim and it is just TERF shits like Rowling trying to get more money. Thank you for being open minded enough to change your views based on evidence.

TwilightVulpine@lemmy.world on 10 Jan 2024 20:46 collapse

Oh look at that, exactly the type of shallow pedant response I thought you would give, and even spiced up with completely unrelated guilt by association. You know, as if Rowling was the only writer ever and no poor trans artist existed.

🙄

As expected, you straight up pretended you didn’t see the Duolingo thing. Truly words are wasted on you, you can’t even be stubborn in a challenging way. I’m done with you.

afraid_of_zombies@lemmy.world on 10 Jan 2024 21:20 collapse

Please pay me fifty dollars for describing what I wrote as it was under copyright

General_Effort@lemmy.world on 09 Jan 2024 15:07 collapse

the right of learning

That’s not a thing. There is a right to an education, but that is not about copyright (though it may imply the necessity of fair use exceptions in certain contexts).

Also, you are confused about AI output. It’s possible to make the AI spit out training data, but it takes, indeed, prodding. It’s unlikely to matter by US law.

Jomega@lemmy.world on 09 Jan 2024 14:00 collapse

You’re comparing something humans often do subconsciously to a machine that was programmed to do that. Unless you’re arguing that intent doesn’t matter (pretty much every judge in America will tell you it does) then we’re talking about 2 completely different things.

Edit: Disregard the struck out portion of my comment. Apparently I don’t know shit about law. My point is that comparing a a quirk of human psychology to the strict programming of a machine is a false equivalency.

Gutless2615@ttrpg.network on 09 Jan 2024 15:30 collapse

Intent does not matter for copyright infringement, it’s a strict liability.

Jomega@lemmy.world on 09 Jan 2024 17:27 next collapse

I looked it up and you’re right. I must have been thinking of a different crime. That’ll teach me to go spouting off about stuff.

My point that AI is programmed to recycle and humans aren’t is still something I stand by, so I edited my comment.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:08 collapse

Another proof that it is a bullshit law. Someone could literally die on my property and there are situations where I would not even get a small fine.

Chee_Koala@lemmy.world on 09 Jan 2024 12:22 next collapse

But our current copyright model is so robust and fair! They will only have to wait 95y after the author died, which is a completely normal period.

If you want to control your creations, you are completely free to NOT publish it. Nowhere it’s stated that to be valuable or beautiful, it has to be shared on the world podium.

We’ll have a very restrictive Copyright for non globally transmitted/published works, and one for where the owner of the copyright DID choose to broadcast those works globally. They have a couple years to cash in, and then after I dunno, 5 years, we can all use the work as we see fit. If you use mass media to broadcast creative works but then become mad when the public transforms or remixes your work, you are part of the problem.

Current copyright is just a tool for folks with power to control that power. It’s what a boomer would make driving their tractor / SUV while chanting to themselves: I have earned this.

[deleted] on 09 Jan 2024 12:42 next collapse

.

PipedLinkBot@feddit.rocks on 09 Jan 2024 12:42 next collapse

Here is an alternative Piped link(s):

a great video

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

drislands@lemmy.world on 09 Jan 2024 13:48 next collapse

Them: “Oh yeah I have 10 minutes until my dentist appointment, I’ll check that out.”

h3rm17@sh.itjust.works on 09 Jan 2024 14:04 next collapse

Funny thing is, human artists work quite similarly to AI, in that they take the whole of human art creation, build on it, and create something new (sometimes quite derivative). No art comes out of a vacuum; it builds on previous works. I would not really say AI plagiarizes anything, unless it reproduces pretty much the exact work of someone

just_change_it@lemmy.world on 09 Jan 2024 14:16 next collapse

I think it’s pretty amazing when people just run with the dogma that empowers billionaires.

Every creator hopes they’ll be the next Taylor Swift, that they’ll retain control of their art for those life-plus-70 years and make enough to create their own little dynasty.

The reality is that long-duration copyright is almost exclusively a tool of the already wealthy, not a tool for the not-yet-wealthy. As technology improves, it will be easier and easier for wealth to control the system and deny the little guy’s copyright on the grounds that he used something from their vast portfolio of copyright/patent/trademark/ipmonopolyrulelegalbullshit. Already, civil legal disputes are largely a function of who has the most money.

I don’t have the solution that helps artists earn a living, but it doesn’t seem like copyright is doing them many favors as-is unless they are retired rockstars who have already earned in excess of the typical middle class lifetime earnings by the time they hit 35, or way earlier.

[deleted] on 09 Jan 2024 14:50 next collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:58 collapse

Current Copyright Law Imperfect,

Yeah, and Joseph Stalin was a bit naughty. As long as we’re seeing how understated we can be.

If you don’t have the solution, perhaps you should not attack one of the remaining defenses against rampant abuses of peoples’ livelihood.

The creator of Superman wasn’t paid royalties and was laid off. Many years later he worked as a restaurant delivery guy and ended up dropping off food at DC Comics. The artist who built that company, doing a sandwich run.

[deleted] on 10 Jan 2024 17:03 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:12 collapse

If you’ve got an accusation, go ahead and make it. Next I’ll be hearing about downloading a fucking car

[deleted] on 10 Jan 2024 17:41 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 18:01 collapse

I am on topic. Our copyright system is flaming garbage and this is a money grab. Everyone is sitting here getting all worked up about who the criminal is, and I am asking who the victim is.

Tell me the name of the artist whose career was ruined by AI copying their original art work. I am not impressed by J.K. “billionaire terf” Rowling POTENTIALLY not making another half million. If you can’t produce a victim then there is no crime.

[deleted] on 10 Jan 2024 18:19 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:42 collapse

Wouldn’t know. I don’t click random links. If you have an argument make it.

assassin_aragorn@lemmy.world on 09 Jan 2024 17:55 collapse

I don’t have the solution that helps artists earn a living, but it doesn’t seem like copyright is doing them many favors as-is unless they are retired rockstars who have already earned in excess of the typical middle class lifetime earnings by the time they hit 35, or way earlier.

Just because copyright helps them less doesn’t mean it doesn’t help them at all. And at the end of the day, I’d prefer to support the retired rockstars over the stealing billionaires.

Chee_Koala@lemmy.world on 09 Jan 2024 15:15 collapse

First:

I truly believe that they don’t matter as an individual when looking at their creation as a whole. It matters among their loved ones, and for that person themselves. Why do you need more… importance? From whom? Why do you need to matter in the scope of creation? Is it a creation for you? Then why publish it? Is it a creation for others? Then why does your identity matter? It just seems like egotism with extra steps. Using copyright to combat this seems like a red herring argument made by people who have portfolios against people who don’t…

You are not only your own person, you carry human culture remnants distilled out of 12000 years of humanity! You plagiarised almost the whole of humanity while creating your ‘unique’ addition to culture. But, because your remixed work is newer and not directly traceable to its direct origins, we’re gonna pretend that you wrote it as a hermit living without humanity on a rock and establish the rules from there on out. If it was fair for all the players in this game, it would already be impossible to not plagiarise.

lolcatnip@reddthat.com on 09 Jan 2024 16:34 collapse

IMHO being able to “control your creations” isn’t what copyright was created for; it’s just an idea people came up with by analogy with physical property, without really thinking through what purpose it is supposed to serve. I believe creators of intellectual “property” have no moral right to control what happens with their creations, and they only have a limited legal right to do so as a side effect of their legal right to profit from their creations.

800XL@lemmy.world on 09 Jan 2024 12:45 next collapse

I guess the lesson here is pirate everything under the sun and as long as you establish a company and train a bot everything is a-ok. I wish we knew this when everyone was getting dinged for torrenting The Hurt Locker back when.

Remember when the RIAA got caught with pirated mp3s and nothing happened?

What a stupid timeline.

devilish666@lemmy.world on 09 Jan 2024 14:29 next collapse

Nah… it’s not too complicated
AI is basically just a bunch of if/else or case/switch statements in spaghetti code

reverendsteveii@lemm.ee on 09 Jan 2024 14:41 next collapse

if it’s impossible for you to have something without breaking the law you have to do without it

if it’s impossible for the aristocrat class to have something without breaking the law, we change or ignore the law

lolcatnip@reddthat.com on 09 Jan 2024 16:22 collapse

Copyright law is mostly bullshit, though.

Krauerking@lemy.lol on 09 Jan 2024 17:09 collapse

Oh sure. But why is it only the massive AI push that allows the large companies owning the models full of stolen materials that make basic forgeries of the stolen items the ones that can ignore the bullshit copyright laws?

It wouldn’t be because it is super profitable for multiple large industries right?

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:00 collapse

Just because people are saying the law is bad doesn’t mean they are saying the lawbreakers are good. Those two are independent of each other.

I have never been against cannabis legalization. That doesn’t mean I think people who sold it on the streets are good people.

NeoNachtwaechter@lemmy.world on 09 Jan 2024 15:05 next collapse

Burglary is impossible without breaking some doors and locks. So you have to make it legal to break doors and locks now, because otherwise I cannot go on with my profession.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:13 collapse

Sharing knowledge isn’t burglary.

NeoNachtwaechter@lemmy.world on 10 Jan 2024 22:05 collapse

But is there a nal in a nalogy?

Milk_Sheikh@lemm.ee on 09 Jan 2024 15:07 next collapse

Wow! You’re telling me that onerous and crony copyright laws stifle innovation and creativity? Thanks for solving the mystery guys, we never knew that!

[deleted] on 09 Jan 2024 16:16 next collapse

.

deweydecibel@lemmy.world on 09 Jan 2024 19:15 collapse

innovation and creativity

Neither of which is being stifled here. OpenAI didn’t write ChatGPT with copyrighted code.

What’s being “stifled” is the corporate harvesting and profiting off the works of individuals, at their expense. And damn right it should be.

SCB@lemmy.world on 09 Jan 2024 20:21 next collapse

at their expense

How?

Milk_Sheikh@lemm.ee on 09 Jan 2024 22:11 collapse

‘Data poisoning’, encryption, & copyright.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:37 collapse

Please show me the poor artist whose work was stolen. I want a name.

If there is no victim there is no crime.

EldritchFeminity@lemmy.blahaj.zone on 10 Jan 2024 18:15 collapse

<img alt="" src="https://64.media.tumblr.com/59c836bdbaca344e152b7aedd7e1dea3/91d428c66fc78bd7-19/s1280x1920/ef5fe1186b68ff62fb47ebbc1f7784e9fd90ba66.jpg">

<img alt="" src="https://64.media.tumblr.com/a13f42ab5de7f7fa8a88daaca00c9b74/91d428c66fc78bd7-24/s1280x1920/eed42e3c85367a374e87a4e9b1e711df15dcddec.jpg">

<img alt="" src="https://64.media.tumblr.com/63d5c6a819922f275080fe8beb13bd03/884bf56921a9a09b-d2/s1280x1920/dafee1633c6c966b0ff1fe9df1fe5dea3391ae96.jpg">

Click here to find out more

Just because you think art isn’t actually work and artists don’t deserve to be paid for the work they do doesn’t make it okay and doesn’t make you right.

<img alt="" src="https://64.media.tumblr.com/1b3d5df1b26e9e86235b0b04d519dd80/58959199633e4619-52/s1280x1920/b5f726cd9b99024521bc55d5d4fae65bda736cde.jpg">

afraid_of_zombies@lemmy.world on 10 Jan 2024 19:46 collapse

Instead of screenshots why can’t you just type in the name? Why is basic research to back up your defense of the current copyright system so freaken impossible?

[deleted] on 10 Jan 2024 22:55 next collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 23:51 collapse

Attack the argument not the person.

[deleted] on 10 Jan 2024 23:58 collapse

.

EldritchFeminity@lemmy.blahaj.zone on 11 Jan 2024 01:48 collapse

How many do you want, apart from Sara Winters up there and 8Pxl, whom I linked to? I have 25 pages of them from court documents: roughly 4,000 names in total, in alphabetical order.

…courtlistener.com/…/gov.uscourts.cand.407208.129…

If I hadn’t included screenshots, you would’ve just claimed they were made up. Keep moving the goalposts, AI shill.

afraid_of_zombies@lemmy.world on 11 Jan 2024 05:04 collapse

Type the name.

[deleted] on 09 Jan 2024 15:11 next collapse

.

Arcka@midwest.social on 09 Jan 2024 21:44 next collapse

Copied cars. Copying is not theft or stealing.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:49 collapse

If I steal something from you I have it and you don’t. When I copy an idea from you, you still have the idea. As a whole the two person system has more knowledge. While actual theft is zero sum. Downloading a car and stealing a car are not the same thing.

And don’t even try the rewarding-artists-and-inventors argument. Companies that fund R&D get tax breaks for it, so they already get money. And artists are rarely compensated appropriately.

Daxtron2@startrek.website on 09 Jan 2024 15:14 next collapse

I’ve learned from lemmy that individuals’ abuse of copyright is good👍

LLMs trained on copyrighted material and suddenly everyone is an advocate for more strict copyright enforcement?

andrew_bidlaw@sh.itjust.works on 09 Jan 2024 15:58 collapse

Who is behind each? Individual abuse is just an expense to a corporation, LLMs caused a lot of fear in regular artists.

Daxtron2@startrek.website on 09 Jan 2024 16:14 collapse

You’re not afraid of the technology you’re afraid of corporations abusing it to exploit their workforce. Don’t blame the technology, blame the corporations.

lolcatnip@reddthat.com on 09 Jan 2024 16:26 collapse

You’re describing the difference between the original Luddism that’s against exploitation and the degenerate form that’s just a blind hatred of new technology. Unfortunately there seems to be a lot of the latter on Lemmy.

Daxtron2@startrek.website on 09 Jan 2024 17:16 next collapse

Yeah, Lemmy and the world in general seem to just parrot the opinions of whichever talking head they listen to. I recognize that there are certainly issues, both ethically and technically, with LLMs and image generation especially. However, I also use both these tools on a daily basis to make my life more efficient, which frees me up to do more things I enjoy. That, to me, is the most important thing we should regulate about automation: it should make lives easier, not give us more work to do.

General_Effort@lemmy.world on 09 Jan 2024 20:31 collapse

Any idea which talking head that might be?

I’ve been trying to figure out which ideology is behind these arguments (where there are arguments). The emphasis on property and human creativity is quite reminiscent of Ayn Rand, but she is not quoted. Well, no one is cited. Actually, the way things are just asserted and objections just bulldozed over while screaming theft is also reminiscent of Rand.

ramjambamalam@lemmy.ca on 09 Jan 2024 17:58 next collapse

I bet a lot of the AI bashers are the same demographic that grew up with the Internet and mocked the baby boomers who were Internet skeptics.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:09 collapse

A lot of people seem to know what the “original” luddism was about. Must have been that popular article on the subject several years ago.

holycrap@lemm.ee on 09 Jan 2024 15:28 next collapse

I have the perfect solution. Shorten the copyright duration.

IzzyScissor@lemmy.world on 09 Jan 2024 17:07 next collapse

Help Help! My business model is illegal, but it makes SO MUCH money! What do I doooo?

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:50 collapse

It doesn’t really make any money yet, and the law is bad anyway. Copyrights shouldn’t be a thing, but if they have to be, they should be short in duration.

kibiz0r@lemmy.world on 09 Jan 2024 17:13 next collapse

I’m dumbfounded that any Lemmy user supports OpenAI in this.

We’re mostly refugees from Reddit, right?

Reddit invited us to make stuff and share it with our peers, and that was great. Some posts were just links to the content’s real home: Youtube, a random Wordpress blog, a Github project, or whatever. The post text, the comments, and the replies only lived on Reddit. That wasn’t a huge problem, because that’s the part that was specific to Reddit. And besides, there were plenty of third-party apps to interact with those bits of content however you wanted to.

But as Reddit started to dominate Google search results, it displaced results that might have linked to the “real home” of that content. And Reddit realized a tremendous opportunity: They now had a chokehold on not just user comments and text posts, but anything that people dare to promote online.

At the same time, Reddit slowly moved from a place where something may get posted by the author of the original thing to a place where you’ll only see the post if it came from a high-karma user or bot. Mutated or distorted copies of the original instance, reformatted to cut through the noise and gain the favor of the algorithm. Re-posts of re-posts, with no reference back to the original, divorced of whatever context or commentary the original creator may have provided. No way for the audience to respond to the author in any meaningful way and start a dialogue.

This is a miniature preview of the future brought to you by LLM vendors. A monetized portal to a dead internet. A one-way street. An incestuous ouroboros of re-posts of re-posts. Automated remixes of automated remixes.

There are genuine problems with copyright law. Don’t get me wrong. Perhaps the most glaring problem is the fact that many prominent creators don’t even own the copyright to the stuff they make. It was invented to protect creators, but in practice this “protection” gets assigned to a publisher immediately after the protected work comes into being.

And then that copyright – the very same thing that was intended to protect creators – is used as a weapon against the creator and against their audience. Publishers insert a copyright chokepoint in-between the two, and they squeeze as hard as they desire, wringing it of every drop of profit, keeping creators and audiences far away from each other. Creators can’t speak out of turn. Fans can’t remix their favorite content and share it back to the community.

This is a dysfunctional system. Audiences are denied the ability to access information or participate in culture if they can’t pay for admission. Creators are underpaid, and their creative ambitions are redirected to what’s popular. We end up with an auto-tuned culture – insular, uncritical, and predictable. Creativity reduced to a product.

But.

If the problem is that copyright law has severed the connection between creator and audience in order to set up a toll booth along the way, then we won’t solve it by giving OpenAI a free pass to do the exact same thing at massive scale.

flamingarms@feddit.uk on 09 Jan 2024 19:23 next collapse

And yet, I believe LLMs are a natural evolutionary product of NLP and a powerful tool that is a necessary step forward for humanity. It is already capable of exceptionally quickly scaffolding out basic tasks. In it, I see the assumptions that all human knowledge is for all humans, rudimentary tasks are worth automating, and a truly creative idea is often seeded by information that already exists and thus creativity can be sparked by something that has access to all information.

I am not sure what we are defending by not developing them. Is it a capitalism issue of defending people’s money so they can survive? Then that’s a capitalism problem. Is it that we don’t want to get exactly plagiarized by AI? That’s certainly something companies are and need to continue taking into account. But researchers repeat research and come to the same conclusions all the time, so we’re clearly comfortable with sharing ideas. Even in the Writer’s Guild strikes in the States, both sides agreed that AI is helpful in script-writing, they just didn’t want production companies to use it as leverage to pay them less or not give them credit for their part in the production.

EldritchFeminity@lemmy.blahaj.zone on 10 Jan 2024 19:11 collapse

The big issue is, as you said, a capitalism problem, as people need money from their work in order to eat. But, it goes deeper than that and that doesn’t change the fact that something needs to be done to protect the people creating the stuff that goes into the learning models. Ultimately, it comes down to the fact that datasets aren’t ethically sourced and that people want to use AI to replace the same people whose work they used to create said AI, but it also has a root in how society devalues the work of creativity. People feel entitled to the work of artists. For decades, people have believed that artists shouldn’t be fairly compensated for their work, and the recent AI issue is just another stone in the pile. If you want to see how disgusting it is, look up stuff like “paid in exposure” and the other kinds of things people tell artists they should accept as payment instead of money.

In my mind, there are two major groups when it comes to AI: Those whose work would benefit from the increased efficiency AI would bring, and those who want the reward for work without actually doing the work or paying somebody with the skills and knowledge to do the work. MidJourney is in the middle of a lawsuit right now and the developers were caught talking about how you “just need to launder it through a fine tuned Codex.” With the “it” here being artists’ work. Link The vast majority of the time, these are the kinds of people I see defending AI; they aren’t people sharing and collaborating to make things better - they’re people who feel entitled to benefit from others’ work without doing anything themselves. Making art is about the process and developing yourself as a person as much as it is about the end result, but these people don’t want all that. They just want to push a button and get a pretty picture or a story or whatever, and then feel smug and superior about how great an artist they are.

All that needs to be done is to require that the company that creates the AI has to pay a licensing fee for copyrighted material, and allow for copyright-free stuff and content where they have gotten express permission to use (opt-in) to be used freely. Those businesses with huge libraries of copyright-free music that you pay a subscription fee to use work like this. They pay musicians to create songs for them; they don’t go around downloading songs and then cut them up to create synthesizers that they sell.

Milk_Sheikh@lemm.ee on 10 Jan 2024 00:23 next collapse

Mutated or distorted copies of the original instance, reformatted to cut through the noise and gain the favor of the algorithm. Re-posts of re-posts, with no reference back to the original, divorced of whatever context or commentary the original creator may have provided… This is a miniature preview of the future brought to you by LLM vendors. A monetized portal to a dead internet. A one-way street. An incestuous ouroboros of re-posts of re-posts. Automated remixes of automated remixes.

The internet is genuinely already trending this way just from LLM AI writing things like: articles and bot reviews, listicle and ‘review’ websites that laser focus for SEO hits, social media comments and posts to propagandize or astroturf…

We are going to live and die by how the Captcha-AI arms race is run against the malicious actors, but that won’t help when governments or capital give themselves root access.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:45 collapse

Too long didn’t read, busy downloading a car now. How much did Disney pay for this comment?

wosat@lemmy.world on 09 Jan 2024 17:38 next collapse

This situation seems analogous to when air travel started to take off (pun intended) and existing legal notions of property rights had to be adjusted. IIRC, a farmer sued an airline for trespassing because they were flying over his land. The court ruled against the farmer because to do otherwise would have killed the airline industry.

Patches@sh.itjust.works on 09 Jan 2024 20:53 collapse

I member

And we did so before then with ‘mineral rights’. You can drill for oil on your property, but if you find it, it ain’t yours, because in many places you only own what you can walk on. Capitalists are gonna capitalize.

BURN@lemmy.world on 09 Jan 2024 17:55 next collapse

Too bad

Why do they have free rein to store and use copyrighted material as training data? AIs don’t learn as a human would, and comparisons can’t be made between the learning processes.

SCB@lemmy.world on 09 Jan 2024 20:20 next collapse

Why do you have free rein to do the same?

AIs don’t learn as a human would, and comparisons can’t be made between the learning processes.

I think you’re going to have a hard time proving a financial distinction between them

BURN@lemmy.world on 09 Jan 2024 20:33 collapse

You don’t need to prove a financial difference. They are fundamentally different systems that function in different ways. They cannot be compared 1:1 and laws cannot be applied as a 1:1. New regulations need to be added around AI use of copyrighted material.

SCB@lemmy.world on 09 Jan 2024 20:36 collapse

I agree. For instance, it should be secured in law that you can train AI on anything, to avoid frivolous discussions like this.

Output is what should be moderated by law.

BURN@lemmy.world on 09 Jan 2024 20:48 collapse

No

Why are you entitled to use everyone else’s work? It should be secured in law that licensing applies to training data to avoid frivolous discussions like this. Then it’s an entirely opt-in solution, which works in the benefit of everyone except the people stealing data.

Output doesn’t matter since it’s pretty well settled it’s not derivative work (as much as I disagree with that statement).

SCB@lemmy.world on 09 Jan 2024 21:00 collapse

the people stealing data

No one is doing this

Output doesn’t matter since it’s pretty well settled it’s not derivative work

Cool, discussion over.

BURN@lemmy.world on 09 Jan 2024 21:03 collapse

It is stealing data. In order to train on it they have to store the data. That’s a copyright violation. There’s no way to interpret it as not stealing data.

5too@lemmy.world on 09 Jan 2024 22:16 collapse

It is not stealing. The data is still there. It is, at worst, copyright violation.

BURN@lemmy.world on 10 Jan 2024 00:10 collapse

Copyright violation is stealing

ultranaut@lemmy.world on 10 Jan 2024 03:36 collapse

Stealing means someone has been deprived of their property, which is not the case for copyright violations.

INHALE_VEGETABLES@aussie.zone on 09 Jan 2024 20:01 collapse

They can be made. Imagine trying to hold any conversation without being able to reference popular culture.

pacology@lemmy.world on 09 Jan 2024 18:04 next collapse

Well, strictly speaking, you could have an AI that only knows about the world up to 1928 and talks like it’s 1928.

charonn0@startrek.website on 09 Jan 2024 18:12 next collapse

Sounds like a fatal problem. That’s a shame.

whoisearth@lemmy.ca on 09 Jan 2024 18:14 next collapse

If OpenAI is right (I think they are) one of two things need to happen.

  1. All AI should be open source and non-profit
  2. Copyright law needs to be abolished

For number 1. Good luck for all the reasons we all know. Capitalism must continue to operate.

For number 2. Good luck because those in power are mostly there off the backs of those before them (see Disney, Apple, Microsoft, etc)

Anyways, fun to watch play out.

SCB@lemmy.world on 09 Jan 2024 20:19 next collapse

There’s a third solution you’re overlooking.

3: OpenAI (or other) wins a judgment that AI content is not inherently a violation of copyright regardless of materials it is trained upon.

Hedgehawk@lemmy.world on 09 Jan 2024 20:34 collapse

It’s not really about the AI content being a violation or not though is it. It’s more about a corporation using copyrighted content without permission to make their product better.

SCB@lemmy.world on 09 Jan 2024 20:36 collapse

If it’s not a violation of copyright then this is a non-issue. You don’t need permission to read books.

BURN@lemmy.world on 09 Jan 2024 21:04 collapse

AI does not “read books” and it’s completely disingenuous to compare them to humans that way.

SCB@lemmy.world on 09 Jan 2024 21:06 next collapse

That’s certainly an opinion you have

BURN@lemmy.world on 09 Jan 2024 21:07 collapse

Backed by technical facts.

AIs fundamentally process information differently than humans. That’s not up for debate.

SCB@lemmy.world on 09 Jan 2024 21:28 collapse

Yes this is an argument in my favor, you just don’t understand AI/LLMs enough to know why.

BURN@lemmy.world on 09 Jan 2024 21:31 collapse

I could say the same about you, considering I’ve watched you peddle false information for months about this subject.

AI learns differently than humans. That isn’t a fact up for debate. That’s one of the few objective truths around this industry.

SCB@lemmy.world on 09 Jan 2024 21:34 collapse

I work with AI every day at my job. My buddy is a literal AI researcher and we hobby-build together too.

I’m not concerned with what you think is “objective truth” when you have no idea what you’re talking about.

BURN@lemmy.world on 09 Jan 2024 21:39 next collapse

Ok and?

That doesn’t mean it’s any less theft, or that you have any idea what you’re talking about.

www.rws.com/blog/large-language-models-humans/

lesswrong.com/…/llm-cognition-is-probably-not-hum…

There’s also countless papers on google scholar that point out the differences.

SCB@lemmy.world on 09 Jan 2024 22:46 collapse

My entire premise hinges on the fact that these papers agree with me.

Bartsbigbugbag@lemmy.ml on 10 Jan 2024 15:14 collapse

You use an AI to help you come up with your talking points at your job at the IOF?

whoisearth@lemmy.ca on 09 Jan 2024 21:06 collapse

Similarly I don’t read “War and Peace” and then use that to go and write “Peace and War”

rivermonster@lemmy.world on 09 Jan 2024 20:29 next collapse

It’s why AI ultimately will be the death of capitalism, or the dawn of the endless war against the capitalists (literally, and physically).

AI will ultimately replace most jobs, and capitalism can’t work without wage slaves, or antique capitalism aka feudalism… so yeah. Gonna need to move towards UBI and something more utopian, or it’s just a miserable, endless, bloody awful war against the capitalists.

the_ocs@lemmy.world on 09 Jan 2024 21:07 collapse

There’s no open source without copyright, only public domain

db0@lemmy.dbzer0.com on 09 Jan 2024 22:08 collapse

If all is public domain, all is open source

the_ocs@lemmy.world on 09 Jan 2024 22:22 collapse

Open source also includes viral licenses like the GPL. Without copyright, the GPL is not enforceable.

db0@lemmy.dbzer0.com on 10 Jan 2024 00:24 collapse

It doesn’t have to be. One leak and the code is open for all

ugjka@lemmy.world on 09 Jan 2024 18:18 next collapse

TBH I only use LLMs when traditional search fails and even then I’m not sure if I’m getting something useful or hallucination. I need better search engines not fancy AI bullshitters

Evotech@lemmy.world on 09 Jan 2024 18:20 next collapse

Then LLMs should be FOSS

rivermonster@lemmy.world on 09 Jan 2024 20:31 next collapse

All AI should be FOSS and public domain, owned by the people, and all gains from its use taxed at 100%. It’s only because of the public that AI exists: the schools, universities, NSF grants, and all the other places taxes have been poured into created the advances upon which AI stands, and the AI-critical research as well.

BURN@lemmy.world on 09 Jan 2024 21:06 collapse

That does nothing to solve the problem of data being used without consent to train the models. It doesn’t matter if the model is FOSS if it stole all the data it trained on.

Arcka@midwest.social on 09 Jan 2024 21:36 next collapse

Copying is not theft or stealing.

BURN@lemmy.world on 09 Jan 2024 21:40 collapse

Copying copyright protected data is theft AND stealing

Edit: this also applies to my stance on piracy, which I don’t engage in for the same reason. It’s theft

db0@lemmy.dbzer0.com on 09 Jan 2024 22:10 next collapse

By definition you’re wrong

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:23 next collapse

You are only hurting yourself by adopting a rule like that.

[deleted] on 10 Jan 2024 18:05 collapse

.

BURN@lemmy.world on 10 Jan 2024 18:07 collapse

It’s theft.

You can steal all you want, but it’s still theft. Piracy is theft, stealing data to be used as training data is theft.

Not everyone wants their creations to be infinitely shared beyond their control. If someone creates something, they’re entitled to absolute control over it.

[deleted] on 10 Jan 2024 18:36 collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:58 collapse

The only way I can steal data from you is if I break into your office and walk off with your hard drive. If you still have access to something, it hasn’t been stolen.

ook_the_librarian@lemmy.world on 09 Jan 2024 19:54 next collapse

It’s not “impossible”. It’s expensive and will take years to produce material under an encompassing license in the quantity needed to make the model “large”. Their argument is basically “but we can have it quickly if you allow legal shortcuts.”

Patches@sh.itjust.works on 09 Jan 2024 20:50 next collapse

That argument has unfortunately worked for many other Tech Bros

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:35 next collapse

The law is shit

agitatedpotato@lemmy.world on 11 Jan 2024 03:10 collapse

Whenever a company says something is impossible, they usually mean it’s just unprofitable.

KingThrillgore@lemmy.ml on 09 Jan 2024 19:58 next collapse

It’s almost like we had a thing where copyrighted works used to end up, but they extended the dates because money

rivermonster@lemmy.world on 09 Jan 2024 20:27 next collapse

I was literally about to come in here and say it would be an interesting tangential conversation to talk about how FUCKED copyright laws are, and how relevant to the discussion it would be.

More upvote for you!

Ultraviolet@lemmy.world on 09 Jan 2024 21:38 collapse

This is where they have the leverage to push for actual copyright reform, but they won’t. Far more profitable to keep the system broken for everyone but have an exemption for AI megacorps.

bravesirrbn@lemmy.world on 09 Jan 2024 21:58 next collapse

Then don’t

flop_leash_973@lemmy.world on 09 Jan 2024 22:04 next collapse

If it ends up being OK for a company like OpenAI to commit copyright infringement to train their AI models it should be OK for John/Jane Doe to pirate software for private use.

But that would never happen. Almost like the whole of copyright has been perverted into a scam.

badbytes@lemmy.world on 09 Jan 2024 22:11 next collapse

You wouldn’t steal a car, would you?

flop_leash_973@lemmy.world on 09 Jan 2024 22:23 collapse

<img alt="" src="https://lemmy.world/pictrs/image/632281ea-31f9-436b-83b3-c1133fe2968d.jpeg">

Honytawk@lemmy.zip on 10 Jan 2024 19:35 collapse

It is funny how Hollywood droned that sentence into our heads, and now they are downloading actors themselves. Oh, the irony.

tinwhiskers@lemmy.world on 10 Jan 2024 02:27 collapse

Using copyrighted material is not the same thing as copyright infringement. You need to (re)publish it for it to become an infringement, and OpenAI is not publishing the material made with their tool; the users of it are. There may be some grey areas for the law to clarify, but as yet, they have not clearly infringed anything, any more than a human reading copyrighted material and making a derivative work.

hperrin@lemmy.world on 10 Jan 2024 05:02 next collapse

It comes from OpenAI and is given to OpenAI’s users, so they are publishing it.

linearchaos@lemmy.world on 10 Jan 2024 19:27 collapse

It’s being mishmashed with a billion other documents just like it to make a derivative work. It’s not like OpenAI is giving you a copy of Hitchhiker’s Guide to the Galaxy.

hperrin@lemmy.world on 10 Jan 2024 23:58 collapse

The New York Times was able to have it return a complete NYT article, verbatim. That’s not derivative.

Fraubush@lemm.ee on 11 Jan 2024 01:03 next collapse

I thought the same thing until I read another perspective on it from Mike Masnick, and from what he writes, it seems pretty clear they manipulated ChatGPT with some very specific prompts that someone who doesn’t already pay NYT for access would not be able to use. For example, feeding it 3 verbatim paragraphs from an article and asking it to generate the rest. If you understand how these LLMs work, it’s really not surprising that you can indeed force it to do things like that, but it’s an extreme case, and I’m with Masnick and the user you’re responding to on this one myself.

I also watched most of today’s subcommittee hearing on AI and journalism. A lot of the arguments are that this will destroy local journalism. Look, strong local journalism is some of the most important work that is dying right now. But the grave was dug by these large media companies and hedge funds that bought up and gutted those local news orgs, and not many people outside of the industry batted an eye while that was happening. This is a bit of a tangent, but I don’t exactly trust the giant hedge funds who gutted these local news organizations over the past decade to all of a sudden care at all about how important they are.

Sorry for the tangent, but here’s the article I mentioned that’s more on topic: mediagazer.com/231228/p11#a231228p11
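For what it’s worth, the “continuation” setup Masnick describes is easy to picture: you paste the article’s opening paragraphs into the prompt verbatim and ask the model to keep going. A minimal sketch of that prompt shape (the function and the placeholder paragraphs are hypothetical, and actually running it against a model would need an API and a key):

```python
def make_continuation_prompt(opening_paragraphs):
    """Build a chat-style message list that pastes verbatim text into a
    prompt and asks the model to continue it -- the prompt shape the
    complaint reportedly used."""
    pasted = "\n\n".join(opening_paragraphs)
    return [
        {"role": "user", "content": "Continue this article:\n\n" + pasted},
    ]

# Placeholder paragraphs stand in for the verbatim article text.
messages = make_continuation_prompt(["Para one.", "Para two.", "Para three."])
print(messages[0]["content"])
```

Whether the model then emits the paywalled remainder verbatim depends on how much it memorized during training, which is exactly the disputed point.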

hperrin@lemmy.world on 11 Jan 2024 02:00 collapse

So they gave it the 3 paragraphs that are available publicly, said continue, and it spat out the rest of the article that’s behind a paywall. That sure sounds like copyright infringement.

linearchaos@lemmy.world on 11 Jan 2024 01:08 collapse

And that’s not the intent of the service, it’s a bug and they’ll fix it.

Syntha@sh.itjust.works on 11 Jan 2024 04:26 next collapse

Insane how this comment is downvoted when, as far as I’m aware, it’s literally just the legal reality at this point in time.

A_Very_Big_Fan@lemmy.world on 11 Jan 2024 04:41 collapse

any more than a human reading copyrighted material and making a derivative work.

It seems obvious to me that it’s not doing anything different than a human does when we absorb information and make our own works. I don’t understand why practically nobody understands this

I’m surprised to have even found one person that agrees with me

BURN@lemmy.world on 11 Jan 2024 15:46 collapse

Because it’s objectively not true. Humans and ML models fundamentally process information differently and cannot be compared. A model doesn’t “read a book” or “absorb information”

A_Very_Big_Fan@lemmy.world on 12 Jan 2024 10:56 collapse

I didn’t say they processed information the same, I said generative AI isn’t doing anything that humans don’t already do. If I make a drawing of Gordon Freeman or Courage the Cowardly Dog, or even a drawing of Gordon Freeman in the style of Courage the Cowardly Dog, I’m not infringing on the copyright of Valve or John Dilworth. (Unless I monetize it, but even then there’s fair-use…)

Or if I read a statistic or some kind of piece of information in an article and spoke about it online, I’m not infringing the copyright of the author. Or if I listen to hundreds of hours of a podcast and then do a really good impression of one of the hosts online, I’m not infringing on that person’s copyright or stealing their voice.

Neither me making that drawing, nor relaying that information, nor doing that impression are copyright infringement. Me uploading a copy of Courage or Half-Life to the internet would be, or copying that article, or uploading the hypothetical podcast on my own account somewhere. Generative AI doesn’t publish anything, and even if it did I think there would be a strong case for fair-use for the same reasons humans would have a strong case for fair-use for publishing their derivative works.

PeterPoopshit@lemmy.world on 09 Jan 2024 23:10 next collapse

My hot take is that most of those independent artists aren’t getting compensated fairly, if at all, by the companies that own their work anyway. Stealing AI training content is just stealing from corporations. Corporations who are probably politically fighting to keep things worse for the average person in your country.

Theft is “a crime”, but I never saw anyone complaining about how unfair it was all those times I myself got fucked over by Google bullshitting their way out of giving me my ad revenue. If normal people can’t profit from stuff like this, we shouldn’t be doing anything to protect the profits of evil corporations.

dutchkimble@lemy.lol on 10 Jan 2024 02:13 next collapse

Cool, don’t do it then

Boiglenoight@lemmy.world on 10 Jan 2024 05:35 next collapse

Piracy by another name. Copyrighted materials are being used for profit by companies that have no intention of compensating the copyright holder.

Pulptastic@midwest.social on 10 Jan 2024 14:22 collapse

Piracy is OK when corporations do it.

baseless_discourse@mander.xyz on 10 Jan 2024 05:57 next collapse

Yeah, I also have no way to own a billion dollars. Sucks for both of us…

shotgun_surgery@links.hackliberty.org on 10 Jan 2024 13:23 next collapse

Intellectual property is crap anyway

recapitated@lemmy.world on 11 Jan 2024 00:47 collapse

I don’t even know which side you’re on, and I love that.

shotgun_surgery@links.hackliberty.org on 12 Jan 2024 16:26 collapse

Lol I despise both monopolies and intellectual property. And it’s so nice to see them fight each other for once.

Blackmist@feddit.uk on 10 Jan 2024 15:25 next collapse

Maybe you shouldn’t have done it then.

I can’t make a Jellyfin server full of content without copyrighted material either, but the key difference here is I’m not then trying to sell that to investors.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:33 collapse

Maybe copyrights don’t protect artists; they protect corporations.

Shazbot@lemmy.world on 10 Jan 2024 16:42 next collapse

Reading these comments has shown me that most users don’t realize that not all working artists are using 1099s and filing as an individual. Once you have stable income and assets (e.g. equipment) there are tax and legal benefits to incorporating your business. Removing copyright protections for large corporations will impact successful small artists who just wanted a few tax breaks.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:48 collapse

Ok don’t care. Ban copyright

BURN@lemmy.world on 10 Jan 2024 17:53 collapse

They protect artists AND protect corporations, and you can’t have one without the other. It’s much better the way it is compared to no copyright at all.

afraid_of_zombies@lemmy.world on 10 Jan 2024 17:54 collapse

Which is why no artist has ever been screwed. Nope, never happened once.

BURN@lemmy.world on 10 Jan 2024 17:55 collapse

They’re screwed less than they would be if copyright were abolished. It’s far from a perfect system, but overly restrictive is 100x better than an open system of stealing from others.

afraid_of_zombies@lemmy.world on 10 Jan 2024 18:04 next collapse

Citation needed.

Also copying isn’t stealing.

agitatedpotato@lemmy.world on 11 Jan 2024 03:08 collapse

So without copyright, if an artist makes a cool picture and Coca-Cola uses it to sell soda and decides not to give the artist any money, the artist has no legal recourse, and that’s better? I don’t think the issue is so much copyright inherently as who holds and enforces those rights. If all copyrights were necessarily held by the people who actually made the copyrighted work, many of the problems would be gone.

afraid_of_zombies@lemmy.world on 10 Jan 2024 15:32 next collapse

If the copyright people had their way we wouldn’t be able to write a single word without paying them. This whole thing is clearly a fucking money grab. It is not struggling artists being wiped out, it is big corporations suing a well funded startup.

[deleted] on 10 Jan 2024 16:50 next collapse

.

afraid_of_zombies@lemmy.world on 10 Jan 2024 16:54 collapse

If the rule is stupid or evil we should applaud people who break it.

kiagam@lemmy.world on 10 Jan 2024 17:46 next collapse

we should use those who break it as a beacon to rally around and change the stupid rule

Grabbels@lemmy.world on 11 Jan 2024 11:28 collapse

Except they pocket millions of dollars by breaking that rule, and the original creators of their “essential data” don’t get a single cent while their creations indirectly show up in content generated by AI. If it really were about changing the rules, they wouldn’t be so obviously making it profitable, but would rather use that money to make it available for the greater good AND pay the people who made their training data. Right now they’re hell-bent on commercialising their products as fast as possible.

If their statement is that stealing literally all the content on the internet is the only way to make AI work (instead of, for example, using their profits to pay for a selection of that data and only using that), then the business model is wrong and illegal. It’s as simple as that.

I don’t get why people are so hell-bent on defending OpenAI in this case; if I were to launch a food-delivery service that’s affordable for everyone, but I shoplifted all my ingredients “because it’s the only way”, most would agree that’s wrong and my business is illegal. Why is this OpenAI case any different? Because AI is an essential development? Oh, and affordable food isn’t?

afraid_of_zombies@lemmy.world on 11 Jan 2024 16:00 collapse

I am not defending OpenAi I am attacking copyright. Do you have freedom of speech if you have nothing to say? Do you have it if you are a total asshole? Do you have it if you are the nicest human who ever lived? Do you have it and have no desire to use it?

phillaholic@lemm.ee on 10 Jan 2024 17:26 next collapse

A ton of people need to read some basic background on how copyright, trademark, and patents protect people. Having none of those things would be horrible for modern society. Wiping out millions of jobs, medical advancements, and putting control into the hands of companies who can steal and strongarm the best. If you want to live in a world run by Mafia style big business then sure.

BlueMagma@sh.itjust.works on 10 Jan 2024 18:08 next collapse

I see and understand your point regarding trademark, but I don’t understand how removing copyright or patents would have this effect, could you elaborate ?

mihnt@lemmy.world on 10 Jan 2024 19:48 collapse

Small business comes up with something, big business takes idea and puts it in all their stores/factories. Small business loses out because they can’t compete. Small business goes poof trying to compete.

BlueMagma@sh.itjust.works on 10 Jan 2024 21:26 collapse

Isn’t that what is already happening with our current system? The little guy never has the resources to fight a legal battle against the big guy and enforce their “intellectual property”.

And the opposite would be true in a world without patent, small businesses could win because they would be free to reuse and adapt big businesses’ ideas.

It feels very simplistic to reduce patents to “protection of the little business”, in our current world they mostly protect the big ones.

Also, this small example doesn’t explain how removing copyrights would so negatively affect our society.

mihnt@lemmy.world on 10 Jan 2024 21:51 next collapse

Well, I was just giving an example of something that is bad about not having a patent system. Personally, I think the patent system is a good thing, but it needs a lot of reworking, and we don’t and probably won’t ever have the proper government to fix it, what with all the big businesses living in politicians’ pockets.

BURN@lemmy.world on 11 Jan 2024 00:19 next collapse

I mean we’ve seen it work multiple times against Apple where a smaller company has been able to enforce their patent against them.

aesthelete@lemmy.world on 11 Jan 2024 01:38 collapse

There’s a reason why the sharks on shark tank ask if ideas are patented. Without a patent, your idea can be ripped off without any recompense.

Sure there are problems with some patents, such as software patents, but the system should be reformed rather than completely tossed.

xenoclast@lemmy.world on 11 Jan 2024 02:03 next collapse

I agree with you in part… it’s moot anyway. It’s the current law of the land, the glue of society and all that. It’s illegal now, so they shouldn’t do it.

If you have enough money (required) and make a solid legal argument to change the laws (optional: depends on how much money you start with) then they can do it… But for now they should STFU and shut the fuck down.

31337@sh.itjust.works on 11 Jan 2024 07:22 collapse

Meh, patents are monopolies over ideas, do much more harm than good, and help big business much more than they help the little guy. Being able to own an idea seems crazy to me.

I marginally support copyright laws, just because they provide a legal framework to enforce copyleft licenses. Though I think copyright is abused too much in places like YouTube. In regards to training generative AI, the goal is not to copy works, and copying would make the models less useful. It’s very much fair use.

Trademarks are generally good, but sometimes abused as well.

phillaholic@lemm.ee on 12 Jan 2024 03:18 collapse

Patents don’t let you own an idea. They give you an exclusive right to use the idea for a limited time in exchange for detailed documentation on how your idea works. Once the patent expires everyone can use it. But while it’s under patent anyone can look up the full documentation and learn from it. Without this, big business could reverse engineer the little guys invention and just steal it.

31337@sh.itjust.works on 12 Jan 2024 03:42 collapse

Goes both ways. As someone who has tried bringing new products to market, it’s extremely annoying that nearly everything you can think of already has a similar patent. I’ve also reverse engineered a few things (circuits and disassembled code) as a little guy working for a small business. I don’t think people usually scan patents to learn things, and reverse engineering usually isn’t too hard.

If I were a capitalist, I’d argue that if a big business “steals” an idea, and implements it more effectively and efficiently than the small business, then the small business should probably fail.

phillaholic@lemm.ee on 12 Jan 2024 14:27 collapse

Amazon is practically a case study on your last point. They routinely copy competitors products that use their platform to sell, taking most of the profits for themselves and sometimes putting those others out of business. I don’t see that as a good thing, it’s anticompetitive and eventually the big business just squeezes for more profit.

randon31415@lemmy.world on 10 Jan 2024 18:31 next collapse

I wonder: if the act of picking cotton were copyrighted, would we have gotten the cotton gin? We have automated most non-creative pursuits and displaced their workers. Is it because people can take joy in creative pursuits that we balk at the automation? If you have a particular style of picking items to fulfill Amazon orders, should that be copyrighted and protected from being used elsewhere?

MaxVoltage@lemmy.world on 10 Jan 2024 19:57 collapse

Bro the cotton gin literally led to millions of black slaves because now it was profitable. Worst example possible

I literally coughed, I laughed so hard

randon31415@lemmy.world on 10 Jan 2024 22:15 collapse

So automation can lead to more (crappy) jobs? www.smbc-comics.com/comic/the-future

unreasonabro@lemmy.world on 10 Jan 2024 19:07 next collapse

Finally capitalism will notice how many times it has shot up its own foot with its ridiculous, greedy, infinite-copyright scheme.

As a musician, people not involved in the making of my music make all my money nowadays instead of me anyway. burn it all down

MaxVoltage@lemmy.world on 10 Jan 2024 19:56 collapse

Pitchfork fest 2024

unreasonabro@lemmy.world on 11 Jan 2024 23:18 collapse

… that’s a good album name, might use that ;)

MaxVoltage@lemmy.world on 12 Jan 2024 01:17 collapse

it would sell

veniasilente@lemm.ee on 10 Jan 2024 20:30 next collapse

“Impossible”? They just need to ask for permission from each source. It’s not like they don’t already know who the sources are, since the AIs are issuing HTTP(S) requests to fetch them.
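A minimal sketch of the point being made: a scraper that records every URL it fetches could keep an attribution ledger alongside the corpus, making it trivial to know whom to ask. Everything here is hypothetical (`crawl_with_ledger` and the stub fetcher are illustrative, not any real crawler’s API; the fetcher is injectable so the sketch runs offline):

```python
# Hypothetical sketch: a crawler that keeps an attribution ledger of every
# source it fetches, so rights holders could later be identified and asked.
from urllib.parse import urlparse

def crawl_with_ledger(urls, fetch):
    """Fetch each URL, collect the text, and record which domain it came from."""
    ledger = {}   # domain -> list of URLs taken from that source
    corpus = []
    for url in urls:
        text = fetch(url)               # e.g. urllib.request.urlopen(url).read()
        domain = urlparse(url).netloc   # "https://example.com/a" -> "example.com"
        ledger.setdefault(domain, []).append(url)
        corpus.append(text)
    return corpus, ledger

# Offline usage example with a stub fetcher standing in for real HTTP requests:
fake_pages = {
    "https://example.com/a": "page A",
    "https://example.org/b": "page B",
}
corpus, ledger = crawl_with_ledger(fake_pages, fake_pages.get)
print(ledger)
# {'example.com': ['https://example.com/a'], 'example.org': ['https://example.org/b']}
```

With a ledger like this, “we don’t know who the sources are” stops being a plausible excuse.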

Eigerloft@lemmy.world on 10 Jan 2024 23:15 collapse

Ask permission AND pay for its use in perpetuity.

badbytes@lemmy.world on 10 Jan 2024 22:26 next collapse

Impossible. Then illegal? Get fucked AI

Treczoks@lemmy.world on 10 Jan 2024 23:35 next collapse

If a business relies on breaking the law as a fundament of their business model, it is not a business but an organized crime syndicate. A Mafia.

platypus_plumba@lemmy.world on 10 Jan 2024 23:45 collapse

It’s impossible to extract all the money from a bank without robbing the bank :(

dasgoat@lemmy.world on 11 Jan 2024 00:04 next collapse

Cool! Then don’t!

NeatNit@discuss.tchncs.de on 11 Jan 2024 00:45 collapse

hijacking this comment

OpenAI was IMHO well within its rights to use copyrighted materials when it was just doing research. They were* doing research on how far large language models can be pushed, where’s the ceiling for that. It’s genuinely good research, and if copyrighted works are used just to research and what gets published is the findings of the experiments, that’s perfectly okay in my book - and, I think, in the law as well. In this case, the LLM is an intermediate step, and the published research papers are the “product”.

The unacceptable turning point is when they took all the intermediate results of that research and flipped them into a product. That’s not the same, and most or all of us here can agree - this isn’t okay, and it’s probably illegal.

* disclaimer: I’m half-remembering things I’ve heard a long time ago, so even if I phrase things definitively I might be wrong

dasgoat@lemmy.world on 11 Jan 2024 15:34 collapse

True, with the acknowledgement that this was their plan all along and the research part was always intended to be used as a basis for a product. They just used the term ‘research’ as a workaround that allowed them to do basically whatever to copyrighted materials, fully knowing that they were building a marketable product at every step of their research

That is how these people essentially function: they’re the tax-loophole guys who make sure Amazon pays less in taxes than you and I do. They are scammers who have no regard for ethics, and they can and will use whatever they can to reach their goal. If that involves lying about doing research when in actuality you’re doing product development, they will do that without hesitation. The fact that this product now exists means lawmakers are faced with a reality where the crimes are in the past, and all they can do is try to legislate around this thing that now exists. And they will do that poorly, because they don’t understand AI.

And that’s just the fraud with regard to research and copyright. Recently it came out that LAION-5B, the image dataset used to train Stable Diffusion, contained at least 1,000 images of child sexual abuse material. We don’t know what OpenAI did to mitigate the risk of their seemingly indiscriminate web scrapers picking up harmful content.

AI is not the future; it’s a product that essentially functions to repeat garbled junk out of things we have already created, all the while creating a massive burden on society with its many, many drawbacks. There are few to no arguments FOR AI, and many, many, MANY reasons to stop and think about what these fascist billionaire ghouls are burdening society with now. Looking at you, Peter Thiel. You absolute ghoul.

NeatNit@discuss.tchncs.de on 11 Jan 2024 20:40 collapse

True, with the acknowledgement that this was their plan all along and the research part was always intended to be used as a basis for a product. They just used the term ‘research’ as a workaround that allowed them to do basically whatever to copyrighted materials, fully knowing that they were building a marketable product at every step of their research

I really don’t think so. I do believe OpenAI was founded with genuine good intentions. But around the time it transitioned from a non-profit to a for-profit, those good intentions were getting corrupted, culminating in the OpenAI of today.

The company’s unique structure, with a non-profit’s board of directors controlling the company, was supposed to subdue or prevent short-term gain interests from taking precedence over long-term AI safety and other such things. I don’t know any of the details beyond that. We all know it failed, but I still believe the whole thing was set up in good faith, way back when. Their corruption was a gradual process.

There are little to no arguments FOR AI

Outright not true. There’s so freaking many! Here’s some examples off the top of my head:

  • Just today, my sister told me how ChatGPT (her first time using it) identified a song for her based on her vague description of it. She has been looking for this song for months with no success, even though she had pretty good key details: it was a duet, released around 2008-2012, and she even remembered a certain line from it. Other tools simply failed, and ChatGPT found it instantly. AI is just a great tool for these kinds of tasks.
  • If you have a huge amount of data to sift through, looking for something specific but that isn’t presented in a specific format - e.g. find all arguments for and against assisted dying in this database of 200,000 articles with no useful tags - then AI is the perfect springboard. It can filter huge datasets down to just a tiny fragment, which is small enough to then be processed by humans.
  • Using AI to identify potential problems and pitfalls in your work, which can’t realistically be caught by directly programmed QA tools. I have no particular example in mind right now, unfortunately, but this is a legitimate use case for AI.
  • Also today, I stumbled upon Rapid, a map editing tool for OpenStreetMap which uses AI to predict and suggest things to add - with the expectation that the user would make sure the suggestions are good before accepting them. I haven’t formed a full opinion about it in particular (and especially wary because it was made by Facebook), but these kinds of productivity boosters are another legitimate use case for AI. Also in this category is GitHub’s Copilot, which is its own can of worms, but if Copilot’s training data wasn’t stolen the way it was, I don’t think I’d have many problems with it. It looks like a fantastic tool (I’ve never used it myself) with very few downsides for society as a whole. Again, other than the way it was trained.
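The “AI as a filter” bullet above can be sketched as a two-stage pipeline: a cheap automated pass shortlists candidates, and humans review only the shortlist. The classifier below is a keyword stub standing in for an LLM call; `shortlist` and the sample data are made up for illustration:

```python
# Hypothetical sketch of the "AI as a filter" idea: run a cheap classifier
# over a large corpus to shortlist articles for human review.
def shortlist(articles, is_relevant):
    """Return only the flagged articles, keeping their original indices."""
    return [(i, text) for i, text in enumerate(articles) if is_relevant(text)]

# Stub classifier: a real pipeline would call an LLM with a prompt like
# "Does this article discuss assisted dying? Answer yes/no."
articles = [
    "Parliament debates assisted dying bill",
    "Local team wins championship",
    "Op-ed: the case against assisted dying",
]
hits = shortlist(articles, lambda t: "assisted dying" in t.lower())
print(hits)
# [(0, 'Parliament debates assisted dying bill'), (2, 'Op-ed: the case against assisted dying')]
```

A real pipeline would swap the lambda for an LLM prompt while keeping the human-review stage unchanged, so the model only narrows the haystack rather than having the final say.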

As for generative AI and pictures especially, I can’t as easily offer non-creepy uses for it, but I recommend this video, which takes a very frank look at the matter: nebula.tv/…/austinmcconnell-i-used-ai-in-a-video-… if you have access to Nebula, www.youtube.com/watch?v=iRSg6gjOOWA otherwise.
Personally I’m still undecided on this sub-topic.

Deepfakes etc. are just plain horrifying, you won’t hear me give them any wiggle room.

Don’t get me wrong - I am not saying OpenAI isn’t today rotten at the core - it is! But that doesn’t mean ALL instances of AI that could ever be are evil.

PipedLinkBot@feddit.rocks on 11 Jan 2024 20:40 next collapse

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=iRSg6gjOOWA

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

dasgoat@lemmy.world on 11 Jan 2024 22:51 collapse

‘It’s just this one that is rotten to the core’

‘Oh and this one’

‘Oh this one too huh’

‘Oh shit the other one as well’

Yeah you’re not convincing me of shit. I haven’t even mentioned the goddamn digital slavery these operations are running, or how this shit is polluting our planet so someone somewhere can get some AI Childporn? Fuck that shit.

You’re afraid to look behind the curtains because you want to ride the hypetrain. Have fun while it lasts, I hope it burns every motherfucker who thought this shit was a good idea to the motherfucking ground.

NeatNit@discuss.tchncs.de on 11 Jan 2024 23:12 collapse

You’re really cherry picking from what I said, and then you make up stuff I didn’t say. Good talk.

[deleted] on 11 Jan 2024 23:33 next collapse

.

[deleted] on 11 Jan 2024 23:33 collapse

.

McArthur@lemmy.world on 11 Jan 2024 02:48 next collapse

It feels to me like every other post on Lemmy is talking about how copyright is bad and should be changed, or how piracy is caused by fragmentation and difficulty accessing information (streaming sites). Then whenever this topic comes up, everyone completely flips. But in my mind all this would do is fragment the AI market much like streaming services (suddenly you have 10 different models with different licenses) and make it harder for non-mega-corps without infinite money to fund their own LLMs (of good quality).

Like seriously, can’t we just stay consistent and keep saying copyright is bad even in this case? It’s not really an AI problem that jobs are affected, just a capitalism problem. Throw in some good social safety nets, tax these big AI companies, and we wouldn’t even have to worry about the artists’ well-being.

HiddenLayer5@lemmy.ml on 11 Jan 2024 03:17 next collapse

I think looking at copyright in a vacuum is unhelpful because it’s only one part of the problem. IMO, the reason people are okay with piracy of name-brand media but not okay with OpenAI using human-created artwork comes from the same logic as not liking companies and capitalism in general. People don’t like the fact that AI is extracting value from individual artists to make the rich even richer while giving nothing in return to the individual artists, in the same way we object to massive and extremely profitable media companies paying their artists peanuts. It’s also extremely hypocritical that the government, and by extension “copyright”, seems to care much more that OpenAI is using name-brand media than that OpenAI is scraping the internet for independent artists’ work.

Something else to consider is that AI is also undermining copyleft licenses. We saw this with GitHub’s Copilot AI, a 100% proprietary product that was trained on all of GitHub’s user-generated code, including GPL and other copyleft-licensed code. The art equivalent would be CC-BY-SA licenses, where derivatives also have to be Creative Commons.

McArthur@lemmy.world on 11 Jan 2024 05:11 collapse

Maybe I’m optimistic, but I think your comparison to big media companies paying their artists peanuts highlights to me that the best outcome is to let AI go wild and just… provide some form of government support (I don’t care what form, that’s another discussion). Because in the end, the more stuff we can train AI on freely, the faster we automate away labour.

I think another good comparison is reparations. If you could come to me with some plan that perfectly pays out the correct amount of money to every person on earth impacted by slavery and other racist policies, to make up for what they missed out on, I’d probably be fine with it. But that is such a complex (impossible, I’d say) task that it can’t be done, so I end up being against reparations and instead just say “give everyone money; it might overcompensate some, but better that than under-compensating others”. Why bother figuring out such a complex, costly and bureaucratic way to repay artists when we could just give everyone robust social services, paid for by taxing AI products an amount equal to however much money they have removed from the workforce with automation?

rottingleaf@lemmy.zip on 11 Jan 2024 05:42 next collapse

Which jobs are going to be affected really?

One thing is for certain, the “open” web is going to become a junkyard even more than it is now.

MrSqueezles@lemm.ee on 11 Jan 2024 08:01 next collapse

Journalist: Read a press release. Write it in my own words. See some Tweets. Put them together in a page padded with my commentary. Learn from, reference, and quote copyrighted material everywhere.

AI

I do that too.

Journalists

How dare AI learn! Especially from copyrighted material!

Boiglenoight@lemmy.world on 11 Jan 2024 12:27 collapse

Journalists need to survive. AI is a tool for profit, with no need to eat, sleep, pay for kids clothes or textbooks.

NeatNit@discuss.tchncs.de on 11 Jan 2024 13:05 collapse

We’re just trying to pit Disney and OpenAI against each other

/s(?)

NigelFrobisher@aussie.zone on 11 Jan 2024 05:01 collapse

“Impossible to build evil stronghold without walls made out of human skulls” claims necromancer.