Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images (www.bloomberg.com)
from Stopthatgirl7@lemmy.world to technology@lemmy.world on 13 Apr 2024 09:23
https://lemmy.world/post/14238662

When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including content from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition due to its training data, Adobe never made clear that its model actually used images from some of these same competitors.

#technology


jimmydoreisalefty@lemmy.world on 13 Apr 2024 09:31 next collapse

Adobe said a relatively small amount — about 5% — of the images used to train its AI tool was generated by other AI platforms. “Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson said.

Adobe Stock’s library has boomed since it began formally accepting AI content in late 2022. Today, there are about 57 million images, or about 14% of the total, tagged as AI-generated images. Artists who submit AI images must specify that the work was created using the technology, though they don’t need to say which tool they used. To feed its AI training set, Adobe has also offered to pay for contributors to submit a mass amount of photos for AI training — such as images of bananas or flags.

alexdeathway@programming.dev on 13 Apr 2024 10:22 next collapse

why would they do this, doesn’t that reduce the quality of training dataset?

General_Effort@lemmy.world on 13 Apr 2024 10:52 next collapse

No.

I feel I should explain this but I got nothing. An image is an image. Whether it’s good or bad is a matter of personal preference.

hyper@lemmy.zip on 13 Apr 2024 10:57 next collapse

I’m not so sure about that… if you train an AI on images with disfigured anatomy, which it then treats as the “right” way, it will generate new images with messed-up anatomy. That creates a feedback loop, like when a mic picks up its own signal.
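
The feedback-loop worry can be sketched numerically. This is a toy statistical illustration (not any vendor’s actual training pipeline): each “generation” fits a trivial one-variable model to samples drawn only from the previous generation’s model, and the diversity (variance) of the data tends to collapse over generations.

```python
import random

random.seed(0)

def fit(samples):
    """Fit a trivial 'model': the mean and variance of the data."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def sample(mean, var, n):
    """Generate new 'images' from the fitted model."""
    return [random.gauss(mean, var ** 0.5) for _ in range(n)]

mean, var = 0.0, 1.0          # the "real data" distribution
variances = [var]
for generation in range(200): # each generation trains only on the last one's output
    mean, var = fit(sample(mean, var, 50))
    variances.append(var)

# Diversity typically collapses toward zero over generations.
print(variances[0], variances[-1])
```

This is the worst case, where no fresh real data enters the loop; mixing in mostly real data, as Adobe describes, damps the effect.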

abhibeckert@lemmy.world on 13 Apr 2024 11:07 next collapse

Midjourney doesn’t generate disfigured anatomy. You’re thinking of Stable Diffusion, which is a smaller model that can generate an image in 30 seconds on my laptop GPU. Even SD is pretty good at avoiding that, with decent hardware and larger models (that need more memory).

General_Effort@lemmy.world on 13 Apr 2024 11:17 collapse

Well, you wouldn’t train on images that you consider bad, or rather you’d use them as examples for what not to do.

Yes, you have to be careful when training a model on its own output. It already has a tendency to produce that, so it’s easy to “overshoot”, so to say. But it’s not a problem in principle. It’s also not what’s happening here. Adobe doesn’t use the same model as Midjourney.

[deleted] on 13 Apr 2024 11:14 next collapse

.

bionicjoey@lemmy.ca on 13 Apr 2024 12:06 collapse

When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.
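
A loose analogy for repeated lossy passes (not image generation itself, just the copying-error intuition): running a signal through the same lossy stage over and over progressively destroys the original detail.

```python
def box_blur_1d(signal):
    """One lossy 'pipeline' pass: a simple 3-tap box blur, edges clamped."""
    n = len(signal)
    return [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]

signal = [0.0] * 8 + [1.0] * 8   # a sharp edge: the "detail" we care about
passes = signal
for _ in range(20):
    passes = box_blur_1d(passes)

# Each pass smears the edge further; contrast drops below the original 1.0.
edge_contrast = max(passes) - min(passes)
print(edge_contrast)
```

Whether training counts as “the same pipeline” in this sense is exactly what’s disputed in the replies below.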

General_Effort@lemmy.world on 13 Apr 2024 12:36 collapse

What’s happening here is just nothing like that. There is no amplifier. Images aren’t run through a pipeline.

bionicjoey@lemmy.ca on 13 Apr 2024 13:03 collapse

The process of training is itself a pipeline

General_Effort@lemmy.world on 13 Apr 2024 13:50 collapse

Yes, but the model is the end of that pipeline. The image is not supposed to come out again. A model can “memorize” an image, but then you wouldn’t necessarily expect an amplification of artifacts. Image generators are not supposed to do lossy compression, though the tech could be used for that.

Grimy@lemmy.world on 13 Apr 2024 14:18 collapse

If an image has errors that are hard to spot with the human eye and the model gets trained on these images, those errors get amplified beyond anything that occurs naturally in real data.

It’s not a model killer but it is something to watch out for.

General_Effort@lemmy.world on 13 Apr 2024 14:24 collapse

Yes, if you want realism. But that’s just one of the things that people look for. Personal preference.

SomeGuy69@lemmy.world on 13 Apr 2024 14:39 collapse

Invisible artifacts still cause result retardation, realistic or not. Like issue with fingers, shadows, eyes, colors etc.

General_Effort@lemmy.world on 13 Apr 2024 16:52 collapse

“Retardation”? Seriously?

Even_Adder@lemmy.dbzer0.com on 13 Apr 2024 11:46 next collapse

Supplementary synthetic data increases the quality of the model.

SomeGuy69@lemmy.world on 13 Apr 2024 12:08 next collapse

Correct. To a certain extent you can feed AI data into AI; too much and you add noise, making the result worse, like a copy of a copy.
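
The “how much is too much” question can be sketched with a toy mixture. Assume (hypothetically) that synthetic images carry a small systematic bias; a model fit on the mix inherits a bias roughly proportional to the synthetic fraction, so a 5% share (Adobe’s stated figure) moves it very little while a dominant share swamps it.

```python
import random

random.seed(1)

REAL_MEAN = 0.0    # the "real data" target
SYNTH_BIAS = 0.5   # hypothetical systematic bias carried by synthetic data

def estimated_mean(synthetic_fraction, n=100_000):
    """Fit a trivial 'model' (the sample mean) on a real/synthetic mix."""
    data = [
        random.gauss(
            REAL_MEAN + (SYNTH_BIAS if random.random() < synthetic_fraction else 0.0),
            1.0,
        )
        for _ in range(n)
    ]
    return sum(data) / len(data)

for frac in (0.0, 0.05, 0.5, 0.9):
    # Bias of the fitted model grows roughly as frac * SYNTH_BIAS.
    print(frac, round(estimated_mean(frac), 3))
```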

General_Effort@lemmy.world on 13 Apr 2024 12:45 collapse

Yes, though that’s not what they’re doing. They train on images uploaded to their marketplace and, of course, some of these are AI generated.

Even_Adder@lemmy.dbzer0.com on 13 Apr 2024 12:56 collapse

It’s fine as long as it’s not the majority.

General_Effort@lemmy.world on 13 Apr 2024 13:41 collapse

It doesn’t really matter how much it is. An image is an image.

Even_Adder@lemmy.dbzer0.com on 13 Apr 2024 13:55 next collapse

I’m just talking about how synthetic images affect model quality.

General_Effort@lemmy.world on 13 Apr 2024 14:18 collapse

It doesn’t matter how the image was made. It only matters what it is like and how it is used to affect the model.

Even_Adder@lemmy.dbzer0.com on 13 Apr 2024 14:28 collapse

That’s what I’m saying. Synthetic images can help your model look better, but if you’re aiming for “realistic” output, synthetic images are fundamentally not real images, and too many will bias your model in a slightly different direction.

balder1991@lemmy.world on 13 Apr 2024 14:19 collapse

Data augmentation has been a thing for a long time, but of course if the majority of your data is synthetic your model will suck on real-world data. Though as these generative models get better and better at mimicking real-world data, and we select the results we want to use (removing the nonsense, hallucinations, artifacts etc.), we’re still feeding them “more data”.

I guess we’ll have to wait and see what effect it’ll produce on future models. I think overall the improvements on LLMs have been good, even at slow steps we’re still figuring out how to better turn them into useful tools. I don’t know how well the image generation models have improved in the last 2 years though.

General_Effort@lemmy.world on 13 Apr 2024 17:08 collapse

we’re still feeding them “more data”.

Yes, that’s one way of putting it. What gets into the Adobe stock database is already curated. They also have the sales and tracking data.

Though as these generative models get better and better at mimicking real world data

Also yes on this. It doesn’t matter if your data is synthetic but only if it’s fit for purpose. That’s especially true in this case, where the distinction between synthetic and real is so unclear. You’re already including drawings, renders, photomanips, etc. I have no idea what kind of misconception people have that they would think it matters if some piece of digital art is AI generated.

cynar@lemmy.world on 13 Apr 2024 12:08 collapse

Depends how it’s done.

Full generative images would definitely start creating a copying error type problem.

However, it’s not quite that simple. An AI system can be used to distort an image. The distorted derivatives force the learning AI to notice different things. This can vastly extend the pool of data to learn from, and so improve the end AI.

Adobe obviously decided that the copying errors were worth the extended datasets.
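
The distorted-derivatives idea is standard data augmentation. A minimal sketch on a toy grayscale “image” (all function names hypothetical): each original spawns several transformed copies, multiplying the effective dataset size without new source material.

```python
import random

random.seed(2)

def flip_horizontal(img):
    """Mirror each row of a 2D pixel grid."""
    return [row[::-1] for row in img]

def jitter_brightness(img, max_delta=10):
    """Shift all pixels by a random amount, clamped to 0..255."""
    d = random.randint(-max_delta, max_delta)
    return [[min(255, max(0, px + d)) for px in row] for row in img]

def augment(dataset, copies=3):
    """Expand a dataset with distorted derivatives of each image."""
    out = list(dataset)
    for img in dataset:
        out.append(flip_horizontal(img))
        for _ in range(copies):
            out.append(jitter_brightness(img))
    return out

images = [[[0, 64], [128, 255]]]          # one tiny 2x2 grayscale "image"
augmented = augment(images)
print(len(images), "->", len(augmented))  # 1 original becomes 5 training items
```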

seaQueue@lemmy.world on 13 Apr 2024 11:07 next collapse

Oh hey, look. The cycle of AI ingesting garbage output from another AI model has begun. This can’t possibly impact quality or reliability in any way /s

h_ramus@lemm.ee on 13 Apr 2024 13:52 next collapse

The AI centipede era has begun

balder1991@lemmy.world on 13 Apr 2024 14:14 collapse

Time to save the models we have now, ’cause they’ll never be quite the same.

Mereo@lemmy.ca on 13 Apr 2024 11:27 next collapse

  • Garbage in -> Garbage out (x2)
  • Garbage in (x2) -> Garbage out (x4)
  • Garbage in (x4) -> Garbage out (x8)
  • Garbage in (x8) -> Garbage out (x16)
Beetschnapps@lemmy.world on 13 Apr 2024 15:24 next collapse

Yea! Can you believe how long it took us to make garbage before all this?

photonic_sorcerer@lemmy.dbzer0.com on 13 Apr 2024 16:33 collapse

Y’all never heard of recycling?

CosmoNova@lemmy.world on 13 Apr 2024 11:51 next collapse

I said it around 2 years ago when the term “ethical” was first coined by media when talking about AI. Ethical in this context just means those who own data centers and made a huge effort to extract and process user data (Facebook, Google, Amazon, etc.) have all the cards. Never mind the technology being so new that users couldn’t possibly consent to it years ago. They just update their TOS and get that consent retroactively, while lawmakers are absent and happily watch their stocks go up.

Grimy@lemmy.world on 13 Apr 2024 13:45 collapse

It’s really frustrating to see people get riled up and manipulated into thinking that legislating to make anything “unethical” illegal is in their interest.

It’s a fantasy to think individual creators will get a slice of the pie and not just the data brokers. It’s also a convenient way to destroy the competition.

People are getting emotional, and that’s going to be used to build one of the grossest monopolies ever seen.

TheGiantKorean@lemmy.world on 13 Apr 2024 12:32 next collapse

AI ingesting the output of AI ingesting the output of AI…

<img alt="" src="https://lemmy.world/pictrs/image/7a62f63b-ec9b-441d-aaf0-83159184addf.jpeg">

DarkThoughts@fedia.io on 13 Apr 2024 13:53 next collapse

Isn't this causing a huge degradation in quality? It's like compressing an image over and over again. Those "AI" models can only generate things on what they know, and already have a very real issue of looking samey because of it. So if we train models on that, and then another model on the new model, and repeat this over and over again, we'd end up with less and less quality & variety for each model, no?

Drewelite@lemmynsfw.com on 13 Apr 2024 13:57 next collapse

Well that’s what human knowledge is lol. This is the AI Internet 😂 My guess is they will begin to diverge from human interest/comprehension if they don’t have enough of their training data be human created.

balder1991@lemmy.world on 13 Apr 2024 14:27 next collapse

I suppose the AI images submitted are submitted because they turned out good, so there’s still a human selection process there. It’s not as bad as automatically feeding randomly generated images into the training.

PapstJL4U@lemmy.world on 14 Apr 2024 09:57 collapse

But are they? The amount must be minuscule, as searching and selecting costs time. What impact can thoughtfully selected images have?

General_Effort@lemmy.world on 14 Apr 2024 11:26 collapse

Adobe trains on images submitted to their stock image marketplace. Deciding to submit is the first selection step. Then there is some quality control by Adobe; mainly AI powered, I’d guess. Adobe also has the sales data (again, human selection) and additional tracking data; how many people clicked a thumbnail and so on.

What people imagine here about quality loss is completely divorced from reality.

General_Effort@lemmy.world on 14 Apr 2024 12:03 collapse

That’s not what anyone would do in reality, though. In reality, when you train an AI model on AI output you get a quality increase, because the model learns to be better at doing the things it’s supposed to do, while forgetting the irrelevant. Where output looks samey, it’s because different people are chasing the same mainstream taste.

DarkThoughts@fedia.io on 14 Apr 2024 13:44 collapse

How do you get a quality increase if you by definition cut down on the variety of the generative aspects? That doesn't make any sense.

General_Effort@lemmy.world on 14 Apr 2024 15:16 collapse

Put it like this: too much variety is the biggest problem in terms of quality. People don’t want variety in terms of, say, the number of limbs or fingers. People have something specific in mind when they prompt an AI. They only want very limited and specific variability.

In a sense, limiting variety is the whole point of the AI. There is a vast number of possible images. Most of them would be simply indistinguishable noise to us. The proportion we would consider a sensible picture is tiny. We want to constrain the variety to within this tiny segment.

DarkThoughts@fedia.io on 14 Apr 2024 15:53 collapse

But "AI" generated images don't suffer from too much variety, they suffer from looking samey. It's the opposite of what you're arguing about.
Limbs aren't really the issue here since this is about Midjourney, which handles that part fairly well already.

General_Effort@lemmy.world on 14 Apr 2024 17:12 collapse

Adobe trained its AI “Firefly” on its stock library (and other images). Their library contains AI images. It’s unlikely that these are all from Midjourney.

I’m not sure what you mean by samey. As I said, people chase the same mainstream taste. If the images from one service looks samey, then they probably figure that’s what the customers want. It’s also possible that you only recognize this type of image as AI generated.

phoenixz@lemmy.ca on 13 Apr 2024 17:08 collapse

This actually leads to more conformist images with more errors over time. Basically, if an AI takes images from us, it gets loads of creativity, outputs less creativity, and more errors. So do that for a couple of rounds and you indeed end up with utter crap.

SeaJ@lemm.ee on 13 Apr 2024 12:41 next collapse

I’ve seen Multiplicity enough times to know how this turns out.

GamingChairModel@lemmy.world on 14 Apr 2024 14:02 collapse

You’ve been watching the original movie multiple times? I just watch the most recent recording of myself describing the movie, and then record a new description over that, with each successive generation.

Soundhole@lemm.ee on 13 Apr 2024 14:43 next collapse

Okay, so that cuts it. Every single AI model should be open source, as they ALL use our collective knowledge. They should be treated like libraries, as publicly owned stores of knowledge for everyone’s use.

I thought maybe Firefly was the one exception, although I suspected some kind of shenanigans. But nope. These corpos stole our collective knowledge and culture and are now ransoming it back to us for profit.

airrow@hilariouschaos.com on 13 Apr 2024 14:51 next collapse

the problem is “intellectual property” existing at all, just get rid of it entirely and make everything public domain

TheBat@lemmy.world on 13 Apr 2024 17:19 next collapse

AIuroboros

Zink@programming.dev on 14 Apr 2024 16:41 next collapse

We always thought the singularity is when our technology would take off advancing without us.

Maybe that moment when it decides it doesn’t need us will be a rapid disintegration by machine circle jerk.

uriel238@lemmy.blahaj.zone on 15 Apr 2024 07:18 collapse

They [the Golgafrincham] sent the B ship off first, but of course, the other two-thirds of the population stayed on the planet and lived full, rich and happy lives until they were all wiped out by a virulent disease contracted from a dirty telephone.

0nekoneko7@lemmy.world on 15 Apr 2024 07:23 collapse

AI daisy chain. One AI output is another AI input.