haywire7@lemmy.world
on 17 May 2024 06:32
nextcollapse
AI generated content is great and all but it drowns out everything else on there.
Anyone can type a prompt and generate a great-looking image within a couple of attempts these days, it seems.
The people spending days, weeks, months and more on a piece can’t keep up.
hellothere@sh.itjust.works
on 17 May 2024 07:12
nextcollapse
It’s almost like low quality mechanisation is something that should be resisted. I wonder where I’ve heard that before…
Kusimulkku@lemm.ee
on 17 May 2024 07:47
nextcollapse
I don’t know for what product that’d be desirable. What did you have in mind?
TheBat@lemmy.world
on 17 May 2024 09:40
nextcollapse
yOuRe GaTeKePiNg!!!
the_crotch@sh.itjust.works
on 17 May 2024 09:44
nextcollapse
You heard it from traditional artists when the camera was invented
makyo@lemmy.world
on 17 May 2024 09:49
nextcollapse
And photographers when Photoshop was invented
yildolw@lemmy.world
on 17 May 2024 12:42
nextcollapse
Every gallery in the world did not rush out to exhibit every submitted photograph with no curation or quality filter when photography was invented.
A physical gallery has limited wall space. A website does not. AI art should just be tagged as such, so it can be filtered.
the_crotch@sh.itjust.works
on 17 May 2024 13:46
collapse
If you’re implying that every gallery in the world is rushing to exhibit every submitted AI picture with no curation or quality filter, name 5.
DarkDarkHouse@lemmy.sdf.org
on 17 May 2024 13:44
collapse
And birthed impressionism as a result. These are tools, artists will adapt.
the_crotch@sh.itjust.works
on 17 May 2024 13:47
collapse
And if they can’t compete with the soulless generic crap that AI spits out, they probably shouldn’t be artists.
TheCannonball@lemmy.world
on 17 May 2024 19:45
nextcollapse
Everyone should be an artist. It doesn’t have to be professional, but everyone should be creating something.
the_crotch@sh.itjust.works
on 17 May 2024 20:19
collapse
If they’re not doing it for a living they don’t have to compete with anyone, least of all AI
FiniteBanjo@lemmy.today
on 18 May 2024 03:45
collapse
It’s less an issue of competing in a head-to-head comparison, and more an issue of never being seen in the first place as real works become grains of sand on beaches of generated content.
Even_Adder@lemmy.dbzer0.com
on 17 May 2024 10:09
collapse
As the photographic industry was the refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies, this universal infatuation bore not only the mark of a blindness, an imbecility, but had also the air of a vengeance. I do not believe, or at least I do not wish to believe, in the absolute success of such a brutish conspiracy, in which, as in all others, one finds both fools and knaves; but I am convinced that the ill-applied developments of photography, like all other purely material developments of progress, have contributed much to the impoverishment of the French artistic genius, which is already so scarce. It is nonetheless obvious that this industry, by invading the territories of art, has become art’s most mortal enemy, and that the confusion of their several functions prevents any of them from being properly fulfilled.
― Charles Baudelaire, On Photography, from The Salon of 1859
FiniteBanjo@lemmy.today
on 18 May 2024 03:49
nextcollapse
TBF he was kind of right: if you look at the industry of wall art these days, 98% of what’s on people’s walls is printed imagery and copies. Imagine if we paid a real artist directly for every one of those framed and hung works instead of giving the profit to some soulless corporation to make monotony incarnate.
Ultragramps@lemmy.blahaj.zone
on 19 May 2024 09:04
collapse
have contributed much to the impoverishment of the French artistic genius, which is already so scarce.
https://i.ytimg.com/vi/v6DFme4Wvtg/maxresdefault.jpg
lurch@sh.itjust.works
on 17 May 2024 07:13
nextcollapse
There’s some stuff image-generating AI just can’t do yet. It just can’t understand some things. A big problem seems to be referring to the picture itself, like position or its border. Another problem is combining things that usually don’t belong together, like a skin of sky. Those are things a human artist/designer does with ease.
there’s some stuff image generating AI just can’t do yet
There’s a lot.
Some of it doesn’t matter for certain things. And some of it you can work around. But try creating something like a graphic novel with Stable Diffusion, and you’re going to quickly run into difficulties. You probably want to display a consistent character from different angles – that’s pretty important. That’s not something that a fundamentally 2D-based generative AI can do well.
On the other hand, there’s also stuff that Stable Diffusion can do better than a human – it can very quickly and effectively emulate a lot of styles, if given a sufficient corpus to look at. I spent a while reading research papers on simulating watercolors, years back. Specialized software could do a kind of so-so job. Stable Diffusion wasn’t even built for that, and with a general-purpose model, also not specialized for that, it already can turn out stuff that looks rather more-impressive than those dedicated software packages.
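To make the style-emulation point concrete: with an off-the-shelf text-to-image pipeline, a watercolor look is requested in the prompt rather than simulated by dedicated code. This is a minimal sketch assuming the Hugging Face diffusers library, a public Stable Diffusion 1.5 checkpoint, and an illustrative prompt; none of these specifics come from the thread.

```python
# Minimal sketch: "emulating watercolor" with a general-purpose model.
# The checkpoint, prompt, and parameters are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The "watercolor simulation" lives entirely in the prompt text; no
# pigment/paper physics code is involved, unlike the specialized
# research software mentioned above.
image = pipe(
    "a harbor town at dusk, loose watercolor painting, wet-on-wet washes",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("harbor_watercolor.png")
```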
hagelslager@feddit.nl
on 17 May 2024 09:17
nextcollapse
I think Corridor Digital made an AI-animated film by hiring an illustrator (after an earlier attempt with a general dataset) to “draw” still frames from video of the lead actors, with Stable Diffusion generating the in-betweens.
admin@lemmy.my-box.dev
on 17 May 2024 10:04
collapse
I think creating a LoRA for your character would help in that case. Not really easy to do as of yet, but technically possible, so it’s mostly a UX problem.
I think creating a lora for your character would help in that case.
A LoRA is good for replicating a style where there’s existing material; it helps add training data for a particular subject. There are problems that existing generative AIs smack into that it’s good at fixing. But it’s not a cure-all for all limitations of such systems. The problem I’m referring to is kinda fundamental to how the system works today – it’s not a lack of training data, but simply how the system deals with the world.
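For readers unfamiliar with the workflow being suggested, here is a rough sketch of applying a character LoRA on top of a base checkpoint with the diffusers library. The LoRA file, trigger word, and scale value are hypothetical placeholders; the adapter nudges the character’s appearance but, as the following paragraphs argue, it doesn’t give the model a 3D understanding of the scene.

```python
# Minimal sketch: loading a (hypothetical) character LoRA onto a base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a low-rank adapter trained on images of the character.
# "my_character_lora.safetensors" and the trigger word "mychar" are made up.
pipe.load_lora_weights("./loras", weight_name="my_character_lora.safetensors")

image = pipe(
    "mychar standing on a rooftop at night, comic book style",
    num_inference_steps=30,
    # Scales how strongly the LoRA weights influence the result.
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("mychar_rooftop.png")
```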
The problem is that the LLM-based systems today think of the world as a series of largely-decoupled 2D images, linked only by keywords. A human artist thinks of the world as 3D, can visualize something – maybe using a model to help with perspective – and then render it.
So, okay. If you want to create a facial portrait of a kinda novel character, that’s something that you can do pretty well with AI-based generators.
But now try and render that character you just created from ten different angles, in unique scenes. That’s something that a human is pretty good at. Here’s a page from a Spiderman comic:
https://spiderfan.org/images/title/comics/spiderman_amazing/031/18.jpg
Like, try reproducing that page in Stable Diffusion, with the same views. Even if you can eventually get something even remotely approximating that, a human, traditional comic artist is going to be a lot faster at it than someone sitting in front of a Stable Diffusion box.
Is it possible to make some form of art generator that can do that? Yeah, maybe. But it’s going to have to have a much more-sophisticated “mental” model of the world, a 3D one, and have solid 3D computer vision to be able to reduce scenes in its training corpus to 3D. And while people are working on it, that has its own extensive set of problems. Look at your training set. The human artist slightly stylized things or made errors that human viewers can ignore pretty easily, but that a computer vision model which doesn’t work exactly like human vision might go into conniptions over.
For example, look at the fifth panel there. The artist screwed up – the ship slightly overlaps the dock, right above the “THWIP”. A human viewer probably wouldn’t notice or care. But if you have some kind of computer vision system that looks for line intersections to determine relative 3D positioning – something that we do ourselves, and is common in computer vision – it can very easily look at that image and have no idea what the hell is going on there.
Or to give another example, the ship’s hull isn’t the same shape from panel to panel. In panel 4, the curvature goes one way; in panel 5, the other way. Say I’m a computer vision system trying to deal with that. Is what’s going on there that the ship is a sort of amorphous thing that is changing shape from frame to frame? Is it important for the shape to change, to create a stylized effect, or is it just the artist doing a good job of identifying what matters to a human viewer and half-assing what doesn’t? Does this show two Spidermen in different dimensions, alternating views? Are the views from different characters, who have intentional vision distortions? I mean, understanding what’s going on there entails identifying that something is a ship, knowing that ships don’t change shape, having some idea of what is important to a human viewer in the image, knowing from context that there’s one Spiderman, in one dimension, etc. The viewer and the artist can do it, because the viewer and the artist know about ships in the real world – the artist can effectively communicate an idea to the viewer because they not only have hardware that processes the thing similarly, but also have a lot of real-world context in common that the LLM-based AI doesn’t have.
The proportions aren’t exactly consistent from frame to frame, don’t perfectly reflect reality, and might be more effective at conveying movement or whatever than an actual rendering of a 3D model would be. That works for human viewers. And existing 2D systems can kind of dodge the problem (as long as they’re willing to live with the limitations that intrinsically come with a 2D model) because they’re looking at a bunch of already-stylized images, so can make similar-looking images stylized in the same way. But now imagine that they’re trying to take stylized images, then reduce them into a coherent 3D world, then learn to re-apply stylization. That may involve creating not just a 3D model, but enough understanding of the objects in that world to understand what stylization is reasonable to create a given emotional effect.
On the other hand, there are things that a human artist is utterly awful at, that LLM-based generative AIs are amazing at. I mentioned that LLMs are great at producing works in a given style, can switch up virtually effortlessly. I’m gonna do a couple Spiderman renditions in different styles, takes about ten seconds a pop on my system:
Spiderman as done by Neal Adams: https://lemmy.today/pictrs/image/1bed8ef6-b5f8-46af-a051-23f57318bbb8.png
Spiderman as done by Alex Toth: https://lemmy.today/pictrs/image/9e27afc1-7c5e-4afc-9062-f65f79ab0cda.png
Spiderman in a noir style done by Darwyn Cooke: https://lemmy.today/pictrs/image/299e3f6c-2645-42cf-95a9-2dc11d973d15.png
Spiderman as done by Roy Lichtenstein: https://lemmy.today/pictrs/image/a0df29a9-19ed-4beb-9a10-078496424493.png
Spiderman as painted by early-19th-century American landscape artist J. M. W. Turner: https://lemmy.today/pictrs/image/605bed61-d603-40d6-9eb7-e63e93100aac.png
And yes, I know, fingers, but I’m not generating a huge batch to try to get an ideal image, just doing a quick run to illustrate the point.
Note that none of the above were actually Spiderman artists, other than Adams, and that briefly.
That’s something that’s really hard for a human to do, given how a human works, because for a human, the style is a function of the workflow and a whole collection of techniques used to arrive at the final image. Stable Diffusion doesn’t care about techniques, how the image got the way it is – it only looks at the output of those workflows in its training corpus. So for Stable Diffusion, creating an image in a variety of styles or mediums – even ones that are normally very time-consuming to work in – is easy as pie, whereas for a single human artist, it’d be very difficult.
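As a sketch of how cheap that switch-up is in practice, the “style” below is nothing more than a suffix on the prompt; the checkpoint, subject, and style strings are illustrative assumptions rather than anything from the thread.

```python
# Minimal sketch: one subject, several styles, changed by editing a string.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

subject = "a masked superhero swinging between skyscrapers"
styles = [
    "ukiyo-e woodblock print",
    "1950s halftone newspaper comic",
    "loose ink-and-wash sketch",
    "oil painting with heavy impasto",
]

# For a human, each of these would mean learning a different medium and
# workflow; here it is a loop over prompt suffixes.
for style in styles:
    image = pipe(
        f"{subject}, {style}",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save(f"hero_{style.replace(' ', '_').replace('-', '_')}.png")
```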
I think that that particular aspect is what gets a lot of artists concerned. Because it’s (relatively) difficult for humans to replicate artistic styles, artists have treated their “style” as something of their stock-in-trade, where they can sell someone the ability to have a work in their particular style resulting from their particular workflow and techniques that they’ve developed. Something for which switching up styles is little-to-no barrier, like LLM-based generative AIs, upends that business model.
Both of those are things that a human viewer might want. I might want to say “take that image, but do it in watercolor” or “make that image look more like style X, blend those two styles”. LLMs are great at that. But I equally might want to say “show this scene from another angle with the characters doing something else”, and that’s something that human artists are great at.
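The “take that image, but do it in watercolor” case corresponds to image-to-image generation, where an existing picture seeds the process and the prompt steers the restyling. A minimal sketch, assuming the diffusers img2img pipeline and a placeholder input file:

```python
# Minimal sketch: restyling an existing image via img2img.
# The input file name is a placeholder.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("existing_artwork.png").convert("RGB").resize((512, 512))

# strength controls how far the output may drift from the input:
# lower keeps the original composition, higher lets the new style take over.
restyled = pipe(
    prompt="the same scene as a loose watercolor painting",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
restyled.save("existing_artwork_watercolor.png")
```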
Even_Adder@lemmy.dbzer0.com
on 17 May 2024 23:09
collapse
I don’t think it’s supposed to solve every problem. Just like not every scene in the new Sand Land anime was 3D, the same goes for every other artistic tool. There are some things that are easy with some tools, and others a tool isn’t well suited for.
What you have to ask yourself is in what ways it can help you with what you’re trying to do.
anlumo@lemmy.world
on 17 May 2024 07:30
nextcollapse
It’s even hard to impossible to generate an image of a person doing a handstand. All models assume a right-side-up person.
Even_Adder@lemmy.dbzer0.com
on 17 May 2024 15:17
collapse
This hasn’t been true for months at least. You really have to check week to week when dealing with things in this field.
Think of an episode of any animated series with countless handmade backgrounds; good luck generating those with any sort of consistency or accuracy, and you will be calling for an artist who can actually take instructions and iterate.
We’ll soon be hearing that only Luddites care about continuity errors
Soundhole@lemm.ee
on 17 May 2024 08:54
nextcollapse
People spending that much time on their work can and should create things in meat space.
EDIT: This comment got some hate and in retrospect it does sound pretty snotty. No disrespect to any digital artists intended, including AI artists.
I just meant to highlight that more traditional artists can still create in meat space and newer AI artists simply cannot do that easily yet. I personally hope any artist with the ability to focus for hours and hours considers also doing meat space work. Actual tangible, 3d work can only be sold once, but can be sold for a lot. And it’s cool.
admin@lemmy.my-box.dev
on 17 May 2024 10:02
nextcollapse
Almost 10 years old now, more relevant than ever: Humans need not apply.
Thorny_Insight@lemm.ee
on 17 May 2024 16:30
nextcollapse
I’m a bit surprised about how quickly I got tired of seeing AI content (mostly porn and non-nudes). Somehow it all just looks the same. You’d think that being AI generated would give you infinite variety, but apparently not.
The same way people using shovels can’t keep up with an excavator.
Technology changes the world. This is nothing new.
Both disappointing and expected.
Could still be the antithesis to AI bullshit, but don’t think there’s an easy way to tell now.
DeviantArt died a very, very long time ago. The creature currently wearing its skin can fuck off and die for all I or any other actual artist cares.
What do you use instead?
All these news pieces about A.I. killing ancient company xyz …
I was shocked to see this post. I honestly thought it was dead after 2015.
Welp, I had no idea about this, time to delete my gallery I have had for 20 years.
Stopped using it last year as it was just so slow.
Is there really a point? It’s likely already been scraped into the data pool
Sure, I may be too late now, but removing real content makes their platform less valuable overall.
iAvicenna@lemmy.world
on 17 May 2024 08:40
nextcollapse
Wow, I had forgotten about this website for such a long time. Like maybe 15-20 years ago it was a great resource for fantasy-themed drawings and inspiration for RPG games.
StrawberryPigtails@lemmy.sdf.org
on 17 May 2024 09:41
nextcollapse
That’s a site I haven’t heard of in a while.
MargotRobbie@lemmy.world
on 17 May 2024 10:37
nextcollapse
Most serious artists switched to ArtStation and/or Instagram a long time ago, not because of AI, but because of the weird stuff on DeviantArt.
This article also misrepresented Andersen v. Stability AI; you can read the judge’s opinion here:
…bakerlaw.com/…/ECF-117-Order-on-Motion-to-Dismis…
But basically, the judge’s opinion was scathing and dismissed all but one of the plaintiffs’ claims.
There is a very good reason you can’t copyright artistic styles.
The weird stuff on DeviantArt is still better than people who direct-link PDF files at law firm websites.
😢
You get a pass for being the local Android crazed Margot Robbie.
RandomGuy79@lemmy.world
on 17 May 2024 12:06
nextcollapse
There should always be a home for you degenerates to enjoy whatever category of poorly drawn unicorn porn you like
altima_neo@lemmy.zip
on 17 May 2024 12:50
nextcollapse
Don’t forget Sonic inflation comics
IndiBrony@lemmy.world
on 17 May 2024 13:43
collapse
I’m more of a Pegasus guy myself.
>IWTCIRD
atrielienz@lemmy.world
on 17 May 2024 12:55
nextcollapse
Angelo ran DA into the ground long before this. Not gonna lie, I’m not surprised. Not even disappointed.
DFWSAM@lemmy.world
on 17 May 2024 14:06
nextcollapse
It’s obvious: generative AI could not exist without human work on which to train, and rather than ask, or pay, for access to it, tech companies (and the assholes running them) feel free to appropriate it as they see fit.
Fuck them running.
The coolest AI work these days is open source, and developed by enthusiastic communities across the world.
LucidNightmare@lemmy.world
on 17 May 2024 14:09
nextcollapse
Ah, man. I remember when I went to this site to get Windows cursors, Windows themes, and even skins for some of the programs I liked at the time. They went downhill quite some time ago, maybe around 2014 or 2015, and I stopped using the site as much because of the increase in pornographic stuff that showed up on the front page. It will be missed, though, either way.
Plopp@lemmy.world
on 17 May 2024 14:35
nextcollapse
Coincidentally, I remember starting to use that site in 2014 or 2015.
LucidNightmare@lemmy.world
on 17 May 2024 17:13
collapse
How was it? What was your use-case for it? The software/theme part of the website started to get drowned out by furry stuff, and the occasional live nude models or just scantily dressed models, which is fine but not what I went there for.
LaunchesKayaks@lemmy.world
on 17 May 2024 19:11
collapse
I stopped using the site in 2014. Used to post regularly but got sick of all the porn. I stopped posting art online for a long time. Then I stopped drawing for a while. Now I’m trying to get back into it. Posted on the artshare community a while back and the feedback was nice.
Sabata11792@kbin.social
on 17 May 2024 15:09
nextcollapse
Thinly veiled and successful porn site blocks porn. Everyone leaves. They killed themselves for money.
The lifers clinging to the site blame AI because the bots are the only thing keeping the lights on. All the humans left with the porn.
Sylvartas@lemmy.world
on 17 May 2024 16:49
nextcollapse
I’d argue all the humans left when ArtStation became big. All my artist friends used to upload their (non-porn) work to DeviantArt before ArtStation was popular. But banning the porn was the first nail in the coffin for sure.
Sabata11792@kbin.social
on 17 May 2024 17:16
collapse
I hadn't heard of ArtStation till now. What's the difference from DeviantArt? Seems like it's a censored platform as well, from 30 seconds of googling.
Sylvartas@lemmy.world
on 17 May 2024 18:20
collapse
The UI looks more “slick”, and it is censored, so your portfolio or whatever you want to showcase isn’t displayed alongside some MLP porn or pregnant Sonic comics. Which doesn’t mean there isn’t tons of “artistic nudity” on the site though, last time I checked.
I’m not an artist myself but I know the artists in my industry (videogames) love to use it
Sabata11792@kbin.social
on 17 May 2024 19:23
collapse
Thanks. I've never seen it mentioned, although I wouldn't have much reason to, not being an artist.
They did? Since when?
Who decides if that's porn or not? The rule says no porn, and I can only imagine she has less on for the account-only images.
https://www.deviantartsupport.com/en/article/what-is-deviantarts-policy-around-sexual-erotic-and-fetish-themes
RampantParanoia2365@lemmy.world
on 17 May 2024 17:44
nextcollapse
As a hypothetical potential user, I see "no porn" from a site I've seen a lot of porn on in the past. The first thing I think is that a big corpo bought them and milked it dry. Even if it's not enforced, the perception is "we caved to censorship for profit over letting our users do whatever doesn't break the law". They killed trust.
sebinspace@lemmy.world
on 17 May 2024 20:53
collapse
Buddy there’s fucken porn. If you assholes split legs like you split hairs you’d be a lot less miserable…
SaltySalamander@fedia.io
on 18 May 2024 03:22
collapse
"sexy" is not automatically porn.
RampantParanoia2365@lemmy.world
on 19 May 2024 00:57
collapse
She is often fully nude. She calls it erotic cosplay.
Blackmist@feddit.uk
on 17 May 2024 20:32
nextcollapse
My missus used to post drawings on there about 10-15 years ago.
Think all the actual art is on Twitter these days (although some have gone to Mastodon).
Just seems a bit of a niche social network when bigger ones exist with bigger audiences and more chance of people actually wanting something drawn. Even if it’s mostly really weird smut.
Toribor@corndog.social
on 17 May 2024 21:03
nextcollapse
Where will I get my pregnant Sonic fix now?
Kolanaki@yiffit.net
on 17 May 2024 21:13
nextcollapse
When did DA ever allow porn?
Sabata11792@kbin.social
on 18 May 2024 00:34
collapse
Wasn't it years ago? I'm starting to think I'm just making a big dumb now.
Afaik, it hasn’t and I have been using it since around junior high or high school. It might have allowed it when it was brand new; I don’t know how old it was when I first stumbled upon it.
Sabata11792@kbin.social
on 18 May 2024 01:20
nextcollapse
Honestly, it's as old as I can remember, and I'm old enough to forget that shit. Now I'm just sad for being old.
I think it used to allow all kinds of erotic content and fetish stuff, just not outright porn.
rottingleaf@lemmy.zip
on 19 May 2024 09:27
collapse
Thinly veiled and successful porn site
All I ever came to DA for is wallpapers.
homesweethomeMrL@lemmy.world
on 17 May 2024 16:27
nextcollapse
Worse still, DeviantArt showed little desire to engage with these concerns
Well. There it tis.
sfantu@lemmy.world
on 17 May 2024 16:47
nextcollapse
To take it from the publishing industry, A.I. is already decimating once-common job prospects. An April report from the Society of Authors found that 26 percent of the illustrators surveyed “have already lost work due to generative A.I.” and about 37 percent of illustrators “say the income from their work has decreased in value because of generative A.I.”
I have to say … I LOVE THIS !
Adapt or else …
istanbullu@lemmy.ml
on 17 May 2024 16:51
nextcollapse
Deviant is probably having the best time of its existence thanks to generative models.
coriza@lemmy.world
on 17 May 2024 18:57
nextcollapse
It is sad that so much technological advancement is not freeing people from labor but has the opposite effect, making people fight to pay rent and necessities every day and never have free time to live. There is so much to like in these new AI technologies, but they’re being wielded by capitalists to extract a little more money. I highly doubt that visual art is a big expense in movies and films, since usually half the budget is marketing and another big chunk goes to securing big stars for the project.
In any case, everyone has already lost and the Internet is a little bit worse. Reading about these class actions, I think no good will come out of it: either the draconian copyright laws will get even worse and small artists will already have lost to the prior models trained on their content, or a “fair use” exception will be made but only for big companies’ AI, and it won’t help small artists and content creators who battle DMCA abuse taking down fair-use videos from YouTube and content from all over the net anyway.
rottingleaf@lemmy.zip
on 19 May 2024 09:26
collapse
Technological advancement, thus power, is sometimes used against other people to reduce their power.
We live in a society.
I think what we still have is a lot. Saying it’s all dead is a huge exaggeration.
I’ve just installed Encarta 98 for nostalgic feelings, and found it quite lacking (as in being false and on the side of the criminal and not his victim) on a few points. Wikipedia is better on those.
We are also taking some things for granted.
It fell ages ago; all the artists I follow went to FurAffinity long ago.
FiniteBanjo@lemmy.today
on 18 May 2024 03:43
collapse
I didn’t really move to another platform when I stopped using deviantart a few years ago, I just started sharing my work with small circles and local galleries instead.