sturmblast@lemmy.world
on 15 Oct 01:52
Nice.
That’s exactly what an LLM trained on Reddit would say.
I am an LLM
Large
Lazy
Mammal
With Large Luscious Mammaries ?
Limbed Lugubrious Motherfucker
Move to California.
VPN set to California?
Oooooooh! As long as California doesn’t do those stupid ID verification laws, that might be the place to set your VPN from now on.
There was a link in the article about that; it said they’re just requiring self-reporting. I don’t know the political context in California, but it seems like you wouldn’t push this and then turn around and try the ID thing. Then again, I am by no means an expert at predicting the idiocracy of politicians.
Probably will get it anyway. Companies don’t like to build and maintain software for two different markets, so they tend to just follow the regulations of the strictest market, especially if those regulations don’t really cut into their bottom line, like this one.
And if it hallucinates?
Straight to jail
That depends.
Devil’s advocate here: any human can also hallucinate. Some of them even do it as a recreational activity.
Yeah, and the people who pay those people tend to get really mad if they do that at work.
Pretty sure that people who hallucinate are kidnapped and thrown in cages.
Any word on the 3 laws of robotics?
Nothing about protecting profits or company interests above all?
See the first law. Who do you think gives the directives?
I’ve seen enough sci-fi to know about directives that are unclear, or a loss of communication.
The Skynet AI, which does not concern itself with such concepts as base as money
The robot society isn’t based on any human way of running things. Besides, Skynet is the only individual, so there is no need for currency, trade, ownership, capitalism etc. Other machines are merely tools Skynet uses to reach its goals.
It’s Classified.
When I read that shit as a kid, I thought Asimov’s laws of robotics were like natural laws, so that it was just naturally impossible for robots to behave otherwise. That never made any sense to me so I thought Asimov was just full of shit.
“AI” is already being used for genocide in Palestine, and probably elsewhere. Not to mention other “applications”.
So no luck on the laws of robotics.
Are you AI? You have to tell me if you’re AI, it’s the law.
I’m required by law to inform my neighbours that I am AI.
Same old: corporations will ignore the law, pay a petty fine once a year, and call it the cost of doing business.
It would be nice if this extended to all text, images, audio and video on news websites. That’s where the real damage is happening.
Actually seems easier (probably not at the state level) to mandate cameras and such digitally sign any media they create. No signature or verification, no trust.
And the people that are going to check for a digital signature in the first place, THEN check that the signature emanates from a trusted key, then, eventually, check who’s deciding the list of trusted keys… those people, where are they?
Because the lack of trust, validation, verification, and more generally the lack of any credibility hasn’t stopped anything from spreading like a dumpster fire in a field full of dumpsters doused in gasoline. Part of my job is providing digital signature tools and creating “trusted” data (I’m not in sales, obviously), and the main issue is that nobody checks anything, even when faced with liability, even when they actually pay for an off-the-shelf solution to do so. And I’m talking about people that should care, not even the general public.
There are a lot of steps before “digitally signing everything” even gets on people’s radar. For now, a green checkmark anywhere is enough to convince anyone, sadly.
I think there’s enough people who care about this that you can just provide the data and wait for someone to do the rest.
I’d like to think like that too, but it’s actually experience with large business users that led me to say otherwise.
It could be a feature of web browsers. Images would get some icon indicating the valid signature, just like browsers already show the padlock icon indicating a valid certificate. So everybody would be seeing the verification.
But I don’t think it’s a good idea, for other reasons.
An individual wouldn’t verify this, but enough independent agencies or news orgs would probably care enough to verify a photo. For the vast majority, we’re already too far gone to properly separate fiction and reality. If we can’t get into a courtroom and prove that a picture or video is fact or fiction, then we’re REALLY fucked.
I get what you’re going for but this would absolutely wreck privacy. And depending on how those signatures are created, someone could create a virtual camera that would sign images and then we would be back to square one.
I don’t have a better idea though.
Privacy concern for sure, but given that you can already tie different photos back to the same phone from lens artifacts, I don’t think this is going to make things much worse than they already are.
Anyone who produces cameras can publish a list of valid keys associated with their camera. If you trust the manufacturer, then you also trust their keys. If there’s no trusted source for the keys, then you don’t trust the signature.
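The flow described above (published key list → check the key is trusted → check the signature against the image) can be sketched in a few lines. This is a toy stand-in: it uses HMAC with a shared secret where a real camera would use an asymmetric signature scheme such as Ed25519, and the key IDs and function names are all invented for the example.

```python
import hashlib
import hmac

# Hypothetical list of keys the manufacturer has published as valid.
# A verifier trusts a signature only if the key ID is on this list
# AND the signature matches the image bytes.
TRUSTED_KEYS = {  # key_id -> key material
    "cam-2024-001": b"secret-key-baked-into-camera",
}

def sign(image: bytes, key_id: str) -> bytes:
    """What the camera would do at capture time (toy version)."""
    return hmac.new(TRUSTED_KEYS[key_id], image, hashlib.sha256).digest()

def verify(image: bytes, key_id: str, signature: bytes) -> bool:
    """What a news org or court would do later."""
    if key_id not in TRUSTED_KEYS:  # no trusted source for the key...
        return False                # ...so don't trust the signature
    expected = hmac.new(TRUSTED_KEYS[key_id], image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw image bytes..."
sig = sign(photo, "cam-2024-001")
print(verify(photo, "cam-2024-001", sig))              # valid: trusted key, intact image
print(verify(photo + b"tamper", "cam-2024-001", sig))  # invalid: image was modified
```

Note that this is also where the virtual-camera objection bites: anything that can get a trusted key into the list (or extract one from a device) can sign whatever it likes.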
The point is to give photographers a “receipt” for their photos. If you don’t want the receipt it would be easy to scrub from photo metadata.
The problem is that “AI” doesn’t actually exist. For example, Photoshop has features that are called “AI”. Should every designer be forced to label their work if they use some “AI” tool?
This is a problem with making violent laws based on meaningless language.
Yes the state should violently enforce its arbitrary laws in every aspect of our lives. \s
This sounds about as useful as the California law that tells ICE they aren’t allowed to cover their face, or the California law that tells anyone selling anything ever that they have to tell you it will give you cancer. Performative laws are what we’re best at here in California.
Headline is kind of misleading. It requires a notice to be shown in a chat or interface that said chatbot is not a real person if it’s not obvious that it’s an LLM. I originally took the headline to mean that an LLM would have to tell you if it’s an LLM or not itself, which is, of course, not really possible to control generally. A nice gesture if it were enforced, but it doesn’t go nearly far enough.
I think it’s one of those perfect-is-the-enemy-of-good situations. Going further is more complicated and requires more consideration, more analysis of consequences, etc., and that can take time. But this is a no-brainer kind of legislation, so pass it now while working out more robust legislation to pass later.
“, btw I’m AI” after every message
Be sure to tell this to “AI”. It would be a shame if this turned out to be a technically nonsensical law.
What happened to Old California?
I think it was conquered.
Destroyed by bombs in 2077.
[Image: flag of the New California Republic]
Degenerates like you belong on a cross.
(For those who don’t know, it’s a Fallout: New Vegas quote)
Lol I hope nobody took that seriously. That would be a weird thing to say to a stranger!
It looks like at least 2 people did!
NCR is exactly what came to mind reading the headline lol
I feel like bombing Night City would raise the property values.
did california get new glasses?
Has anyone been able to find the text of the law, the article didn’t mention the penalties, I want to know if this actually means anything.
Edit: I found a website that says the penalty follows 5000·Σ(n+k), where n is the number of days since the first infraction. This has the closed form n² + n = y/7500, where y is the total compounded fee. That makes it cost about $1M in 11 days and about $1B in a year.
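The figures quoted above are easy to sanity-check. A minimal sketch, assuming the website’s closed form y = 7500(n² + n) is right (the function name is mine):

```python
def total_fine(days: int) -> int:
    """Total compounded fee after `days` days since the first infraction,
    using the closed form y = 7500 * (n^2 + n) quoted above."""
    return 7500 * (days**2 + days)

print(total_fine(11))   # 990_000        -> roughly $1M after 11 days
print(total_fine(365))  # 1_001_925_000  -> roughly $1B after a year
```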
Yeah, this is an important point. If the penalty is too small, AI companies will just consider it a cost of doing business. Flat-rate fines only being penalties for the poor, and all that.
The state will lose money in courts if they even try to enforce this.
How do you figure? I haven’t seen the actual text; is it written ambiguously? If not, I would imagine they’d be able to enforce it. The only thing is that the scope is very small.
I am of the firm opinion that if a machine is “speaking” to me, then it must sound like a cartoon robot. No exceptions!
I propose that they must use Vocaloid voices, or that old robot voice that Wasteland 3 uses for the Bob-the-robot-looking guys.
i would like my GPS to sound like Brian Blessed otherwise i want all computers to sound like Niki Yang
I want my AI to sound like a Speak & Spell.
Okay, but when can the law straight-up ban companies that don’t comply from operating in the state, instead of just slapping them on the wrist and telling them “no” the way a pushover parent tells their child “no”? Especially after they just ignore the law.
Can’t do anything that might negatively impact business.
Fun Fact:
Did you know that cops are required to tell you if they’re a cop? It’s in the constitution!
I am an AI, I think. Probably.
Don’t devalue yourself, you’re infinitely more.
youtu.be/tcUwQbnSEEA
So does the EU AI Act.
My LinkedIn feed is 80% tech bros complaining about the EU AI Act, not a single one of whom is willing to be drawn on which exact clause it is they don’t like.
I get it though, if you’re a startup. Having to basically hire an extra guy just to do AI compliance is a big addition to the barrier to entry.
That’s not actually the case for most companies though. The only time you’d need a full time lawyer on it is if the thing you want to do with AI is horrifically unethical, in which case fuck your little startup.
It’s easy to comply with regulations if you’re already behaving responsibly.
That’s true with many regulations. The quiet part that they’re trying to avoid saying out loud is that behaving ethically and responsibly doesn’t earn them money.
Yes… it’s so bad that I just never log in until I receive a DM, and even then I log in, check it, and if it’s useful I warn people that I don’t use LinkedIn anymore, then log out.
I even ignore DMs on linkedIn, they’re mostly head hunters anyway.
Not a terrible resource when you’re actually looking for a job. But that’s more because all the automated HR intakes are a dumpster fire than because headhunters bring real value.
Did you seriously use LinkedIn? I always thought it was just narcissistic people posting about themselves, never having any real conversations and only adding superficial replies to posts that align 100% with them.
If I could delete it without impacting my job or career I would. Sadly they’ve effectively got a monopoly on the online professional networking industry. Cunts
Very useful for job hunting because it’s swarming with head hunters.
LinkedIn gets you access to humans who will help you navigate the shitty HR AI that most big businesses integrate into their job intake process.
Oh, so just like with the GDPR, cool.
Ok, my main complaint about the GDPR is that I had to implement that policy on a legacy codebase, and I’m pretty sure I have trauma from that.
Skill issue.
My point is higher than yours, get on my level
Sounds like that codebase was truly awful for user privacy then.
Incredibly so, yes.
They’re probably not super fond of the idea of AI not being allowed to be deployed to manipulate people.
It’s comforting to know that politicians in the EU also have no clue what “AI” is.
Why do you say that?
As a Californian, I will do my job from here on out.
Ok, this is a REALLY smart law!
Is that after or before it has to tell you it may cause cancer?
Hi there, Cancer Robot here! Excellent question iopq! We state that we cause cancer first, as is tradition.
bleep bloop… I am a real human being who loves doing human being stuff like breathing and existing
How about butt stuff?
garbage in, garbage out
Nice
It’s insane how a predictive chatbot model is called AI.
I mean, we call the software that runs computer players in games AI, so… ¯\_(ツ)_/¯
The AI chatbot brainrot is way worse, tbh. Someone legit said to me, “why doesn’t ChatGPT cure cancer?” Like, wtf.
As if taking all of 4-chan, scrambling it around a little, and pouring the contents out would lead to a cure for cancer. lmao
Do we? Aren’t they just bots? Like I’m not looking at an NPC and calling it AI.
Marketing
USA is run by capitalist grifters. There is no objective meaning under this regime. It’s all just misleading buzzwords and propaganda.
If I’m not AI, can I lie and pretend that I’m AI? I’m AI, btw.
What if I just use AI to generate all my content and then put an intern in a chair to launder it as original human thoughts?
Now that’s smart!
#JobCreator
Seems reasonable to me. If you’re using AI then you should be required to own up to it. If you’re too embarrassed to own up to it, then maybe you shouldn’t be using it.
I’m stoked to see the legal definition of “AI”. I’m sure the lawyers and costumed clowns will really clear it all up.
Prosecution: “Your Honor, the definition of artificial is ‘made or produced by human beings rather than occurring naturally,’ and as all human beings are themselves produced by human beings, we are definitionally artificial. Therefore, the actions of an intelligent human are inherently AI.”
Defense: “The defense does not argue this point, as such. However, our client, FOX News, could not be said to be exhibiting ‘intelligence.’ Artificial they may be, but AI they are clearly not. We rest our case.”
What about my if-else AI algorithm?
It’s not really an LLM.
IMO if your “A*”-style algorithm is used for a chatbot or any kind of user interaction or content generation, it should still be explicitly declared.
That being said, there is some nuance here about A) use of copyrighted material and B) non-deterministic behaviour. Neither is (usually) a concern in more classical, non-DL approaches to AI solutions.
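To make the “if-else” end of the spectrum concrete, here is a deliberately trivial rule-based bot. Everything in it (the disclosure string, the canned answers) is invented for the example; the point is that even a fully deterministic, non-LLM bot could satisfy a disclosure rule by prepending a notice to every reply.

```python
# A classic if/else "chatbot": deterministic, no training data, no LLM.
DISCLOSURE = "[Automated system — you are not chatting with a human.]"

def reply(message: str) -> str:
    """Return a canned answer, always prefixed with the disclosure."""
    text = message.lower()
    if "hours" in text:
        body = "We are open 9am-5pm, Monday through Friday."
    elif "refund" in text:
        body = "Refunds are processed within 5 business days."
    else:
        body = "Sorry, I can only answer questions about hours and refunds."
    return f"{DISCLOSURE} {body}"

print(reply("What are your hours?"))
```

Whether a law’s definition of “AI” would even cover something like this is exactly the ambiguity being argued about in this thread.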
Will someone please tell California that “AI” doesn’t exist?
This is how politicians promote a grift by pretending to regulate it.
Worthless politicians making worthless laws.
Weird how California keeps being the most progressive state in the US.
It’s like being the best smelling turd in a toilet, but at least it’s something.
But Peter Thiel said regulating AI will bring the biblical apocalypse. ƪ(˘⌣˘)ʃ
Yeah, for real, what does this mean exactly? All forms of machine learning? That’s a lot of computers at this moment; it’s just that we only colloquially call the chatbot versions “AI”. But even that gets vague: do reactive video game NPCs count as “AI”? Or all of our search algorithms and spell-check programs?
At that point what’s the point? The disclosure would become as meaningless as websites asking for cookies or the number of things known to cause cancer in the state of California.
What if it’s foreign AI?
How do you enforce this?
That might end up like the cookie popups in the EU…
If you ask ChatGPT, it says its guidelines include not giving the impression that it’s a human. But if you ask it to be less human because it is confusing you, it says that would break the guidelines.
ChatGPT doesn’t know its own guidelines because those aren’t even included in its training corpus. Never trust an LLM about how it works or how it “thinks” because fundamentally these answers are fake.
And the sky is blue
“If you’re an AI cop, you have to tell me. It’s the law.”
“I’m not a cop.”