autotldr@lemmings.world
on 08 Dec 2023 23:30
nextcollapse
This is the best summary I could come up with:
LONDON (AP) — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.
Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.
The European Parliament will still need to vote on it early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.
Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.
However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals including OpenAI’s backer Microsoft.
Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.
The original article contains 846 words, the summary contains 241 words. Saved 72%. I’m a bot and I’m open source!
sugarfree@lemmy.world
on 09 Dec 2023 00:08
nextcollapse
They believe they can regulate AI, but it remains to be seen whether that is true, especially since the rules won’t come into force until 2025; that’s a very long time in AI.
And I would just like to say that the term ‘AI’ is a marketing term; all the generative models are just complex digital Galton boards. Put a thing in, a different thing comes out. But if you leave them alone, nothing happens. Is that really intelligence, or just data transformation?
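A toy sketch of that Galton-board analogy (the function, its name, and its parameters are purely my own illustration, not a claim about how the models actually work): a fixed mechanism turns an input into an output, and with no input dropped in, nothing happens.

```python
import random

def galton_board(ball, rows=10, seed=None):
    """Drop one ball down a toy Galton board: at each of `rows` pegs it
    bounces left (+0) or right (+1). Input position in, output bin out;
    a fixed stochastic transformation and nothing more."""
    rng = random.Random(seed)
    return ball + sum(rng.randint(0, 1) for _ in range(rows))

# Same input, different outputs depending on the random path taken:
bins = [galton_board(0, rows=10, seed=s) for s in range(5)]
# Every output is just the input pushed through the same mechanism;
# if no ball is ever dropped in, the board sits there doing nothing.
```

Whether you call that "intelligence" is exactly the question the comment raises; the board, like the model, only ever reacts to what is put into it.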
I’m far more concerned about the things that do things when you leave them alone… drones, Boston Dynamics robots, that kind of thing.
SendMePhotos@lemmy.world
on 09 Dec 2023 05:05
nextcollapse
AI is a term I was taught a long time ago to mean that a computer has transcended and become sentient. Artificial Intelligence.
VI is what I think we almost have now: simulated intelligence. It’s at the point now where so many people think that AI can think for itself that we may as well call it VI just to get on with the progress.
Virtual/Simulated Intelligence. That’s a far better term. I’m going to start using that.
KingRandomGuy@lemmy.world
on 09 Dec 2023 08:13
collapse
I’m a researcher in ML and that’s not the definition that I’ve heard. Normally the way I’ve seen AI defined is any computational method with the ability to complete tasks that are thought to require intelligence.
This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for requiring intelligence shifts every time the field solves something new. We sort of go “well, given these relatively simple methods could solve it, I guess it couldn’t have really required intelligence.”
The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.
SendMePhotos@lemmy.world
on 09 Dec 2023 20:16
collapse
Maybe a defined term should be used. The use of the term “AI” makes me puke in my mouth, because most people associate it with “omg the robots are coming!” when it’s really still just a program. So much so that people have made videos of themselves implementing AI inside of video games (see: the Matrix game) and then contemplating whether the computer program is suffering.
nihilisticfuq@lemmy.world
on 09 Dec 2023 21:51
collapse
That’s from April 2021. I wonder what the new rules are?
Muffi@programming.dev
on 09 Dec 2023 07:41
collapse
Cars are destroying the world way faster than any “AI”. Can we regulate those to hell first?
Doing one thing does not stop you from also doing a second thing in parallel, and the best time to regulate a technology is when it’s emerging. That way you have a chance to stop it getting out of hand.
5 links later, here are the actual rules (apparently): ec.europa.eu/commission/presscorner/…/ip_21_1682