cupcakezealot@piefed.blahaj.zone
on 19 Jul 13:26
there's not a single thing lost by blocking access to meta imo.
undefined@lemmy.hogru.ch
on 19 Jul 13:36
I don’t see the downside of blocking everything Meta.
cupcakezealot@piefed.blahaj.zone
on 19 Jul 13:25
nothing immediately makes me side with europe more than the phrase "meta takes hard line against"
HootinNHollerin@lemmy.dbzer0.com
on 19 Jul 13:49
In the past week I’ve convinced a friend to stop using Facebook, after getting him to pay attention to the algorithmic feed and how it picks up on shit he’s talking about but not searching for, and how it increases his anxiety. That included a private conversation he’d had with me in confidence. Feels good
!demeta@programming.dev
Sequence5666@lemmy.world
on 19 Jul 15:53
I have a question. Why do we see this every other week, about Meta breaking a law, refusing to agree to standards, hacking their way around legislation with loopholes? And STILLLL nothing bad ever happens to Meta?
Why are they still existing?
If any company causes depression to teenagers,
Creates or aids genocide,
Misuses data,
How are they still existing?
Are we daft? Or are the lawmakers? Or the government?
Meta is evil. Period. They can’t keep getting away with this.
undrwater@lemmy.world
on 19 Jul 19:39
“too big to fail”
Too rich to not move politicians and mountains.
flambonkscious@sh.itjust.works
on 20 Jul 07:59
One part of this is jurisdiction. I’m being very simplistic here and only have a vague sense of the picture, really (my own prejudice - I find just about everything about meta abhorrent)
They’re based in a country that’s oriented solely towards liberty - not fairness or common sense.
There are other parts, of course, like lobbying, tax breaks and so on, but a big part is because they’re not based in the EU.
gravitas_deficiency@sh.itjust.works
on 19 Jul 18:39
EU:
you’re done here
MonkderVierte@lemmy.zip
on 19 Jul 18:44
that the social media giant will not sign the European Union’s voluntary AI code of practice. His reasoning was stark: “Europe is heading down the wrong path on AI.”
These rules, which take effect next month, are meant to help companies navigate compliance with Europe’s sweeping AI Act passed last year.
🤦 And then later they’ll be complaining about how hard the rules are to navigate, and about the fines. You can bet on it.
The kneejerk reaction is gonna be “Meta bad”, but it’s actually a bit more complicated.
Whatever faults Meta has in other areas, it’s been mostly a good player in the AI space. They’re one of the major reasons we have strong open-weight AI models today. Mistral, another maker of open AI models and Europe’s only significant player in AI, has also rejected this code of conduct. By contrast, OpenAI a.k.a. ClosedAI has committed to signing it, probably because they are the incumbents and they think the increased compliance costs will help kill off competitors.
Personally, I think the EU AI regulation efforts are a big missed opportunity. They should have been used to force a greater level of openness and interoperability in the industry. With the current framing, they’re likely to end up entrenching big proprietary AI companies like OpenAI, without doing much to make them accountable at all, while also burying upstarts and open source projects under unsustainable compliance requirements.
Meta does not want to comply? Well, fine them to kingdom come, and if they still don’t come to heel, block them at IP level.
Emphasis added. If the result of not signing a voluntary code of practice is massive fines and IP blocks, was it really “voluntary”?
If it really is only voluntary, it’s a failure from the word go. Voluntary measures have never worked with predatory businesses.
Well, direct your ire at the EU for that, I suppose. I'm just pointing out that calling for massive retribution against Meta isn't warranted here.
The EU AI Act is the thing that imposes the big fines, and it’s pretty big and complicated, so companies have complained that it’s hard to know how to comply. So this voluntary code of conduct was released as a sample procedure for compliance, i.e. “if you do things this way, you (probably) won’t get in trouble with regulators”.
It’s also worth noting that not all the complaints are unreasonable. For example, the code of conduct says that model makers are supposed to take measures to impose restrictions on end-users to prevent copyright infringement, but such usage restrictions are very problematic for open source projects (in some cases, usage restrictions can even disqualify a piece of software as FOSS).
I’ve been doing it for 10 years 🥰
I like my cheap VR headset; I hope it doesn’t get bricked. Anyone know how to operate it without the Meta bloat?
For real though, bring on the absolutely crippling fines
I would love to see Europe ban/block the API endpoints that AI communicates over.
Then ban all Meta endpoints if/when Meta moves AI communication onto the same endpoints as non-AI communication.
European laws are not perfect, but they at least make an effort to put the needs of the people ahead of corporations and the Parasite Class.