autotldr@lemmings.world
on 06 Feb 2024 16:45
This is the best summary I could come up with:
Mistral is also among the companies that believe in sharing this technology as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they need to quickly build chatbots of their own.
Rival companies like OpenAI and Google argue that the open-source approach is dangerous and that the raw technology could be used to spread disinformation and other harmful material.
Mistral’s fate has taken on considerable importance in France, where leaders like Bruno Le Maire, the finance minister, have pointed to the company as providing the nation a chance to challenge U.S. tech giants.
Europe has not produced many meaningful tech companies dating back to the dot-com boom and sees artificial intelligence as a field where it can gain ground.
Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.
Widely sharing the underlying code for A.I., Mr. Midha said, is the safest path because more people can review the technology, find its flaws and work to remove or mitigate them.
The original article contains 745 words, the summary contains 204 words. Saved 73%. I’m a bot and I’m open source!
PeepinGoodArgs@reddthat.com
on 06 Feb 2024 16:52
Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.
Google Bard is free to use for now, so the danger isn’t locking the tech up behind a subscription (though Google will 100% do that eventually).
yogthos@lemmy.ml
on 06 Feb 2024 16:56
The only reason Bard is free for now is that Google is building up a base of users who become invested in the service before turning it into a subscription. The business model will clearly be to sell access to the service, and people being able to run their own models is the core threat to that.
PeepinGoodArgs@reddthat.com
on 06 Feb 2024 17:01
I absolutely agree with you. That is the internet platform business model after all.
Still though, OpenAI and Google, I think, have a legitimate argument that LLMs without limitation may be socially harmful.
That doesn’t mean a $20 subscription is the one and only means of addressing that problem though.
In other words, I think we can take OpenAI and Google at face value without also saying their business model is the best way to solve the problem.
yogthos@lemmy.ml
on 06 Feb 2024 17:03
Personally, I think it’s far more socially harmful to allow a handful of megacorps to control this technology going forward.
andrew_bidlaw@sh.itjust.works
on 06 Feb 2024 17:46
I agree, but I think that computational power requirements already do that – complex models that do interesting stuff need a pile of specialized video cards (GPUs) training for days, and they need a lot of data to train on – so it’s natural that those who already have the data and the money to process it get there first.
I think their argument is not even about their monopoly, but about shutting down the question of why and how to trust THEM with policing their LLMs before it even gets asked. An open system can be investigated: we could find out that they over- or under-regulated some things, made the model biased, or trained on copyrighted materials, personal information, gore or CSAM, et cetera. They save metric tons of possible lawsuits by making it the industry rule that no one gets to look under the hood of their machines.
Initial training of the models is expensive, but once a model is trained it can be run on a laptop. The cost of initial training can also be tackled by doing it in a distributed fashion. There are open source projects, such as Petals, that let you run distributed models BitTorrent-style. Other approaches like LoRA let you take an existing model and tune it for a particular task without training from scratch (a rough sketch of that is below). There’s a pretty good article from Steve Yegge on the recent advances in open source models.
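To make the LoRA point concrete, here is a minimal sketch using Hugging Face’s peft library; the checkpoint name and hyperparameters are illustrative assumptions, not something from the thread. The idea is that only a small set of adapter weights is trained on top of a frozen base model, which is far cheaper than training from scratch.

```python
# Minimal LoRA fine-tuning setup sketch (assumes the `transformers` and `peft` packages).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # illustrative base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA injects small low-rank adapter matrices into selected layers;
# only those adapters are trained, the base weights stay frozen.
config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
# From here the adapted model can be fed to an ordinary training loop on task-specific data.
```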
I do agree that avoiding regulation and scrutiny are most definitely additional goals these companies have. They want to keep this tech opaque and frame themselves as responsible guardians of the technology that shouldn’t fall into the hands of unwashed masses who can’t be trusted with it.
Still though, OpenAI and Google, I think, have a legitimate argument that LLMs without limitation may be socially harmful.
I actually do agree with this. Because of the massive potential for harm if it goes wrong, AI development is one of very, very few types of technology where it does actually make some sense to try to restrict development to the big entities, so that you have some vaguely realistic hope of placing regulation on it and getting it developed safely and responsibly. (Whether the big entities will develop it safely and responsibly is a separate, though related, issue.)
That said, good fuckin luck, the genie's pretty much out at this point.
(Mistral is actually open source (code and models), which is very much not true of GPT or Bard.)
CAN’T YOU SEE THAT THEY’RE HURTING THE DEFENSELESS MONEY???
Strange how much Google used to loooove open source when it hurt Microsoft…
Google supporting an open source project ain’t a life raft…
It’s a warning sign.
indeed
Site doesn’t load without JavaScript