Mahlzeit@feddit.de
on 06 Dec 2023 16:18
Took em long enough.
I wonder if they used ChatGPT to create any of the training data.
eager_eagle@lemmy.world
on 06 Dec 2023 16:27
The transformer architecture GPT is based on came from Google. I’m sure the delay has more to do with Google trying to mitigate liability issues that arise with large scale general public usage, and letting OpenAI “test the waters” first.
Quite possible. Whatever the case, they apparently saw no pressure to innovate. It implies that tech development is being slowed down by the Big Tech monopolies.
eager_eagle@lemmy.world
on 06 Dec 2023 16:55
Lack of innovation on what, exactly? They're all exploring new things if you look into it.
It just seems that Google should have been able to move faster. Yes, they published a lot of important stuff, but seeing the splash that came from Stability and OpenAI, they seem to have done so little with it. What their researchers published was important, but I can't help thinking that a public university would have disseminated such research more openly and widely. Well, I may be wrong; I don't have inside knowledge.
Not likely. They may have tested it as an adversarial feedback tool, but it would be much more accurate and efficient to get the source data directly rather than paying OpenAI for information that may or may not be correct.
They did, I believe, trick ChatGPT into exposing some of its training data, though it was only a few hundred MB.
For the fine-tuning stage at the end, where you turn it into a chatbot, you need specific training data (e.g. OpenOrca). People have used ChatGPT to generate such data. Come to think of it, if you use Mechanical Turk, then you almost certainly include text from ChatGPT.
Yes, it could be done that way, and maybe GPT models were used, but calling these APIs isn't free, and there are plenty of open (and surely internal) models that could be used for that purpose.
NewPerspective@lemmy.world
on 06 Dec 2023 17:25
After it was supposedly updated to support helping with code, I asked Bard how to access password-protected zips in Go, and it made up libraries and gave me a very simple, non-functioning example. When asked to describe Octopath Traveler 2, it invented a new main character and setting that aren't in the game, its predecessor, or the mobile game. I don't have a lot of faith that Gemini will be better.
NewPerspective@lemmy.world
on 06 Dec 2023 19:35
<img alt="" src="https://lemmy.world/pictrs/image/94bbd183-80f4-4efa-a9d8-0bb977e310c2.png">
How did you play around with it?
That was the one and only prompt I gave it in that chat. New Bard sucks as bad as old Bard.
It’s geographically restricted. I had to choose a VPN server in the US to be able to use it; otherwise Bard would only load a useless landing page.
I wonder when this will join the Google Graveyard.
Considering I thought Bard was their answer to ChatGPT… EDIT: oh, it essentially is Bard, just a new version of the backend.
Same. I’m very confused right now. What is Bard then?
Gemini is the model backing Bard.
.
If Gemini goes, so does Bard since Gemini is what powers Bard.
They still call Google a search company here… So weird. It’s an ad company.
Who, exactly, was waiting for this?
I’m annoyed they had to use the same name as a web protocol.