Last week, researchers at the Allen Institute for Artificial Intelligence (Ai2) released a new family of open-source multimodal models competitive with state-of-the-art models like OpenAI’s GPT-4o—but an order of magnitude smaller.
That's in reference to the size of the model itself.
They then compiled a more focused, higher quality dataset of around 700,000 images and 1.3 million captions to train new models with visual capabilities. That may sound like a lot, but it’s on the order of 1,000 times less data than what’s used in proprietary multimodal models.
That's in reference to the size of the training data that was used to train the model.
Minimizing both of those things is useful, but for different reasons: a smaller training set makes the model cheaper to train, and a smaller model is cheaper to run.
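As a quick sanity check on the two figures (my own arithmetic, not from the article or the thread):

$$
1{,}000\times \approx 10^{3} \;(\text{training data: three orders of magnitude}), \qquad 10\times \approx 10^{1} \;(\text{model size: one order of magnitude}).
$$

The two claims describe different quantities, so both can hold at once.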
General_Effort@lemmy.world
on 07 Oct 2024 19:08
After a quick skim, it seems like the article has lots of errors. Molmo is trained on top of Qwen; the smallest ones are built on models from the same company that makes Molmo (Ai2's own OLMoE and OLMo).
remotelove@lemmy.ca
on 06 Oct 2024 20:06
This kind of skill might help developers build AI agents that identify buttons or fields on a webpage to handle tasks like making a reservation at a restaurant.
… to improve efficiency of click farms and to bypass captchas.
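To make the "identify buttons or fields on a webpage" idea concrete, here is a toy sketch (mine, not anything from the article): `point_at` is a hypothetical stand-in for a pointing-capable vision-language model that maps a screenshot plus a description to pixel coordinates, and the browser side uses Playwright.

```python
# Toy agent step: screenshot a page, ask a pointing-capable model where a UI
# element is, then click those coordinates. point_at() is a hypothetical
# placeholder, not a real API; swap in whatever model you actually use.
from playwright.sync_api import sync_playwright

def point_at(screenshot_png: bytes, query: str) -> tuple[int, int]:
    # Placeholder: send the screenshot and query to a vision-language model
    # that returns (x, y) pixel coordinates of the described element.
    return (640, 360)  # dummy coordinates so the sketch runs end to end

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/reservations")  # hypothetical page
    x, y = point_at(page.screenshot(), "the 'Book a table' button")
    page.mouse.click(x, y)  # act on the model's answer
    browser.close()
```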
pennomi@lemmy.world
on 06 Oct 2024 20:19
Daaaang, Apache license AND open dataset + training tools.
This reads like an ad. They claim to use 1000 times less data than proprietary models, except nobody knows how much data those proprietary models were trained on or how big they actually are. Also there’s a giant asterisk they fail to mention: Molmo outperforms the competition on visual benchmarks, not actual text chat.
chemical_cutthroat@lemmy.world
on 06 Oct 2024 20:56
And a modern calculator has more computing power than the Apollo program… This is how tech works.
Instead of writing captions, the team asked annotators to record 60- to 90-second verbal descriptions answering a list of questions about each image. They then transcribed the descriptions—which often stretched across several pages—and used other large language models to clean up, crunch down, and standardize them.
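As a rough illustration of that pipeline (my own sketch, not Ai2's actual tooling): transcribe each 60- to 90-second spoken description, then prompt an LLM to clean it up and standardize it. Whisper is used here purely as an example speech-to-text choice, and `llm_complete` is a hypothetical stand-in for whichever cleanup model the team used.

```python
# Sketch of the caption-collection pipeline described above (illustrative only).
import whisper  # openai-whisper, one possible speech-to-text choice

asr = whisper.load_model("base")

CLEANUP_PROMPT = (
    "Rewrite the following spoken image description as a single dense, factual "
    "caption. Remove filler words, repetition, and asides:\n\n{transcript}"
)

def transcribe(audio_path: str) -> str:
    # Whisper returns a dict whose "text" field holds the full transcript,
    # which for a 60-90 second description can run to several paragraphs.
    return asr.transcribe(audio_path)["text"]

def standardize(transcript: str, llm_complete) -> str:
    # llm_complete is any text-completion callable (hypothetical stand-in for
    # the LLMs used to clean up, crunch down, and standardize the transcripts).
    return llm_complete(CLEANUP_PROMPT.format(transcript=transcript))
```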
I’m pretty sure that would be three orders of magnitude.
They're not talking about the same thing.
So those other LLMs are needed to train this one?