from Aatube@kbin.melroy.org to technology@lemmy.world on 15 Aug 2024 03:53
https://kbin.melroy.org/m/technology@lemmy.world/t/406343
A paper[1] presented in June at the NAACL 2024 conference describes "how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages." A "research prototype" version of the resulting "STORM" system is available online and has already attracted thousands of users. This is the most advanced system for automatically creating Wikipedia-like articles that has been published to date.
The authors hail from Monica S. Lam's group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, "the first few-shot LLM-based chatbot that almost never hallucinates" – a paper that received the Wikimedia Foundation's "Research Award of the Year" some weeks ago).
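To make the "grounded" part concrete, here is a minimal toy sketch of what that kind of pipeline can look like: retrieve a few sources for a topic, then ask a model to draft only from those sources and cite them inline. This is not STORM's actual code or API; `search_references` and `call_llm` are hypothetical placeholders.

```python
# Illustrative sketch only: a toy "grounded pre-write" loop in the spirit of
# STORM-style pipelines. The two helpers below are hypothetical stand-ins,
# not the paper's implementation.

def search_references(topic: str) -> list[dict]:
    """Placeholder retrieval step: return source snippets with URLs."""
    return [
        {"url": "https://example.org/source-1", "text": "Background facts about the topic."},
        {"url": "https://example.org/source-2", "text": "More detail from a second source."},
    ]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. any chat-completion endpoint)."""
    return "Draft section text with inline citations like [1] and [2]."

def draft_article(topic: str) -> str:
    # Number the retrieved sources and instruct the model to cite only them.
    sources = search_references(topic)
    numbered = "\n".join(f"[{i + 1}] {s['url']}: {s['text']}" for i, s in enumerate(sources))
    prompt = (
        f"Write a first-draft encyclopedia section about '{topic}'.\n"
        f"Use ONLY the numbered sources below and cite them inline.\n{numbered}"
    )
    return call_llm(prompt)

print(draft_article("Example topic"))
```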
Please read the article before commenting. Also, coming right up: another paper that creates a structural diagram in Comic Sans.
Sooooo much fact-checking will need to be done if we actually want accurate articles… Soon it will also write the facts and start to blur the line between reality and trumptalk
As the article says, a big evaluation criterion of the research was whether it provided a good-enough first draft ("pre-write") for actual editors.
The problem is that a lot of people will use it for the entire process, like the research papers that got published with “as a large language model I don’t have access to patient data but I can…” buried inside them.
Only the bad people who write promotional articles would trust this for the entire thing. Serial article creators know better
Forgot the \s?
What do you mean, you think long-time article creators don’t understand verifiability policies?
Extremely cool. Perhaps a step in the right direction, towards hallucination-free LLMs?
It’s more like a step in the wrong direction, towards Wikipedia filling up with more spam, disinfo, etc.