BitSound@lemmy.world
on 26 Jul 2023 02:12
nextcollapse
This seems really short-sighted. Why would I go to How Stuff Works when I can just ask the LLM myself?
Maybe there’s just no possible business model for them anymore with the advent of LLMs, but at least if they focused on the “actually written by humans!” angle there’d be some hook to draw people in.
LoafyLemon@kbin.social
on 26 Jul 2023 02:41
nextcollapse
An LLM cannot create new concepts; it can only produce a mishmash of the things it has been fed.
Arbiter@lemmy.world
on 26 Jul 2023 02:43
nextcollapse
Just like Hollywood!
roguetrick@kbin.social
on 26 Jul 2023 02:53
collapse
Isn't that exactly how howstuffworks operates though?
chaogomu@kbin.social
on 26 Jul 2023 02:45
nextcollapse
The thing is, the LLM doesn't actually know anything, and lies about it.
So you go to How Stuff Works now, and you get bullshit lies instead of real information. You'll also get nonsense that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topic midway through and doesn't correct itself, because it can't. It doesn't actually know what it's saying.
See, these language models are pre-trained; that's the P in ChatGPT. They just regurgitate the training data, put together in ways that sort of look like more of the same training data.
There are some hard coded filters and responses, but other than that, nope, just a spew of garbage out from the random garbage in.
And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.
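To make the "regurgitation" point concrete, here is a minimal sketch with the Hugging Face transformers library (GPT-2 stands in only because it is small and public; the prompt is made up). A pre-trained model will continue any prompt with whatever looks statistically plausible, and no step anywhere checks whether the result is true.

    # Requires: pip install transformers torch
    from transformers import pipeline

    # A small pre-trained model; it continues text, nothing more.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "How a refrigerator works: the compressor"
    result = generator(prompt, max_new_tokens=40, do_sample=True)

    # Reads like an article; nothing in this pipeline knows or cares whether it is accurate.
    print(result[0]["generated_text"])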
bane_killgrind@kbin.social
on 26 Jul 2023 02:48
nextcollapse
Absolutely. Creating new documentation will always be a human sport.
Blackmist@feddit.uk
on 26 Jul 2023 16:23
nextcollapse
Literally predictive text but for whole articles.
It doesn’t know the limits of its knowledge, or indeed know anything. It just “knows” what an answer smells like. It even “knows” what excuses are supposed to look like when you call it out.
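That “predictive text” description is literal. A small sketch (GPT-2 again, purely as a stand-in) that prints the model's top candidates for the very next token; generation is just this step repeated over and over:

    # Requires: pip install transformers torch
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # Print the five most likely next tokens and their probabilities.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}  {p.item():.3f}")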
salient_one@lemmy.villa-straylight.social
on 27 Jul 2023 05:14
collapse
This is a very good write up about how ChatGPT works.
AlmightySnoo@lemmy.world
on 26 Jul 2023 02:55
nextcollapse
It’s a combination of three things:
1- most people still google things;
2- the more content you have the more organic traffic you’re likely to attract from Google;
3- displaying ads on your website makes you money.
Websites full of LLM-generated content are just the natural continuation of MFAs (Made For AdSense); back in the 2006-2008 period there were lots of tools on sale that promised to automatically create websites for you and fill them with randomized content optimized for AdSense.
mrbubblesort@kbin.social
on 26 Jul 2023 03:46
collapse
Correct me if I'm wrong, but isn't AI-generated content not copyrightable? If so, nothing is stopping someone from taking all their content, rebranding it as "how stuff really works" or something, and stealing their business & ad revenue.
Yewb@kbin.social
on 26 Jul 2023 02:12
nextcollapse
Creating a market for real human content? Sounds tasty
Molehill8244@beehaw.org
on 26 Jul 2023 03:42
collapse
Maybe make it a co-op and put the CEO + AI article company out of business
I’m down for that
circuitfarmer@lemmy.sdf.org
on 26 Jul 2023 02:38
nextcollapse
This is going to happen for a while. Execs who actually have no clue have now been sold on the idea that AI lets them keep making money without paying labor.
It will fail eventually, once the execs take the time to learn what AI is and isn’t capable of.
Who am I kidding? It’ll continue indefinitely because there are few consequences for clueless executives.
worfamerryman@beehaw.org
on 26 Jul 2023 03:03
collapse
I see a possibility where these sites eventually become terrible and a new player can come in and make content written by humans.
ProIsh@lemmy.world
on 26 Jul 2023 02:54
nextcollapse
This is fine. Just let us know so we know what shows to avoid.
altima_neo@lemmy.zip
on 26 Jul 2023 03:03
nextcollapse
This seems like a really dumb idea.
worfamerryman@beehaw.org
on 26 Jul 2023 03:05
nextcollapse
How long until we can get a browser extension that lets us know when we are on a site written by AI?
I don’t mean AI detection, but instead, sites that announce they are laying off editors in favor of AI.
Tygr@lemmy.world
on 26 Jul 2023 03:09
nextcollapse
How about instead of all the tracking cookie popups for permission, we force these sites to display a message that the content is AI generated.
Why not both?
vrighter@discuss.tchncs.de
on 26 Jul 2023 03:51
nextcollapse
Used to be one of my favourite sites when I was younger. Haven’t visited that site in ages. Holy crap, has it gone to complete shit. Like way worse than I thought possible
bazmatazable@reddthat.com
on 26 Jul 2023 14:24
collapse
When I read the OP I thought “oh yeah, I used to use that all the time, I wonder what it’s like now?”. Seems to have changed a lot…
vrighter@discuss.tchncs.de
on 27 Jul 2023 04:31
collapse
totally unrecognizable. I can’t believe I actually used to learn stuff there
KiloPapa@lemmy.world
on 26 Jul 2023 05:42
nextcollapse
Considering most articles on the internet that don’t come from legitimate newspapers sound like they’re written by a 6-year-old who gets paid by the word, how much worse could it get?
DidacticDumbass@lemmy.one
on 26 Jul 2023 07:32
nextcollapse
Bizarre. Not even keeping a few editors for… the editing??
I wonder how this will affect the Stuff You Should Know podcast.
waterplants@lemm.ee
on 26 Jul 2023 07:50
nextcollapse
People really don’t understand the current state of LLMs. With generated pictures it’s “a really good picture of what a dog would look like; it’s not actually a dog.”
Like a police sketch, with a touch of randomness so you don’t always get the same picture.
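That “touch of randomness” is usually just a sampling temperature. A toy illustration with invented numbers and no model at all: low temperature gives nearly the same "police sketch" every time, high temperature gives more variety (and more nonsense).

    import math, random

    def sample(weights, temperature):
        # Re-weight scores by temperature, normalise, then draw one index.
        scaled = [math.exp(math.log(w) / temperature) for w in weights]
        total = sum(scaled)
        return random.choices(range(len(weights)), weights=[s / total for s in scaled])[0]

    # Pretend these are the model's scores for four candidate next words.
    candidates = ["dog", "puppy", "wolf", "banana"]
    weights = [0.70, 0.20, 0.09, 0.01]

    for t in (0.2, 1.0, 2.0):
        picks = [candidates[sample(weights, t)] for _ in range(10)]
        print(f"temperature {t}: {picks}")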
I’m guessing they will try to solve this issue with some cheap human labour to review what is being generated. These verifiers will probably not be experts on all the subjects the LLM will be spitting out; more of a “that does kind of look like a dog, APPROVED”.
Let’s say I’m wrong, and LLMs can make as good an article as any human. Content would become so saturated (even a tumblr user could now make as much content, and as good, as one of these companies) that I’d expect the companies to be joining in on all the strikes 😆.
Funny world we are all going into.
Boas Entradas
Thorny_Thicket@sopuli.xyz
on 26 Jul 2023 12:49
nextcollapse
One thing I disagree with is the assumption that anyone could create the same article by themselves. Coming up with a good prompt is a skill in itself and not everyone is equally good at it. I actually believe a prompt writer is going to be a new profession in the near future.
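For illustration only (the wording below is invented, not taken from any real workflow), the gap between a throwaway prompt and a worked one looks roughly like this:

    # Same request, two prompts. Much of the output-quality difference comes from
    # the structure and constraints the prompt writer supplies, not from the model.
    lazy_prompt = "Write an article about how refrigerators work."

    worked_prompt = """You are writing for a general-audience explainer site.
    Topic: how a household refrigerator works.
    Structure: 1) one-paragraph overview, 2) the vapour-compression cycle step by step,
    3) common misconceptions, 4) a short FAQ.
    Constraints: 600-800 words, plain language, flag any claim you are unsure of."""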
lemming007@lemmy.world
on 26 Jul 2023 13:22
nextcollapse
As usual with new technology, some jobs go away and new ones appear.
Are we assuming AI won’t be able to create a good prompt? 😂
VoxAdActa@beehaw.org
on 26 Jul 2023 16:00
collapse
I’m guessing they will try to solve this issue with some cheap human labour to review what is being generated.
They already do. These current "AI"s are starting to look more and more like Mechanical Turks, except with a couple hundred third-world wage-slaves inside the box.
lemann@lemmy.one
on 26 Jul 2023 13:18
nextcollapse
Someone should create a blocklist for all these new AI-driven websites.
For me personally, the primary appeal of websites is that there are human authors behind the content… otherwise I’d just ask an ‘AI’ myself.
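Consuming such a blocklist would be trivial; a minimal sketch, with placeholder domains rather than any real list:

    from urllib.parse import urlparse

    # Placeholder entries only -- a real list would have to be community-maintained.
    AI_CONTENT_SITES = {
        "example-ai-content-farm.com",
        "another-generated-site.net",
    }

    def is_ai_content_site(url: str) -> bool:
        host = (urlparse(url).hostname or "").lower()
        return host.removeprefix("www.") in AI_CONTENT_SITES

    print(is_ai_content_site("https://www.example-ai-content-farm.com/article/123"))  # True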
GiantBasil@beehaw.org
on 26 Jul 2023 14:02
collapse
It would be great to have a list of sites so I’d know which links I can just immediately ignore.
waspentalive@lemmy.one
on 26 Jul 2023 13:23
nextcollapse
I wonder what they will do when the tripe the LLM spews doesn’t match the pictures they are showing.
nickajeglin@lemmy.one
on 26 Jul 2023 15:12
collapse
Nothing lol.
waspentalive@lemmy.one
on 26 Jul 2023 13:51
nextcollapse
ChatGPT comments on the news:
Prompt : In recent news, the management of “How it’s Made” has fired all the writers and editors on the show and intends to replace them with a large language model. What do you think?
ChatGPT
As an AI language model, I don’t have personal opinions or feelings, but I can provide an analysis based on the information available up to my last update in September 2021.
If the management of “How It’s Made” decides to fire all the writers and editors on the show and replace them with a large language model like me, there are both potential benefits and drawbacks to consider:
Benefits:
Cost Savings: Employing an AI language model could potentially reduce production costs as it eliminates the need for paying human writers and editors.
Speed and Efficiency: AI models can generate content at a rapid pace, which may allow for more frequent episode releases or updates.
Access to Data: AI models have access to vast amounts of data, which could result in more comprehensive and detailed explanations of the manufacturing processes.
Consistency: AI-generated content can maintain a consistent style and tone throughout the episodes.
Drawbacks:
Lack of Creativity: While AI models can provide factual information, they might struggle to inject creative elements or engage with the audience on a more emotional level.
Human Touch: Human writers and editors bring their unique perspectives and storytelling abilities to the show, which might be missed with an AI-only approach.
Reliability: AI-generated content can sometimes produce errors or inaccurate information, which would require careful human oversight and fact-checking.
Audience Response: The audience's perception of AI-generated content might not be as positive as content created by human professionals, leading to a potential decrease in viewership.
Ultimately, the success of this decision would depend on how well the AI language model can replicate the engaging and informative aspects that made “How It’s Made” popular in the first place. It’s important for the management to carefully evaluate the pros and cons and ensure a seamless transition if they decide to go down this route. Additionally, audience feedback and receptiveness to the AI-generated content will play a significant role in determining the long-term viability of such a change.
emeralddawn45@discuss.tchncs.de
on 26 Jul 2023 14:27
collapse
That’s interesting, but just so you know, it’s the website “How Stuff Works”, not the TV show “How It’s Made”.
waspentalive@lemmy.one
on 26 Jul 2023 15:01
nextcollapse
Ah, thanks. I suppose ChatGPT’s take would be the same. - I will have to check that site out.
nickajeglin@lemmy.one
on 26 Jul 2023 15:17
collapse
No shade on “How It’s Made”, it’s one of my favorite shows. But I think an LLM could probably write most of the narration; it primarily describes what is happening on screen. You might have to train one specifically on industrial and manufacturing processes.
It absolutely could not. One of the hallmarks of How It’s Made’s narration is the terse, accurate representation of what’s going on. LLMs are often wrong about things and don’t do well at precise descriptions to begin with.
kerneltux@lemmy.world
on 26 Jul 2023 13:56
nextcollapse
I’ve read articles that were clearly created using ChatGPT: there was no extrapolation to add context or details to illustrate their points, and parts read like they were pulled straight from a Wikipedia page. The tone felt more robotic than pieces they published 6-8 months ago.
ChatGPT can be useful when it’s part of a larger writing process, but I have a feeling that sites that write prompts and paste the output as their articles will slowly die off because the quality isn’t there.
pingveno@lemmy.ml
on 26 Jul 2023 17:04
nextcollapse
We’re probing the limits of generative AI right now. I expect a snapback of sorts as people find what does and does not work.
I was checking something on a Fandom “wiki” the other day and I swear to god the summary for a bunch of episodes for several shows was either written or rewritten by AI. You can tell because it uses a bunch of nonsense synonyms, like replacing the name Ray with Beam.
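That Ray/Beam swap is the classic fingerprint of blind synonym substitution, whether it comes from an LLM rewrite or an old-fashioned article spinner. A toy sketch of the failure mode (the synonym table is invented):

    import re

    SYNONYMS = {"ray": "beam", "big": "large", "fast": "rapid"}  # invented toy table

    def spin(text: str) -> str:
        # Swap words via the table with no idea that "Ray" is someone's name.
        def swap(match: re.Match) -> str:
            word = match.group(0)
            return SYNONYMS.get(word.lower(), word)
        return re.sub(r"[A-Za-z]+", swap, text)

    print(spin("Ray drives a big, fast car."))
    # -> "beam drives a large, rapid car."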
Hagels_Bagels@lemmygrad.ml
on 26 Jul 2023 15:24
nextcollapse
Great. Now people are going to read up a bunch of bs generated by a language model and confidently spread around “hallucinations” as facts.
trimmerfrost@lemm.ee
on 26 Jul 2023 17:33
collapse
Surely they must have proofreaders.
MstrDialUp@lemm.ee
on 26 Jul 2023 18:21
nextcollapse
That’s optimistic.
ToastyWaffles@lemmygrad.ml
on 26 Jul 2023 21:08
nextcollapse
You know that’s not how this works.
itchy_lizard@feddit.it
on 01 Aug 2023 02:49
collapse
No, that’s exactly how this stuff works. Lay off 80% of writers and keep all your fact checkers and editors.
salient_one@lemmy.villa-straylight.social
on 27 Jul 2023 04:58
collapse
Probably, though it might be too optimistic to assume that. Even so, I believe it will still result in more mistakes, simply because it’s harder to spot errors in an existing text than to keep them out in the first place by fact-checking up front and then having another person proofread.
One reason is that LLMs don’t feel guilty when they hallucinate, while most humans don’t like to lie or to be too lazy to fact-check; and even those who don’t care still have to think about getting caught and damaging their reputation, which again doesn’t apply to LLMs. And stating something false as fact in an article can’t be called an honest mistake (it’s negligence at best), unlike an editor overlooking something (due to a looming deadline, perhaps), especially when it’s assumed there won’t be many hallucinations, which is far from certain.
Infinity187@lemm.ee
on 26 Jul 2023 16:14
nextcollapse
I wonder how Josh and Chuck from SYSK feel about this.
dantheclamman@lemmy.world
on 27 Jul 2023 05:06
nextcollapse
They got their start on HSW, but I believe the podcast division is now separate, owned by iHeart?
Ah ok, I thought they were still somehow legally related.
What’s SYSK?
Stuff You Should Know podcast.
Stuff You Should Know! It’s a great podcast that’s affiliated with How Stuff Works, from my understanding.
Holy shit. Haven’t heard of How Stuff Works since like 2002…
Chat-GPT became far less useful to me when I realized it will actively lie to you. It turned out to be too good to be true. These people will figure it out eventually: Chat-GPT is not an AI, it’s a goddamn “Chinese Room” (it’s a thought experiment in philosophy, look it up).