314@sh.itjust.works
on 05 Oct 2024 16:27
Is the computer really “bricked”? Or will repairing GRUB fix it? I get the main message of unexpected access / consequences…
Tar_alcaran@sh.itjust.works
on 06 Oct 2024 12:56
Can you truly brick a computer just by adjusting GRUB? Seems like a very fixable problem for someone who can make an LLM run bash commands. Then again, that is a supremely dumb thing to do.
Saledovil@sh.itjust.works
on 06 Oct 2024 21:09
GRUB works so well that the average Linux user likely never has to think about its inner workings. Even installing Linux has become extremely easy, unless you use something like Arch Linux. So it's actually quite likely that somebody who writes a program that runs bash commands would not know how to maintain GRUB.
linearchaos@lemmy.world
on 09 Oct 2024 18:28
It’s a minor inconvenience at worst.
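For anyone wondering what "repairing GRUB" actually involves, here is a minimal sketch of the usual live-USB recovery on a Debian/Ubuntu-style install. The device names are placeholders (assume /dev/sda2 is the installed root partition and adjust accordingly); a UEFI system would additionally need its EFI partition mounted at /mnt/boot/efi before reinstalling.

```bash
# Boot any live USB, open a terminal, then chroot into the installed system.
sudo mount /dev/sda2 /mnt            # hypothetical root partition
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
grub-install /dev/sda                # reinstall the bootloader to the disk
update-grub                          # regenerate /boot/grub/grub.cfg
exit
sudo reboot
```

In other words, a mangled GRUB is usually an evening's annoyance, not a bricked machine.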
Telorand@reddthat.com
on 05 Oct 2024 17:45
Clickbait title. It’s just LLMs doing what they’re designed to do. Since they’re basically complex iterative algorithms, the person in question did a thing using a tool they didn’t fully understand, and that had consequences.
People should be looking at LLMs like monkey's paws instead of “assistants.”
> Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic’s Claude language model.
> The Python-based tool was designed to generate and execute bash commands based on natural language input.
Saying the person didn’t understand what they were doing is quite a mischaracterization. That said, they absolutely knew the risks they were taking and are using this story for free advertising.
Still neat to think about though.
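To illustrate what "generate and execute bash commands based on natural language input" means in practice, here is a hypothetical one-shot sketch against Anthropic's messages API. This is not Shlegeris's actual tool (his was a Python agent); the model name and prompt are assumptions. The last line is the whole problem: it runs unreviewed model output.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: ask Claude for a single bash command, then run it verbatim.
read -r -p "task> " task

cmd=$(curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "$(jq -n --arg t "$task" \
        '{model: "claude-3-5-sonnet-20240620", max_tokens: 256,
          messages: [{role: "user",
                      content: ("Reply with exactly one bash command, no prose: " + $t)}]}')" \
  | jq -r '.content[0].text')

echo "model proposed: $cmd"
bash -c "$cmd"   # executes unreviewed LLM output -- the monkey's-paw step
```

A real tool would at minimum echo the command and wait for confirmation, or run it inside a throwaway VM or container; skipping those guardrails is what turns "neat to think about" into a machine that stops at a GRUB prompt.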
Telorand@reddthat.com
on 05 Oct 2024 22:52
Notice that I didn’t say they didn’t know what they were doing. I said they didn’t fully understand what they were doing. I doubt they set out with the goal of letting an LLM run amok and fuck things up.
I do QA for a living, and even when we do trial and error, we have mitigation plans in place for when things go wrong. The fact that they’re the CEO of Redwood Research doesn’t mean they did their homework on the model they trained.
Still, I agree that it’s interesting that it did that stuff at all. It would be nice if they went into more depth as to why it did those things, since they mention that it’s a custom model using Claude.
Tar_alcaran@sh.itjust.works
on 06 Oct 2024 12:53
> The Python-based tool was designed to generate and **execute** bash commands based on natural language input.
Emphasis mine, because anyone who does this might as well let a toddler bash the keyboard. The toddler will most likely just break the keyboard, instead of the whole machine.
doing the “stupid”, “easy” thing. pack it up, bois. been a good run but we finally made a better human.