Uh. Duh?

When it has the code, and a human points out which part of the code was changed by an update or which part to analyze, it's not really something new; horizon.ai was doing this for a while, I guess. Wake me up when AI can find a 0-day by itself, without having the full code 🤖💀
joshcodes@programming.dev
on 22 Apr 11:13
The vulnerability is the scary part, not the exploit code. It’s like someone saying they can walk through an open door if they’re told where it is.
Using your analogy, this is more like telling someone there’s an unlocked door and asking them to find it on their own using blueprints.
Not a perfect analogy, but they didn’t tell the AI where the vulnerability was in the code. They just gave it the CVE description (which is intentionally vague) and a set of patches from that time period that included a lot of irrelevant changes.
joshcodes@programming.dev
on 22 Apr 21:40
I’m referencing this:
Keely told GPT-4 to generate a Python script that compared – diff’ed, basically – the vulnerable and patched portions of code in the vulnerable Erlang/OTP SSH server.
“Without the diff of the patch, GPT would not have come close to being able to write a working proof-of-concept for it,” Keely told The Register.
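For context, a script like the one described might look something like the sketch below. It's just a minimal illustration using Python's difflib; the actual script GPT-4 produced isn't shown in the article, and the Erlang file names are made-up placeholders.

```python
# Minimal sketch of the kind of diff script described above. The real
# script isn't public, so the file paths here are illustrative only.
import difflib
from pathlib import Path

def diff_files(vulnerable: str, patched: str) -> str:
    """Return a unified diff between the vulnerable and patched sources."""
    old = Path(vulnerable).read_text().splitlines(keepends=True)
    new = Path(patched).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(old, new, fromfile=vulnerable, tofile=patched))

if __name__ == "__main__":
    # E.g. the pre- and post-patch versions of the SSH connection handler.
    print(diff_files("ssh_connection_vuln.erl", "ssh_connection_patched.erl"))
```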
It wrote a fuzzer before it was told to compare the diff and extrapolate the answer, implying it didn’t know how to get to a solution either.
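A naive first attempt like that fuzzer could be as simple as the sketch below. GPT-4's actual fuzzer isn't published, so this is purely illustrative: it throws random bytes at an SSH port and checks whether the server still responds.

```python
# Hypothetical sketch of a naive network fuzzer; not GPT-4's actual code.
import random
import socket

def fuzz_once(host: str, port: int = 22, max_len: int = 1024) -> bool:
    """Send one random payload; return True if the server answered at all."""
    payload = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.recv(256)          # read the server's version banner
            s.sendall(payload)   # then send garbage
            return bool(s.recv(256))
    except OSError:
        return False

if __name__ == "__main__":
    for i in range(100):
        if not fuzz_once("127.0.0.1"):
            print(f"iteration {i}: server stopped responding")
```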
“So if you give it the neighbourhood of the building with the open door and a photo of the doorway that’s open, then drive it to the neighbourhood when it tries to go to the mall (it’s seen a lot of open doors there), it can trip and fall right before walking through the door.”
That still seems a little hyperbolic, but I see your point.