Are you sure about that? If I remember correctly, Aurich (the guy that does all the graphics at Ars Technica) isn’t fond of AI. This looks more like a Photoshop job to me.
Could be; his facial expression is just very extreme.
Underneath the image is the credit:
I think Getty Images has banned the submission of AI-generated content.
Why bother submitting vulnerability reports just because some AI claims to have found one, with no PoC?
<img alt="" src="https://lemmy.ml/pictrs/image/7bdf1dfe-1958-47ee-9fa4-daa72337d29d.jpeg">
There’s a bounty on reported vulnerabilities (meaning money is paid out), and you could get a lot of fame if you’re the security researcher who found something in curl. When it takes basically zero effort to generate a report, and there’s a theoretical non-zero chance the AI produces a valid one (or at least some people are convinced of that), you’ll have people hoping to make a quick buck.
I feel like I read this same headline every month
There are two levels to this. You have big tech trying to prove that their AI is capable of contributing positively, and you also have well-intentioned individuals trying to fix bugs without having the skills. Both will become more prevalent.
Yes, but I mean exactly this headline with exactly this article content: someone submitted something very badly AI-generated to the curl HackerOne program, and the curl team complained.
For unknown reporters we might see a future where a demonstrable PoC must be attached to the issue, so it can be verified as not a hallucination… To be fair, there are real issues that can’t be demoed, but this would cover most of the slop.