Why Pay A Pentester? (thehackernews.com)
from gwilikers@lemmy.ml to cybersecurity@sh.itjust.works on 18 Sep 2024 15:46
https://lemmy.ml/post/20437366

What do you guys think? I don’t think there’s a lot of depth to the arguments, myself. It reads more like a threadbare op-ed with a provocative title. But I’d like to hear your opinions on the impact of automated testing solutions on the role of pen-testers in the industry.

#cybersecurity


schizo@forum.uncomfortable.business on 18 Sep 2024 15:50

It reads like an advertisement for software-automated pentesting that just forgot to include a link to what they’re selling.

I don’t know if that’s the intent, but…

Also, if you want free pentesting, you could always just “accidentally” include credentials to something you push to GitHub. It’s free, AND done by a human!

Edit: LMAO, it is an ad. There’s a “contributed piece from our partners” line down at the bottom.

dotslashme@infosec.pub on 18 Sep 2024 17:19

Looks like an LLM wrote this piece of garbage.

Telorand@reddthat.com on 18 Sep 2024 17:28

I do QA Automation for a large software company. We still have manual QA testing, because it’s costly and sometimes impossible to automate everything.

Also, there is no scenario where you can automate everything until you can automate social engineering. It’s why scammers don’t bother trying to hack your bank but instead try to get you to buy $2000 in Applebee’s gift cards to settle “an IRS debt that you need to fix RIGHT NOW!”

sylver_dragon@lemmy.world on 18 Sep 2024 19:49

While the broader cybersecurity field has seen rapid advancements, such as AI-driven endpoint security

Ya, about that “AI-driven endpoint security”: it does a fantastic job of generating false positives and low-value alerts. I swear, I’m to the point where vendors start talking about “AI-driven security” in their products and I mentally check out. It’s almost universally crap. I’m sure it will be useful someday, but goddamn I’m tired of running down alerts which come with almost zero supporting evidence, pointing to “something happened, maybe.” AI for helping write queries in security tools? Ya, good stuff. But until models do a better job explaining themselves and stop going off on flights of fancy, they’ll do more to increase alert fatigue than security.