If you’re still asking whether we should start figuring out AI defense or wait until it’s more developed—you’re already wrong.
Jeremy Dodson
CISO and DevRel Lead
NextLink Labs
As AI advances faster than we can secure it, enterprises are struggling to put safeguards in place to protect consumers before it's too late. LLMs and chatbots without proper guardrails can confuse consumers, damage reputations, and do far worse. From one cybersecurity expert's point of view, playing traditional defense will no longer cut it.
An attacker mindset: "You always hear this nonsense about how the attacker only has to be right once and the defender has to be right 100% of the time," says Jeremy Dodson, Chief Information Security Officer and DevRel Lead at NextLink Labs. He built his career on an attacker-informed approach to designing resilient security programs, and now he's urging defenders to change their mindset, fast.
Attackers will use whatever tools are available to them, he argues, without hesitation or philosophical debate. "As an attacker, I'm going to use whatever makes me successful," he says. "The problem on the defense side is there’s always this ‘so what’ factor. It’s always, 'Tell me why I should spend money on that.'"
Too late for a plan: Dodson has spent over two decades working across critical infrastructure and private enterprise, and he says the real problem lies in how defenders justify inaction. "If you’re still asking whether we should start figuring out AI defense or wait until it’s more developed—you’re already wrong." His approach favors realism over perfection. That mindset is also why he takes issue with one of the industry’s most widely accepted practices: security awareness training.
My approach isn’t to be afraid of [AI]—it’s to use it the right way. Start thinking about policy, think about what you’re exposing yourself to, and then mitigate that.
Jeremy Dodson
CISO and DevRel Lead
NextLink Labs
Let them click: "I’ll say it straight—I hate security awareness training," he says. "I know it’s not a popular opinion, but here’s why: you're not going to stop everyone from clicking the malicious link. That’s just not going to happen."
The goal, he says, should be making attackers fail after the click. "Let them click the link. Let them try to harvest credentials. But if we’re doing our job, we’ve either put them in a honeynet so we can study them, or we’ve got zero trust architecture that makes their next step a dead end."
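To make that concrete: in a zero-trust design, a phished credential by itself is a dead end, because every request is re-checked against device and session context, and a suspicious session can be shunted into a honeynet for study. The sketch below is a minimal illustration of that decision logic, not Dodson's or NextLink Labs' actual architecture; every name and field in it is hypothetical.

```python
from dataclasses import dataclass

# Illustrative zero-trust access decision: a valid (possibly phished)
# credential is never sufficient on its own. All names are hypothetical.
@dataclass
class AccessRequest:
    user: str
    has_valid_credential: bool  # the harvested password "worked"
    device_is_managed: bool     # known, compliant endpoint?
    mfa_passed: bool            # fresh step-up challenge succeeded?

def decide(req: AccessRequest) -> str:
    """Deny by default; allow only when every independent check passes."""
    if not req.has_valid_credential:
        return "deny"
    if req.device_is_managed and req.mfa_passed:
        return "allow"
    # Valid credential but failed context checks: treat the session as
    # hostile. Routing it to a honeynet instead of hard-blocking lets
    # defenders study the attacker, one of the two outcomes Dodson names.
    return "redirect-to-honeynet"

# A phished credential used from an unmanaged device gets nowhere:
print(decide(AccessRequest("alice", True, False, False)))  # redirect-to-honeynet
```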
Giving them an edge: AI, he says, is the latest example of defenders finding themselves stuck. "If an attacker is using every tool at their disposal, and defenders choose not to—even though AI clearly enhances the capabilities of someone who knows their craft—then we’ve already handed them the edge. Attackers aren’t debating whether it’s good for their data or if it’s mature enough."
Risk appetite: For Dodson, the path forward isn’t about fear—it’s about intentional action. "I spent a lot of time learning how to do cloud the right way," he says. "If you're going to innovate—if you're choosing to be on the risk-seeking side of your risk appetite instead of risk-averse—then fine, accept that risk. But you’ve got to mitigate it, and you’ve got to change your approach to match. It’s the same thing with AI. My approach isn’t to be afraid of it—it’s to use it the right way. Start thinking about policy, think about what you’re exposing yourself to, and then mitigate that." For defenders willing to act early and smart, he suggests, there’s still time to close the gap.
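One way to act on "think about what you’re exposing yourself to" is a pre-flight check on outbound LLM prompts, so sensitive data never leaves the enterprise boundary in the first place. The sketch below shows that kind of guardrail in minimal form; the patterns are illustrative placeholders, not a production-grade filter and not a specific tool Dodson recommends.

```python
import re

# Illustrative guardrail: screen prompts for obvious sensitive data before
# they are sent to an external LLM. The patterns are hypothetical examples,
# not an exhaustive or production-ready rule set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); deny by default on any match."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(prompt)]
    return (not findings, findings)

allowed, findings = screen_prompt(
    "Summarize this ticket for customer SSN 123-45-6789")
if not allowed:
    print(f"Blocked outbound prompt; matched: {findings}")  # matched: ['ssn']
```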