AI & Cybersecurity – What You Need to Know
Alex Johnson, Senior Systems Engineer, Arctic Wolf.
Alex works with organisations across industries to strengthen their defences against evolving threats, helping executives translate cybersecurity complexity into practical strategies. Drawing on findings from the Global Cyber Risk Report, Alex will share where AI is creating both risk and opportunity and what leaders can do right now to protect their businesses.
Alex Johnson’s keynote is a necessary cold shower in a day otherwise full of AI optimism. He starts with a story that could have happened to any company in the room: hackers used AI to build a lifelike video avatar of a CFO, joined a Zoom call, and calmly convinced staff to transfer millions of dollars to a fraudulent account. In the same way AI is supercharging productivity for legitimate businesses, he says, it is also supercharging the hackers.
Johnson is a senior systems engineer with Arctic Wolf, which he describes as one of the largest security operations platforms in the world. Across more than 10,000 customers it sees roughly eight trillion security “observations” a week. For a typical organisation in Australia or New Zealand, that translates to around 2.5 million events every three days – DNS lookups, login requests, web queries and more. No human team can triage that volume unaided. The only realistic way to reduce the noise quickly enough is to wire AI into the back end, so that suspicious patterns are surfaced fast and the security team has a chance to stop an attack before it bites. In his words, when you combine AI and cybersecurity, everything comes back to speed and efficacy: how quickly you can see something, and how effectively you can act on it.
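To make the triage idea concrete, here is a deliberately simplified sketch of what “wiring AI into the back end” can mean at its most basic: score each incoming event against a historical baseline and surface only the rare ones for human review. This is my illustration, not Arctic Wolf’s pipeline; the event fields and threshold are invented.

```python
from collections import Counter

def triage(events, baseline, rare_threshold=0.01):
    """Surface only events whose (source, action) pair is rare.

    baseline: Counter of (source, action) pairs from historical traffic.
    rare_threshold: pairs seen in less than this fraction of history
                    are flagged for human review.
    """
    total = sum(baseline.values()) or 1
    flagged = []
    for event in events:
        key = (event["source"], event["action"])
        if baseline[key] / total < rare_threshold:
            flagged.append(event)
    return flagged

# Toy usage: thousands of routine lookups, one never-before-seen admin login.
history = Counter({("laptop-42", "dns_lookup"): 5000,
                   ("laptop-42", "web_request"): 3000})
incoming = [{"source": "laptop-42", "action": "dns_lookup"},
            {"source": "laptop-42", "action": "admin_login"}]
print(triage(incoming, history))  # only the admin_login event survives
```

Real platforms use far richer models than frequency counts, but the shape of the problem is the same: compress millions of events into the handful a human team can actually act on in time.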
Arctic Wolf has already used AI to turn that firehose into something usable, but Johnson says the next step is making security intelligence accessible to non-specialists. Most organisations do not have deep benches of security experts, and there simply are not enough people in the market to staff every company. That is why Arctic Wolf has partnered with Anthropic to develop a front-end AI assistant customers can talk to directly, asking plain-language questions about their security posture. The idea is the same as the best business AI use cases discussed earlier in the day: use the technology to amplify human capability, not replace it.
Much of his talk is anchored in a “Human + AI” study Arctic Wolf ran with around 2,000 decision makers globally. Over 93 percent of respondents said AI will influence their security strategy, but the way they see the roles splitting is telling. AI is expected to excel at identifying threats and minimising errors, while humans are still seen as critical for providing context and maintaining compliance. In other words, the models can spot anomalies and reduce false positives at speed, but people still need to decide what is acceptable risk, what needs to be reported, and how to interpret alerts in the context of regulation and business reality.
The tool landscape, he admits, is chaotic. There are roughly 3,500 different security tools in the ecosystem. New products are being rushed to market to capitalise on the AI wave, many designed primarily for slick user experience rather than hardened security. He highlights a worrying pattern identified at the New Zealand Internet Task Force conference: tools developed quickly, or even partly by other tools, can smuggle new risks into an environment. When you add that to the fact that only around half of New Zealand organisations surveyed are actively considering AI-informed technology to support their cyber readiness, it is clear that many are under-prepared and overexposed.
Johnson then tackles a phrase many in the room will have heard: the network is no longer the perimeter. Once upon a time, you could imagine your office as a castle and build walls around it. Now, with work-from-home culture, mobile devices and conference Wi-Fi that everyone casually joins, you rarely know what the real perimeter looks like. Zero-trust architectures have emerged as a response, and they also rely on AI to maintain speed and efficacy, constantly checking identities, devices and traffic rather than blindly trusting anything inside a notional firewall. For business leaders intoxicated by shiny new AI tools, he boils the trade-off down to safety versus convenience and urges them to choose safety when it comes to cybersecurity, even if that slows down some of the experimentation.
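The zero-trust idea can be sketched in a few lines: instead of trusting anything “inside” the network, every request is evaluated on identity, device posture and context. This is my minimal illustration of the principle, not any specific product; the fields and trusted-geography list are invented.

```python
TRUSTED_GEOS = {"NZ", "AU"}  # assumed plausible locations for this business

def allow_request(request):
    """Allow a request only if every check passes, regardless of where
    on the network it originates."""
    checks = [
        request.get("mfa_passed"),           # identity verified this session
        request.get("device_compliant"),     # managed, patched device
        request.get("geo") in TRUSTED_GEOS,  # context looks plausible
    ]
    return all(checks)

office_request = {"mfa_passed": True, "device_compliant": True, "geo": "NZ"}
byod_request   = {"mfa_passed": True, "device_compliant": False, "geo": "NZ"}
print(allow_request(office_request))  # True
print(allow_request(byod_request))    # False: non-compliant device is denied
```

The point of the sketch is the second example: being on the office network, or behind the firewall, appears nowhere in the decision.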
The stakes are rising because the attackers have changed. Johnson name-checks nation-state groups like Sandworm, linked to the Russian GRU, and recommends two books – This Is How They Tell Me the World Ends and Sandworm – for anyone who wants to understand how the game has evolved. In the past, he says, a traditional ransomware crew might sit undetected in your network for 200 days, mapping systems and negotiating internally, and would even, oddly, care about their “Net Promoter Score” as criminals: many delivered decryption keys and avoided re-selling stolen data, so that victims would be more likely to pay and even quietly recommend them. Those days are over.
Now there is a new class of threat actor, often a single individual in Russia or North Korea, who does not care about reputation at all. They break in, hunt for your cyber insurance policy, your incident response plan and your most sensitive data, and within roughly 36 hours they are on the phone threatening to leak everything. They will tell you where your kids go to school because they have cross-referenced contact details with your social media. If you do not pay quickly enough, they simply destroy your systems. They are not interested in playing the long game; they are interested in maximum impact in minimum time. That, Johnson says, is why your own tools have to be equally fast and effective.
The people challenge is just as serious. To run a fully staffed, 24/7 security operations centre in New Zealand, allowing for four weeks’ leave, you need roughly 14 to 15 people. That is a salary bill in the two to three million dollar range, which is why only banks and giants like Fonterra run their own SOCs. Cybersecurity specialists are in such demand that they move constantly to whoever pays the most, and now there is a growing expectation that they will also be AI specialists. For most organisations with a small IT team of one to three people, this is simply unrealistic, which is why partnering with an external security platform is becoming the only viable way to maintain a strong posture.
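The 14-to-15-person figure is easy to sanity-check. A 24/7 SOC needs 168 hours of coverage per seat each week, each analyst works roughly 40 hours, and four weeks’ annual leave shrinks effective availability; with around three analysts on shift at any one time, the headcount falls out directly. The shift parameters below are my assumptions, not Johnson’s model.

```python
import math

HOURS_PER_WEEK = 24 * 7    # 168 hours of coverage needed per seat
WORK_HOURS     = 40        # one analyst's working week
LEAVE_FACTOR   = 48 / 52   # four weeks' annual leave reduces availability
SEATS          = 3         # analysts on shift at any one time

# Analysts needed per seat, then rounded up across all seats.
analysts = math.ceil(SEATS * HOURS_PER_WEEK / (WORK_HOURS * LEAVE_FACTOR))
print(analysts)  # 14 — in line with the 14-to-15 figure Johnson quotes
```

Add cover for sickness, training and attrition and the number drifts towards 15, which is why the salary bill lands in the millions.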
Despite this, the sentiment in his survey is cautiously optimistic. Around three quarters of respondents believe AI will improve their security posture. At the same time, there is a growing anxiety about staff using AI tools in unsafe ways: pasting sensitive IP, customer data or internal documents into consumer apps and unknowingly leaking them. Johnson notes that there are emerging tools to monitor and control AI usage inside an organisation, but he is clear that security has to drive this conversation. AI policies cannot sit off to the side; they must be part of the core security strategy.
In the on-stage Q&A afterwards, the MC gently accuses him of being a “Debbie Downer” compared with the more inspirational AI talks. Johnson laughs and agrees that cyber is not the sexiest topic, but he points out that productivity is irrelevant if your data is exposed, your operations are crippled and you are facing legal consequences. New Zealand’s Privacy Act fines have only just doubled from $10,000 to $20,000 per incident, which looks trivial against Australia’s penalties, which can run to $50 million or 30 percent of turnover, whichever is greater. The sting here is different: in New Zealand, directors are personally liable, and partners in law firms carry similar personal exposure. He says many do not fully grasp that risk.
He also tackles a familiar local complacency: the idea that “little old New Zealand” is too small and remote to be interesting. Threat actors are not targeting a brand or a country, he says, they are targeting IP addresses. If they find a way in, they will use it, regardless of who you are. Everyone has something worth protecting, from customer records to private correspondence. His final message is both sobering and reassuring. On the one hand, we are all just “another IP address five minutes away from being breached.” On the other hand, there are specialist organisations that do nothing but help you improve your security posture over time. For leaders feeling overwhelmed by the threat landscape, he says, the key is to invest wisely, pick the right partners and tools, and remember that you do not have to handle this alone.
Action points:
Treat AI as a security risk as well as a productivity boost
Whenever you evaluate a new AI tool, ask not only what it can do, but how it might be abused or introduce vulnerabilities into your environment.
Make speed and efficacy your security north stars
Review your current monitoring and incident response. How quickly would you know something has gone wrong, and how effectively could you act within that 36-hour window?
Assume you are a target, regardless of size or location
Retire the “we’re too small” mindset. For attackers, you are an IP address with potentially valuable data, not a special case at the bottom of the world.
Bake AI into your security operations, not just your business ops
Look for ways AI can help triage logs, surface anomalies and support your security team, in the same way you are already using it to improve sales, service or productivity.
Put security in charge of your AI usage policy
Develop a clear, security-led policy on what staff can share with AI tools, which platforms are approved, and how usage is monitored. Make sure it is explained, not just emailed.
Be realistic about your in-house capability
If your IT team is two or three people, do not pretend you can run a full SOC and become AI experts on top. Explore partnerships or managed security services that can scale where you cannot.
Revisit your incident response plan and personal exposure
Check that your response plan, director responsibilities and insurance settings reflect current threats and law. Directors and partners should explicitly understand their personal liability.
Prioritise safety over convenience with networks and devices
From conference Wi-Fi to working-from-home setups, favour secure configurations and zero-trust principles over whatever is quickest or easiest in the moment.
Educate staff on social engineering and deepfakes
Use real-world examples like the deepfake CFO scam to train your teams. Build verification steps into high-risk processes such as payment approvals and data access changes.
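The verification step in that last action point can be made mechanical rather than left to judgment in the moment. Here is a hypothetical sketch of a payment-approval gate that a deepfake video call alone cannot satisfy; the field names and limits are invented for illustration.

```python
def payment_approved(request):
    """A request seen only on a call (where a deepfake could appear) is not
    enough: it must be within the approver's authority, confirmed out of
    band through an independently known channel, and dual-controlled."""
    return (
        request["amount"] <= request["approver_limit"]  # within authority
        and request["confirmed_out_of_band"]            # e.g. callback to a known number
        and request["second_approver"] is not None      # dual control
    )

deepfake_attempt = {"amount": 2_000_000, "approver_limit": 5_000_000,
                    "confirmed_out_of_band": False, "second_approver": None}
legit_payment    = {"amount": 2_000_000, "approver_limit": 5_000_000,
                    "confirmed_out_of_band": True, "second_approver": "CFO"}

print(payment_approved(deepfake_attempt))  # False: no out-of-band confirmation
print(payment_approved(legit_payment))     # True: all three controls satisfied
```

Whether enforced in software or on paper, the design choice is the same one Johnson’s deepfake story argues for: no single channel, however convincing, should be able to move money on its own.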
