AI & Cybersecurity – What You Need to Know
Simon Jordon, cybersecurity consultant at Resilient IT, gave the audience a mild panic attack with his talk on how AI is accelerating both the sophistication and scale of cyber threats. He described a shift in the threat landscape, where generative tools are no longer theoretical risks but active enablers of fraud, phishing, and impersonation. What once required coordinated human effort can now be executed faster and more convincingly by automated systems.
Jordon began with examples of how AI-generated content is being used in social engineering attacks. He demonstrated how voice cloning tools can replicate someone’s speech patterns with just a few seconds of audio and how deepfake video can now convincingly mimic trusted figures. One case he cited involved a financial transaction of 25 million dollars triggered by an AI-generated video call from a fake executive. The audio and video both passed internal checks because they sounded and looked real enough. “The phone call sounds like me. The video looks like me. But it is not me,” he warned.
Beyond external threats, Jordon emphasised the internal risks many organisations overlook. These include the rise of unsanctioned AI tools being used by staff, often without formal policy or awareness, as well as the danger of proprietary data being unintentionally submitted to public large language model platforms. He likened this phenomenon to the old issue of shadow IT, where employees would adopt tools outside official channels. In this new context, shadow AI has the added risk of contributing valuable internal data to third-party models.
He outlined the slow emergence of regulation, referencing the EU AI Act and its risk-based framework, and made the case for proactive self-regulation in the meantime. ISO standards, such as ISO/IEC 27001 for information security and the newer ISO/IEC 42001 for AI governance, provide a template for organisations looking to create internal guardrails. These standards help define responsibilities, assess exposure, and manage AI risks in an organised way.
Jordon’s broader message was that companies must integrate cybersecurity into their AI development and adoption strategies from the start. Treating it as an afterthought will not hold up in a world where threats are not only growing, but learning. As he noted, “Every organisation is vulnerable somewhere. If you call twenty people, odds are one of them will give you the keys to the front door.”
Takeaways
AI is transforming the speed and precision of cyberattacks
Tools like ChatGPT and voice synthesisers are already being used to automate phishing campaigns, impersonate executives, and carry out fraud. These methods are faster, more targeted, and harder to detect than traditional attacks.
Deepfake fraud is now a real operational threat
Jordon shared an example where a company was manipulated into transferring millions after receiving a video call from a synthetic version of a known executive. Traditional verification methods, such as recognising someone’s voice or face, are no longer enough.
Internal exposure often begins with good intentions
Employees using generative AI tools to boost productivity may inadvertently paste customer data or confidential IP into chat interfaces or code generators. These actions can violate privacy laws or leak strategic information if the tools are not properly vetted.
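As a rough illustration of the kind of vetting Jordon alluded to, the sketch below shows a minimal pre-submission check that scans outbound text for common sensitive identifiers before it is forwarded to an external AI service. This is a simplified, hypothetical example: the pattern list and the submit_to_llm function are assumptions for illustration, not details from the talk or any specific product.

```python
import re

# Hypothetical patterns for common sensitive identifiers. A real deployment would
# rely on an organisation-specific data classification service, not a few regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def detect_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def submit_to_llm(prompt: str) -> None:
    """Vet a prompt before it leaves the organisation's boundary."""
    findings = detect_sensitive(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return
    print("Forwarding vetted prompt to the approved AI service...")

submit_to_llm("Summarise this complaint from jane.doe@example.com about her invoice.")
submit_to_llm("Draft a polite reply declining the meeting request.")
```

Even a basic gate like this changes the default from "anything can be pasted anywhere" to "outbound text is checked against policy first", which is the behavioural shift Jordon was arguing for.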
The regulatory environment is developing but not yet consistent
The European Union is leading with its AI Act, which categorises systems into four levels of risk. While New Zealand lacks formal AI-specific regulation, many businesses will need to prepare for similar requirements due to international partnerships.
International standards can offer structure while legislation catches up
Jordon recommended organisations consider adopting ISO/IEC 42001 to structure their AI governance strategy. It aligns well with ISO/IEC 27001, which many companies already use for their broader information security management.
Shadow AI is emerging as a critical weak point
Much like shadow IT in the past, employees are now adopting AI tools without formal oversight. This makes it difficult to manage exposure, ensure compliance, or detect where sensitive data may be going.
Real-time monitoring and controls are more effective than after-the-fact alerts
Waiting for a dashboard to report an issue is too late in an environment where AI threats move quickly. Jordon argued for proactive guardrails, such as spend limits, access restrictions, and automated policy enforcement.
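To make the idea of proactive guardrails concrete, here is a minimal, hypothetical sketch of automated policy enforcement sitting in front of AI tool usage: an approved-endpoint list and a per-user spend cap are checked before a request is forwarded, rather than flagged on a dashboard afterwards. The endpoint, limit, and function names are illustrative assumptions, not details from the talk.

```python
from dataclasses import dataclass

# Illustrative policy values; real limits would come from organisational policy.
APPROVED_ENDPOINTS = {"https://ai.internal.example/chat"}  # sanctioned tools only
DAILY_SPEND_LIMIT_NZD = 50.0                               # per-user cap

@dataclass
class UserUsage:
    spend_today_nzd: float = 0.0

def enforce_policy(user: UserUsage, endpoint: str, estimated_cost_nzd: float) -> bool:
    """Apply guardrails before a request is forwarded, instead of alerting afterwards."""
    if endpoint not in APPROVED_ENDPOINTS:
        print("Blocked: endpoint is not on the approved list (possible shadow AI).")
        return False
    if user.spend_today_nzd + estimated_cost_nzd > DAILY_SPEND_LIMIT_NZD:
        print("Blocked: request would exceed the daily spend limit.")
        return False
    user.spend_today_nzd += estimated_cost_nzd
    return True

user = UserUsage()
enforce_policy(user, "https://ai.internal.example/chat", estimated_cost_nzd=1.20)  # allowed
enforce_policy(user, "https://random-ai-tool.example", estimated_cost_nzd=0.50)    # blocked
```

The design point is that the control runs in the request path, so the block happens at the moment of risk rather than in a report reviewed days later.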
Cybersecurity is not just a technical issue but a cultural one
Building a secure environment requires company-wide awareness and shared responsibility. Training, simulations, and clear internal policies are essential. Without a culture of security, even the most advanced tools can be rendered ineffective by human error.

