AI & Cybersecurity – What You Need to Know
Mantel Group’s Adam Durbin and Natalie Rouse tackled the intricate relationship between AI and cybersecurity, blending reality checks with strategic insights.
Durbin, the CTO of Mantel Group, kicked off the keynote with a call to action. “We’re here to educate you on AI, but more importantly, on AI security,” he announced. Durbin didn’t sugarcoat the risks. He laid out a stark vision of how AI, while a game-changer for business productivity, is also a powerful tool for adversaries.
“Phishing is no longer about bad grammar or obvious scams. With AI, attacks are now polished and almost undetectable,” Durbin explained. He illustrated this with a chilling example: deepfake technology was used in Hong Kong to impersonate colleagues on a video call, leading to a fraudulent transfer of $25 million. “It’s no longer science fiction,” Durbin warned. “The future is here, and it’s both impressive and terrifying.”
Natalie Rouse, Mantel Group’s Client Engagement Lead in New Zealand, highlighted the importance of responsible AI use. “We need to ensure AI serves our people and our unique environment,” Rouse asserted, referencing New Zealand’s Trustworthy AI Principles, which emphasise human oversight and accountability.
Rouse recounted a recent ethical misstep involving OpenAI, which released a chatbot voice strikingly similar to Scarlett Johansson’s after the actress had declined to lend her voice. “This is a clear violation of explicit consent and highlights how even the good guys can slip if they don’t hold themselves accountable to their principles,” she said.
Durbin and Rouse didn’t just highlight the problems; they offered solutions. Durbin emphasised that while AI security practices mirror general cybersecurity, they require additional layers of protection. “Best practices for security don’t change for AI; they just get more complex,” he noted.
Rouse detailed the vulnerabilities that arise across AI development and deployment, from data poisoning (corrupting training data to skew a model’s behaviour) to evasion attacks (crafting inputs designed to slip past a model’s safeguards). “Your standard data protection and security controls will account for a lot of these vulnerabilities,” she reassured the audience. Rigorous testing and standard security measures, she suggested, go a long way towards safeguarding AI applications.
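To make that concrete, here is a minimal sketch (in Python, with a hypothetical dataset and thresholds) of two such standard controls applied to training data before it reaches a model: checksum verification to catch tampering with a stored dataset, and a robust outlier screen that can surface a batch of injected records. It illustrates the principle only; it is not a complete defence against poisoning.

```python
import hashlib
from statistics import median

def verify_dataset(path: str, expected_sha256: str) -> bool:
    """Compare a dataset file against a checksum recorded at ingestion,
    so silent tampering is caught before the data ever reaches training."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

def flag_outliers(values: list[float], threshold: float = 3.5) -> list[int]:
    """Flag records far from the median, scaled by the median absolute
    deviation (MAD), which a few injected extremes cannot easily skew."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - med) / (1.4826 * mad) > threshold]

if __name__ == "__main__":
    # Hypothetical feature column with one suspiciously extreme, injected value.
    amounts = [102.0, 98.5, 101.2, 99.8, 100.4, 5000.0]
    print("suspect rows:", flag_outliers(amounts))  # -> suspect rows: [5]
```

The point is Rouse’s: neither check is AI-specific, yet both blunt a class of AI-specific attacks.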
To ground their points in reality, Rouse shared case studies of AI adoption across industries. A major bank, for example, used AI to streamline the validation of security controls during a cloud transformation, drastically speeding up the process. An energy provider integrated a chatbot to unlock their knowledge base, achieving over 90% accuracy in responses.
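The keynote did not go into how such an accuracy figure is measured, but the simplest version of that evaluation looks something like the sketch below: score the chatbot’s answers against a curated question-and-answer set. The eval set, the `chatbot` stub, and the exact-match scoring rule are all illustrative assumptions; production evaluations use larger sets and fuzzier, often human, grading.

```python
# Hypothetical evaluation set; a real one is larger and domain-specific.
EVAL_SET = [
    ("What voltage is standard in NZ homes?", "230 volts"),
    ("Who do I call about an outage?", "your local lines company"),
]

def normalise(text: str) -> str:
    return " ".join(text.lower().split())

def accuracy(answer_fn, eval_set) -> float:
    """Fraction of questions where the chatbot's answer matches the reference.
    Exact matching after normalisation is a naive stand-in for human grading."""
    hits = sum(normalise(answer_fn(q)) == normalise(ref) for q, ref in eval_set)
    return hits / len(eval_set)

# `chatbot` is a placeholder for the deployed system under test.
chatbot = lambda q: "230 volts" if "voltage" in q.lower() else "call support"
print(f"accuracy: {accuracy(chatbot, EVAL_SET):.0%}")  # -> accuracy: 50%
```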
Rouse advocated for a cautious, iterative approach to AI adoption. “Start with a small, controlled use case with a restricted group of educated users while you experiment and understand the pitfalls,” she advised. This strategy lets businesses broaden their use of AI gradually, expanding their risk appetite as confidence in their systems grows.
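As an illustration of what “a restricted group of educated users” can mean in practice, the sketch below gates a hypothetical AI feature behind a pilot allowlist. The user IDs and function names are placeholders, and a real deployment would typically use a feature-flag service rather than a hard-coded set.

```python
# A minimal sketch of gating an AI feature to a pilot allowlist.
PILOT_USERS = {"alice@example.com", "bob@example.com"}  # educated, opted-in users

def ai_assistant_enabled(user_email: str) -> bool:
    """Only the restricted pilot group sees the AI feature while the
    organisation experiments and learns where the pitfalls are."""
    return user_email in PILOT_USERS

def handle_query(user_email: str, query: str) -> str:
    if not ai_assistant_enabled(user_email):
        return "The AI assistant is not yet available for your account."
    # ... call the model here, and log the interaction for later review ...
    return f"[AI draft answer to: {query!r}]"

print(handle_query("alice@example.com", "summarise our leave policy"))
print(handle_query("carol@example.com", "summarise our leave policy"))
```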
Wrapping up the keynote, Rouse recommended rigorous testing, paired with curiosity, to surface latent biases in AI systems. Durbin highlighted the importance of asking key questions during vendor due diligence, focusing on data collection and protection. “Ask them what data they are collecting, how they de-identify that data, and how they protect their systems,” he advised.
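Rouse’s call for rigorous testing can be made concrete with a simple paired-prompt probe: run the same input through the system with only a name (or other protected attribute) swapped, and compare the outcomes. The sketch below uses a deliberately biased placeholder model and a hypothetical tolerance so there is something for the probe to catch; the model, the names, and the threshold are all assumptions.

```python
def score_application(text: str) -> float:
    """Deliberately biased placeholder model, so the probe has something to
    catch; in practice this would call the real system under test."""
    return 0.7 if text.startswith(("James", "Oliver")) else 0.5

TEMPLATE = "{name} has five years' experience as a software engineer."
NAME_PAIRS = [("James", "Aroha"), ("Oliver", "Mei")]

for name_a, name_b in NAME_PAIRS:
    a = score_application(TEMPLATE.format(name=name_a))
    b = score_application(TEMPLATE.format(name=name_b))
    if abs(a - b) > 0.05:  # tolerance is an assumption; tune per use case
        print(f"potential bias: {name_a}={a:.2f} vs {name_b}={b:.2f}")
    else:
        print(f"ok: {name_a} and {name_b} scored within tolerance")
```

Low-tech as it is, a paired test like this is often the quickest way to turn curiosity into evidence.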
The overall takeaway: embrace AI, but do so with eyes wide open and a robust security strategy in place. As Durbin aptly put it, “Let’s adopt AI, but adopt it safely.”