Innovation, Regulation and the Exponential Rise of Gen AI
Dermot Butterfield, CEO and Founder of Wych, delivered a focused and timely presentation on how artificial intelligence is intersecting with regulation, privacy, and the ownership of personal data. His talk emphasised that the rise of generative AI is not just a technological shift, but a regulatory and ethical one, particularly as new laws around consent and data access begin to take hold across the financial, insurance, and utility sectors.
Butterfield unpacked the implications of New Zealand’s upcoming Customer and Product Data (CPD) Bill and the international movement towards user data sovereignty. He argued that businesses can no longer treat data as a static asset to be collected and mined indefinitely. Instead, data access is becoming conditional, purposeful, and subject to revocation at any time. This shift presents real challenges for companies using AI models, especially those trained on datasets that cannot easily be unpicked when a customer revokes consent.
He pointed to the growing complexity of managing consent in environments where AI models are trained on large, blended datasets. Unlike a conventional database, AI systems do not forget when rows are deleted or users are removed. Butterfield questioned what happens when a customer exercises their right to be forgotten after their data has already contributed to a model’s decision logic.
In parallel, he explored the security risks of relying on generative AI in high-stakes processes like document verification. With AI-generated forgeries on the rise, organisations need to move toward machine-verified data sourced directly from trusted institutions rather than depending on static uploads. These changes require not just new tools, but new thinking around responsibility, transparency, and strategic design.
Butterfield’s key message was that the regulatory environment is catching up to the speed of AI, and companies that don’t rework their infrastructure and workflows to align with these shifts will fall behind. Consent is no longer a compliance task to tick off. It is a cornerstone of trust, and trust is now a competitive advantage.
Takeaways
Consent is becoming regulated and specific
The CPD Bill and IPP3A framework are moving businesses toward a model where data access is defined by strict criteria. Consent must now be explicit, time-limited, purpose-driven, and capable of being withdrawn without downstream risk. Organisations will need to design data models and AI systems that respect these conditions from the outset.
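The consent conditions described above can be made concrete in a data model. The sketch below is illustrative only (the class name, field names, and purpose strings are assumptions, not part of the CPD Bill or any Wych product): it shows a grant that is explicit, purpose-bound, time-limited, and revocable at any time.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentGrant:
    """One explicit grant: who, for what purpose, until when."""
    customer_id: str
    purpose: str                 # e.g. "loan-affordability-check"
    granted_at: datetime
    expires_at: datetime
    revoked: bool = False

    def permits(self, purpose: str, at: datetime) -> bool:
        """Access is allowed only for the stated purpose, within the
        time window, and only while the grant has not been revoked."""
        return (not self.revoked
                and purpose == self.purpose
                and self.granted_at <= at < self.expires_at)

# Usage: a 90-day, purpose-bound grant that can be withdrawn at any time.
now = datetime.now(timezone.utc)
grant = ConsentGrant("cust-42", "loan-affordability-check",
                     now, now + timedelta(days=90))
assert grant.permits("loan-affordability-check", now)
assert not grant.permits("marketing", now)                  # wrong purpose
grant.revoked = True
assert not grant.permits("loan-affordability-check", now)   # withdrawn
```

Designing the grant as a first-class record, rather than a boolean flag on a customer row, is what makes purpose limits and withdrawal enforceable downstream.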
AI systems create permanence that clashes with revocable consent
Traditional databases allow rows to be deleted. AI systems do not. Once customer data is used to train a model, it becomes part of that model’s internal structure. This makes it difficult to comply with future requests like “the right to be forgotten.” Companies will need strategies to work around this, including model retraining and more granular data lineage.
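One of the lineage strategies mentioned above can be sketched as a simple mapping from customers to the model versions their data helped train. This is a minimal illustration (the class and identifiers are invented for the example, not taken from the talk): when consent is withdrawn, the mapping tells you which models are affected and must be retrained or corrected.

```python
from collections import defaultdict

class TrainingLineage:
    """Record which customers' data went into which model version,
    so a consent withdrawal can be traced to the affected models."""

    def __init__(self):
        self._models_by_customer = defaultdict(set)

    def record_training(self, model_version: str, customer_ids: list[str]):
        for cid in customer_ids:
            self._models_by_customer[cid].add(model_version)

    def on_revocation(self, customer_id: str) -> set[str]:
        """Return the model versions that must be retrained (or have the
        customer's influence removed) after a withdrawal, and drop the
        customer from the lineage record."""
        return self._models_by_customer.pop(customer_id, set())

# Usage: cust-2 contributed to two model versions; revoking consent
# flags both for retraining.
lineage = TrainingLineage()
lineage.record_training("credit-model-v1", ["cust-1", "cust-2"])
lineage.record_training("credit-model-v2", ["cust-2", "cust-3"])
assert lineage.on_revocation("cust-2") == {"credit-model-v1", "credit-model-v2"}
```

The point is not the data structure itself but the discipline: without per-customer lineage captured at training time, there is no way to honour a deletion request after the fact.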
Re-identification is easier than most assume
Even anonymised data is at risk. Butterfield cited a study showing that 87 percent of individuals can be re-identified using just three points of information. As AI improves pattern recognition, anonymisation will no longer be enough to protect privacy.
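The mechanics behind that statistic are easy to demonstrate: in a dataset with names stripped, the *combination* of a few ordinary attributes often singles out individual rows. The toy example below (invented data, not the study Butterfield cited) counts how many records are uniquely identified by just three fields.

```python
from collections import Counter

def unique_fraction(records: list[dict], keys: tuple[str, ...]) -> float:
    """Fraction of records whose combination of quasi-identifier values
    is unique in the dataset, and therefore trivially re-identifiable."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(combos[tuple(r[k] for k in keys)] == 1 for r in records)
    return unique / len(records)

# Toy "anonymised" dataset: names removed, three attributes kept.
people = [
    {"postcode": "6011", "birth_year": 1984, "sex": "F"},
    {"postcode": "6011", "birth_year": 1984, "sex": "M"},
    {"postcode": "6021", "birth_year": 1990, "sex": "F"},
    {"postcode": "6021", "birth_year": 1990, "sex": "F"},
]
# Two of the four rows are singled out by just three fields.
assert unique_fraction(people, ("postcode", "birth_year", "sex")) == 0.5
```

As datasets grow richer and AI gets better at cross-referencing them, the set of attributes that acts as a fingerprint only gets smaller.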
Shift from document uploads to data verification via APIs
Fake payslips, bank statements, and identification documents generated by AI are becoming indistinguishable from legitimate ones. To counter this, businesses need to switch to API-based access to trusted data sources such as banks or insurance providers. This ensures that the data being used is accurate, verified, and not fabricated.
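The difference between an upload and verified data is that the latter can carry proof of origin. The sketch below is a simplified illustration, not a real bank API: it assumes the trusted source signs each record with a shared secret (real schemes would typically use public-key signatures and standard protocols), so a tampered or fabricated record fails verification.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret issued by the data source (e.g. a bank).
SOURCE_KEY = b"demo-shared-secret"

def sign_record(record: dict) -> str:
    """What the trusted source does at origin: sign the canonical record."""
    body = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SOURCE_KEY, body, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """What the receiving business does: check the record really came
    from the source, instead of trusting an uploaded PDF."""
    body = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SOURCE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Usage: the genuine record verifies; a doctored salary figure does not.
record = {"account": "12-3456-7890123-00", "salary_monthly": 7200}
sig = sign_record(record)
assert verify_record(record, sig)
assert not verify_record({**record, "salary_monthly": 9999}, sig)
```

An AI-generated payslip can mimic a document's appearance, but it cannot produce a valid signature from the institution, which is why machine-verified data closes the forgery gap that uploads leave open.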
Operationalising trust builds long-term advantage
Trust will be the key differentiator for businesses using AI. Customers are more likely to share data when they understand how it will be used, who will access it, and how it will benefit them. Butterfield urged companies to communicate this clearly and treat consent as an ongoing relationship, not a one-time signature.
AI agents should reduce friction, not just replace tasks
Automating document collection or form filling is not enough. The real opportunity lies in reducing the cognitive burden on users. One example shared was mortgage applications, where drop-off rates fall sharply when users are not asked to repeat information already shared elsewhere in the process.
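The friction reduction described above amounts to a simple rule: never ask for a field the journey has already captured. A minimal sketch (the field names are invented for the example):

```python
def prefill(form_fields: list[str], known: dict) -> tuple[dict, list[str]]:
    """Fill a form from data the user has already supplied elsewhere in
    the journey, and return only the fields that still need asking."""
    filled = {f: known[f] for f in form_fields if f in known}
    missing = [f for f in form_fields if f not in known]
    return filled, missing

# Data captured earlier in the application process.
already_supplied = {"full_name": "A. Customer", "income": 92000}
filled, to_ask = prefill(["full_name", "income", "property_address"],
                         already_supplied)
assert to_ask == ["property_address"]   # the user is asked once, not twice
```

Each repeated question is a drop-off opportunity; carrying known answers forward is the cheapest form of friction reduction available.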
Speed and accuracy must go hand in hand
Butterfield cautioned against focusing solely on AI for speed. He gave the example of a client who could process loan applications in 24 hours, but still lost customers during the process. Real-time scoring, better onboarding, and verified data all help improve both speed and retention.