Innovation, Regulation and the Exponential Rise of Gen AI
Cowan Henderson, Founder, Avocado AI
Cowan focuses on bridging the gap between cutting-edge AI technology and the people who use it. His approach is highly practical and people-centric: rather than overwhelming teams with tools, he empowers them with the habits, safety frameworks and playful experimentation that make AI adoption sustainable. Cowan is known for turning abstract concepts into hands-on learning experiences.
Cowan Henderson’s session sits right at the junction of innovation, regulation, and the very human reality of how teams actually work. As founder of Avocado AI, he’s not building models or platforms; he’s building AI-ready people. His background is in business development, strategy, and digital adoption, helping Kiwi and Australian businesses bring new tech into the fold without losing their minds or their culture. At Avocado AI he now designs training and habits that make AI practical and sustainable in everyday work, rather than a one-off “demo day” that everyone forgets.
In his M2 AI Summit keynote, Innovation, Regulation and the Exponential Rise of Gen AI, Henderson starts from a simple but uncomfortable premise: generative AI is racing ahead exponentially, but most organisations are still crawling culturally. Tools are getting smarter, regulators are scrambling to keep up, and many teams are stuck in pilot purgatory because their habits, safety frameworks, and leadership mindset haven’t moved at the same pace. In his framing, AI adoption is “90 percent people, 10 percent tools” – and most companies invert that ratio, obsessing over model names and vendors while leaving culture and behaviour as an afterthought.
He spends time unpacking that 90 percent. Adoption, he argues, is rarely blocked by capability. The tools are powerful, often cheap, and readily available. What holds businesses back is fear, vague policy, and a lack of shared understanding about what “safe and smart” use actually looks like. Leaders worry about privacy, hallucinations, regulatory lag and reputational risk, but respond with either blanket bans or woolly statements instead of clear, practical guardrails. Frontline staff, meanwhile, default to whatever is easiest – including unsanctioned consumer tools – because they still have jobs to do. Henderson’s answer is to bring people, policy and practice together: make AI safety tangible, invite teams into the design of those guardrails, and then actively coach new habits rather than just issuing a PDF.
A big chunk of the session is deliberately hands-on. Rather than talk abstractly about “prompt engineering,” he walks through a simple six-block prompt framework that anyone can use. The blocks cover things like role, objective, constraints, tone, inputs, and outputs – in other words, how to brief an AI system with the same clarity you’d use to brief a good colleague. He uses that structure live to show how you can move from messy questions to repeatable, high-quality outputs, and he extends it into custom instructions and “house style” so teams can bake their own context and preferences into every interaction with a model. The point isn’t to turn everyone into a prompt nerd; it’s to teach people a pattern they can re-use across tools.
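To make the pattern concrete: Henderson's exact wording for each block isn't reproduced here, so the following is a minimal sketch of the idea, not his slide. It assembles the six blocks (role, objective, constraints, tone, inputs, outputs) into one structured prompt; the class name, field names and example content are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class PromptBrief:
    """Six-block prompt structure: brief an AI the way you'd brief a colleague.
    Names are illustrative -- the session describes the blocks, not an API."""
    role: str         # who the AI should act as
    objective: str    # what you want done
    constraints: str  # limits and things to avoid
    tone: str         # voice and register of the output
    inputs: str       # the raw material to work from
    outputs: str      # the shape/format of the result

    def render(self) -> str:
        # Join the blocks into one clearly labelled prompt.
        return "\n\n".join([
            f"ROLE: {self.role}",
            f"OBJECTIVE: {self.objective}",
            f"CONSTRAINTS: {self.constraints}",
            f"TONE: {self.tone}",
            f"INPUTS:\n{self.inputs}",
            f"OUTPUT FORMAT: {self.outputs}",
        ])

brief = PromptBrief(
    role="You are a marketing copywriter for a small NZ accounting firm.",
    objective="Draft a 150-word client email announcing our new online portal.",
    constraints="No jargon; don't promise features that aren't listed below.",
    tone="Warm, plain-English, professional.",
    inputs="Portal launches 1 July. Clients can upload documents and view invoices.",
    outputs="A subject line plus email body, ready to paste.",
)
print(brief.render())
```

The value of the pattern is exactly what Henderson stresses: the same six labelled blocks can be pasted into any chat tool, so the habit transfers across vendors.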
From there, Henderson shows how to convert that framework into something more durable: a tailored GPT or equivalent custom assistant. On stage he demonstrates building a simple but specific GPT that knows a particular business context, applies the six-block structure by default, and behaves as a reusable “digital colleague” instead of a blank chat box. For many in the room, that’s the leap from “AI as occasional assistant” to “AI as part of the workflow.” He reinforces that the real magic is not in the UI, but in the thinking teams do upfront about who the GPT is for, what problems it’s solving, what it should never do, and how its outputs will be checked.
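On stage this is done in a GPT builder's web interface rather than in code, but the same "digital colleague" idea can be approximated with a few lines against a chat API. The sketch below uses the OpenAI Python SDK and bakes the context, house style and never-do rules into a persistent system message; the company name, model choice, instructions and function name are all placeholders, not Henderson's actual demo.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "custom GPT" idea, approximated in code: bake who it's for, house
# style, the six-block defaults and hard limits into a system message so
# they apply to every request -- a reusable colleague, not a blank chat box.
HOUSE_ASSISTANT = """\
You are the in-house writing assistant for Acme Ltd (a hypothetical company).
Apply this structure by default unless told otherwise:
- Role: you write as Acme's communications team.
- Constraints: never include client names or financial figures; flag anything
  you are unsure about for human sign-off rather than guessing.
- Tone: warm, plain English, NZ spelling.
- Output: a one-line summary first, then the draft.
"""

def ask_house_assistant(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you're licensed for
        messages=[
            {"role": "system", "content": HOUSE_ASSISTANT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask_house_assistant("Rewrite this for our newsletter: we moved offices."))
```

Note where the upfront thinking lands: who the assistant is for, what it must never do, and how outputs get checked all live in the system message, which is the point Henderson makes about the magic being in the brief rather than the UI.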
He keeps circling back to the practical, almost playful side of adoption. Henderson is a big believer in low-friction experimentation and small daily habits over grand transformation programmes. Phone-based AI hacks are one of his favourite on-ramps: using voice notes to draft emails, summarise meetings, rewrite tricky messages, or translate on the fly. These are the sorts of quick wins people can feel immediately. For him, they serve two purposes. First, they build trust – staff see that AI can genuinely save them time without blowing anything up. Second, they start rewiring behaviour: opening an AI app becomes as natural as opening email, and “thinking with AI” becomes a habit, not a special event.
Action points:
Frame AI as 90% people, 10% tools
In leadership conversations, start with culture, habits and safety, then choose tools that fit – not the other way around. Make it explicit that adoption is a human challenge first.
Co-design your AI safety guardrails
Bring frontline staff into workshops to define what “safe, smart AI use” looks like in your context: which data is off-limits, where human sign-off is required, and how to report issues. Turn that into a living playbook, not just a one-time policy.
Teach a simple prompt framework to everyone
Roll out a six-block prompt structure (role, objective, constraints, tone, inputs, outputs) in internal training so people have a shared pattern for talking to AI across different tools.
Create one or two tailored GPTs for your organisation
Start with obvious use cases – e.g. writing in your brand voice, summarising internal docs, or answering FAQs from your own knowledge base – and build custom assistants that encode your style, rules and context.
Use phone-based AI hacks as an on-ramp
Encourage teams to use voice notes and mobile apps for small tasks like drafting replies, summarising meetings, or translating. Celebrate these early wins to normalise daily AI use.
Bake AI habits into everyday workflows
Add “have you tried this with AI?” checkpoints into existing processes – marketing reviews, reporting cycles, project kick-offs – so experimentation becomes routine rather than optional.
Get ahead of regulation with your own standards
Don’t wait for AI-specific law to arrive. Set internal rules for transparency, explainability, data retention and audit trails that reflect your brand values and risk profile, and review them regularly.
Invest in AI education, not just licences
Budget for workshops, coaching and follow-up sessions that build AI fluency and confidence at all levels, from executives to frontline staff, so tools don’t sit idle.
Identify and empower your AI champions
Look for people already experimenting with AI in their day-to-day work, regardless of job title. Give them time, recognition and a mandate to share what they’re learning across the business.
