Key Areas Businesses Can Start Today That Are Simple & Effective
Andrew Nicol, Founder, Preductive
Through Preductive, Andrew works directly with CEOs and business owners to unlock clarity, establish solid foundations, and run hands-on “agentic sprints” that deliver AI-driven solutions within weeks, not years. Last year he interviewed 100 senior leaders to understand their challenges, their rates of AI adoption and the state of the market. His approach is software-agnostic, practical, and always centred on amplifying people rather than replacing them.
Andrew Nicol’s keynote is a straight-talking guide for leaders who are serious about productivity but don’t want to lose sight of the humans doing the work. He frames the talk around more than 120 coffees he has had with senior leaders over the last year, and says that regardless of sector they keep circling the same three questions: what the future of leadership looks like in an AI-saturated world, how to make their teams genuinely more productive, and where AI really fits beyond pilots and hype.
He then lays out the problem with some uncomfortable numbers. Over the last decade, labour productivity across the OECD has grown by just 0.8 percent, and New Zealand is basically on that flat line, despite massive technology investment. Locally, medium-sized businesses are around 21 percent less productive than corporates. At the same time, a recent AI Forum survey reports that 91 percent of businesses say they are getting productivity gains from AI, which he gently skewers by pointing out that most of them don’t actually measure productivity. MIT adds another reality check: 95 percent of AI projects are failing. For Nicol, this gap between what organisations say about AI and what they can prove is the real challenge.
To make “agentic AI” less mysterious, he describes a simple two-axis map. On the vertical axis sits integration, which in practice means APIs, wiring tools together, and all the hidden complexity that IT people politely call “integration.” On the horizontal axis is autonomy, or how much work you let the AI do on its own. At the bottom left sits the basic large language model, a text prediction machine that turns words into numbers, runs algebra and calculus over those vectors, and then turns them back into language we can read. It is useful but very manual. As you move to the right you get into agents, small semi-autonomous tools that can act on your behalf.
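To make the “text prediction machine” description concrete, here is a toy sketch of the words-to-numbers-to-words loop he describes. Everything in it is illustrative: the five-word vocabulary and random vectors stand in for the billions of learned parameters in a real model, so the point is the shape of the computation, not the prediction itself.

```python
import numpy as np

# A five-word "language" and a 4-dimensional vector per word. Real models learn
# billions of these numbers during training; here they are random placeholders.
vocab = ["the", "cat", "sat", "on", "mat"]
embeddings = np.random.default_rng(0).normal(size=(len(vocab), 4))

def predict_next(context_words: list[str]) -> str:
    # Words become numbers: look up each context word's vector and average them.
    ids = [vocab.index(w) for w in context_words]
    context_vec = embeddings[ids].mean(axis=0)
    # Maths runs over the vectors: score every vocabulary word against the context.
    scores = embeddings @ context_vec
    # Numbers become words again: return the highest-scoring token as text.
    return vocab[int(np.argmax(scores))]

print(predict_next(["the", "cat"]))  # untrained vectors, so the output is arbitrary
```

A production model adds training, attention and scale, but the loop is the same shape; the “very manual” part is that a human still has to drive it one prompt at a time.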
He uses his own working life as an example of the next step along that axis. In the car he can dictate a quick instruction such as “send Tim an email about the AI Summit,” and an agent will draft the email in his CEO tone, complete with bullet-point structure and a sign-off as “A”, without him touching a keyboard. Further along, he describes a multi-step agentic flow that manages his consulting billing: he logs short voice notes after each CEO meeting, and at month end an agentic workflow ingests the notes, structures them, sends segments out for polishing, builds line items and pushes draft invoices into Xero. The point is not that this is science fiction; it is a very current example of reasoning and action chained together across tools in a way that saves real hours.
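For readers who want to picture that billing flow, here is a minimal sketch of a pipeline with the same shape. It is hypothetical, not his actual stack: `transcribe`, `llm`, `parse_line_item` and `push_draft_invoice` are placeholder names standing in for a speech-to-text service, a language-model API, an output parser and the Xero invoicing API respectively.

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    client: str
    description: str
    hours: float

def transcribe(voice_note_path: str) -> str:
    """Placeholder for a speech-to-text call."""
    ...

def llm(prompt: str) -> str:
    """Placeholder for a language-model API call."""
    ...

def parse_line_item(text: str) -> LineItem:
    """Placeholder for turning the model's structured output into a LineItem."""
    ...

def push_draft_invoice(client: str, items: list[LineItem]) -> None:
    """Placeholder for the Xero call that creates a draft (not sent) invoice."""
    ...

def month_end_billing(voice_notes: list[str]) -> None:
    by_client: dict[str, list[LineItem]] = {}
    for note in voice_notes:
        raw = transcribe(note)  # ingest the post-meeting voice note
        structured = llm(f"Extract the client, work done and hours from:\n{raw}")
        polished = llm(f"Rewrite this as a concise invoice line item:\n{structured}")
        item = parse_line_item(polished)
        by_client.setdefault(item.client, []).append(item)
    for client, items in by_client.items():
        push_draft_invoice(client, items)  # drafts only; a human approves in Xero
```

The design choice worth copying is the last line: the flow stops at draft invoices, keeping a human approval step between the agent and the client.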
In the top-right corner of his map are deeply integrated, highly autonomous systems – the kind of thing enterprises fantasise about when they talk about end-to-end automation. Nicol recalls earlier work using years of workforce and geospatial data to predict staff reliability based on commute distance, and notes how much faster that sort of analytics could be with today’s tools. But he also warns that it is in this quadrant that most failures happen. Organisations jump from “we’ve played with ChatGPT” straight to complex, expensive, heavily integrated agentic systems, often at the urging of vendors, and end up as part of that 95 percent failure statistic.
Cost, he suggests, is not the true barrier. The core language models are effectively free or extremely cheap to experiment with. Simple agents sit in the realm of modest licence fees. The big-ticket items are the highly integrated, autonomous systems, which have already dropped from the hundreds of thousands of dollars into the thousands or tens of thousands, and will likely continue to fall. The real constraint is what he calls the learning gap: leaders and teams are not being taken on a structured journey from basic tools and prompts through to more advanced agentic workflows.
To close that gap, Nicol offers three big levers: clarity, experimentation and humanity. Clarity, for him, starts with replacing “butt-covering policies” that nobody reads with practical frameworks people will actually use. He jokes that most policies exist so that, when something goes wrong, someone can point at a PDF and say “we covered ourselves,” but that is not good enough for AI. Instead, he wants leaders to give their people a one-page view of what tools are allowed, how data will be protected, where they are encouraged to experiment, and how a scrappy experiment becomes something supported and maintained.
On data protection, he demystifies some of the jargon. In his framing, a tool learning from your data means the model can absorb what you put in and potentially surface it to other users; training on your data means your examples are deliberately used to improve the model’s capabilities. He is relaxed about training in the right context but very wary of unwanted learning and leakage. As a rule of thumb, he says, you move to paid “pro” or “team” versions of tools such as ChatGPT, Copilot or Gemini, configure them properly, and stop them learning from your proprietary information. Otherwise, staff will simply use whatever is easy – the “shadow AI” phenomenon he illustrates with a story about his daughter at a large insurance broker, filling out a survey about whether the company should use AI while colleagues on either side were quietly using it already.
Experimentation, in his view, has to be deliberate and resourced. If you roll out AI with no training and no time to learn, you are planning to fail. He argues that every person with a computer should have access to a paid AI tool, and that the economics are almost embarrassingly simple: at a salary of around $100,000, someone only needs to save about ten hours a year – roughly half a percent of their working time – for the licence to pay for itself (the arithmetic is sketched after this paragraph). He also wants AI use and training to be an expectation, not an optional hobby for tech-curious staff, with specific time carved out to experiment. His preferred pattern is the six-week AI sprint: a small group spends two to three hours a week on a clearly defined problem, forms a hypothesis, designs a solution, builds a simple agentic flow, tests it and either deploys it or loops back and starts again.
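That licence arithmetic is easy to check. In the sketch below, the $100,000 salary and ten-hour threshold are his figures; the 2,000 working hours and $500 annual licence cost are round-number assumptions for illustration.

```python
# Back-of-the-envelope check on the licence economics.
salary = 100_000                      # NZD per year (his example figure)
working_hours = 2_000                 # assumed: ~40 hours/week, 50 weeks
hourly_cost = salary / working_hours  # $50 per hour
licence_cost = 500                    # assumed annual cost of a paid AI seat

breakeven_hours = licence_cost / hourly_cost
print(f"Break-even: {breakeven_hours:.0f} hours/year, "
      f"or {breakeven_hours / working_hours:.1%} of working time")
# Break-even: 10 hours/year, or 0.5% of working time
```

On those assumptions, anything beyond ten saved hours a year is pure gain, which is the force of his “almost embarrassingly simple” framing.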
To keep those experiments grounded, he advocates separating problem meetings from solution meetings. In the first, no tools or vendors are allowed, only a deep dive into what the problem actually is and how the work is currently done. Only in later sessions do you move to solutions. He plays with the audience by referencing “Sumi’s Law” – that most solutions are just problems waiting to happen – and then admits he invented it, just to show how easily people cling to neat-sounding ideas instead of examining the real situation.
Failure, he says, needs to be normalised. Nicol tells the story of WD-40, whose name comes from “Water Displacement” and the fact that it was the fortieth attempt after 39 failures. In other words, small repeated failure is literally baked into the brand. He challenges leaders to ask themselves whether they would let a team fail five times and then keep backing them, or whether they would shut the project down. Some of his own AI experiments, he admits, felt like a waste for weeks, only for the learning to be repurposed later.
The final lever, humanity, is a warning not to forget that there is always a person on the other side of whatever AI you deploy. He has seen organisations jump straight to chatbots and automation without asking whether customers actually want to deal with a bot for that task, especially when even cutting-edge models can still hallucinate a meaningful percentage of the time. He combines this with a blunt reminder from habit science and neuroscience: humans are creatures of habit, and changing behaviour is “insanely hard.” Leaders who can’t stick to their own goals – to get to the gym more often, to eat better, to read more – are often the first to assume that staff will happily change long-entrenched work habits just because a new tool has appeared.
To help with this, he outlines a simple behavioural framework. Start with the “when” and “why”: the purpose and outcomes you want. Then identify the specific habits, good and bad, that will have to change for those outcomes to be possible. Next, understand the ecosystem of systems, culture and constraints those habits live inside. Only then do you choose and deploy tools, and you keep nurturing them so they embed into real routines. In other words, clarity and experimentation only work if you design for human reality, not just for technology.
Nicol closes on a deeply personal note. He shares the gist of a letter he wrote to his three adult children about the world they are stepping into, a world in which we will increasingly struggle to tell whether what we see on a screen was created by a human. In that world, he believes two things will matter more than ever: people who not only spot problems but stick around to solve them, and people who can build genuine human connection, laughter and aroha in the middle of all the synthetic noise. His advice to them – and to the audience – is to ask questions continuously, to listen intently, to hold beliefs loosely, to share knowledge and help generously, to be authentic and to bring joy, while recognising that many people are already struggling with this transition and need support. He defines productivity not as working more hours but as “the ability to create more value in a given hour,” and says he is optimistic about New Zealand’s ability to lift that value with AI, even as he admits he is “scared like crazy” about what happens if we get the human side wrong.
Action points:
Define productivity in plain language
Agree internally that productivity means creating more value per hour, not working longer. Use that definition as the test for every AI project.
Sketch your own agentic AI map
Draw the integration-versus-autonomy axes and honestly mark where you are now (mostly prompts) and where you want to be in 6–12 months. Avoid leaping straight to heavily integrated, fully autonomous systems.
Give every knowledge worker a paid AI tool
For anyone who works at a computer, provide a secure, paid AI licence and set a minimum expectation that they save at least ten hours a year using it.
Bring shadow AI into the open
Assume people are already using AI informally. Reduce risk by giving them sanctioned tools and clear guardrails instead of pretending it is not happening.
Run structured six-week AI sprints
Pick a real business problem, put a small cross-functional team on it for two to three hours a week, and have them design, test and either deploy or discard an AI-powered workflow.
Separate problem discovery from solution design
Hold one session purely to map the current process and pain points. Only after that do you allow tools, vendors and solution ideas into the conversation.
Normalise repeated, small-scale failure
Use examples like WD-40 to celebrate teams who can show what they tried, what they learned and how they iterated, rather than only rewarding flawless success.
Keep the end human front and centre
Before deploying a bot or agent, ask whether you would want to interact with it in that situation, pilot with real users, and keep humans in the loop for high-stakes decisions.
Promote curiosity and connection as core skills
Reward people who ask good questions, share what they learn, and build real human connection. Treat those traits as just as important as technical ability.
Use AI to buy back human time
Make it explicit that the goal of automation is to free up more time for customer conversations, creative work, mentoring and leadership – the things only humans can do well.
