3 Years of AI Madness and What It Means for Business
Dr Peter Catt, Director of Quantum & AI, Virtual Blue
Peter spearheads the integration of advanced AI, machine learning, and automation into business strategies and solutions. Holding a doctorate in applied computing, he brings deep expertise in predictive modelling and forecasting, helping organisations transform complex data into confident, actionable insights. He’s also a recognised global panellist for MIT Technology Review on ethical innovation in AI.
Dr Peter Catt’s keynote sits squarely where heavy-duty data science meets the very pragmatic question every leader has: “So what should we do next?” As Chair and Director of Virtual Blue, with a doctorate in applied computing, he has spent his career embedding predictive models and decision systems in real organisations. At the Christchurch Summit, his session Predict, Plan and Act with Confidence – The Machine Learning Advantage was a deliberate antidote to AI hype. Instead of speculating about tech that might arrive in five years, he focused on tools that businesses in New Zealand are already using to reduce risk, improve planning and maintain competitiveness in a turbulent environment.
He opened with a simple but important point: unpredictability itself isn’t new. What’s new is the mismatch between the volatility of the world and the planning methods many organisations still use. Too many forecasts are still essentially “yesterday’s logic in spreadsheets” – static models full of fixed assumptions about demand, costs or freight that crumble as soon as conditions shift. “The issue isn’t turbulence itself,” he told the room. “The issue is how we respond to it.”
From there he moved into concrete New Zealand case studies: companies using AI-powered forecasting to better align inventory with actual demand, free up working capital tied up in the wrong stock, and respond more quickly to supply chain disruptions. None of these examples were presented as moonshots. They were practical demonstrations that when you let modern machine learning chew through variables like freight movements, exchange rates and even social sentiment, you can make better day-to-day decisions than a human with a spreadsheet ever could.
Catt is also careful to remind people that AI is a broad field and that generative AI is just one piece of it. “There’s a lot of AI out there that’s very robust — it doesn’t hallucinate,” he said, pointing to long-standing techniques in machine learning and symbolic AI that underpin many decision-support systems. In his view, the most reliable forecasting and planning tools often sit far away from chatbots and text generation and much closer to classic, explainable models that regulators and boards are comfortable with. He even suggests that if something like artificial general intelligence does emerge, it is more likely to come from “neuro-symbolic AI” – hybrids that combine neural networks with rules-based logic – than from pure GenAI alone.
The heart of his Christchurch talk is the “predict–plan–act” loop treated as a discipline rather than a slogan. Traditional BI tells you what happened. Early machine learning could hint at what might happen. The systems he’s working with now can keep forecasts updated continuously and can be wired directly into workflows so that predictions actually trigger changes in pricing, inventory, routing or outreach. Prediction is only the start: churn scores, demand curves, risk probabilities. Planning is where those numbers are used to stress-test scenarios, challenge assumptions and explore “what if?” before decisions are locked in. Action is when you embed the results directly into the tools people already use – CRM, ERP, ticketing, maintenance scheduling – so that insight becomes behaviour, not just a slide in a quarterly review. If you can’t do anything with a forecast, he says, “it’s totally academic.”
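To make the “act” stage concrete, here is a minimal sketch of what wiring a prediction into a workflow can look like. The churn score, thresholds and CRM client are illustrative assumptions, not a product Catt named:

```python
# A minimal sketch of a predict-plan-act hook, assuming a hypothetical
# churn model upstream and a CRM client object. All names here
# (Customer, route_prediction, crm.create_task) are illustrative.
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    churn_score: float  # produced upstream by the predictive model

ACT_THRESHOLD = 0.8     # scores above this trigger outreach automatically
REVIEW_THRESHOLD = 0.5  # scores in between are queued for human planning

def route_prediction(customer: Customer, crm) -> str:
    """Turn a churn score into a concrete workflow action."""
    if customer.churn_score >= ACT_THRESHOLD:
        crm.create_task(customer.customer_id, "retention-call", priority="high")
        return "acted"
    if customer.churn_score >= REVIEW_THRESHOLD:
        crm.add_to_queue(customer.customer_id, "churn-review")
        return "queued for planning"
    return "inform only"
```

The point is not the specific thresholds but that the score leaves the model and lands in a system someone already uses.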
A big part of building confidence in that loop is how you talk about uncertainty. This is where Catt introduces “Probably Approximately Correct” (PAC) learning, a statistical framework that sounds like a dad joke but is deadly serious. Instead of pretending to be 100 percent right, a PAC-style statement might say, “we’re 90 percent confident that our prediction is within five percent of the actual outcome.” That nuance gives leaders a firmer basis for action: you know the likely band the future will fall into, and you can plan hedges or contingencies accordingly. For Catt, prediction requires more than instinct and more than a single point estimate; it requires a language for confidence that non-technical decision-makers can actually use.
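One simple way to back such a statement with data, offered here as a sketch rather than Catt’s specific method, is to read the band straight off historical forecast errors:

```python
# Backing a PAC-style statement ("90% confident we're within X%") with
# an empirical quantile of past forecast errors. Numbers are synthetic.
import numpy as np

actuals = np.array([102.0, 98.0, 110.0, 95.0, 105.0, 120.0, 99.0, 101.0])
forecasts = np.array([100.0, 101.0, 104.0, 97.0, 108.0, 113.0, 102.0, 100.0])

# Absolute percentage error of each past forecast
ape = np.abs(forecasts - actuals) / np.abs(actuals)

# The 90th percentile of past errors gives the "within X%" band
confidence = 0.90
band = np.quantile(ape, confidence)

print(f"We are {confidence:.0%} confident the forecast is within "
      f"{band:.1%} of the actual outcome.")
```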
He also pushes the audience beyond simple correlation-based models. Causal AI – building on the work of Judea Pearl – tries to answer not just “what moves together?” but “what is really driving what?” If you understand cause and effect, you can design much better interventions. Alongside that, he champions ensemble methods: combining multiple imperfect models and averaging them, because “two weak models averaged together can outperform one strong model.” In his world, the magic is not in one heroic algorithm, but in a portfolio of models whose strengths and weaknesses balance each other out.
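The arithmetic behind the ensemble claim is easy to verify. In the synthetic sketch below, two weak models with roughly independent errors are averaged and end up beating a single stronger model; the gain shrinks as the models’ errors become more correlated:

```python
# If two models' errors are roughly independent, averaging halves the
# error variance. A synthetic illustration, not a case-study model.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
truth = rng.normal(100, 10, n)

weak_a = truth + rng.normal(0, 8.0, n)   # weak model, independent errors
weak_b = truth + rng.normal(0, 8.0, n)   # second weak model
strong = truth + rng.normal(0, 6.5, n)   # a single "stronger" model
ensemble = (weak_a + weak_b) / 2         # simple average of the weak pair

def rmse(pred):
    return np.sqrt(np.mean((pred - truth) ** 2))

print(f"weak A:   {rmse(weak_a):.2f}")    # ~8.0
print(f"weak B:   {rmse(weak_b):.2f}")    # ~8.0
print(f"strong:   {rmse(strong):.2f}")    # ~6.5
print(f"ensemble: {rmse(ensemble):.2f}")  # ~5.7, beats the strong model
```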
Despite the buzz around AI, he is adamant that “statistics are your friend.” Many of the breakthroughs in machine learning mirror ideas from decades-old statistical practice, from teasing out seasonality and trend to identifying hidden structures in noisy data. Marrying the two disciplines – modern ML and robust statistics – is, in his view, how you stop models from becoming black boxes and instead turn them into tools you can audit, explain and improve over time.
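A small illustration of that marriage, using the long-standing STL decomposition from the statsmodels library on synthetic monthly data:

```python
# Classical statistics in service of ML: separating trend and
# seasonality with STL decomposition. Data is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
months = pd.date_range("2020-01-01", periods=48, freq="MS")
trend = np.linspace(100, 140, 48)                       # slow growth
seasonal = 10 * np.sin(2 * np.pi * np.arange(48) / 12)  # yearly cycle
noise = rng.normal(0, 3, 48)
sales = pd.Series(trend + seasonal + noise, index=months)

result = STL(sales, period=12).fit()
# result.trend, result.seasonal and result.resid are now separate,
# auditable components rather than one opaque forecast.
print(result.trend.tail(3))
```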
When he turns to automation, Catt balances excitement with caution. The ability to let systems act directly on their predictions – adjusting bids, triaging customer tickets, scheduling maintenance before something fails, prioritising sales outreach – is where the productivity gains stack up. But he argues strongly for “earned autonomy.” You begin with humans-in-the-loop, observe how the model behaves, and only widen the scope of automatic action once it has consistently proved itself in narrow, well-defined slices of the business. Anything that touches safety, livelihoods or long-term financial wellbeing, he says, should keep a human in the chain even as you automate the heavy lifting around it.
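What an “earned autonomy” policy might look like in code, with every threshold and field name an illustrative assumption rather than a prescribed standard:

```python
# A sketch of "earned autonomy": the system may only act without
# sign-off once its logged track record clears agreed thresholds.
from statistics import mean

MIN_DECISIONS = 200    # minimum logged human-reviewed decisions
MIN_AGREEMENT = 0.95   # how often humans agreed with the model
MIN_CONFIDENCE = 0.90  # per-prediction confidence floor for auto-action

def decide(prediction_confidence: float, history: list[bool]) -> str:
    """history holds True/False for 'human agreed with the model'."""
    earned = len(history) >= MIN_DECISIONS and mean(history) >= MIN_AGREEMENT
    if earned and prediction_confidence >= MIN_CONFIDENCE:
        return "act automatically"
    if prediction_confidence >= MIN_CONFIDENCE:
        return "recommend, human sign-off required"
    return "inform only"
```

Note that autonomy is a property of the track record, not the model: the same prediction routes differently depending on what the logs show.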
Ethics and governance run through the whole conversation. As a global panellist for MIT Technology Review on ethical innovation, Catt is clear that predictive power without guardrails is a liability, not an asset. Fairness means asking who consistently loses if the model is wrong in a particular direction. Transparency means someone can explain, in plain language, why the system is making a call. Accountability means knowing who is ultimately responsible when an automated decision causes harm. For him, responsible AI is as much about escalation paths, overrides and monitoring as it is about clever code.
He finishes with a very commercial challenge: treat your data as strategic capital. Most organisations are sitting on under-used internal data that, when combined with external indicators – macro trends, logistics data, market signals – can fuel forecasting systems that materially improve competitiveness. With the right structure, sponsorship and focus, he says, a breakthrough predictive system doesn’t have to be a three-year science project; it can be operational “in as little as six weeks.” The real differentiator, especially now that models and platforms are widely available, is tempo: how quickly you can close the loop from data to prediction to planning to action, and how relentlessly you learn from each cycle.
By the end of his keynote, forecasting, automation and competitive strategy have been stitched into a single story. Forecasts are no longer numbers for a deck; they are the front end of a live system that, if designed well, helps you predict, plan and act with more confidence than gut-feel and spreadsheets ever allowed. But that confidence is earned through clear governance, honest communication about uncertainty, careful automation design, and a commitment to turning complex predictive outputs into decisions that real people can actually make.
Action points:
Map your “predict–plan–act” loops
Pick one core area – demand, churn, risk or maintenance – and write down how predictions are generated today, who sees them, how they influence planning, and what concrete actions follow (if any). Treat every gap in that loop as an opportunity to redesign.
Upgrade from spreadsheets to AI-enabled forecasting
Identify where you’re still relying on static spreadsheets to handle variables like freight, FX, weather or sentiment. Pilot forecasting models that can handle non-linear, multi-variable relationships and output probabilities or ranges instead of single numbers.
Standardise how you express uncertainty (PAC-style)
Work with your data team to define simple, business-friendly ways to talk about prediction quality – e.g. “90% confident we’re within 5%.” Use those bands consistently across forecasts so leaders can compare apples with apples.
Explore causal and ensemble models for key decisions
For high-impact areas, look beyond “what correlates with what” and invest in causal modelling or ensembles. Even modest models, combined intelligently, can outperform a single “hero” model and give you better levers for intervention.
Design clear action policies around predictions
For each model, agree up front when it is “inform only”, when it makes a recommendation, when human sign-off is mandatory, and where (if at all) full automation is allowed. Document those thresholds and review them as performance improves.
Introduce “earned autonomy” for automation
Start with humans-in-the-loop and log every model-driven recommendation and outcome. Once the data shows consistent performance and acceptable risk, gradually expand the areas where the system can act without direct approval.
Invest in the translation layer
Identify people who can speak both “model” and “business,” and make them central to your planning rhythms. Their job is to turn complex outputs into narratives, options and trade-offs that boards, execs and frontline managers can work with.
Shorten the learning cycle
Don’t wait for year-end to see if your models worked. Wherever possible, design ways to feed actual outcomes back into both the models and human reviews monthly – or even continuously – so your “predict–plan–act” muscle gets stronger with each iteration.
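As a sketch of what that feedback loop can look like in practice (the file layout and field names are assumptions, not a prescribed design):

```python
# Closing the loop: log each forecast, attach the actual when it
# arrives, and track rolling error so every cycle feeds both the
# models and the human review.
import csv
from datetime import date
from pathlib import Path

LOG = Path("forecast_log.csv")

def log_outcome(item: str, forecast: float, actual: float) -> None:
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "item", "forecast", "actual"])
        writer.writerow([date.today().isoformat(), item, forecast, actual])

def rolling_mape(window: int = 30) -> float:
    """Mean absolute percentage error over the most recent outcomes."""
    with LOG.open() as f:
        rows = list(csv.DictReader(f))[-window:]
    if not rows:
        return float("nan")
    errors = [abs(float(r["forecast"]) - float(r["actual"])) / abs(float(r["actual"]))
              for r in rows]
    return sum(errors) / len(errors)
```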
