AI & AI Agents Your Team Should Be Using Now For Growth
Caelan Huntress, Head of Training & Enablement, Agentic Intelligence
Caelan Huntress starts off with some simple definitions. GPT stands for “Generative Pre-trained Transformer”, LLM is a large language model, NLP in this context is natural language processing, and MCP is the Model Context Protocol, an emerging standard that lets AI models connect to external tools and data so that different agents and systems can work together. Those building blocks set up his core idea. We have moved from analytical AI (pattern recognition) to predictive AI (Spotify- and Netflix-style recommendation), then to generative AI, and now into the next phase. That phase is agentic AI, where systems can reason, plan and then execute tasks on our behalf.
Huntress uses a small Christchurch painting company to show how accessible this already is. Their operations manager recorded a short video once, then used a tool called HeyGen to create a multilingual avatar that now sends updates to staff in their own languages. The company was never going to hire a team of translators, but AI made an entirely new capability affordable. Just as tractors did not remove farmers but changed the job, AI will not simply delete roles; it will reshape them.
He references the World Economic Forum’s Future of Jobs report. The numbers sound stark at first. An estimated 92 million jobs globally could be displaced by AI this decade, but 170 million new roles are expected to be created, for a net gain of 78 million. Work is changing rather than disappearing. AI agents are accelerating that shift by becoming “force multipliers” for strategy, creativity and productivity.
Huntress then gives a New Zealand context. A recent State of AI Index surveying local companies with more than 100 staff found that AI adoption has jumped from 48 percent in 2023 to 87 percent this year, and 88 percent of adopters report positive benefits. Most organisations are still using basic tools. Only a minority are working with advanced capabilities like agents, which is where he sees the next competitive edge. The goal is not to turn everyone into an AI engineer. The goal is to make leaders and teams AI literate.
The heart of his keynote is about what makes something an AI agent. It has two halves. There is the reasoning side, where it can analyse live data, plan and even review its own work. Then there is action, where it interacts with external systems to book, update, notify and execute. That can be a single agent, or a multi-agent setup where one “manager” agent creates and directs specialist sub-agents. He points to early examples: HubSpot’s Breeze agents resolving up to half of all support tickets; Salesforce’s Agentforce sales agents that can qualify and warm leads; and Hebbia, a multi-agent finance system that can automate up to 90 percent of paperwork. Locally, he highlights Voxpop, which used an agentic news engine to publish the first global report of a major earthquake within nine minutes, and Workable, a voice agent for on-site incident reporting that requires nothing more than a phone call.
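The two halves Huntress describes, reasoning that plans the work and action that executes it, can be sketched in a few lines of code. This is a minimal illustration only; the task names, tools and manager structure below are invented for the example and are not any vendor's real API.

```python
# Sketch of the agent pattern Huntress describes: a reasoning half that
# plans, an action half that executes via tools, and a "manager" agent
# that delegates tasks to sub-agents. All names here are illustrative.

def plan(task: str) -> list[str]:
    """Reasoning half: break a task into steps (a real agent uses an LLM)."""
    if task == "handle support ticket":
        return ["look up customer", "draft reply", "send reply"]
    return [task]

# Each "tool" stands in for an external system the agent can act on.
TOOLS = {
    "look up customer": lambda: "customer record found",
    "draft reply": lambda: "reply drafted",
    "send reply": lambda: "reply sent",
}

def act(step: str) -> str:
    """Action half: execute one planned step against an external system."""
    return TOOLS.get(step, lambda: f"no tool for: {step}")()

def run_agent(task: str) -> list[str]:
    """A single agent: plan the task, then execute each step in order."""
    return [act(step) for step in plan(task)]

def run_manager(tasks: list[str]) -> dict[str, list[str]]:
    """A multi-agent setup: a manager delegates each task to a sub-agent."""
    return {task: run_agent(task) for task in tasks}

print(run_manager(["handle support ticket"]))
# → {'handle support ticket': ['customer record found', 'reply drafted', 'reply sent']}
```

The point of the sketch is the separation of concerns: the plan can be reviewed (or revised by the agent itself) before anything touches an external system, which is exactly where the guardrails discussed later would sit.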
Huntress also touches on personality and ethics. An LLM recently “passed” a Turing-style test when instructed to act as an “internet-savvy introvert”, being judged human 78 percent of the time in one experiment. Personality and tone will matter as much as capability. At the same time, he says agents are not yet ready for full consumer deployment. Before that happens they will need to be reliably three things: helpful, honest and harmless. Without those, giving an agent your credit card and saying “book whatever looks good” can become a nightmare.
He notes that the agentic market is projected to reach around 200 billion dollars within a decade, with more than two billion dollars already invested into AI agent startups. Microsoft estimates that the average New Zealander could free up around 275 hours a year through AI, the equivalent of seven and a half weeks. The question is what leaders will do with that reclaimed time.
Drawing on Reid Hoffman’s new book “Superagency”, Huntress describes four tribes forming around AI: the doomers, who think AI will destroy us; the gloomers, who see a net negative; the zoomers, who want to go as fast as possible; and the bloomers, who want optimistic but ethical progress. He identifies himself as a zoomer but is clear that the future he wants is not AI-first. It is human-first, with AI-powered humans. Our emotional intelligence, taste, adaptability and ethics are the differentiators that will matter more as agents take on routine work. The keynote closes with an invitation to stay involved and curious.
Action points:
Lift AI literacy across the business
Run short internal sessions that demystify GPT, LLMs, NLP and what “agents” actually are. Aim for AI literate leaders and teams, not just one technical specialist.
Audit your workflows for agent opportunities
Map where staff spend time in customer support, sales, finance, reporting and compliance. Circle repetitive, rules-based work. These are prime candidates for early agents.
Start with a small, safe pilot agent
Choose one narrow use case, such as FAQ style customer support, internal incident reporting or simple lead qualification. Limit its access, define success metrics and run a structured pilot rather than a loose experiment.
Use AI to add capabilities you never had, not only to cut costs
Think like the painting company. Ask, “what useful functions would we love but could never justify hiring for?” Multilingual internal comms, always-on triage, real-time document summaries and similar capabilities can all be unlocked by agents.
Set explicit guardrails: helpful, honest, harmless
Before connecting any agent to your real systems, define what it can and cannot do. Create clear rules about spending limits, data access, escalation to humans and logging of actions.
Design for human plus AI, not human versus AI
Plan how staff can use freed time to focus on relationship building, creative strategy, complex problem solving and leadership. Make it clear the goal is to upgrade jobs, not simply reduce headcount.
Experiment with personality and tone
When you build chat or voice agents, be deliberate about their “voice”. Decide how formal or casual they should be, how much initiative they have and how they align with your brand.
Create an internal “agent owner” role
Nominate someone to be responsible for agent projects. Their job is to coordinate pilots, monitor risk, collect feedback from users and report value back to leadership.
Watch the local ecosystem, not just global tools
Keep an eye on New Zealand examples like AI-powered news, compliance and safety tooling. Local vendors often understand regulatory and cultural context better than global platforms.
