How Have Tech and Business Developed In The Last 12 Months?
Nyssa Waters, Co-Founder & CEO, Possibl.ai
Prior to founding Possibl.ai in 2023, Nyssa held senior roles including Head of Financial Services & Fintech at Google Cloud NZ, and Sales Leadership with Spark and Telstra. She partners with organisations to transform data into actionable AI insights, driving efficiency, improved customer experiences and strategic advantage across industries.
Nyssa Waters comes at AI not as a hype merchant, but as someone who has helped big industries wrestle complex technology into real commercial outcomes. With more than two decades across telco and cloud, she’s held senior sales and strategy roles at Spark and Telstra, and led Financial Services & Fintech for Google Cloud New Zealand before moving into AI consulting and, most recently, founding Possibl.ai. Her work now centres on helping organisations turn messy, fragmented data into AI-powered decisions that lift efficiency, sharpen customer experience and create genuine strategic advantage.
Her Christchurch keynote starts with a blunt point: the last year hasn’t just been another step in the AI curve, it’s been a phase shift in what’s possible inside ordinary businesses. One slide simply reads “The numbers are exciting” and then walks through three headline figures: up to 90 percent of code can now be written by AI, three-quarters of apps are forecast to be created by end users rather than professional developers, and teams that embrace these tools properly can see creativity and problem-solving capacity triple.
For Waters, those numbers set the stakes. If AI can write most of your code and non-developers can build most of your apps, then the centre of gravity in technology shifts from the IT department to the wider business. In her framing, the last 12 months have been about moving from “AI as a thing the dev team does” to “AI as a capability everyone, in every function, can use.” The question for leaders is no longer whether the technology works. It’s whether they are ready for what happens when everybody can suddenly build software and automate decisions with a paragraph of natural language.
She uses the concept of “vibe coding” as a shorthand for this new reality. Coined by Andrej Karpathy, vibe coding is the style of software development where you describe what you want in plain English, let the AI generate and fix the code, and iterate conversationally until it works. That shift, from writing syntax to directing outcomes, underpins much of her talk. Over the last year, she argues, models have become good enough that for many internal tools and workflows, the bottleneck is no longer technical skill, it is clarity of intent: can your people articulate the process they want well enough for an AI to build it?
From there she walks the audience through “the journey we are on.” The first phase, which many Kiwi businesses are still in, is ad-hoc experimentation: a few power users with ChatGPT or Claude tabs open, a handful of pilots, some impressive demos in board packs, but very little that is wired into core systems. Next comes structured adoption, where organisations standardise on a small number of models and tools, set basic governance rules and start to build repeatable patterns – approved use cases, templates, and internal playbooks. The last 12 months, in her view, have seen global leaders push into a third phase: AI-native ways of working where automation is baked into processes from the start, and “asking the AI” is as normal as opening email.
She uses her own work as a bridge between those phases. One of the closing slides in her deck invites people to “sign up for early access to Harper – bringing vibe coding to the enterprise.” Rather than pitching it as a magic tool, she positions Harper as an example of the kind of layer enterprises will need: something that lets employees describe the workflows and apps they want in natural language, while still operating inside the guardrails of corporate security, data access and architecture. The point is less about the specific product and more about the direction of travel – away from locked-down, IT-only development towards supervised, governed creation at the edge of the organisation.
Throughout the session Waters keeps returning to a theme that will resonate with M2 readers: speed without discipline is dangerous, but discipline without speed is now a liability. In the last year, the external environment has accelerated on multiple fronts at once: foundation models have become more capable and cheaper to run, “agentic” AI that can act on our behalf is moving from concept pieces into early products, and global enterprises are starting to factor AI-native workflows into their three- to five-year plans, not just their labs. In New Zealand and Australia, by contrast, many organisations are still treating generative AI as a side project, confined to a handful of enthusiasts.
She argues that this is the year that posture has to change. Data platforms, security models and operating rhythms all need to be refreshed with AI in mind. That includes everything from how you store and label data so it can be used safely with LLMs, to how you think about intellectual property when non-technical staff can “build” things with a prompt, to how you train teams so they understand both the power and limits of the models they’re using. Her message is not “throw out your existing systems,” but “design the next iteration assuming AI will be in the loop,” whether that’s reviewing a contract, triaging customer requests or generating the first draft of a financial scenario.
The final part of her keynote looks a little further over the horizon, anchored in a simple predictions slide titled “So what next?” The four headings are short but provocative: Autonomous AI, Just-in-time UI, Enterprise Adoption, Physical AI.
Autonomous AI, in her telling, is the natural end-point of today’s copilots and agents – systems that don’t just answer questions, but monitor signals, propose actions and execute within tightly defined scopes. Just-in-time UI is the design shift where interfaces appear when and where they are needed, instead of forcing users to go hunting for them. Enterprise adoption is the moment when AI ceases to be a differentiator and becomes table stakes – a default expectation baked into procurement, customer experience and internal tooling. Physical AI is the long-term piece: AI-driven reasoning and perception embedded in hardware, from robots on the warehouse floor to smarter devices in the built environment.
Rather than treating those predictions as sci-fi, Waters ties them directly back to decisions leaders need to make in the next annual planning cycle. If it’s plausible that a large share of your future software will be assembled by AI and non-developers, what does that mean for your skills strategy? If interfaces will increasingly be generated on demand, how does that shape your brand, your UX and your data permissions? If AI is going to sit in more and more “always on” roles, what is your threshold for autonomy, and how do you audit what these systems are doing on your behalf?
Nyssa Waters’ core argument is that the last 12 months have turned generative AI from something you experiment with into something you must actively design around. The organisations that treat it as a structural shift in how work gets done, not a passing feature set, are the ones that will benefit from that tripling of creative capacity instead of being blindsided by it.
Action points:
Plan for AI-written code and citizen-built apps
Assume that a growing share of your internal tools and workflows will be created by AI and non-developers. Start now on governance: approved platforms, code review patterns, security checks and clear rules about where these apps can run.
Teach “vibe coding” as a business skill, not a party trick
Run practical training that shows staff how to describe outcomes, constraints and edge cases in natural language so models can generate reliable code or automations. Focus on thinking clearly about processes, not learning a particular tool.
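As a hypothetical illustration (not from Waters’ deck), the difference such training aims for is between “build me an invoice app” and a request that spells out the outcome, constraints and edge cases a model needs:

```
Build an internal tool that lets our finance team upload a CSV of supplier invoices.
Outcome: a table of invoices due in the next 14 days, sorted by amount, largest first.
Constraints: read-only access to the uploaded file; no data leaves our environment;
flag any row with a missing due date or amount rather than guessing values.
Edge cases: handle duplicate invoice numbers, negative amounts, and dates in both
DD/MM/YYYY and MM/DD/YYYY formats.
```

The tool names and figures here are invented for illustration; the transferable skill is the structure, stating the outcome, the guardrails and the failure modes explicitly, which works across whichever platform an organisation standardises on.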
Create a safe sandbox for experimentation
Stand up an AI “playground” environment with access to your non-sensitive data where people can build and test automations without risking production systems. Make it easy to graduate successful experiments into supported, IT-owned solutions.
Refresh your data and security foundations for an AI-first world
Review how data is stored, labelled and accessed so it can be used safely with LLMs. Tighten policies on sensitive information, and make sure any new tools (including vibe-coding platforms) meet your security and compliance standards.
Bake AI into next year’s planning and budgeting cycles
Treat AI as an input to strategy, not an afterthought. For each major function, ask: what could we automate, augment or re-design using today’s models, and what investments (skills, tools, data work) would be needed to get there? Tie those answers to specific line items in your plan.
Start experimenting with “just-in-time” interfaces
Look for one or two workflows where you can surface AI assistance directly in the moment of need – inside your CRM, intranet or line-of-business apps – instead of sending people out to a separate chat window. Use those pilots to learn what good, contextual AI UX looks like for your brand.
Define your comfort zone for autonomous AI
Decide where you’re willing to let AI act on your behalf today (for example, drafting and filing internal reports, triaging low-risk support tickets) and where you insist on human sign-off. Write those thresholds down and revisit them every six to twelve months as the tech matures.
Measure creativity and problem-solving gains, not just hours saved
In addition to productivity metrics, track how AI changes the range and quality of ideas your teams produce – number of concepts explored, speed to first prototype, options evaluated – so you can see whether you are capturing the tripling of creativity Waters talks about.
