Using AI to Make Your Team Great
Under Danu’s technical leadership, Rush has built platforms and products used daily by millions of people, including the NZ COVID Tracer app, large-scale digital systems for global brands, and AI-powered solutions that now serve 1 in 10 New Zealanders. Trained as an engineer and known for his ability to translate cutting-edge research into applied technology, Danu has led teams that have delivered complex projects spanning computer vision, machine learning, and large-scale cloud architecture.
Danu Abeysuriya’s keynote is part stand-up, part history lesson, and part very practical playbook for how a people-first tech company can go hard on AI without burning its culture. He starts by grounding the room in the scale of what his team at Rush already does: around 100 people building digital products for brands like Z, Farmlands and Wilson Parking, with the proud statistic that one in ten New Zealanders is using software they’ve built and operate. The COVID Tracer app? That was Rush too, cloned and exported to the NHS and other markets off the back of an email that initially looked so unlikely they thought it was a prank.
From there he rewinds to Gordon Moore and the Apollo guidance computer to explain why all of this is happening now. Moore predicted more than 50 years ago that processing power and memory would double roughly every 18–24 months. That exponential curve has held, and Danu illustrates it with a simple comparison. The Apollo lander’s guidance computer, built by Raytheon as part of a US$60 billion programme, ran at around 2 MHz with 1 MB of RAM, and only 75 units were ever made. A modern iPhone, mass-produced in the billions for roughly US$800 a unit, packs the equivalent of around 18,000 MHz and 8 GB of RAM, and is, on cost and performance, roughly 17 million times more powerful than the computer that took humans to the Moon. We carry that in our pockets and, as he jokes, mostly use it for Snapchat. That surplus computing power, multiplied across billions of devices and data streams, is what made large language models and multimodal AI possible.
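The individual ratios behind the comparison can be sanity-checked with quick arithmetic. The figures below are the ones quoted in the talk (not audited hardware specs), and the headline "17 million times" is the speaker's combined cost-performance estimate rather than any single ratio computed here:

```python
# Figures as quoted in the talk -- the speaker's numbers, not audited specs.
apollo_clock_mhz = 2
apollo_ram_mb = 1
apollo_programme_usd = 60e9   # total Apollo programme cost
apollo_units = 75             # guidance computers ever made

iphone_clock_mhz = 18_000     # aggregate modern-iPhone figure from the talk
iphone_ram_mb = 8 * 1024      # 8 GB
iphone_unit_usd = 800

clock_ratio = iphone_clock_mhz / apollo_clock_mhz                      # 9,000x faster
ram_ratio = iphone_ram_mb / apollo_ram_mb                              # 8,192x more memory
cost_ratio = (apollo_programme_usd / apollo_units) / iphone_unit_usd   # ~1,000,000x cheaper per unit

print(f"clock: {clock_ratio:,.0f}x, RAM: {ram_ratio:,.0f}x, cost per unit: {cost_ratio:,.0f}x")
```

Even taken separately, each ratio makes the same point: compute that once cost a national space programme is now a consumer commodity.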
For Rush, the question three years ago was whether this was just a curiosity or something that would fundamentally change how they work. Their computer science team had been tracking GPT-1 and GPT-2, even wiring up a command-line tool that could spit out basic marketing copy if you had an engineer to drive it. GPT-3 and beyond changed the game. It wasn’t just that ChatGPT-style tools could produce smooth human language; if they could produce structured text, they could likely produce structured code too. To test whether that mattered, Danu points to an MIT study of 800 consultants that has become a touchstone for the industry: giving them access to a GPT-4-class tool created a 12 percent uplift in tasks completed per day, made each task about 25 percent faster, and, critically, improved output quality by around 40 percent. That last number, he says, is often overlooked. In most businesses, rework, checking and review cycles quietly erode productivity. If quality goes up as well as speed, those hidden costs start to shrink.
Rush decided to treat AI as an organisational transformation, not just a tool rollout. Their approach, which they use on client projects as well, is “crawl, walk, run.” At the crawl stage they focused on leadership literacy. The senior team spent serious time, without formal training, living in the tools and trying to understand them in context so they could see both risk and potential. They then broke their own business into two streams: step-based innovation and leapfrog innovation. For step-based change, they canvassed every department and asked staff to nominate a single text-heavy process that consumed a lot of time. From those processes they picked just one step in each workflow and used off-the-shelf generative tools to accelerate that single step. No big bang, no rewiring of entire departments – just shaving friction out of real work.
Knowing that deadlines rarely move and people don’t have hours spare to “play with AI,” they brought in an external enablement partner to sit alongside the team, drive outcomes and share practical IP so staff could learn quickly. As they moved into the walk phase, they ran internal hackathons to design tools that improved their own processes and to prototype new AI-powered customer experiences for clients. They layered on mentorship programmes, tracked usage statistics and, crucially, resisted punitive approaches. If someone was struggling or under-using the tools, the response was training, one-on-one coaching and more support, not finger-pointing. Danu notes that fundamentals training gets you far, but the tools themselves change so quickly that ongoing enablement is essential if you want to keep unlocking new features and modalities rather than freezing at version one.
One of the most interesting effects they saw was on juniors. Traditionally, a junior engineer might ask a senior for help once or twice on the same issue before embarrassment kicks in. With AI, those same juniors would happily ask the same question twenty or thirty times from different angles until they really understood the concept, because the tool doesn’t judge them. That, Danu suggests, is part of the quality uplift – AI as an infinitely patient mentor that makes it psychologically safe to admit you don’t get it yet. It also forced the team to rediscover “the art of delegation.” Good prompts are essentially good delegation: clarity of instructions, clear goals, useful context. Learning to brief an AI properly turns out to be the same skill as briefing a human properly. His tongue-in-cheek slide summarises it: take your existing org chart, add AI and teach people to delegate well.
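The "prompting is delegation" idea can be made concrete as a briefing template. This is a minimal sketch of the pattern, not the actual structure Rush uses; the function and field names are illustrative:

```python
# A minimal sketch of "prompting as delegation": the same structure you'd use
# to brief a colleague. Names and wording are illustrative, not from the talk.
def delegation_prompt(role: str, goal: str, context: str, constraints: list[str]) -> str:
    briefing = [
        f"You are acting as: {role}",
        f"Goal: {goal}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "If anything is ambiguous, state your assumptions before answering.",
    ]
    return "\n".join(briefing)

prompt = delegation_prompt(
    role="senior copy editor",
    goal="Tighten this product description to under 50 words",
    context="B2C retail audience, plain-English brand voice",
    constraints=["Keep the price and product name unchanged", "No superlatives"],
)
print(prompt)
```

The point of the template is the same as a good human brief: role, goal, context and constraints up front, with an explicit invitation to surface ambiguity rather than guess.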
He contrasts step change with leapfrogging using a few live demos. Step examples include using Grok as a first-pass triage on highly regulated marketing copy. He takes a real Kiwibank page, asks the model to review it against the rules of the Financial Markets Conduct Act and the Privacy Act 2020, and it comes back with a surprisingly sharp list of compliance issues. He is clear this doesn’t replace a lawyer, but it is a powerful first filter in any compliance-heavy workflow. Another example is using multimodal AI to roast his own slide design. By uploading a screenshot and asking a model to act as “the Jony Ive of slides” and be brutally honest, he gets a highly entertaining but also very specific critique of layout, hierarchy and typography – the kind of design review an engineer might never get otherwise.
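The compliance first-pass pattern is easy to reproduce as a prompt template. The exact wording Danu used in the demo is not public, so the sketch below is an assumed, illustrative version of a triage prompt against the two acts he names:

```python
# Hedged sketch of a first-pass compliance triage prompt, modelled on the demo
# described above. The template wording is illustrative, not Danu's actual prompt.
ACTS = ["Financial Markets Conduct Act 2013", "Privacy Act 2020"]

def compliance_triage_prompt(page_copy: str, acts: list[str]) -> str:
    act_list = "\n".join(f"- {a}" for a in acts)
    return (
        "Act as a first-pass compliance reviewer (you are not a lawyer and this "
        "is not legal advice).\n"
        f"Review the marketing copy below against:\n{act_list}\n"
        "For each potential issue, name the provision it may breach and suggest a fix.\n"
        "Flag anything you are unsure about for human legal review.\n\n"
        f"---\n{page_copy}\n---"
    )

prompt = compliance_triage_prompt("Earn guaranteed returns with zero risk!", ACTS)
```

Keeping the "not legal advice" framing and the escalation instruction in the prompt itself mirrors the talk's point: this is a triage filter ahead of human sign-off, not a replacement for it.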
For leapfrogging, he moves away from structured data altogether and into screenshots and images. Instead of wiring Netflix viewing history into a recommendation engine, he simply uploads a screenshot of his profile and asks Grok to tell him who this person is as a customer. The answer is part accurate, part brutal, but the point lands: multimodal AI lets you bypass a lot of integration work by feeding in what you can see, not just what sits in a database. It is a different way to think about recommendation, customer insight and segmentation.
Danu then broadens the lens to traditional AI, reminding the audience that “the algorithm” has been quietly shaping behaviour for more than a decade. Recommendation engines at Amazon and Netflix are not new, but they are powerful: Amazon now accounts for around four percent of US retail, and roughly a third of its sales are driven by AI recommendations, which means one to two percent of all US retail is arguably being steered by AI. The difference now is multimodality. Instead of focusing only on getting every data point into neat rows and columns, you can increasingly point AI at video, audio and imagery and get useful outcomes without heavy data engineering.
He closes on the human side of all this. On Rush’s “Human in the Loop” podcast, he spoke with Frances Valintine about the widening gap between people who are actively engaging with these tools and those who still think AI is just something that “writes funny poems.” Her litmus test is simple: go to a US job board, search your current job title and see what the role looks like there. Those markets are a few years ahead because they have more capital; whatever you see is coming here soon. For a 40-year-old suddenly told their career is heading for a cliff, with the “platform already on fire” and an expectation that they somehow jump to something entirely new mid-flight, that is a huge psychological and social load. Danu’s question is whose responsibility it is to bridge that gap: government, employers, workers themselves, or some shared social compact. He doesn’t pretend to have the full answer, but he is clear that ignoring the problem isn’t an option. His final, tongue-in-cheek Arnold Schwarzenegger impersonation loops it back to action: you have to “get to GPT, get to the chopper and learn how to use the tools.”
Action points:
Quantify your own “Moore’s law moment”
Use Danu’s Apollo-to-iPhone comparison as a lens on your business. Ask where you are still operating as if computing power and AI capabilities are scarce, and where you could behave as if they are abundant.
Adopt a crawl–walk–run roadmap
Start with leadership literacy and one or two text-heavy process steps per department. Use off-the-shelf tools to accelerate just those steps before you commit to deeper integration or custom builds.
Invest in ongoing enablement, not one-off training
Assume tools will change under your feet. Budget for recurring workshops, coaching and internal champions so staff can keep up with new features and modalities.
Teach delegation as an AI skill
Frame prompting as delegation. Train people to give clear, goal-oriented instructions to AI in the same way you’d expect them to brief a colleague, and bake that into your leadership and communication training.
Use AI as a first-pass for compliance and quality checks
Let models review marketing copy, policies or designs against regulatory rules or brand guidelines as a triage step, while keeping humans in the loop for final sign-off.
Run small leapfrog experiments with multimodal AI
Try using screenshots, photos or short video clips as inputs for customer insight, design review or recommendation, instead of waiting for perfect structured data.
Track sentiment and everyday usage, not just hard ROI
Survey staff regularly about how AI is affecting their workload, stress and enjoyment. Aim for high voluntary adoption like Rush’s 90-plus percent daily use rather than purely chasing a specific percentage productivity number.
Leverage AI as a mentor for juniors
Encourage less experienced staff to use AI to ask “obvious” questions repeatedly until they truly grasp a concept, then pair that with periodic human coaching to keep them on the right track.
Acknowledge the career cliff and plan for retraining
Openly discuss with mid-career staff how their roles may change, and point them toward concrete upskilling paths rather than leaving them to discover the “burning platform” alone.
Clarify who owns the reskilling challenge in your context
Decide what mix of employer support, individual responsibility and external partnership makes sense in your organisation, and make that explicit rather than assuming someone else will handle it.
