AI or Die: How Technology is Reshaping Business
We’ve been talking about AI for a bit at M2. We’ve interviewed the people behind building different models and creating chatbots for customer service, and we’ve joked about the beginnings of Skynet and Terminator along the way. But profound cross-industry disruption has always seemed like something a little way down the track. With the release of the latest generative AI models like ChatGPT for text and DALL-E for images, that no longer seems to be the case.
And the exponential mainstream adoption shows that a lot of us have jumped in. We are seeing short films written and directed by AI, as well as songs and websites. You can even write a bio for yourself in the style of Shakespeare if you feel the urge.
But where is the real juice for industry? What are some of the really profound, disruptive potentials? What are some of the pitfalls? And regardless of industry, what do business leaders need to be doing today to utilise AI and not be upended by it?
To help us negotiate this new world of machine-assisted intelligence, we were joined by a group of CEOs from a wide range of industries at a breakfast discussion in the Papuke Room at The Hotel Britomart, along with Google Global Learning Futurist Dr. Sydney Savion via Zoom and Danu Abeysuriya, the founder of Rush.
Our conversation kicks off with Dr. Sydney drawing on some of her past work and research. Dr. Sydney has also been recognised as a Chief Learning Officer of the Year for her time with Air New Zealand. She has worked on developing future strategies for some of the world’s largest corporations, like Dell EMC. Prior to that, Dr. Sydney was an aerospace officer in the US Air Force, where she was responsible for the F-16 fleet. She also worked with Joint Forces Command and the Training Analysis and Simulation Center, where she used advanced technology to design, develop and distribute learning globally for a quarter of a million US military officers. And of course, as a Global Learning Futurist, Dr. Sydney has been thinking a lot about AI over the years, and about our connection with it.
If we can wind the clock back a little bit to when you were tasked with shaping the learning strategy of a quarter of a million military staff, how do you think the current crop of AI tools would’ve changed your approach?
Dr. SS: That’s a very good question. But before we answer it, I think we should ground ourselves in the definitions of AI, natural intelligence (because that’s what we’re comparing it against) and emotional intelligence. Last time we spoke, I said that artificial intelligence is the claim that human intelligence can be so precisely described that we can actually create a machine to simulate it. That’s artificial intelligence: intelligence demonstrated by machines.
Natural intelligence is intelligence demonstrated by animals, including humans, because we know that other than humans, animals can also display natural intelligence. Emotional intelligence is the capacity for us to be self-aware of our own feelings and emotions and that of another, and use that to navigate a plethora of social situations and conflicts.
In my field, human capital and learning, we use something called ADDIE: analysis, design, development, implementation and evaluation. It’s a process that was invented by the United States Army way back in the seventies to help expedite the development of training content. If I had to do it all over again with AI, I would save myself the trouble of doing ADDIE altogether, because AI could do all of that, and focus on the learning experience instead.
That’s what I would focus on more. Now, of course, just keep in mind, I think with anything, there are always checks and balances, but that’s definitely the approach I would’ve taken if I had AI in my possession back then.
At a coding level, ChatGPT just breaks down letters or sub-words into integers and then works out what’s logical based on the context of the dataset and the prompt. It seems so removed from any kind of understanding of the words. Are we conflating this artificial intelligence with human intelligence? Is it as magic as we think it is?
I think that might be one of those myths about AI. I’ve been talking about this long before ChatGPT or any other AI came along, as it relates to the trend that’s going on now with it. I do not believe you can replace the human ability to think creatively and solve complex problems and make decisions based on intuition. I do believe those are inherently human, if you will, in terms of natural intelligence.
It doesn’t mean that AI can’t solve a problem, but I do feel like there needs to be a human overlay to make sure that you get the context squared away.
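For readers who want to see what that breaking-down into integers actually looks like, here is a minimal sketch using OpenAI’s open-source tiktoken tokeniser library. This is an editorial illustration, not something discussed in the room: the "cl100k_base" encoding is one tiktoken publishes, and the exact integers depend on which encoding a given model uses.

```python
# A minimal sketch of sub-word tokenisation, using OpenAI's open-source
# `tiktoken` library (pip install tiktoken). "cl100k_base" is one of
# tiktoken's published encodings; the exact integers vary by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("Artificial intelligence isn't magic")
print(tokens)              # a short list of integers, one per sub-word chunk
print(enc.decode(tokens))  # round-trips back to the original string
```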
This comes down to the foundation of a lot of your research and career – that connection between AI, EI and human intelligence. So how do we move forward together?
Here’s the thing: in terms of AI, there are lots of things it can do, everything from data analysis, decision-making and problem-solving to customer service, in terms of providing answers to customers. We’ve already been using chatbots in the industry. We used them at Air New Zealand, we used them at Dell Technologies.
And then of course, marketing and sales: the ability to market and sell products. You see that in Amazon, you see that in Netflix and other commercial applications that we’ve been using already, perhaps without realising there was AI underlying them.
I feel like there’s optimism. While people are thinking AI is going to take a lot of work off their plate and make them more productive, what I believe is that it’s also an opportunity to free up time to spend on cultural engagement and happiness.
I’m going to turn to Gallup’s State of the Global Workplace report from 2022; Gallup reports every year on employee satisfaction globally. The report found that along with dissatisfaction, workers are experiencing staggering rates of both disengagement and unhappiness: 60% of people reported being emotionally detached at work, and 19% reported being miserable. We need to be turning our attention to these areas.
While AI is focusing on productivity, as leaders, we get to now focus on the employee in the way it relates to culture and engagement.
As a business leader, how do you utilise it to leverage your time to be able to focus more on culture and happiness?
I think every company needs to identify the areas where AI would be best placed. Not everybody’s going to want to use it for all the things I mentioned, whether it’s data analysis, decision-making, problem-solving, customer service, or marketing and sales. Some may choose to split how they use it, applying it to only part of their analysis, decision-making or problem-solving.
I think this is going to be an opportunity for AI to complement and help make humans more effective in the workplace.
What are some really important action points that you want leaders to think about?
I think it is important to be thinking about how the competitive landscape will be changing if you haven’t seen it already. It’s knocking on the front door of every industry, if it hasn’t already. The other things to think about, of course, are ethical considerations, whether it’s around privacy, bias or safety.
As it relates to bias, there’s an incredible documentary on Netflix called Coded Bias, which investigates bias in algorithms after an MIT Media Lab researcher uncovered flaws in facial recognition technology. If you haven’t seen it, it’s a pretty fascinating watch as it relates to these ethical considerations around bias, privacy and safety.
That’s such a fascinating point as well, because while something like ChatGPT uses the internet up until 2021 as its dataset, the parameters are still coded by some white dudes from Silicon Valley. Surely that builds in an inherent bias.
Oh, of course. That goes back to the definitions we were grounding ourselves in, that humans can learn new things and solve new problems while artificial intelligence is limited to the tasks that it has been programmed for. So you’re absolutely right there.
So I used AI to generate some questions for you. How do you see AI changing the job market and the skills required for various roles in the learning and development field?
I think there’s going to be an increased demand for employees with skills in data science, machine learning and artificial intelligence. Going back to my own example at Joint Forces Command, if I’d had AI at my disposal, I would have focused more on the learner experience, or the employee experience.
Then also, as it relates to my profession, human capital and learning, there will be a shift from being a content creator focused on content to being focused on the experience. And lastly, I just think its use will become more widespread as a viable tool to help make work more efficient and more effective.
Do you see AI affecting the competitive landscape of any particular industry? From your perspective at Google, are you seeing the kind of industries that are being disrupted right now?
Before I was at Google, I was at Cityblock Health, which is a Google venture. Their focus is on increasing the quality of healthcare for marginalised communities. Now you have AI-powered diagnostic tools that can help revolutionise the way doctors diagnose and treat diseases. In the finance industry, you’re seeing AI-powered trading algorithms being used.
I think industries that are not using AI will be at a disadvantage and I think those that are using it smartly will definitely gain a competitive advantage. That’s evident in the news every day.
You might have touched on it with bias, but the AI is also really interested in the ethical considerations around itself. Can you talk about any that you’re seeing within various industries? Anything that we should be conscious of?
It’s privacy, and systems that can be used to collect personal information on people without their consent. Bias towards a particular group of people. And safety, in terms of the way systems are designed for safety and security. Those are the three things I see coming up again and again in the news, and across pretty much every industry.
Just going back to a human question. It seems like we’ve evolved a lot in terms of leadership and it’s all about empathy and showing vulnerability, leaning more into the human side of things. As we’ve started to rely a little bit more on machine learning, does that start to shift the dial a little bit?
Again, I think it’s going to shift the dial in that it should be freeing leaders up to spend more time in the business. I’m big on that, learning what’s going on in the business, spending time with people, and figuring out what’s working and what’s not working. The further you get away from the ground troops, if you will, it seems like there’s a big distance between what’s really happening and what you might hear about happening.
So I think AI might actually do us a favour by freeing us up to spend more time in the business with the people: helping increase engagement, doing more listening tours, understanding what’s working for people, what’s not working and what we can do differently about it. Australia and Aotearoa are doing this pretty well. During my time at Air New Zealand, I felt there was a big difference in how we approached our connection with our employees, but I know there’s always room for continued improvement.
It seems like all these productivity tools have popped up and that’s been the utopian dream that we can leverage these tools and have more time. But then it turns out that you can answer emails at three in the morning or while on holiday. Is there a danger that we don’t free up that time?
There’s a danger that we won’t free up the time. I think that is going to be up to the discretion of every individual leader in terms of once they get the free time, what they’ll do with it. It’s like saying you’re going to make time to go to the gym and you just never make your way there. I think it’s going to be the same thing.
We’ve also got in the room Danu Abeysuriya, the founder of Rush Digital. Since 2009, Rush has been developing digital tools for some of the largest corporations here and across the globe. Companies like Disney, Google, Microsoft, Nestlé, Heineken, Air New Zealand, the BBC, TVNZ, Fonterra, Samsung, Nokia and The Warehouse.
There’s a lot of talk about AI these days. What does that mean for businesses and what they are asking of you?
DA: It’s pretty fascinating. My job at Rush is to be the innovation guy. I’m wheeled in whenever we have to take something new and deliver it at scale, quickly. A key part of my job is poking holes, because you have to pick technologies that are going to win and actually operate at scale.
Covid tracers are a go-to example for us. A lot of other countries were doing Bluetooth contact tracing with custom stacks and all of this stuff. Our suggestion was actually to leave that stuff off the table, because it’s going to break, and to go with something straightforward that people understand and that scales.
There were other concerns around privacy and those bureaucratic demands, all the way to more interesting stuff, like pay by plate. When you go through a Z station and your card or licence plate gets recognised and you pay, again, a whole bunch of practical implementation questions get raised. It requires a deep dive into the technology before everybody else has looked at it.
OpenAI is absolutely fascinating. There are a whole bunch of question marks about what it can and can’t do. But I would say that for every task you give it, you are dramatically shrinking the time it takes an expert to produce output.
One example is jumping between projects and switching database engines. I don’t have that knowledge. Normally, when I have to run a particular query, I go to the documentation, I go to Google, I filter a few results, pick out the results, and my expertise tells me which is the right answer, and then I try a few things. That might take me 15 or 20 minutes every time. With something like ChatGPT, I can reduce that to a couple of seconds, because I can phrase it as an inarticulate question.
The really interesting thing is that we’ve been using it operationally for a whole bunch of camera deployments, because we’re doing health and safety detections with cameras. There’s a point about bias there that we should talk about as well. When we have a physical problem with a particular model of camera hardware, our ops guys are not going to the manual, they’re not going to the internet. They’re literally saying, ‘I have a low-light situation, this particular model of camera is unable to detect these kinds of incidents, what should I do?’ And it’s literally spitting out the instructions: ‘do this, change the f-stop to this, change the lighting conditions, consider adding floodlights’. It’s absolutely fascinating.
But we have run into a few incidents where it actually hallucinates the answer. An example: of the three different brands of camera we’re using, two of them, like many other brands, have a feature that allows the camera to automatically reboot, but the one brand we’re using at a particular site doesn’t have this feature. The AI spelled out a very, very convincing answer about how we should go about setting the reboot schedule, just ignoring the facts, and a couple of clicks in, you realise this thing’s completely wrong. It completely hallucinated what that feature should be on that camera, but the feature doesn’t exist.
You can ask ChatGPT, ‘What are the health benefits of eating crushed glass?’, and it will do its damnedest to convince you of the benefits of eating crushed glass. From a practical implementation perspective, those are the things to watch out for. You should always have your staff vet the output, because they’re the experts; a full replacement for a human won’t be practical for another 50 years.
The ability to use OpenAI is fascinating. The speed the company is moving at is incredible, and Microsoft is behind it. They’re a very interesting company, and Microsoft will produce a lot of commercial viability for OpenAI.
OpenAI’s APIs are already available. You can feed them a set of documents that are labelled by humans; think insurance policies, legal questions, all of this stuff. You can train those AIs relatively easily to answer questions specifically related to those documents.
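To make that concrete, here is a minimal sketch of answering a question only from a supplied document snippet, assuming the openai Python package’s pre-1.0 interface. The policy text, the question and the key handling are all invented placeholders; a production system would first retrieve the relevant snippet from a labelled document store.

```python
# A minimal sketch of document-grounded Q&A via the `openai` package
# (pre-1.0 interface). The policy snippet and question are invented
# placeholders; real systems retrieve snippets from a labelled store.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied from your own config

policy_snippet = (
    "Accidental damage to carpets is covered up to $5,000 per event, "
    "excluding damage caused by pets."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep answers conservative for policy questions
    messages=[
        {"role": "system",
         "content": "Answer only from the policy text provided. "
                    "If the answer is not there, say you don't know."},
        {"role": "user",
         "content": f"Policy text:\n{policy_snippet}\n\n"
                    "Question: Is pet damage to carpets covered?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```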
The human-in-the-loop flow would connect that to a CSR workflow. An email comes into an inbox and gets fed straight to ChatGPT, which produces output; instead of replying back to the customer, it replies to your customer service rep. The customer service rep can just hit send if they believe it’s accurate, or maybe make a couple of minor tweaks. So if you think about an overworked call centre, or having to deal with spiky loads like the floods, those kinds of setups are very practical, and they guard against the weaknesses, like hallucination.
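A rough sketch of that flow is below. Every name in it (Draft, handle_inbox, generate_reply) is hypothetical; the one load-bearing idea is that the model’s draft is parked for a rep to vet, never sent straight to the customer.

```python
# A sketch of the human-in-the-loop CSR flow described above. All helper
# names are hypothetical; the key point is that drafts await human review.
from dataclasses import dataclass

@dataclass
class Draft:
    customer_email: str
    suggested_reply: str
    status: str = "awaiting_review"  # a rep must approve before anything is sent

def handle_inbox(emails, generate_reply):
    """Turn each inbound email into a parked draft for a rep to vet."""
    return [Draft(email, generate_reply(email)) for email in emails]

# Usage: in practice, generate_reply would wrap a ChatGPT API call.
drafts = handle_inbox(
    ["Hi, my policy number is 123. Am I covered for flood damage?"],
    generate_reply=lambda e: "Thanks for getting in touch. Yes, flood cover applies...",
)
for d in drafts:
    print(d.status, "->", d.suggested_reply)
```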
That level of hallucination is a really fundamental point. I can imagine that there’s a whole lot of SEO-driven content that’s going out there that’s just a lot of nonsense, really.
I think its ability to produce copy is pretty interesting. It produces copy in a pretty simplified way. I don’t know if anyone’s noticed, but humans tend to have an ego, and we sometimes use bigger words than we should or is necessary. ChatGPT, by contrast, is refined to produce the shortest, easiest-to-understand answer.
It’s pretty interesting that for any output it produces, a human still has to agree with that opinion and then copy it, paste it and stick it on LinkedIn. So if you were a fitness pro and you said you should eat crushed glass, for example, ChatGPT may have produced that, but you backed it by publishing it.
What are some of the other limitations?
Cost and strategic control. Those two are closely intertwined because of how these large language models are built. The AI architecture was actually developed, ironically, by Google in a 2017 paper called ‘Attention Is All You Need’. Fundamentally, for ChatGPT, DALL-E, all of these generative AIs, the actual AI architecture hasn’t changed in five years. They’re literally using minor variations on that original research from 2017, which is very strange for technology.
It’s very strange that something, especially in the software field, goes five years without evolution while you see such a dramatic change in use. What’s actually happening is that the architecture’s capability hasn’t been maxed out yet, because it hasn’t been supplied all of the data that it could be supplied.
So when you think about AI, it all comes down to data labelling. Data labelling is expensive because you need humans and computing resources to do it. And then you need to take that training data and actually train a new model, which is also very expensive.
To train something like DALL-E, it’s probably a couple of million dollars per run. So when you come back to strategic control: with companies like Microsoft and Google, as you create dependency in your business workflows, you’re actually giving up a little bit of strategic control. If you are reliant on their service, they have the ability to increase the price and decrease the quality. So there is a hedging element you need to factor in there. And the core cost, compared to the costs we’re normally comfortable with, is a bit higher.
Is that where it’s going? Or is it more about loading specific data sets? Is bigger better, necessarily?
Absolutely. Bigger, better, properly labelled is the way to go. The ‘P’ in GPT stands for pre-trained. For an organisation, you should be asking yourself, what data do I have that nobody else has? Because you can take a pre-trained model and then augment it with your private data and you’ve created IP that way.
The main message and takeaway is that data labelling takes ages. If you’ve got millions of documents that need to be labelled and you think ChatGPT is here to stay, you don’t necessarily need to implement ChatGPT or AI straight away, but you damned well need to start labelling some data. Because all the quality control metrics, and how you spin all of that up, it’s effort.
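As a sketch of what labelled data can look like for text, here is one common shape: one JSON object per line, pairing a question with a human-verified answer. The prompt/completion field names follow the convention OpenAI’s legacy fine-tuning format used; the insurance content is invented for illustration.

```python
# A minimal sketch of labelled text data as JSONL: one human-verified
# question/answer pair per line. Field names follow the prompt/completion
# convention of OpenAI's legacy fine-tuning format; content is invented.
import json

labelled_examples = [
    {"prompt": "Is flood damage to carpets covered?",
     "completion": "Yes, up to $5,000 per event (policy section 4.2)."},
    {"prompt": "Is damage caused by pets covered?",
     "completion": "No, pet damage is excluded (policy section 4.3)."},
]

with open("labelled_policies.jsonl", "w") as f:
    for example in labelled_examples:
        f.write(json.dumps(example) + "\n")  # one labelled pair per line
```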
What do you mean by labelling data?
An AI is basically just statistics. In its simplest form, say you’re taking a piece of data, objects on a table, and you’re trying to teach an AI how to recognise objects on a table. What you would do is take 20 photos of different kinds of jugs, 20 photos of different kinds of glasses and cups, and then introduce a couple of random objects, like chairs and tables and stuff like that. You’d hold out a small portion of that data as test data, and you would take out a labelling set as well.
For that labelling set, say you have 500 images, you’d sample about 50 of them and have humans label this image as a cup, that image as a jug, that image as a table, that image as a chair. Then, using software, the AI learns statistically what features in an image constitute having a cup in the image, or a chair in the image, and then you just use a big stick: you tell the AI, ‘Yes, you got that right’, so the next time it sees the same thing, that behaviour is reinforced.
Effectively, you end up with a model, a statistical representation that’s able to take any image, look at its features, and say: statistically, because it has those features, I’m going to label it a glass or a cup or a chair. For text, the training data is your insurance documents and your customer support queries, for example: which question gives which answer. Which part of the document each answer refers to would be your labelled set, so the AI can learn that every time it sees that question, it should refer to this part of the documentation, or vice versa: given the documentation, it knows which question it answers.
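Here is a toy version of that labelling-and-training loop, sketched with scikit-learn. Real systems would extract features from actual photos; the synthetic features and randomly assigned labels here just keep the sketch self-contained and runnable.

```python
# A toy labelling-and-training loop with scikit-learn. The "image
# features" and labels are synthetic stand-ins, so the held-out score
# sits near chance level; with real photos and human labels it wouldn't.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 "images", each reduced to 8 numeric features, plus human labels.
X = rng.normal(size=(500, 8))
y = rng.choice(["cup", "jug", "chair", "table"], size=500)

# Hold out a portion as test data, exactly as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# The model becomes a statistical map from features to the likeliest label.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```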
We’re getting to the point here where we’re starting to talk about some of the future skills that we need. Do we need to start thinking about the sorts of skill sets that we bring on board?
I’m hopeful. They reckon the jet engine is the most complicated thing humanity’s ever built. And if you were to take the jet engine and work your way backward and drop yourself in, like, 10,000 BC, there’s no way we could make a jet engine, right? All the way back, it required someone to make a tool, to then create forging for metals, and work your way up to slide rules. We got to the moon using slide rules, and then someone invented the calculator and everyone was like, ‘we’re never going to have to learn’.
I know it’s a simple analogy, but humans have so much capacity that we waste every day doing menial things. The only thing that moves forward is time. That’s the only thing you can’t buy back, you can’t substitute. Anything that saves time and reduces complexity allows us to move up the ladder to higher-order problems. We go from foraging to building a jet engine, using calculators, et cetera.
If we have AI as the new calculator, what higher-order problems can we focus on? We learn calculus in the sixth and seventh forms now, and it was only invented a few hundred years ago. Will the next generation of kids take quantum physics as their baseline? And if their baseline is quantum physics, then the output could be magnificent.
Frances Valintine (Founder & CEO of academyEX): I can see in a very short time that people will move away from search engines and start to utilise AI, which is not about promoting a company, but providing an answer.
So if we rely on getting our business on the top page of the search, what do we do when people aren’t using search?
It’s very insightful and absolutely spot on. One thing I’ve learned in my career is never underestimate Microsoft. I remember them coming in with Xbox and gaming and everyone laughing them out of the building. Now they’re the dominant player.
DOS, their very first product, was not written by them. Gates bought it for $50,000 from some guy in a garage, searched and replaced QDOS with MS-DOS and shipped it off to IBM. WordPerfect, Microsoft Office, Excel: all acquired. There are actually hardly any products in the Microsoft suite that they built themselves. I think Windows 3.1 was the only original product they ever had.
Interestingly, Microsoft is such a diversified business that they can afford to take huge risks, because they don’t carry a lot of reputational risk from being producers of hallucinated answers, as long as they’ve got a couple of checks and balances.
So if somebody walked into Satya Nadella’s office and said, ‘I can give you half of Google’s revenue overnight’, they’re going to put everything they have behind this.
The really interesting thing that is unique to the situation is the technology behind it is open source. Google had their private search engine, which is why they absolutely dominated the industry, because no one could replicate what they had at the time.
Whereas with ChatGPT, the paper is published and there are open-source implementations of it. Everyone understands the transformer architecture. It’s going to come down to the data, so these bigger companies can now enter the market.
In terms of GPT-4 or the next version, I don’t think the step change is going to be as dramatic from what we have right now, because the architecture hasn’t changed. They’re just adding incrementally more data, which creates its own problem, because I think it means there’ll be a stalemate in the technology, which means you focus on commercialisation, which brings us back to your first point.
So you need to be taking a good hard look at your customer channels and asking, ‘Which one isn’t going to be affected as much by having to surface a sponsored answer to a question, rather than a search query?’
You might find channels like TikTok and Instagram – image-based things that are based on physical things, real-world events, live events – we might actually see a swing back to trying to focus there a little bit more.
And consumers might also ask more of that, because I think it’s going to take a little while for them to adapt. Sometimes there are search queries where you don’t know exactly what you’re looking for, and that’s where searching for context helps, rather than for an answer.