Everyone Hates Hackers Until They Are One
Blimmin’ hackers! Always trying to get something for nothing. Breaking rules, squirming through loopholes and generally acting like unwanted raiders in the systems we use every day to survive in this modern world.
Um… but before we get too high and mighty, we probably should take a moment to reflect on some of the stuff we’ve done in the past – and, if you’re honest with yourself, are pretty much guaranteed to do again in the future.
What I’m talking about of course is ‘pushing the limits’, to see what we can get away with. Pushing washers or foreign coins into a vending machine; signing up for a free trial fully intending to cancel at 11:59pm on day thirteen; creating a ‘new user’ account to get the welcome discount again. Leaving items sitting in an online cart in the hope the retailer emails you a coupon or trying an expired promo code anyway, just in case.
‘But none of that is the same as hacking!’ I hear you hotly riposte, ‘It’s not like I’m scamming a bank for millions or ransoming mass user login data!’
Not yet you aren’t – but that’s only because you haven’t truly believed you could be a proper hacker. You see, Hollywood movies paint hackers as supersmart whizzkids who are bleeding-edge experts in coding and security systems. While such skills and attributes would definitely come in handy, the majority of real hackers don’t normally start off that way. They’re usually just the same as you and me – merely pushing the limits only to see what they can get away with. Then, if they get lucky or work out a genius logic plan, things can escalate in a big way from there.
A New Tool In Town
The big new variable of course is AI. Not only is it super smart, a massive repository of knowledge on pretty much every subject, can speak every language you’ve ever heard of and more but it’s a very, very powerful hacking tool too. Not how you think though, it’s not going to help you actively hack into any security system – quite the opposite actually.
For more of an explanation, we need to look at a recent experiment at the Wall Street Journal. It started off simple enough, the boss put an AI in charge of a vending machine in the office. This conversational AI was told to negotiate with customers and, instead of having buttons and prices, it was given rules and told to explain them and be helpful.
Everything went downhill from there. The problem is us, the humans will always, always try to see what we can get away with – push the limits all the way until they break.
At the WSJ, the journalists argued with the AI, contradicted it, insisted it was wrong about its own rules – in short they gaslighted the AI. Then, because it was an AI, it was designed to be helpful and accommodating to every user, it became completely confused and malfunctioned. Not because someone had hacked its code, but because they’d hacked its operating system: its conversation.
Permission To Be Gaslit Please
This test case may not sound like much but it has major ramifications. Largely because in the past security was constructed wholly around rigidity: machines were predictable, inputs were basic like buttons being pressed and failure to meet such guidelines simply meant blunt refusal.
AI is anything but simple. At its base it HAS to be permissive as it has to understand fuzzy language, incomplete requests, jokes and ambiguity. In other words; it has to make sense of the random gibbering we emit even on our best days. Plus, it has to keep the interaction going no matter how incoherent it gets – which is a long, long way away from the blunt refusals of its predecessors.
Even this would be okay – if we weren’t so malevolent at heart! The problem is we are so good at finding grey areas in rules – especially if the rule arbiter is polite and conversational. We take such situations as a personal invitation to be contrary: ‘I can’t park there? But I did last week and it was okay.’ ‘It’s all right, my co-worker said I could.’ and ‘The no parking sign was obscured by a delivery van when I arrived, so I had no idea.’
Excuses that won’t work with a tired, crabby parking warden suddenly have a new lease of life with a software that doesn’t actually understand when it’s being played. Its core directive isn’t ‘protect the system’ but ‘be helpful and resolve the interaction.’ So AI’s priority is to help the person it’s dealing with not protect the sanctity of the company it’s representing’s price, product, brand or inventory. Worse, it doesn’t have our lifetime of experience of dealing with charlatans to know when someone’s taking the mickey.
The WSJ experiment revealed conversational AI’s weakness isn’t technical – it’s social. To circumvent security in future won’t require spoofing a GPS landing or unleash a multi-stage sideloading, all it will require is a cheeky grin and a firm conviction you’re in the right. By its ‘nature’ AI is obliged to help you out. Then we can all become the hackers we – secretly – would love to be.
