In August this year, eight teams gathered for the three-day final of DARPA’s AlphaDogfight Trials. The teams had developed Artificial Intelligence (AI) pilots to control F-16 fighter aircraft in simulated dogfights. The winner beat a human USAF pilot in five dogfights out of five. I’m not really sure what this means for the defence of the free world, partly because I don’t know anything about air combat (other than endless games of Falcon on my iMac years ago) but largely because it seems to me that there is a context error in the framing of the problem. Surely the future of air warfare isn’t robo-Maverick dogfighting with North Korea’s top fighter ace, but $100m Tempest fighters (which, as Sébastien Roblin pointed out in Forbes earlier this year, might make more sense as unmanned vehicles) trying to evade $1m AI-controlled intelligent drones and machine-learning (ML) swarms of $10,000 flying grenades that can accelerate and turn ten times quicker. The point about budget is important, by the way. Inexpensive Turkish drones have been observed in Syria and Libya destroying enemy armour that costs ten times as much.
As is often said, we plan for the battles of the next war using the weapons of the last one. This is true in finance just as it is in defence. A couple of years ago, John Cryan (then CEO of Deutsche Bank) said that the bank was going to shift from employing people to act like robots to employing robots to act like people. They put this plan in motion and earlier this year announced big staff reductions as part of a radical overhaul of operations. At the same time, the bank announced that it will spend €13bn on new technology over the next four years. These investments in infrastructure “are already making some humans at Deutsche unnecessary”. The bot takeover in banking is already happening.
It is not surprising to see this takeover happening so quickly, because there are many jobs in banks that are far simpler to automate than that of a fighter pilot. In India, YES Bank has a WhatsApp banking service that uses a chatbot (a conversational AI with extensive financial knowledge) to help customers to check balances, order cheque books, report unauthorised transactions, redeem reward points, connect with help desks and apply for more than 60 banking products. And this is only the beginning. The Financial Brand reported on research from MIT Sloan Management Review and the Boston Consulting Group showing that only one in ten companies that deploy AI actually obtain a significant return on their investment. This is, as I understand it, because while bots are good at learning from people, people are not yet good at learning from bots. A robot bank clerk is like a robot fighter pilot: an artificial intelligence placed in the same environment as a human. When organisations are redesigned around the bots, then the ROI will accelerate.
The robots will take over, in banking just as in manufacturing. So will you be served by a machine when you go to the bank five years from now? Of course not. That would be ridiculous. For one thing, you won’t be going to a bank five years from now under any circumstances. You’ll be explaining “going to” a bank to your baffled offspring just as you were explaining “dialling” a phone to them five years ago. But you won’t be going to your bank in cyberspace either. Your bot will. As I pointed out in Wired this time last year, the big change in financial services will come not when banks are using AI, but when customers are.
Think about it. Under current regulations, my bank is required to ask me to make decisions about investments while I am the least qualified entity in the loop. The bank knows more than I do, my financial advisor knows more than I do, the pension fund knows more than I do, the tax authorities know more than I do. Asking me to make a decision in these circumstances seems crazy. Much better for me to choose an approved and regulated bot to take care of this kind of thing. And if you are concerned that there may be legal issues around delegating these kinds of decisions to a bot, take a look at Ryan Abbott’s argument in MIT Technology Review that there should be a principle of AI legal neutrality, asserting that the law should tend not to discriminate between AI and human behaviour. Sooner or later we will come to regard allowing people to make decisions about their financial health as being as dumb as letting people drive themselves around when bots are much safer drivers.
The battle for future customers will take place in a landscape across which their bots will roam, negotiating with their counterparts – ie, other bots at regulated financial institutions – to obtain the best possible product for their “owners”. In this battle, the key question for customers will be which bot they want to work with, not which bank. Consumers will choose bots whose moral and ethical frameworks are congruent with theirs. I might choose the AARP Automaton, you might choose the Buffett Bot or the Megatron Musk. Once customers have chosen their bots, why would they risk making suboptimal choices about their financial health by interfering in the artificial brain’s decisions?
Imagining the world of the future as super-intelligent robots serving mass-customised credit cards and bank accounts to human customers is missing the point — just as imagining the world of the future as F-16s with robot pilots duelling MiG-29s with robot pilots is — because in the future the customers will be super-intelligent robots too.
[An edited version of this article first appeared on Forbes on 24th November 2020.]