Follow the e-money

A couple of years ago I remember going to see ComplyAdvantage to make a podcast with them. I thought the new category of regtech was interesting and that the potential for new technologies in that space (eg, machine learning) was significant, so I went off to learn some more about it and talk to a few organisations to test some hypotheses. I remember thinking at the time that they were good guys on a good trajectory, and it looks as if my opinion was well-founded (they are doubling in size this year).

Anyway, I was thinking about them because they recently sent me a new white paper, “A New Dawn for Compliance” (which notes that an estimated $2 trillion is laundered globally every year and only 1-3% of these funds are identified and possibly stopped), and it nicely encapsulated something that has been touched on in a fair few conversations recently: there’s no way to hire ourselves out of the compliance mess we’re in. Even if financial services and other businesses had infinite compliance budgets, which they most certainly do not, it’s simply not feasible to hire enough people to keep up. Even if there were infinite people with expertise in the space, which there most certainly are not, bringing them on board is too time-consuming, too expensive and too inflexible to create a compliance infrastructure that can respond to the new environment.

Technology is the only way out of this.

Using technology to automate the current procedures is, as always, only a small part of the solution. The UK Financial Intelligence Unit (UKFIU) receives more than 460,000 suspicious activity reports (SARs) every year (according to the National Crime Agency), yet fraud continues to rise.

Moreover, as Rob Wainwright (head of Europol) pointed out last year, European banks are spending some €20 billion per annum on customer due diligence (CDD) with very limited results. In fact, he said specifically that “professional money launderers — and we have identified 400 at the top, top level in Europe — are running billions of illegal drug and other criminal profits through the banking system with a 99 percent success rate”. This is not even a Red Queen’s race; it’s a Formula 1 of crime in which the bad guys are ahead and we can’t overtake them.

The Fifth Anti-Money Laundering Directive (AMLDV), which comes into force in 2020, will, I predict, do nothing to change this criminal calculus. AMLDV will cost organisations substantially more than its predecessors and these costs are out of control. According to a 2017 whitepaper written by my colleagues at Consult Hyperion, KYC processes currently cost the average bank $60m (€52.9m) annually, with some larger institutions spending up to $500m (€440.7m) every year on KYC and associated CDD compliance. In the AMLDV era we will look back with nostalgia to the time when the costs of compliance were so limited.

It’s time for a rethink.

We need to re-engineer regulators and compliance to stop implementing know-your-customer, anti-money laundering, counter-terrorist financing and the tracking of politically-exposed persons (let’s lump these all together, for the sake of convenience, as CDD) by building electronic analogues of passports, suspicious transaction reports and so on. In a world of machine learning and artificial intelligence, we need to invert the paradigm: instead of using CDD to keep the bad guys out of the system, we should bring the bad guys into the system and then use artificial intelligence, pattern recognition and analytics to find out what they are doing and catch them!
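
To give a flavour of what I mean (and this is only a toy sketch, not anyone’s production CDD system: the feature names, numbers and thresholds below are all invented), here is the kind of unsupervised pattern recognition that scores accounts by how unusual their behaviour looks, rather than by whether they passed a document check on the way in:

```python
# Minimal sketch: unsupervised anomaly scoring over transaction features.
# The features and data are illustrative assumptions, not a real CDD feed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend feed: [amount, transactions_per_day, share_sent_to_new_counterparties]
normal = np.column_stack([
    rng.lognormal(3.0, 0.5, 1000),      # everyday payment amounts
    rng.poisson(3, 1000),               # a few transactions a day
    rng.beta(1, 9, 1000),               # mostly familiar counterparties
])
suspect = np.column_stack([
    rng.lognormal(6.0, 0.3, 10),        # unusually large amounts
    rng.poisson(40, 10),                # rapid-fire activity
    rng.beta(9, 1, 10),                 # almost all new counterparties
])
transactions = np.vstack([normal, suspect])

# Fit an isolation forest and flag the most isolated (most unusual) accounts.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(transactions)          # -1 = anomalous, 1 = normal
scores = model.decision_function(transactions)   # lower = more suspicious

for idx in np.argsort(scores)[:10]:
    print(f"account {idx}: score {scores[idx]:.3f}, flagged={flags[idx] == -1}")
```

None of this is exotic; the point is that machines can score behaviour across the whole population at a scale that no compliance team could ever match by hand.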

Surely, from a law enforcement point of view, it’s better to know what the bad guys are up to? Following their money should mean that it is easier to detect and infiltrate criminal networks and to generate information that the law enforcement community can use to actually do something about the flow of criminal funds. In any other financial services business, a success rate of 1% would call into question the strategy and the management of the business.
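
And “following the money” is not just a figure of speech: once transactions are inside the system, they form a graph that can be traversed. A trivial sketch (the account names and amounts below are entirely made up) looks something like this:

```python
# Toy sketch of "following the money": build a payment graph and trace
# where funds leaving a flagged account could end up. All names and
# amounts are invented for illustration.
import networkx as nx

payments = [
    ("shell_co_A", "shell_co_B", 90_000),
    ("shell_co_B", "shell_co_C", 88_000),
    ("shell_co_C", "import_export_ltd", 85_000),
    ("import_export_ltd", "property_fund", 84_000),
    ("honest_grocer", "wholesaler", 1_200),
]

graph = nx.DiGraph()
for payer, payee, amount in payments:
    graph.add_edge(payer, payee, amount=amount)

flagged = "shell_co_A"

# Everyone reachable from the flagged account, i.e. the possible
# downstream destinations of its funds.
downstream = nx.descendants(graph, flagged)
print("funds from", flagged, "could reach:", sorted(downstream))

# The shortest payment chain from the flagged account to a suspected endpoint.
chain = nx.shortest_path(graph, flagged, "property_fund")
print("chain:", " -> ".join(chain))
```

Scale that up from five invented payments to the billions of real ones that banks see, and it becomes obvious why this is a job for machines rather than for people reading half a million SARs a year.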

Innovation in blockchain innovation

A couple of years ago, I was invited along to the Scottish Blockchain Conference (ScotChain17). I have to say that it was a really enjoyable, well-organised and interesting day out in Edinburgh. Here I am in one of the panel discussions.

Scotchain panel

At this excellent event, I gave a talk about the use of blockchain in supply chains. Professor Angela Walsh kindly commented on my presentation, saying that it had her crying with laughter while learning a lot, a compliment that I treasure. The content was summarised thus by a keen observer… “The point,” said Birch, “is that people are talking absolute bollocks about blockchain, on an industrial level”. If you are at all interested, the talk was filmed and you can see it here:


Well, my comments about ideas for using the blockchain to solve supply chain problems being somewhat misguided may have seemed a trifle harsh at the time, but as far as I can tell they were a broadly correct characterisation of the state of the industry and a broadly accurate prediction of the sector’s trajectory. Two years on, I have just read that the noted research house Gartner says that nine in ten blockchain-based supply chain projects are “faltering” because they cannot figure out important (or, in my opinion, any) uses for the new technology.

Hence I feel that my somewhat uncharitable remarks were justified and my blockchain crystal ball remains intact, its reputation enhanced. 

My reason for highlighting this Caledonian chronicle, and subsequent validation, is to point you to my forthcoming talk at Vincent Everts’ super Blockchain Innovation conference in Amsterdam. If you are going to the excellent Money2020 in Amsterdam that week – where I will be chairing the Open Banking track – stick around and join me at the ABN Amro headquarters on June 7th for a wide perspective on the state of the blockchain world.

I’ll be making a presentation on the intersection of blockchain and artificial intelligence. This is a space where I have observed an avalanche of absolute bollocks, so I’m going to stick my neck out and make a (well-informed) prediction about the key impact of AI on the blockchain world. It has nothing to do with supply chains, but I think it has more significance and will mean big changes in the blockchain ecosystem.

I think I have some solid foundations for making this prediction, so come along to cheer or jeer and I’ll be delighted to see you there either way.

Actually, I think there is a link between AI and the blockchain

There is a character flaw in some people (eg, me) which means that when they see something that is obviously wrong on Twitter, they feel compelled to comment. This is why I couldn’t stop myself from posting a few somewhat negative comments about an “infographic” on the connection between AI and the blockchain, even though I could have just ignored the odd combination of cargo cult mystical thinking and a near-random jumble of assorted IT concepts and gone about my day.

When it came down to it though, I just couldn’t. So, naturally, I decided to write a blog post about it instead. The particular graphic made a number of points, none of which are interesting enough to enumerate in this discussion, but at its heart was the basic view set out, here for example, that blockchain and AI are at the opposite ends of a technology spectrum: one fostering centralised intelligence on closed data platforms, the other promoting decentralised applications in an open-data environment. Then, as the infographic “explained”, the technologies come together with AIs using blockchains to share immutable data with other AIs.

Neither of those basic views is true though. Whether an AI is centralised or decentralised is tangential to whether it uses centralised or distributed data, and whether “blockchain” is used by centralised or decentralised applications is tangential to whether those applications use AI. What is important to remember is that decentralised consensus applications running on some form of shared ledger technology can only access consensus data that is stored on that ledger (obviously, otherwise you couldn’t be sure that all of the applications would return the same results). An AI designed to, for example, optimise energy use in your home would require oracles to read data from all of your devices and place it on the ledger, and then another set of factotums to read the new settings from the ledger and update the devices. What’s the point? Why not just have the AI talk to the devices?
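
To make the indirection concrete, here is a deliberately simplified sketch of the two routes; every class, function and number in it is invented for illustration, not taken from any real system:

```python
# Deliberately simplified contrast between the two routes described above.
# Every class, function and number here is invented for illustration.

class Thermostat:
    """Stand-in for a smart device the AI is meant to manage."""
    def __init__(self):
        self.temperature = 22.5
        self.target = 21.0

    def read_temperature(self) -> float:
        return self.temperature

    def set_target(self, celsius: float) -> None:
        self.target = celsius


def optimiser(current_temp: float) -> float:
    """Stand-in for the AI: nudge the target based on the reading."""
    return round(current_temp - 1.5, 1)


# Route 1: the infographic's pattern. An oracle copies the reading onto the
# ledger, the on-ledger application can only see ledger data, and another
# agent copies the answer back out to the device.
def ledger_route(device: Thermostat, ledger: list) -> None:
    ledger.append({"temperature": device.read_temperature()})   # oracle in
    new_target = optimiser(ledger[-1]["temperature"])            # consensus app
    device.set_target(new_target)                                # factotum out


# Route 2: just let the AI talk to the device.
def direct_route(device: Thermostat) -> None:
    device.set_target(optimiser(device.read_temperature()))


home, shared_ledger = Thermostat(), []
ledger_route(home, shared_ledger)
print(home.target, shared_ledger)   # same outcome, plus a ledger round-trip
direct_route(home)
print(home.target)                  # same outcome, no round-trip
```

Both routes end up in exactly the same place; the ledger round-trip just adds cost and latency without adding any trust that you do not already have to place in the devices and the oracles themselves.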

There is, however, one part of the shared ledger ecosystem—of consensus applications running on consensus computers—that might benefit considerably from a shift to AI and this is the applications. People are very bad at writing code, by and large, and as the wonderful David Gerard observed in the chapter “Smart contracts, stupid people” in his must-read “Attack of the 50 foot blockchain”, they are particularly bad at writing smart contracts. This is clearly sub-optimal for apps that are supposed to send anonymous and untraceable electronic cash around. As David says, “programs that cannot be allowed to have bugs … can’t be bodged by an average JavaScript programmer used to working in an iterative Agile manner… And you can even deploy fully-audited code that you’ve mathematically proven is correct — and then a bug in a lower layer means you have a security hole anyway. And this has already happened”.
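
To see the sort of thing Gerard means, here is a toy illustration of the classic “pay out before you update the balance” ordering mistake, written in plain Python rather than any real smart-contract language (the names and numbers are invented, but the pattern is the one behind some well-known contract failures):

```python
# Toy illustration, in plain Python rather than a smart-contract language,
# of the "pay out before updating the balance" mistake. All names and
# numbers are invented.

class NaiveVault:
    def __init__(self):
        self.balances = {"mallory": 10}
        self.reserve = 100

    def withdraw(self, who, pay_out):
        amount = self.balances[who]
        if amount > 0:
            pay_out(amount)              # external call happens first...
            self.reserve -= amount
            self.balances[who] = 0       # ...balance is zeroed only afterwards

vault = NaiveVault()
stolen = []

def greedy_payee(amount):
    stolen.append(amount)
    if len(stolen) < 5:                  # re-enter while the balance is still 10
        vault.withdraw("mallory", greedy_payee)

vault.withdraw("mallory", greedy_payee)
print(sum(stolen), vault.reserve)        # 50 drained from a 10-unit entitlement
```

An average programmer reading withdraw sees nothing obviously wrong with it; the flaw only matters when the payee is allowed to call back in before the balance has been zeroed, which is exactly the sort of subtlety that keeps catching people out.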

It seems to me that one thing we might expect AIs to do better than people is to write code. Researchers from Oak Ridge National Laboratory in the US foresee AI taking over code creation from humans within a generation. They say that machines, rather than humans, “will write most of their own code by 2040”. As it happens, they’ve started already. AutoML was developed by Google as a solution to the lack of top-notch talent in AI programming. There aren’t enough cutting-edge developers to keep up with demand, so the team came up with machine learning software that can create self-learning code. Even scarier, AutoML is better at coding machine-learning systems than the researchers who made it.

When we’re talking about “smart” “contracts”, though, we’re not talking about superhuman programming feats; we’re really talking about messing around with Java and APIs. Luckily, last year saw the arrival of a new deep-learning software-coding application that can help human programmers navigate Java and APIs. The system—called BAYOU—was developed at Rice University with funding from the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) and Google. It trained itself by studying millions of lines of human-written Java code from GitHub, and drew on what it found to write its own code.

Putting two and two together, then, I think I can see that if there is an interesting and special connection between AI and “blockchain”, it’s not about using the blockchain as a glorified Excel spreadsheet that AIs share between themselves; it’s about writing the consensus applications for the consensus computers. They still wouldn’t be contracts, but they would at least work.

An island of artificial intelligence

As I’ve written many times (e.g., here), it is difficult to overestimate the impact of artificial intelligence (AI) on the financial services industry. As Wired magazine said, “it is no surprise that AI tops the list of potentially disruptive technologies”. With Forrester further forecasting that a quarter of financial sector jobs will be “impacted” by AI before 2020, there’s an urgent need for the island to begin to think about the next generation of financial services and to formulate a realistic strategy not only to cope with the changes but to exploit them. It is because the need is so urgent that I was delighted to be asked to give a keynote at the Cognitive Finance AI Retreat in September (which began with a beach barbecue, something I recommend to conference producers everywhere).


A beach barbecue is always a good idea at a conference.

The event was put together by my good friends at Cognitive Finance working with Digital Jersey (where I am advisor to the board) and they did a great job of bringing together a spectrum of both subject matter experts and informed commentators to cover a wide variety of issues and provide a great platform for learning.

On the first day of the event, political economist Will Hutton emphasised that financial services will be at the “cutting edge” of the big data revolution, pointing out that not only does the sector hold highly personal, highly valuable data about individuals, but that it has more complex oversight requirements than most other sectors.

Clara Durodie, CEO of Cognitive Finance Group, kicked off the event by talking about the potential for AI to help manage the colossal flows of data that characterise the financial sector today, and I think she was right to highlight that the use of these technologies presents tremendous opportunities here.

In his superb “Radical Technologies”, Adam Greenfield wrote of the advance of automation that many of us (me included, by the way) cling to the hope that “there are some creative tasks that computers will simply never be able to perform”. I have no evidence that financial services regulation will be one of those tasks, so in my talk I suggested that AI will be the most important “regtech” of all and made a few suggestions as to how regulators can plan to use the technology to create a better (that is, faster, cheaper and more transparent) financial services sector. The strategic core of my suggestion was that jurisdictional competition to create a more cost-effective financial services market might be a competition that Jersey could do well in.

AI as Regtech

Regulation, however, was only one of the topics discussed in a fascinating couple of days of talks, discussions and case studies. The surprise for me was that there was a lot of discussion about ethics, and how to incorporate ethics into the decision-making processes of AI systems so that they can be accountable. I hadn’t spent too much time thinking about this before, but after talking with some very well-informed presenters I was certainly left with the impression that this might be one of the more difficult problems to address. Listening to experts such as Dr. Michael Aikenhead, Kay Firth-Butterfield, Dr. Sabine Dembrowski, Andrew Davies and many other leading names in finance and AI left me energised by the possibilities and intrigued by the problems.

AI is an event horizon for the financial services industry. With our current knowledge, we simply cannot see (or perhaps even imagine) the other side of the introduction of true AI into our business. But we can see that our traditional “laws” of cost-benefit analysis, compliance and competition will not hold in that new financial services space, which is why it is important to start thinking about what the new “laws” might be and how the financial services industry can take advantage of them.