Actually, I think there is a link between AI and the blockchain

There is a character flaw in some people (e.g., me) which means that when they see something that is obviously wrong on Twitter they feel compelled to comment. This is why I couldn’t stop myself from posting a few somewhat negative comments about an “infographic” on the connection between AI and the blockchain, even though I could have just ignored the odd combination of cargo-cult mystical thinking and a near-random jumble of assorted IT concepts and gone about my day.

When it came down to it though, I just couldn’t. So, naturally, I decided to write a blog post about it instead. The particular graphic made a number of points, none of which are interesting enough to enumerate in this discussion, but at its heart was the basic view set out, here for example, that blockchain and AI are at the opposite ends of a technology spectrum: one fostering centralised intelligence on closed data platforms, the other promoting decentralised applications in an open-data environment. Then, as the infographic “explained”, the technologies come together with AIs using blockchains to share immutable data with other AIs.

Neither of those basic views is true though. Whether an AI is centralised or decentralised is tangential to whether it uses centralised or distributed data, and whether “blockchain” is used by centralised or decentralised applications is tangential to whether those applications use AI. What is important to remember is that decentralised consensus applications running on some form of shared ledger technology can only access consensus data that is stored on that ledger (obviously, otherwise you couldn’t be sure that all of the applications would return the same results). An AI designed to, for example, optimise energy use in your home would require oracles to read data from all of your devices and place it on the ledger, and then another set of factotums to read the new settings from the ledger and update the devices. What’s the point? Why not just have the AI talk to the devices?
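To make that roundaboutness concrete, here is a minimal sketch of the two data flows, using hypothetical interfaces (Device, Ledger, EnergyOptimiser) rather than any real ledger or smart-home API. The ledger-mediated version needs an oracle to push readings on-chain and a factotum to pull settings back off again; the direct version just lets the AI talk to the devices.

```java
// A sketch of the two data flows, not a real ledger or device API.
interface Device {
    double reading();             // e.g. current power draw
    void applySetting(double s);  // e.g. a new set point chosen by the AI
}

interface Ledger {
    void append(String key, double value); // write consensus data
    double read(String key);               // read consensus data
}

class EnergyOptimiser {
    // Stand-in for the AI: takes readings, returns new settings.
    double[] optimise(double[] readings) {
        return readings; // placeholder body; the details don't matter here
    }
}

class LedgerMediatedLoop {
    // Oracle: copies device readings onto the shared ledger so that the
    // consensus application (running on the consensus computer) can see them.
    static void oracle(Device[] devices, Ledger ledger) {
        for (int i = 0; i < devices.length; i++) {
            ledger.append("reading/" + i, devices[i].reading());
        }
    }

    // ...the consensus application then reads "reading/i" and writes
    // "setting/i", entirely on-ledger...

    // Factotum: copies the resulting settings back off the ledger to the devices.
    static void factotum(Device[] devices, Ledger ledger) {
        for (int i = 0; i < devices.length; i++) {
            devices[i].applySetting(ledger.read("setting/" + i));
        }
    }
}

class DirectLoop {
    // The alternative: the AI just talks to the devices.
    static void run(Device[] devices, EnergyOptimiser ai) {
        double[] readings = new double[devices.length];
        for (int i = 0; i < devices.length; i++) {
            readings[i] = devices[i].reading();
        }
        double[] settings = ai.optimise(readings);
        for (int i = 0; i < devices.length; i++) {
            devices[i].applySetting(settings[i]);
        }
    }
}
```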

There is, however, one part of the shared ledger ecosystem—of consensus applications running on consensus computers—that might benefit considerably from a shift to AI, and that is the applications themselves. People are very bad at writing code, by and large, and as the wonderful David Gerard observed in the chapter “Smart contracts, stupid people” in his must-read “Attack of the 50 Foot Blockchain”, they are particularly bad at writing smart contracts. This is clearly sub-optimal for apps that are supposed to send anonymous and untraceable electronic cash around. As David says, “programs that cannot be allowed to have bugs … can’t be bodged by an average JavaScript programmer used to working in an iterative Agile manner… And you can even deploy fully-audited code that you’ve mathematically proven is correct — and then a bug in a lower layer means you have a security hole anyway. And this has already happened”.

It seems to me that one thing we might expect AIs to do better than people is to write code. Researchers from Oak Ridge National Laboratory in the US foresee AI taking over code creation from humans within a generation. They say that machines, rather than humans, “will write most of their own code by 2040”. As it happens, they’ve started already. AutoML was developed by Google as a solution to the lack of top-notch talent in AI programming: there aren’t enough cutting-edge developers to keep up with demand, so the team came up with machine-learning software that can create self-learning code… Even scarier, AutoML is better at coding machine-learning systems than the researchers who made it.

When we’re talking about “smart” “contracts”, though, we’re not talking about superhuman programming feats; we’re really talking about messing around with Java and APIs. Luckily, last year saw the arrival of a new deep-learning software-coding application that can help human programmers navigate Java and APIs. The system—called BAYOU—was developed at Rice University with funding from the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) and Google. It trained itself by studying millions of lines of human-written Java code from GitHub, and drew on what it found to write its own code.
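To give a flavour of what “navigating Java and APIs” means in practice, here is a hypothetical example (my illustration, not BAYOU’s actual output) of the sort of mundane, pattern-heavy API-stitching code such a tool is aimed at producing from a short hint like “read the lines of a file”:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical illustration: the kind of routine Java API boilerplate that a
// model trained on millions of lines of GitHub code has seen thousands of times.
class ReadLines {
    static List<String> readLines(String path) throws IOException {
        return Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
    }
}
```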

Putting two and two together, then, I think I can see that if there is an interesting and special connection between AI and “blockchain”, it’s not about using the blockchain as a glorified Excel spreadsheet that AIs share between themselves; it’s about writing the consensus applications for the consensus computers. They still wouldn’t be contracts, but they would at least work.

An island of artificial intelligence

As I’ve written many times (e.g., here), it is difficult to overestimate the impact of artificial intelligence (AI) on the financial services industry. As Wired magazine said, “it is no surprise that AI tops the list of potentially disruptive technologies”. With Forrester further forecasting that a quarter of financial sector jobs will be “impacted” by AI before 2020, there’s an urgent need for the island to begin thinking about the next generation of financial services and to formulate a realistic strategy not only to cope with the changes but to exploit them. It is because the need is so urgent that I was delighted to be asked to give a keynote at the Cognitive Finance AI Retreat in September (which began with a beach barbecue, something I recommend to conference producers everywhere).

A beach barbecue is always a good idea at a conference.

The event was put together by my good friends at Cognitive Finance working with Digital Jersey (where I am an advisor to the board) and they did a great job of bringing together a spectrum of both subject-matter experts and informed commentators to cover a wide variety of issues and provide a great platform for learning.

On the first day of the event, political economist Will Hutton emphasised that financial services will be at the “cutting edge” of the big data revolution, pointing out that not only does the sector hold highly personal, highly valuable data about individuals, but that it has more complex oversight requirements than most other sectors.

Clara Durodie, CEO of Cognitive Finance Group, kicked off the event by talking about the potential for AI to help to manage the colossal flows of data that characterise the financial sector today, and I think she was right to highlight that the use of these technologies presents tremendous opportunities here.

In his superb “Radical Technologies”, Adam Greenfield wrote of the advance of automation that many of us (me included, by the way) cling to the hope that “there are some creative tasks that computers will simply never be able to perform”. I have no evidence that financial services regulation will be one of those tasks, so in my talk I suggested that AI will be the most important “regtech” of all and made a few suggestions as to how regulators can plan to use the technology to create a better (that is, faster, cheaper and more transparent) financial services sector. The strategic core of my suggestion was that jurisdictional competition to create a more cost-effective financial services market might be a competition that Jersey could do well in.

AI as Regtech

Regulation, however, was only one of the topics discussed in a fascinating couple of days of talks, discussions and case studies. The surprise for me was that there was a lot of discussion about ethics, and how to incorporate ethics into the decision-making processes of AI systems so that they can be accountable. I hadn’t spent too much time thinking about this before, but after talking with some very well-informed presenters I was certainly left with the impression that this might be one of the more difficult problems to address. Listening to experts such as Dr. Michael Aikenhead, Kay Firth-Butterfield, Dr. Sabine Dembrowski, Andrew Davies and many other leading names in finance and AI left me energised with the possibilities and intrigued by the problems.

AI is an event horizon for the financial services industry. With our current knowledge, we simply cannot see (or perhaps even imagine) the other side of the introduction of true AI into our business. But we can see that our traditional “laws” of cost-benefit analysis, compliance and competition will not hold in that new financial services space, which is why it is important to start thinking about what the new “laws” might be and how the financial services sector can take advantage of them.