I notice that in the considerable press comment concerning the possible introduction of a Facebook payment system and perhaps even a Facebook currency of some kind, commentators continually refer to a Facebook “stablecoin”. I am certain that they are wrong to use this term, because it does not mean what they think it means. I may well be facing a losing battle about this, but I am a stickler for correct currency terminology.
So. Stablecoin. What?
In the Bank of England’s excellent “Bank Underground” blog, there was a post on this topic that said “The chances of a stablecoin keeping a stable price depends on its design. There are generally two designs of stablecoin: those backed by assets, and those that are unbacked or ‘algorithmic’”. They are right, of course, but I would like to present a slightly more granular classification of stablecoin currencies. I think there are three kinds:
Algorithmic Currencies, in which algorithms manage supply and demand to obtain stability of the digital currency. This is what a stable cryptocurrency is: since a cryptocurrency is backed by nothing other than mathematics, it is mathematics that manages the money supply to hold the value of the coin steady against some external benchmark. This is what is meant by stablecoin in the original crypto use of the term.
Asset-backed Currencies, in which an asset or basket of assets is used to back the digital currency. I don’t know why people refer to these as stablecoins, since they are stable only against the specific assets that back them. A currency that is backed by, say, crude oil is stable against crude oil but nothing else.
Fiat-backed Currencies (aka Currency Boards), which are similar to asset-backed currencies but where the assets backing the digital currency are fiat currencies only. There are mundane versions of these already: in Bulgaria, for example, the local currency (the Lev) is backed by a 100% reserve of Euros.
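To make the first category concrete, here is a toy sketch of a “rebase” stabiliser, in which a simple proportional rule expands or contracts the money supply as the market price drifts from a $1.00 peg. The rule and the figures are purely illustrative assumptions of mine; no real protocol works exactly like this.

```python
# Toy "rebase" stabiliser for an algorithmic coin: if the market price
# drifts from the $1.00 peg, the protocol expands or contracts the money
# supply. The proportional rule and figures are illustrative only.

PEG = 1.00  # target price in USD

def rebase(total_supply, market_price):
    """Return the new total supply after one rebase step."""
    # Price above peg -> expand supply (dilute); below peg -> contract it.
    return total_supply * (market_price / PEG)

supply = 1_000_000.0
for price in (1.10, 0.95, 1.00):
    supply = rebase(supply, price)
    print(f"price={price:.2f} -> supply={supply:,.0f}")
```

The point of the sketch is simply that nothing backs the coin except the rule itself: stability stands or falls with the mathematics.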
As for that last category, it is effectively what is currently defined as electronic money under the existing EU directives, and therefore already regulated. Those coins backed by fiat currency, such as JPM Coin, simply provide a convenient way to transfer value around the internet without going through banking networks. Now, this may well be an advantage in cost and convenience for some use cases but it is a long way from an algorithmic currency. If this is indeed what Facebucks turn out to be (ie, actual bucks that you can send around on Facebook, something along the lines of Apple Cash), then I have written before why I think they will be successful.
So will any or all of these catch on?
Predictions are of course difficult, but my general feeling is that it is the asset-backed currencies that are most interesting and most likely to succeed in causing an actual revolution in finance and banking. Algorithmic stablecoins and fiat “stablecoins” exist to serve a demand for value transfer, but this is increasingly served well by conventional means. I notice this week, for example, that Transferwise can now send money from the UK to Hong Kong in 11 seconds, a feat made possible by their direct connection to the payments networks of both countries. Why would I use a fiat token when I can send fiat money faster and cheaper?
Of course, you might argue that a digital currency board might allow people who are excluded from the global financial system to hold and transfer value, but I am unconvinced. There are plenty of ways to hold and transfer electronic value (eg, M-PESA) without using bank accounts. Generally speaking, people around the world are excluded because of regulation (eg, KYC) and if we want to do something about inclusion we should probably start here. If you are going to require KYC for the electronic wallet needed to hold your digital currency then customers may as well open a bank account, right?
(I’ve written before about how the need for an account hampered Mondex. When it was first launched, I went to a bank branch with £50 expecting to walk out with a Mondex card with £50 on it. What I actually walked out with was a multi-page form to open a bank account so that I could get a Mondex card which arrived some time later. And since I had to put my debit card into the ATM in order to load the Mondex card, I did what most other people did and drew out cash instead.)
I suppose there are some people who think that the anonymity and pseudonymity of cryptocurrencies might make them an attractive alternative to certain sectors, but this is probably a temporary window. If cryptocurrencies were used for crime on a large scale then efforts would be made to police them. Bitcoin, in particular, is not a good choice for criminals since it leaves a public and immutable record of their actions, but you can imagine a future in which the mere possession of an anonymous cryptocurrency becomes a prima facie case of money laundering.
Looking at the “stable” stable, then, I’ll put my money on the middle way. I’ve said it before and I’ll say it again, there is a real marketplace logic to the trading of asset-backed currencies in the form of tokens and I expect to see an explosion of different kinds.
I’ve been reading an interesting paper from Northumbria University called “Recipes from Programmable Money”. The paper looks at what customers of the UK challenger bank Monzo have done with its integration with IFTTT (the “if this, then that” automation software) to draw some early lessons that may have wide applicability to post-PSD2 financial services infrastructure. This is fascinating to me (even though I think the title is wrong, because it’s not the money that is being programmed but the bank accounts) because it is natural to wonder what, once third parties are free to build on banks’ interfaces because of PSD2, customers will want from the new product and service providers.
The paper examines how real users (albeit savvy early adopters in the UK) used the ability to automate a selection of Monzo account actions. Since these automations are a small window into what users might want from more general third-party API-based interactions, I think the researchers have uncovered useful insights about just how important XS2A will be. After all the speculation about what API access to accounts might mean for Europe’s banks, there’s no substitute for looking at what consumers actually do with the new technology.
It seems to me that the key finding of the paper is that “some of the most intriguing recipes in our corpus were those that integrated Monzo with applications that ordinarily have little to do with banking”. (“Recipes” are the IFTTT automation scripts.) That is, in general, consumers use banking services as integral to other services, which is what you might expect on reflection because users don’t want to do banking, which is boring, they want to do other more interesting things that happen to be facilitated by banking.
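As a concrete illustration of the kind of “recipe” the paper is talking about, here is a hypothetical round-up automation: when a card payment arrives, sweep the spare change into a savings pot. I should stress that the event shape and the pot-deposit action are my own inventions for the sketch, not IFTTT’s or Monzo’s actual interfaces.

```python
# Hypothetical "if this, then that" recipe: IF a card payment arrives
# THEN deposit the round-up into a savings pot. The event and action
# formats are invented for illustration.
import math

def round_up_recipe(event):
    """Return a pot-deposit action for a card payment, or None if the trigger doesn't fire."""
    if event.get("type") != "card_payment":
        return None  # the trigger did not fire
    amount = event["amount"]  # transaction amount in pounds, e.g. 2.60
    spare_change = round(math.ceil(amount) - amount, 2)
    if spare_change == 0:
        return None  # a whole-pound payment leaves nothing to sweep
    return {"action": "pot_deposit", "pot": "savings", "amount": spare_change}

# A £2.60 coffee triggers a 40p transfer into the pot.
print(round_up_recipe({"type": "card_payment", "amount": 2.60}))
```

Note that the interesting part is precisely that the banking step is incidental: the user is thinking about saving for a holiday, not about payments.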
The authors also observe that “this proliferation of financial data across different platforms, and channels, highlights the way in which programmable money may cut across services” and that “we are seeing how money and transactions are potentially just another form of data, to be pushed and pulled around integrated services”. I am sure they are correct about this, which is why it will be so hard for banks to find effective strategies to compete with other providers of those integrated services. It may well be that only the lower margin “pipe” services are available to them, in which case they need to focus on operational efficiency to compete.
All very interesting, and wholly congruent with earlier analyses from informed industry observers (eg, me). But it’s another point made in the “programmable money” paper that caught my eye. It’s impossible to disagree with it when it concludes that technologies such as machine learning, AI and smart contracts “foreground the delegation of significant financial power to automated systems and agents”. As I wrote last year, in the context of competition in retail banking, the future choice of banking services provider (the AS-PSP, in the euro-jargon) will be made not by customers, but by bots. It seems to me that the early indications from the real world are that this is correct, and that it has many ramifications.
I’ll give you an example. If you live in the UK and are over the age of around 30, you may have seen an advertisement with a man in a spacesuit in it.
No, not that one. I mean an advert on TV, the sort of thing that no-one under 30 ever sees any more. It’s an advert for a bank. It doesn’t matter which one. The point is that it’s about brand and image. But what will be the point of it in a world where an AI-powered child-of-IFTTT is doing the heavy lifting? Consumers may neither know nor care who their bank is. This will pose a challenge to those with a career in marketing, but it may have some positives too. For example, I can assure Barclaycard that my bot will pay no attention whatsoever to their advertisement with Simon Cowell in it, whereas like most normal people I would cancel my card because of it.
My bot will choose your bank on the basis of interest rates, response times, jurisdiction, functionality, service uptimes and other such measurable parameters. Your logo? Your sponsorships? Your history? Whatever.
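The selection such a bot might make can be sketched as a simple weighted score over measurable parameters. The banks, weights and figures below are all made up for illustration; the point is only what is absent from the inputs.

```python
# Sketch of a bot's provider selection: score banks on measurable
# parameters only. Banks, weights and figures are invented.

WEIGHTS = {"interest_rate": 0.5, "uptime": 0.3, "response_ms": 0.2}

def score(bank):
    """Higher rate and uptime score better; lower response time scores better."""
    return (WEIGHTS["interest_rate"] * bank["interest_rate"]
            + WEIGHTS["uptime"] * bank["uptime"]
            - WEIGHTS["response_ms"] * bank["response_ms"] / 1000)

banks = [
    {"name": "Bank A", "interest_rate": 1.5, "uptime": 0.999, "response_ms": 120},
    {"name": "Bank B", "interest_rate": 1.2, "uptime": 0.9999, "response_ms": 45},
]
best = max(banks, key=score)
print(best["name"])  # note: no logo, sponsorship or history in the inputs
```

There is no field for brand, sponsorship or heritage anywhere in that calculation, which is rather the point.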
The US is behind some other parts of the world, perhaps, but it is trending in the same direction. According to recent research, almost a third of American adults use no cash at all for their weekly purchases (it was a quarter back in 2015). Conversely, a fifth of Americans say that they make nearly all of their purchases in cash. Against this backdrop, it is no surprise that some retailers, in some locations, are starting to go cash free. Now, as far as I am concerned, that’s up to them. Writing in the CATO Journal last year — “Special Interest Politics Could Save Cash or Kill It” CATO Journal 38(2): 489-502 (Spring 2018) — Norbert Michel said “it seems risky, at best, to give the government so much control over the form of payment citizens choose, but that is exactly what many policymakers are hoping to do”. He was talking about laws to ban cash, but the argument applies both ways. Should regulators care whether you pay in cash or not and, if they do care, what should they do about it?
Here’s a specific example. In March, Atlanta’s Mercedes-Benz Stadium, home of the Atlanta Falcons, stopped accepting cash for sporting events. Now, I imagine the people who run the Mercedes-Benz Stadium to be businesspeople who operate according to the principles of profit and loss. They’re not making this decision because of some ideological position about notes and coins. They wouldn’t be doing it unless they thought they would be better off without the costs of cash.
There is no US law on the subject. I see in Payment Law Advisor that the US Treasury Department has guidance on the issue, but it states that refusing cash may be allowable “on a reasonable basis, such as when doing so increases efficiency, prevents incompatibility problems with the equipment employed to accept or count the money, or improves security”. Security and efficiency are precisely the factors causing retailers to shift to cashless operation as far as I can see, so the Treasury guidelines seem to be working.
That does not, however, seem to matter to the State and City legislators who are rising to the challenge of dragging America back into the 1950s, when the payment card was a notion restricted to future fiction and the concept of a mobile phone so alien as to be unimaginable. At that level there is a patchwork of regulation. Massachusetts apparently has a little-known 1978 law requiring retail stores to accept both cash and credit, although it does not seem to be enforced and the legislature has yet to say whether it applies to restaurants. Food and drink are in the vanguard elsewhere, such as in Pennsylvania, where the head of the Pennsylvania Restaurant and Lodging Association says that there are lots of restaurants (as well as other businesses) that want to go cashless because “places that handle cash are less safe than those that don’t have cash on hand” and because in a cash business “taxes aren’t always paid”.
Yet US legislators seem to be in favour of maintaining this costly and inefficient state of affairs. The New York Times reports that the New Jersey Legislature and the Philadelphia City Council have already passed measures this year that would ban cashless stores, and New York City, Washington, San Francisco and Chicago are considering doing something similar. Their objection is that cashlessness marginalises low-income communities. If this is true, and I have no reason to doubt the sincerity of these lawmakers, then it is a problem with the financial system, not with retailing. Penalising retailers by forcing them to accept cash because the financial system does not make a reliable, secure electronic alternative available to low-income (or, indeed, any other) communities is perverse.
I don’t want to discuss the causes here – that’s for another time – but the specifically US problem around financial inclusion is the root cause of the problem and that’s what should be tackled. If low-income people in Somalia can buy produce in the local market using their mobile phones, you can’t help but wonder why low-income people in Philadelphia can’t do the same, much to the benefit of society as a whole.
I’ve said many times that we need an identity infrastructure that deals with the realities of this modern world, the world of the Nth industrial revolution (where N is 4, or 5, or something similar). As things go from bad to worse, we need this infrastructure to be a government priority and we need the private and public sectors to come together to deliver it. And if they don’t want to, if you don’t want to, then you should be made to. I’m not standing here flattered to be asked to deliver this keynote because digital identity is about making life easier when you log in to your bank or to do your taxes. I’m here because it is far more important than that. Digital identity is vital national infrastructure.
We don’t have long to get our act together and we are starting from scratch. In the UK we have no tradition of identity cards or national identification systems, or anything like it. To the British, national identification is “papers, please”: something associated with authoritarian tyrannies, France and wartime. And even in wartime, the idea of requiring people to hold some form of identification was regarded as so fundamentally incompatible with the customs and practices of Her Majesty’s subjects that the last British identity cards (from the first and second world wars, essentially) drew on what Jon Agar memorably labelled “parasitic vitality” from other systems such as conscription and food rationing. Identity infrastructure was created as a form of mobilisation against the enemies of the Realm, and the chosen implementation, the identity card, was not an end in itself but a means to support those other activities in aid of the war effort.
This dislike of identification as a State function is hardly unique to the United Kingdom. In America there are similarly strong opinions on the topic and the failure of the Australia Card back in 2007 stems, I think, from the same common law roots. These views of course stand in stark contrast to the views of almost all other nations of the world. The majority of people on Earth have some form of state identification and would find it impossible to navigate daily life without it. That doesn’t make the need to be identified by the state at all times either right or proper, by the way, but that’s a different discussion for another day.
If the development of national identity infrastructure is, however, only possible as part of a war effort… well, I have to tell you that we are at war. It’s just that this time we’re in a cyberwar and our identity infrastructure needs to support mobilisation across virtual and mundane realms. World War 3.0 has already started but a lot of people haven’t noticed because it’s in the matrix. There was no specific date when this war broke out and there is no conceivable Armistice Day on which it will end. Rather, as Bruce Schneier put it in his excellent book Click Here to Kill Everybody last year, cyberwar is the new normal.
(This will, unfortunately, make the war movies of the future rather dull. No more Dunkirk or Saving Private Ryan, no more The Dambusters or Enemy at the Gates. Instead movies will be about solitary individuals sitting in dimly-lit bedsits typing lines of Perl or Solidity while eating tuna out of a can.)
The advent of cyberspace conflict is not because computers and communications technologies have only just reached the Armed Forces. Far from it: the very first computers were developed to compute ballistic trajectories, and part of my young life was spent trying to work out how to use radio and satellite technologies to keep NATO systems connected after a first strike against command and control infrastructure, which is why talk of white noise jamming and direct-sequence spread spectrum transmission still gives me a shiver. But in those far-off days, the reason for knocking out NATO’s IT infrastructure was so that you could then send tank columns through the Fulda Gap or drop the Spetsnaz into Downing Street. There were cyber aspects to war, but it wasn’t a cyberwar. Now it’s all-out cyberwar and, as historian Niall Ferguson said in his book The Square and The Tower, it’s war between networks.
(The early British response to this new state of affairs was comfortingly backward-looking. Back in 2013 there was a plan for the creation of a digital Home Guard made up from well-meaning volunteers to stand on the cyber-landing grounds to repel invasion.)
Now, I’m sure that behind the scenes the Department of Defense has been working around the clock to defend our payment systems and water supplies against foreign hackers, but I do wonder whether the insidious threat from the intersection of post-modernism and social media had as high a priority. It should have done, because as it turned out the enemy stormed Facebook, not the Fulda Gap. We need a wall right enough, but we need it to be around our data.
Marshall McLuhan saw this coming, just as he saw everything else coming. Way back in 1970, when the same Cold War that I played my part in was well under way, he wrote in Culture is our Business that “World War III is a guerrilla information war with no division between military and civilian participation”. Indeed. And as we are now beginning to understand, it is a war where quiet subversion of the enemy’s mental assets is as important as the destruction of their physical assets. Social media are creating entirely new opportunities for what The Economist referred to as “influence operations” (IO) and the manipulation of public opinion. We all understand why! In the future, “fake news” put together with the aid of artificial intelligence will be so realistic that even the best-resourced and most professional news organisation will be hard pressed to tell the difference between the real and the made-up sort.
Smart cyber-rebels will want to take over social media, just as rebel forces set off to capture the radio and TV stations first: not to shut them down, but to control them. The lack of identity infrastructure makes it easy for them: at least you could see when your favourite news reader had been replaced by a colonel in a flak jacket, but you’ve no idea who is feeding the “news” to your social media timeline. It’s probably not even people anymore. While writing these words I read of (yet another) complaint about social media companies doing nothing to control co-ordinated bot attacks. But how are they supposed to know who is a bot and who isn’t? Whether a troll army is controlled by enemies of the state or commercial interests? If an account is really that of a first-hand witness to some event or a spy manufacturing an event that never happened?
The need to tell “us” from “them”, real from fake, insiders from outsiders, attackers from defenders is critical and the lack of an identity infrastructure (as much as the creation of identity infrastructures that are too easy to subvert) leaves us open to manipulation. We need to create an effective infrastructure as a matter of urgency but it should not be framed in the context of a 20th-century bureaucracy responding to the urban anonymity of the industrial revolution by conceiving of people as index cards, but in a 21st-century context based on McLuhan’s notions of identity forged in relationships. We need to create an environment of ambient safety, where both security and privacy are strengthened, twin foundations for the structures we need to build to prevent chaos.
(America may or may not need a Space Force, but it most certainly needs a Cyberspace Force.)
So this is my challenge to you. This is a conference I take very seriously and an audience that I respect. I am looking to you to man the barricades. I want you to begin the process of assembling the infrastructure that we so desperately need, so that I can tell my e-mail package to ignore messages that say they came from my bank but didn’t, my web browser to put a red border around “news” that does not come from a reputable, cross-checked source and my phone to ignore tweets that come from bots rather than people.
If this all sounds over-dramatic: it isn’t. I think it is perfectly reasonable to interpret the current state of cyberspace in these terms because the foreseeable future is one of continuous cyberattack from both state and non-state actors and digital identity is a necessary building block of our key defences. I sincerely hope that over the next couple of days you will find new ideas, new ways of co-operating and perhaps even a new mission to protect and survive in this new era of amazing opportunities, astonishing threats and terrifying risks.
Well, I’ve never appeared in a cartoon before (to the best of my knowledge) so my sincere thanks to Richard Parry and “The Chaps” for their kind comment on this keynote. I should point out that I am well aware of the market failure around cybersecurity, but that’s a topic for another day!
Around a decade ago my son was, as is rather the fashion with teenagers, in a band. With some friends of his, he arranged a “gig” (as I believe they are called) at a local venue. There were five bands involved and the paying public arrived in droves, ensuring a good time was had by all. All of this was arranged through Facebook. All of the organisation and all of the coordination was efficient and effective so that the youngsters were able to self-organise in an impressive way. Everything worked perfectly. Except the payments.
When it came to reckoning up the gig wonga (as my old friend Paul Pike of Intelligent Venues would call it), we had a couple of weeks’ worth of “can you send PayPal to Simon’s dad” and “he gave me a cheque, what do I do with it?” and “Andy paid me in cash but I need to send it to Steve” and so on. Some of them had bank accounts, some of them didn’t. Some of them had bank accounts that you could use online and others didn’t. Some of them had mobile payments of one form or another and others didn’t. I can remember that at one point my son turned to me and asked “why can’t I just send them the money on Facebook?”.
As I wrote at the time, I didn’t have a good answer to this because I thought that sending the money through Facebook would be an extremely good idea, and I can remember discussing with some clients at the time what sort of services they might be able to offer to Facebook or other social networks that were empowered through an Electronic Money Institution (ELMI) licence and Payment Institution (PI) licence. The rudimentary business modelling was quite positive, and so I naturally assumed that there would be some sort of Facebook money fairly soon, especially because I am something of a proponent of community monies of one form or another.
I also wrote at the time that Facebook money, or Zuckbucks ($ZUC), could easily become the biggest virtual currency in the world, given that there are so many people with Facebook accounts and the ability to send value instantly from one account to another via Facebook would be so attractive. You’ll remember that Facebook launched “Facebook Credits” some time ago but they weren’t really a currency, just a way of prepaying for virtual goods within the service. A virtual currency is something more: it’s true electronic money that you can send from one person to another. Well, it looks as if this is coming, as I read in the crypto press that Facebook “is talking to exchanges about potentially listing a cryptocurrency” [CoinDesk]. It looks as if $ZUC might be just around the corner, and people are getting excited.
As I understand things, Mr. Zuckerberg has already decided to integrate the social network’s three different messaging services — WhatsApp, Instagram and Facebook Messenger — into a single unified messaging platform and, according to the New York Times, have that platform implement end-to-end encryption. This would naturally be an ideal platform for a universal currency, so it’s no surprise to hear that the company is now looking at just such an enterprise. Even if Facebook couldn’t read the details of a transaction, it would know that I just paid a car insurance company and might find some use for the data in the future.
My suspicions that a Facebook money might be rather successful were further strengthened while listening to one of my favourite podcasts, Pivot with Kara Swisher and Scott Galloway, on a plane last week. Scott said that his biggest friction in the physical world is charging (I couldn’t agree more – battery life is the bane of my road warrior existence) and that his biggest friction in the virtual world is payment. He cited the example of trying to buy wifi on a flight and having to mess around typing in card numbers like it was 1995 and pointed out just how much Facebook could gain by adding payments to their platform. Scott is surely right, and since the people at Facebook are smart, they must be looking at the potential to develop a new revenue stream that is separate from advertising with some enthusiasm.
Barclays’ equity research note on the subject (Ross Sandler and Ramsey El-Assal, 11th March 2019) reckons that a successful micro-payment service could add some $19 billion to Facebook’s revenues, so clearly I’m not the only one who is a little surprised that they haven’t already leveraged the technologies of strong authentication to get something off the ground. It also notes that one of the problems with the original Facebook Credits business was the cost of interchange, a problem that has a very different shape now with interchange caps in place in various parts of the world and open banking giving the potential for direct access to consumer bank accounts (so that exchanges between fiat bank accounts and $ZUC would be free).
Facebook Marketplace has just added card payments [91Mobiles], as shown in the screenshot below, so that marketplace users can pay for goods directly without having to come out of Facebook. I think this is, frankly, a window into one possible future for financial services!
These are boring old Visa and Mastercard payments, but presumably $ZUC can’t be far behind. Unfortunately, since there are no details that I can find on what exactly “Facebook Coin” is going to be, I can’t really offer any informed comment on the chosen implementation. If, however, it is something along the lines of JPM Coin then it will be a form of electronic money and governed by the appropriate rules and regulations (which is good, and since they have very smart people at Facebook I’m sure they’ve already spotted the advantages of providing a trusted, regulated global payment service). You can kind of see the idea: your Facebook account sprouts an automatic, opt-out, wallet. You can buy coins for this wallet using a debit card and then send them to anyone else with a wallet (why this needs the blockchain is not entirely clear, by the way, but that’s another discussion).
Wallets that have been KYC’d (put to one side what exactly this might entail) could store up to, say, $ZUC 10,000, while wallets without KYC would be limited to, say, $ZUC 150. I think this might be a great opportunity for banks to use their federated and standardised digital identity infrastructure* to provide an attractive service to Facebook that might relieve them of onerous regulatory burdens. All Facebook has to do is get me to log in to my bank and have them return some cryptographic token (with no personal information in it) to Facebook to indicate that the bank has done KYC and knows who I am. A bit of a win win.
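A minimal sketch of those tiers, using the limits from the paragraph above. The wallet structure and the bank-issued token are invented for illustration; the only substance is the two-tier cap.

```python
# Tiered wallet limits as described above: 10,000 $ZUC with KYC, 150
# without. The wallet fields and the bank-signed token are illustrative.

LIMITS = {"kyc": 10_000, "no_kyc": 150}

def can_receive(wallet, amount):
    """Would crediting `amount` keep the wallet within its tier's limit?"""
    tier = "kyc" if wallet["kyc_token"] is not None else "no_kyc"
    return wallet["balance"] + amount <= LIMITS[tier]

anonymous = {"balance": 100.0, "kyc_token": None}
verified = {"balance": 100.0, "kyc_token": "opaque-bank-signed-blob"}

print(can_receive(anonymous, 100))  # False: would exceed the 150 cap
print(can_receive(verified, 100))   # True: well inside the 10,000 cap
```

The token carries no personal information: Facebook learns only that some bank has done the KYC, which is the whole attraction of the federated arrangement.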
This, at a stroke, would provide teenagers with a means to settle gig wonga, provide online retailers with instant payment across borders and provide brands with a means to reward consumer behaviour. If Facebook make it free to buy $ZUC and guarantee to redeem at par for consumers, they could be on to a real winner. In Europe, if the Facebook wallet is combined with PSD2 to deliver instant load and instant payout, it delivers a serious play that will give people a reason to use the Facebook platform to organise their gigs, lay out their online wares and promote their brands instead of messing around with Snapchat or Youtube or email or blogs or whatever else they are using now.
* Note: does not exist. Images not from actual gameplay.
There is a character flaw in some people (eg, me) which means when they see something that is obviously wrong on Twitter they feel compelled to comment. This is why I couldn’t stop myself from posting a few somewhat negative comments about an “infographic” on the connection between AI and the blockchain, even though I could have just ignored the odd combination of cargo cult mystical thinking and a near-random jumble of assorted IT concepts and gone about my day.
When it came down to it though, I just couldn’t. So, naturally, I decided to write a blog post about it instead. The particular graphic made a number of points, none of which are interesting enough to enumerate in this discussion, but at its heart was the basic view set out, here for example, that blockchain and AI are at the opposite ends of a technology spectrum: one fostering centralised intelligence on closed data platforms, the other promoting decentralised applications in an open-data environment. Then, as the infographic “explained”, the technologies come together with AIs using blockchains to share immutable data with other AIs.
Neither of those basic views is true though. Whether an AI is centralised or decentralised is tangential to whether it uses centralised or distributed data, and whether “blockchain” is used by centralised or decentralised applications is tangential to whether those applications use AI. What is important to remember is that decentralised consensus applications running on some form of shared ledger technology can only access consensus data that is stored on that ledger (obviously, otherwise you couldn’t be sure that all of the applications would return the same results). An AI designed to, for example, optimise energy use in your home would require oracles to read data from all of your devices and place it on the ledger, and then another set of factotums to read new settings from the ledger and update the device settings. What’s the point? Why not just have the AI talk to the devices?
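The indirection can be sketched end to end to show how little it buys you. Everything below is a toy of my own devising; no real ledger or device API is implied.

```python
# The infographic's architecture, sketched: an oracle copies device
# readings onto the ledger, the AI may only read consensus data from
# the ledger, and a factotum pushes the new setting back to the device.

ledger = []  # stand-in for consensus data on a shared ledger

def oracle_write(reading):
    """Oracle: copy a device reading onto the ledger."""
    ledger.append(reading)

def ai_optimise():
    """AI: may only see ledger data; proposes a 10% power reduction."""
    latest = ledger[-1]
    return {"device": latest["device"], "watts": round(latest["watts"] * 0.9)}

def factotum_apply(instruction, devices):
    """Factotum: apply the AI's instruction to the real device."""
    devices[instruction["device"]] = instruction["watts"]

devices = {"heater": 2000}
oracle_write({"device": "heater", "watts": devices["heater"]})
factotum_apply(ai_optimise(), devices)
print(devices)  # three hops to achieve what one direct call could
```

Three components and a ledger round-trip to turn a heater down, when a single call from the AI to the device would have done the same job.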
It seems to me that one thing we might expect AIs to do better than people is to write code. Researchers from Oak Ridge National Laboratory in the US foresee AI taking over code creation from humans within a generation. They say that machines, rather than humans, “will write most of their own code by 2040”. As it happens, they’ve started already. AutoML was developed by Google as a solution to the lack of top-notch talent in AI programming. There aren’t enough cutting-edge developers to keep up with demand, so the team came up with machine-learning software that can create self-learning code. Even scarier, AutoML is better at coding machine-learning systems than the researchers who made it.
When we’re talking about “smart” “contracts” though we’re not talking superhuman programming feats, we’re really talking about messing around with Java and APIs. Luckily, last year saw the arrival of a new deep-learning software-coding application that can help human programmers navigate Java and APIs. The system—called BAYOU—was developed at Rice University with funding from the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) and Google. It trained itself by studying millions of lines of human-written Java code from GitHub, and drew on what it found to write its own code.
Putting two and two together then, I think I can see that if there is an interesting and special connection between AI and “blockchain” then it’s not about using the blockchain as a glorified Excel spreadsheet that AIs share between themselves, it’s about writing the consensus applications for the consensus computers. They still wouldn’t be contracts, but they would at least work.
The media recently reported, somewhat breathlessly (eg, CNBC), that JP Morgan Chase (JPMC) is launching a “cryptocurrency to transform the payments business”. This sounded amazing so I was very excited to learn more about this great leap forward in the future history of money.
Now, many people took a look at this and pointed out that it is simply JPMC deposits by another name, and uncharitable persons (of whom I am not one) therefore dismissed it as a marketing gimmick. But it is more interesting than that. Here is the problem that it is trying to solve…
Suppose I am running apps (referred to by less well-informed media commentators as “smart” “contracts” when they are neither) on JPMC’s Quorum blockchain. Quorum is, in the terminology that I developed along with Richard Brown (CTO of R3) and my colleague Salome Parulava, their double-permissioned Ethereum fork (that is, it requires permission to access it and a further permission to take part in the consensus-forming process). I’m quite partial to Quorum (this is what I wrote about it back in 2017) and am always interested to see how it is developing and helping to define what I call the Enterprise Shared Ledger (ESL) software category.
Now suppose my Quorum app wants to make a payment – not in imaginary internet play money, but in US dollars – in return for some service. How can it do this? Remember that our apps can’t send a wire transfer or use a credit card because they can only access data on the blockchain. If the app has to pay using a credit card, and that app could be executing on a thousand nodes in the blockchain network, then you would have a thousand credit card payments all being fired off within a few seconds! You can see why this can’t work.
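The problem is easy to see in a toy sketch. This is purely illustrative Python, nothing to do with Quorum’s actual code: the same contract body runs on every consensus node, so any side effect that reaches outside the ledger happens once per node.

```python
# Toy illustration of why consensus apps cannot call out to card networks:
# every node runs the same contract, so every node fires the payment.
class Node:
    def __init__(self, name, payment_gateway):
        self.name = name
        self.gateway = payment_gateway  # shared stand-in for a card network

    def execute_contract(self, amount):
        # Each node independently triggers the off-ledger payment...
        self.gateway.append((self.name, amount))

gateway_calls = []  # requests the "card network" would see
nodes = [Node(f"node{i}", gateway_calls) for i in range(1000)]
for n in nodes:
    n.execute_contract(100)

print(len(gateway_calls))  # 1000 card payments for one logical purchase
```

One logical purchase, a thousand charges: the side effect has to be moved onto the ledger itself, where it becomes part of the shared state rather than an external action.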
One way to solve this problem would be to have “oracles” reporting on the state of bank accounts to the blockchain and “watchers” (or “custom executors” as Darius calls them here) looking for state changes in the on-ledger bank accounts and then mirroring those changes in the actual bank accounts. But that would mean putting the safe-to-spend limits for millions of bank accounts on to the blockchain. Another, more practical, solution would be to add tokens to Quorum and allow the apps to send these tokens to one another. This, as far as I can tell from a distance, is what JPM Coins are for.
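The token approach fits in a few lines. This is a hypothetical, minimal ledger in Python — the names and mechanics are illustrative, not JPM Coin’s actual implementation — but it shows why token transfers work where card payments don’t: a transfer is a deterministic state change that every node computes identically, with no off-ledger side effects.

```python
# Minimal sketch of an on-ledger token: value lives in the shared state,
# so moving it is just arithmetic that all nodes agree on.
class TokenLedger:
    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        # e.g. the bank credits tokens when real dollars arrive on deposit
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

ledger = TokenLedger()
ledger.mint("alice", 500)            # deposit $500, receive 500 tokens
ledger.transfer("alice", "bob", 200) # an app pays for a service on-ledger
print(ledger.balances)               # {'alice': 300, 'bob': 200}
```

Redemption runs the mint step in reverse: hand the tokens back to the issuer and the matching dollars leave the designated account.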
I have to say that this is a fairly standard way of approaching this problem. A couple of months ago, Signature Bank of New York launched just such a service for corporate customers — with a minimum $250,000 balance — using another permissioned Ethereum fork, similarly converting Uncle Sam’s dollars into ERC-20 tokens. If you’re interested, I gave a presentation to the Dutch Blockchain Innovation Conference last year on this approach and why I think it will grow (the video is online [23 minutes]).
Animal, vegetable or mineral?
These JPM Coins (I simply cannot resist calling them Dimon Dollars, or $Dimon, for obvious reasons) have attracted considerable discussion but I thought I might contribute something different to the debate by trying to reason my way through to a categorisation. I talked about this on the panel in the “Blockchain and Cryptocurrencies” session at Merchant Payments Ecosystem in Berlin today, and you can see my slides here:
On the panel, I said that the $Dimon is e-money. Here’s why…
Is it “money”? No, it isn’t. It is certainly a cryptoasset – a digital asset that has an institutional binding to a real-world asset – that in certain circumstances exhibits money-like behaviour. Personally, I am happy to classify such assets as forms of digital money, the logic being that they are bearer instruments that can be traded without clearing or settlement.
Is it a “cryptocurrency”? No, it isn’t. A cryptocurrency has a value determined, essentially, by mathematics in that the algorithm to produce the currency is known and the value of the cryptocurrency depends only on that known supply and the unknown demand (and, of course, market manipulation of various kinds). It is not set by an institution, government or otherwise.
Is it a “stablecoin”? No, it isn’t. A stablecoin has its value maintained at a certain level with reference to a fiat currency by managing the supply of the coins. But the value of the $Dimon is maintained by the institution of JP Morgan irrespective of the demand for it.
Is it a “currency board”? No, it isn’t. A currency board maintains the value of one currency using a reserve in another currency. So, for example, you might have a Zimbabwean currency board that issues Zim Dollars against a 100% reserve of South African Rand.
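As an aside, the “known supply” point in the cryptocurrency definition above is quite literal. Bitcoin’s issuance, for example, is fixed by an algorithm — the block subsidy starts at 50 BTC and halves every 210,000 blocks — so the eventual total supply can simply be computed:

```python
# Compute Bitcoin's hard supply cap from its published issuance rules:
# a 50 BTC subsidy, halved (with integer division, as in the protocol)
# every 210,000 blocks until it reaches zero.
def total_btc_supply():
    subsidy = 50 * 10**8  # initial block subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy  # satoshis issued in this halving era
        subsidy //= 2               # the halving
    return total / 10**8            # back to BTC

print(total_btc_supply())  # → 20999999.9769
```

No institution sets that number; it falls out of the rules, which is exactly the sense in which the supply is “known”.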
In fact, as far as I can tell, the $Dimon is e-money, which is one particular kind of digital money. There are two main reasons for this:
First, according to the EU Directive 2009/110/EC, “Electronic money” is defined as “electronically, including magnetically, stored monetary value as represented by a claim on the issuer which is issued on receipt of funds for the purpose of making payment transactions […], and which is accepted by a natural or legal person other than the electronic money issuer”. This sounds awfully like, as Bloomberg put it, the $Dimon is “a digital coin representing United States Dollars held in designated accounts at JPMorgan Chase N.A.”. It is a bearer instrument (so “coin” is a reasonable appellation) that entitles the holder to obtain a US dollar from that bank and therefore seems to fall within that EU definition since people other than JPMC, albeit customers of JPMC, accept it in payment. (I would pull back from calling it digital cash because of this need to establish an account with JPMC in order to hold it.)
Second, because my good friend Simon Lelieveldt, who knows more about electronic money than almost anyone else, says so. Simon and I have long agreed that the trading of digital assets in the form of tokens is the most interesting aspect of current developments in cryptocurrency, a point I made more than once in my MPE talk.
It’s one of my favourite days of the year today! I am a payments romantic, so you will undoubtedly know why! Today across the civilised world, we celebrate Saint Valentine, the patron saint of customer verification methods (CVMs). We buy flowers and eat chocolates on this day every year to commemorate the introduction of chip and PIN. Yes, chip and PIN was launched in the UK on 14th February 2006.
Yes, it’s lovely St. Valentine’s Day. Was it really thirteen years ago? The beautiful day, the day unromantically dubbed “chip and PIN day”, when we stopped pretending that anyone was looking at cardholders’ signatures on the backs of cards and instead mechanised the “computer says no” alternative. It really was! Thirteen years!
I’m sorry to say that in Merrie England, chip and PIN is on the wane. The majority of card transactions are contactless and, according to Worldpay (who should know), they have been for a few months now. Fraud is manageable because most transactions are authorised online now and would be whether we had chip and PIN or not. The offline PIN and “floor limit” world has gone. The world’s first optimised-for-offline payment system was launched after the world had already got online. This is why you see Brian Roemmele writing that “by the time the UK implemented chip & PIN, the base concept and much of the technology was already almost 40 years old”.
Early chip and PIN focus group.
It is time to remind people what Saint Valentine stood for and reiterate why we are using chip and PIN at all. In ancient times, when European retailers could not go online to verify PINs due to the anticompetitive pricing of the monopoly public telephone providers, it made sense to verify the PIN locally (ie, offline). But this is 2019. We have smart phones and laser beams and holiday snaps of Ultima Thule. We can probably think about verifying PINs online again, or even replacing PINs with fingerprints or DNA or whatever.
Smartphones in particular mean change and, as I have bored people on Twitter senseless by repeatedly tagging “#appandpay rather than #tapandpay”, this will take us forward to a new retail payment environment in which the retail payment experience will converge across channels to the app. As payments shift in-app so the whole dynamic of the industry will change. Introducing a new payment mechanism faces the well-known “two-sided market” problem: retailers won’t implement the new payment mechanism until lots of consumers use it, and consumers won’t use it until they see lots of retailers accepting it. This gives EMV a huge lock-in, since the cost of adding new terminals is too great to justify speculative investment.
When you go in-app, however, the economics change vastly. For Tesco to accept DavePay in store is a big investment in terminals, staff training, management and so on. But for the Tesco app to accept DavePay is… nothing, really. Just a bit of software. However traditional we might be, the marginal cost of adding new payment mechanisms is falling (particularly direct-to-account mechanisms because of open banking) and our industry needs to think about what that means.
I’m not saying that cards and PINs are going to go away any time soon, but what I am saying is that it’s time to start thinking about what might come next. Right now, that looks like smartphones with biometric authentication, but who knows what technologies are lurking around the corner to link identification and continuous passive authentication to create an ambient payments environment in which cards (and for that matter, terminals) are present only in a very limited number of use cases.
The Paris FinTech Forum this year was a superb event. I take my hat off to Laurent Nizri for pulling it all together and especially for his terrific first day panel with Christine Lagarde (who is Managing Director of the IMF and is therefore the woman in charge of money), Stefan Ingves (the governor of the Bank of Sweden), Carlos Torres Vila (Group Executive Chairman BBVA) and Kathryn Petralia (President of Kabbage) [video].
At one point, the conversation shifted to data. Carlos said that we should treat ownership of data as a human right, which I have to say I am not entirely sure about, and that “we should have regulation that forces data to flow” rather than the limited prescriptions of the 2nd Payment Services Directive (PSD2) “so that all sectors have to share their data, with consent, as banks have to do”.
(The reason that I’m not sure about the data ownership thing is that, as discussed in the MIT Technology Review recently, it may be a counterproductive way of thinking that “not only does not fix existing problems; it creates new ones”. Instead, as that article says, we need a framework that gives people the ability to stipulate how their data is used without requiring them to take ownership of it.)
That is a very interesting perspective on a very important issue.
What Carlos was talking about is the asymmetry at the heart of PSD2, an asymmetry that the regulators created and which, if left to its own devices, means an uncomfortable future for banks. I wrote about this back in 2017 for Wired, pointing out that the winners in this new environment will not be innovative startups across Europe but the people who already have all the data in the world and can use data from the financial system to obtain even greater leverage from it. In other words, the GAFA-BAT data-industrial complex.
In Prospect (August 2018) there was a debate between Vince Cable, the former chief economist at Shell, and the economist John Kay. The issue was whether the internet giants should be broken up. Mr. Cable felt that the new data-industrial complexes (the DICs, as I call them, of course) need regulatory taming and that competition authorities should take a wider view of social welfare rather than focus solely on price, while Mr. Kay felt that regulators should focus elsewhere on higher priorities and let internet competition sort itself out. He has a point, because regulators have so far failed in this respect. As The Economist (Antitrust theatre, 21st July 2018) noted, despite headline grabbing fines and other antitrust actions, the European Commission has done little to strengthen competition.
So what to do? Do we sit back and allow the DICs to form unassailable oligarchies or should there be, as Carlos clearly thinks, a regulatory response? And if so, what response?
Mr. Cable’s call for some form of regulatory response is hardly unique. Last year I had the honour of chairing Professor Scott Galloway at a conference in Washington, DC. Scott is the author of “The Four”, a book about the power of internet giants (specifically Google, Apple, Facebook and Amazon). In his speech, and his book, he sets out a convincing case for regulatory intervention to manage the power of these platform businesses. Just as the US government had to step in with the anti-trust act in the late 19th century and deal with AT&T in the late 20th century, so Scott argues that they will have to step in again to save capitalism. His argument centres on the breaking up of the internet giants, as Mr. Cable called for, but I cannot help but wonder if this is an already outdated response to changing economic dynamics in a world where data is the new oil (and personal data is the new toxic waste). Perhaps there is a post-industrial alternative to replace that industrial-age regulatory recipe for healthy competition in a future capitalist framework. As Viktor Mayer-Schönberger and Thomas Ramge note in Foreign Affairs (A Big Choice for Big Tech, Sep. 2018), a better solution is a “progressive data sharing mandate”. They suggest sharing anonymised subsets of data to boost competition, but I think there might be an alternative.
The Banking Example
To see what this might look like, consider the example of the UK’s banking sector, where regulation at both the UK and European levels has turned it into a laboratory for what is called “open banking”. Here, a “perfect storm” of the combination of the Competition and Markets Authority (CMA) “remedies”, the European Commission’s Second Payment Services Directive (PSD2) “XS2A” (weird euro-shorthand for access to accounts) provisions and the Treasury’s push for competition in retail banking means that new business models, never mind new products and services, will be developed and explored here first.
(The rest of Europe will move to open banking in September 2019, when PSD2 comes into force, and other jurisdictions such as Australia are bringing in similar regimes — more on this later.)
Under the open banking regime, the banks are required by the regulator to install sockets in customer accounts so that anyone can plug in and access those accounts (with the customers’ permission, of course). Who knows what new businesses will be created by companies using these standard plugs to access your bank account? Who knows what new services will be delivered through the wires? It is an earthquake in the finance world and no-one can be completely sure as to what the competitive landscape will look like when the shocks have settled.
At the heart of the new regime, which began in January of this year, is the requirement for banks to implement these sockets, technically known as Application Programming Interfaces (APIs), for third-parties to obtain direct access to bank accounts. Just as apps on your smartphone can use map data through the Google Maps API or post to your Twitter stream using the Twitter API, open banking means that apps will be able to pull your statement out through an HSBC API and tell my bank to send money through a Barclays API.
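For flavour, here is roughly what such a call looks like from the third party’s side. The path below follows the general shape of the UK Open Banking read APIs, but the host, account identifier and token are made up for illustration:

```python
# Hypothetical sketch of building an open banking "read transactions"
# request. Endpoint shape follows the UK Open Banking AISP convention;
# the bank host, account id and token are illustrative, not real.
def build_statement_request(bank_host, account_id, access_token):
    url = (f"https://{bank_host}/open-banking/v3.1/aisp"
           f"/accounts/{account_id}/transactions")
    headers = {
        # Bearer token granted only after the customer consents
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_statement_request("api.examplebank.co.uk",
                                       "acc-123", "tok-xyz")
print(url)
```

The important point is not the plumbing but the consent step baked into the token: the third party only ever sees what the customer has explicitly authorised.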
Thus there is a genuinely new financial services environment coming into existence. But who will take maximum advantage of it? The incumbent banks or fintech startups? Financial services innovators or entrepreneurs who want to harness the banking infrastructure for social good? Customers taking control or challenger banks able to deliver better services to them?
I don’t think it’s any of these. Deutsche Bank Research published a note PSD 2, open banking and the value of personal data (June 2018) noting that while the new, free interfaces open up opportunities with respect to payment services, retail financing and other tailored products for fintechs who can “seamlessly attach their innovative services to the existing (banking) infrastructure”, there are others who can similarly take advantage. Retailers with large customer bases, for example. And of course the internet giants and, somewhat surprisingly perhaps, the existing retail banks. As Deutsche Bank point out, the incumbents could also benefit and act as third-party providers “vis-à-vis other account servicing banks” and offer an array of new or extended services to their customers, which will intensify competition among all providers.
We already see these responses out in the market. Deutsche Bank themselves have announced a project with IATA and there is great work being done by other incumbents (see for example, my Barclays mobile app) as well as challengers. Of particular interest I think is Starling Bank’s strategy to create a platform for new players. But… as I have said before, I think the regulators have made a miscalculation in their entirely laudable effort to increase competition in the banking sector. In brief, forcing the banks to open up their treasure trove of customer transaction data to third parties is not going to mean a thousand fintech flowers blooming, precisely because of the advantages it affords the incumbents vs. incomers. And while some big retailers will take advantage, the overall impact will be to tip the balance of power to a new, different and potentially more problematic oligarchy (to use Vince’s label).
What is going wrong?
Back in 2016, I said about the regulators demanding that banks open up their APIs that “if this argument applies to banks, that they are required to open up their APIs because they have a special responsibility to society, then why shouldn’t this principle also apply to Facebook?”. My point was, I thought, rather obvious. If regulators think that banks’ hoarding of customers’ data gives them an unfair advantage in the marketplace and undermines competition then why isn’t it true for other organisations in general and the “internet giants” in particular? As Diane Coyle, Bennett Professor of Public Policy at the University of Cambridge, pointed out in the Financial Times a year ago (Digital platforms force a rethink in competition policy, 17th Aug. 2017), economies of scale and insurmountable network effects mean that it will be very difficult for fintech startups to obtain significant market traction when they are competing with these giants.
Now, of course, when I wrote about this last year for Wired magazine’s Wired World in 2018, no-one paid any attention because I’m just some tech guy. But when someone like Ana Botin (Executive Chairman of Santander) started talking about it, the regulators, law makers and policy wonks began to sit up and take notice. In the Financial Times earlier this year (Santander chair calls EU rules on payments unfair, 16th April 2018) she remarked on precisely that asymmetry in the new regulatory landscape. In short, the banks are required to open up their customer data to the internet giants but there is no reciprocal requirement for those giants to open up their customer data to the banks. Amazon gets Santander’s data, but Santander doesn’t get Amazon’s data. Therefore, as Ana (and many others) suspect, the banks will be pushed into being heavily regulated, low-margin pipes while the power and control of the giants will become entrenched (broadly speaking, the distribution of financial services has a better return on equity than the manufacturing of them).
It boils down to this: If Facebook can persuade me that it’s in my interest to give them access to my bank account, I can press the button to give it to them and that’s that. They can use the PSD2 APIs to get to my data. On the other hand, if a financial services provider can persuade me to give them access to my Facebook data… well, hard luck. Carlos said, rather elegantly, that one of the nice things about data as a resource is that it doesn’t get used up.
What is to be done?
Ms. Botin suggested that organisations holding the accounts of more than (for example) 50,000 people ought to be subject to some regulation to give API access to the consumer data. Not only banks, but everyone else should provide open APIs for access to customer data with the customer’s permission. This is what is being planned in Australia, where open banking is part of a wider approach to consumer data rights and there will indeed be a form of symmetry imposed by rules that prevent organisations from taking banking data without sharing their own data. If a social media company (for example) wants access to Australians’ banking data it must make its data available in a format determined by a Consumer Data Standards Body. (Note that these standards do not yet exist, and as I understand things the hope is that the industry will come forward with candidates.)
This sharing approach creates more of a level playing field by making it possible for banks to access the customer social graph, but it would also encourage alternatives to services such as Instagram and Facebook to emerge. If I decide I like another chat service better than WhatsApp but all of my friends are on WhatsApp, it will never get off the ground. On the other hand, if I can give it access to my WhatsApp contacts and messages then WhatsApp will have real competition.
This approach would not stop Facebook and Google and the others from storing my data but it would stop them from hoarding it to the exclusion of competitors. As Jeni Tennison wrote for the ODI in June, a good outcome would be for “data portability to encourage and facilitate competition at a layer above these data stewards, amongst the applications that provide direct value to people”, just as the regulators hope customer-focused fintechs will do using the resource of data from the banks (who are, I think, a good example of data stewards). Making this data accessible via API would be an excellent way to obtain such an outcome.
It seems to me that this might kill two birds with one stone: it would make it easier for competitors to the internet giants to emerge and might lead to a creative rebalancing of the relationship between the financial sector and the internet sector. Instead of turning back to the 19th and 20th century anti-trust remedies against monopolies in railroads and steel and telecoms, perhaps open banking adumbrates a model for the 21st century anti-trust remedy against all oligopolies in data, relationships and reputation.
I don’t think that a digital ID card is quite the solution though, because I prefer a more sophisticated solution that is based on digital identities for everything and multiple personae for transactional purposes, but that’s splitting hairs at a high level. I am right behind Mr. Carney on the need for a solution, although I think he was wrong when he went on to say that such a scheme could also prove controversial and could “only be introduced by the Government rather than the Bank of England”. In my opinion he is mixing up the controversial idea of a national digital identity card of some kind (and he may well be unaware of the government’s decision to stop funding its GOV.UK Verify online identity scheme) with the uncontroversial notion of some form of secure and convenient identity management for the purposes of interacting with regulated financial institutions.
Only a day after Mr. Carney’s remarks, the Emerging Payments Association (EPA) released its report on money laundering and payments-related financial crime, calling for UK financial institutions and payment processors to create a “national digital identity scheme to tackle these threats”. So let’s take this national digital identity for financial services and digital ID card for online identity checking in Mr. Carney’s terms and call the concept, for the sake of brevity, the Financial Services Passport, or FSP.
I don’t know if Mr. Carney has read my 2014 book Identity is the New Money (still available from all good bookshops and Amazon), but in there I wrote that one very specific use of a digital identity infrastructure “should be to greatly reduce the cost and complexity of executing transactions in the UK by explicitly recognising that reputation will be the basis of trust and therefore transaction costs. The regulators should therefore set in motion plans for a Financial Services Passport”.
A few years ago, I spent some time as co-chair (with Ian Jenkins of Deloitte) of the techUK Financial Services Passport Working Group, working on the concept of a financial services passport with a bunch of smart people. No-one took the slightest interest in this obviously sensible concept, and I do not remember observing any inclination by the UK’s banks to work together on it.
That techUK Working Group, incidentally, was created because of recommendations of an earlier techUK report “Towards a New Financial Services” developed through 2013. Section 3 of this report is actually called “Identity and Authentication: Time for a Digital Financial Services Passport”. The conclusion of that section was:
There is clearly a need to look again at identity authentication in financial services. In addition to creating inconvenience for consumers, the current approach is expensive to maintain and inadequate in serving an increasingly digital financial services industry. As trusted authenticators of identity, a new standardised approach by financial services organisation could enable wider societal benefits, while also unlocking new opportunities for the industry. However, moving from the current fragmented identity infrastructure to a standardised financial services passport would require overcoming several challenges; from the competitive dynamics in financial services, to the extent and scope of liability, whilst simultaneously maintaining KYC and AML compliance.
In the first instance, the scope of a financial services passport needs to be more clearly defined. This requires a technology roadmap that can match objectives and requirements in managing digital identities in financial services with technical solutions and provide a feel for how trends may already be shaping the market in this space.
So what would a practical financial services passport actually look like? In the techUK discussions, we explored three broad architectures using the technology roadmap referred to above.
A centralised solution, some sort of KYC utility funded by the banks. This was seen as being the cheapest solution, but with some problems of governance and control. It could also be a single point of failure for the financial system and therefore unwise given that we are now in a cyberwar without end.
A decentralised “blockchain” (it wouldn’t really be a blockchain, of course; it would be some form of shared ledger) where financial institutions (and regulators) would operate the nodes and all of the identity CRUD (“create, read, update and delete”) operations would be recorded permanently.
A federated solution where each bank would be responsible for managing the identities of its own customers and providing relevant information to other banks as and when required.
At the time, I thought that the third option was probably best, but I’m open to rational debate around the topic. The way that I envisage this working is straightforward: my bank creates a financial services passport using the KYC data that it already has and “stamps” the passport with a minimum set of attributes needed to enable transactions. So Barclays would create an FSP for me. Then, when I go to Nationwide to apply for a mortgage, I could present that FSP to Nationwide and save them (and me) the time, trouble and cost of KYC. Instead of asking me for my bank account details, home address and inside leg measurement, Nationwide can use the stamps in my passport.
As I recall, the technology bit of this was easy but there were two discussions about this that were difficult. One was about liability (I advocate the “Identrust model” of transaction liability) and the other was about payment (I advocate an interchange model where the organisation using the passport pays the passport originator).
Let’s just say for sake of argument though that in response to Mr. Carney’s comments, the FCA decided on a federated solution using the three-domain identity (3DID) model. It would look like this:
All of the standards and technologies needed to make this happen already exist except in one area. The banks already do the KYC in the Identification Domain, we have FIDO and biometrics and mandatory Secure Customer Authentication (SCA) in the Authentication Domain and the tools that we need in the Authorisation Domain.
Let’s imagine that the digital identity is, basically, a key pair. In this case, the virtual identity is then a public key certificate that carries the attributes – the data about a person – that are necessary to enable transactions, as shown below. The attributes are digitally-signed by organisations that are trusted. This is where we need some standardisation to define attributes (eg, IS_A_PERSON, IS_OVER_18, HAS_OVERDRAFT_AGREEMENT or whatever). Were the Bank of England to make the banks get their act together and start doing something about this, maybe they could do what they did for Open Banking and set up a Financial Passport Implementation Entity (FPIE) to draw up the formats and standards for personae that can be used by developers to start work right away.
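To make the stamped-persona idea concrete, here is a hedged Python sketch. A real deployment would use proper public-key certificates or verifiable credentials; the HMAC below is a self-contained stand-in for the issuing bank’s signature, and the attribute names are the illustrative ones from the text:

```python
# Sketch of a "virtual identity": attributes stamped (signed) by an
# issuing bank so a relying party can trust them without re-doing KYC.
# HMAC with a shared secret stands in for real public-key signatures.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-signing-secret"  # stands in for the bank's key

def stamp(attributes):
    """The issuing bank signs a canonical encoding of the attributes."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "signature": sig}

def verify(passport):
    """A relying party (eg, another bank) checks the stamp."""
    payload = json.dumps(passport["attributes"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, passport["signature"])

fsp = stamp({"IS_A_PERSON": True, "IS_OVER_18": True,
             "HAS_OVERDRAFT_AGREEMENT": False})
print(verify(fsp))  # True: the stamps check out, no need to repeat KYC
```

Note what is absent: no name, address or account number needs to travel with the persona, only the signed claims the transaction actually requires.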
Note that this special case, where the virtual identity is the same as the “real” identity, is only one case. Barclays and others might well give me (or charge me for) other virtual identities, with the most obvious example being an “adult” identity that does not contain any personally-identifiable information for use in internet dating and so on.