The media recently reported, somewhat breathlessly (eg, CNBC), that JP Morgan Chase (JPMC) is launching a “cryptocurrency to transform the payments business”. This sounded amazing, so I was very excited to learn more about this great leap forward in the future history of money.
Now, many people took a look at this and pointed out that it is simply JPMC deposits by another name, and uncharitable persons (of whom I am not one) therefore dismissed it as a marketing gimmick. But it is more interesting than that. Here is the problem that it is trying to solve…
Suppose I am running apps (referred to by less well-informed media commentators as “smart” “contracts” when they are neither) on JPMC’s Quorum blockchain. Quorum is, in the terminology that I developed along with Richard Brown (CTO of R3) and my colleague Salome Parulava, their double-permissioned Ethereum fork (that is, it requires permission to access it and a further permission to take part in the consensus-forming process). I’m quite partial to Quorum (this is what I wrote about it back in 2017) and am always interested to see how it is developing and helping to define what I call the Enterprise Shared Ledger (ESL) software category.
Now suppose my Quorum app wants to make a payment – not in imaginary internet play money, but in US dollars – in return for some service. How can it do this? Remember that our apps can’t send a wire transfer or use a credit card because they can only access data on the blockchain. If the app has to pay using a credit card, and that app could be executing on a thousand nodes in the blockchain network, then you would have a thousand credit card payments all being fired off within a few seconds! You can see why this can’t work.
One way to solve this problem would be to have “oracles” reporting on the state of bank accounts to the blockchain and “watchers” (or “custom executors” as Darius calls them here) looking for state changes in the blockchain bank accounts that they could then mirror in the actual bank accounts. But that would mean putting the safe-to-spend limits for millions of bank accounts on to the blockchain. Another, more practical, solution would be to add tokens to Quorum and allow the apps to send these tokens to one another. This, as far as I can tell from a distance, is what JPM Coins are for.
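To make the token approach concrete, here is a minimal sketch in Python of a deposit-backed token ledger. Everything here is invented for illustration — the class and method names are mine, and nothing reflects JPMC’s actual implementation — but it shows the essential property: dollars are immobilised at the issuer, tokens circulate between apps on the ledger, and dollars only move when someone redeems.

```python
# Hypothetical sketch of a deposit-backed token ledger (not JPMC's design).
# Dollars sit in a reserve at the issuing bank; tokens move on-ledger.

class DepositTokenLedger:
    def __init__(self):
        self.balances = {}   # on-ledger token balances
        self.reserve = 0     # dollars held at the issuing bank

    def issue(self, account, dollars):
        """Customer deposits dollars; the issuer mints an equal number of tokens."""
        self.reserve += dollars
        self.balances[account] = self.balances.get(account, 0) + dollars

    def transfer(self, sender, receiver, amount):
        """Apps move tokens on-ledger; no external payment is fired off."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def redeem(self, account, amount):
        """Tokens are burned and dollars are paid out of the reserve."""
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[account] -= amount
        self.reserve -= amount
        return amount   # dollars returned to the customer

ledger = DepositTokenLedger()
ledger.issue("alice", 100)
ledger.transfer("alice", "bob", 40)   # every node computes the same state change
assert ledger.balances == {"alice": 60, "bob": 40}
assert ledger.reserve == 100          # the backing deposit is untouched by transfers
```

The point of the sketch is that every node deterministically computes the same transfer, and nothing leaves the ledger until someone redeems — exactly the property the thousand-credit-card-payments scenario lacks.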
I have to say that this is a fairly standard way of approaching this problem. A couple of months ago, Signature Bank of New York launched just such a service for corporate customers — with a minimum $250,000 balance — using another permissioned Ethereum fork, similarly converting Uncle Sam’s dollars into ERC-20 tokens. If you’re interested, I gave a presentation to the Dutch Blockchain Innovation Conference last year on this approach and why I think it will grow; the video is online [23 minutes].
Animal, vegetable or mineral?
These JPM Coins (I simply cannot resist calling them Dimon Dollars, or $Dimon, for obvious reasons) have attracted considerable discussion but I thought I might contribute something different to the debate by trying to reason my way through to a categorisation. I talked about this on the panel in the “Blockchain and Cryptocurrencies” session at Merchant Payments Ecosystem in Berlin today, and you can see my slides here:
On the panel, I said that the $Dimon is e-money. Here’s why…
Is it “money”? No, it isn’t. It is certainly a cryptoasset – a digital asset that has an institutional binding to a real-world asset – that in certain circumstances exhibits money-like behaviour. Personally, I am happy to classify such assets as forms of digital money, the logical reason being that they are bearer instruments that can be traded without clearing or settlement.
Is it a “cryptocurrency”? No, it isn’t. A cryptocurrency has a value determined, essentially, by mathematics in that the algorithm to produce the currency is known and the value of the cryptocurrency depends only on that known supply and the unknown demand (and, of course, market manipulation of various kinds). It is not set by an institution, government or otherwise.
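To make the “value determined by mathematics” point concrete: the issuance algorithm of a cryptocurrency such as Bitcoin is public, so anyone can compute the total supply in advance. A quick sketch in Python:

```python
# A cryptocurrency's supply is set by a public algorithm, not an institution.
# Bitcoin's block subsidy starts at 50 BTC and halves every 210,000 blocks,
# so the eventual total supply follows from simple arithmetic.

def total_btc_supply():
    subsidy_sats = 50 * 100_000_000   # initial subsidy, in satoshis
    total_sats = 0
    while subsidy_sats > 0:
        total_sats += 210_000 * subsidy_sats
        subsidy_sats //= 2            # the "halving"
    return total_sats / 100_000_000   # convert back to BTC

supply = total_btc_supply()
assert 20_999_999 < supply < 21_000_000   # just under the famous 21 million cap
```

Known supply plus unknown demand gives you the price; there is no institution in that equation, which is precisely why the $Dimon is not a cryptocurrency.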
Is it a “stablecoin”? No, it isn’t. A stablecoin has its value maintained at a certain level with reference to a fiat currency by managing the supply of the coins. But the value of the $Dimon is maintained by the institution of JP Morgan, irrespective of the demand for it.
Is it a “currency board”? No, it isn’t. A currency board maintains the value of one currency using a reserve in another currency. So, for example, you might have a Zimbabwean currency board that issues Zim Dollars against a 100% reserve of South African Rand.
In fact, as far as I can tell, the $Dimon is e-money, which is one particular kind of digital money. There are two main reasons for this:
First, according to the EU Directive 2009/110/EC, “Electronic money” is defined as “electronically, including magnetically, stored monetary value as represented by a claim on the issuer which is issued on receipt of funds for the purpose of making payment transactions […], and which is accepted by a natural or legal person other than the electronic money issuer”. This sounds awfully like, as Bloomberg put it, the $Dimon is “a digital coin representing United States Dollars held in designated accounts at JPMorgan Chase N.A.”. It is a bearer instrument (so “coin” is a reasonable appellation) that entitles the holder to obtain a US dollar from that bank and therefore seems to fall within that EU definition since people other than JPMC, albeit customers of JPMC, accept it in payment. (I would pull back from calling it digital cash because of this need to establish an account with JPMC in order to hold it.)
Second, because my good friend Simon Lelieveldt, who knows more about electronic money than almost anyone else, says so. Simon and I have long agreed that the trading of digital assets in the form of tokens is the most interesting aspect of current developments in cryptocurrency, a point I made more than once in my MPE talk.
It’s one of my favourite days of the year today! I am a payments romantic, so you will undoubtedly know why! Today across the civilised world, we celebrate Saint Valentine, the patron saint of customer verification methods (CVMs). We buy flowers and eat chocolates on this day every year to commemorate the introduction of chip and PIN. Yes, chip and PIN was launched in the UK on 14th February 2006.
Yes, it’s lovely St. Valentine’s Day. Was it really thirteen years ago? The beautiful day, the day unromantically dubbed “chip and PIN day”, when we stopped pretending that anyone was looking at cardholders’ signatures on the backs of cards and instead mechanised the “computer says no” alternative. It really was! Thirteen years!
I’m sorry to say that in Merrie England, chip and PIN is on the wane. The majority of card transactions are contactless and, according to Worldpay (who should know), they have been for a few months now. Fraud is manageable because most transactions are authorised online now and would be whether we had chip and PIN or not. The offline PIN and “floor limit” world has gone. The world’s first optimised-for-offline payment system was launched after the world had already got online. This is why you see Brian Roemmele writing that “by the time the UK implemented chip & PIN, the base concept and much of the technology was already almost 40 years old”.
Early chip and PIN focus group.
It is time to remind people what Saint Valentine stood for and reiterate why we are using chip and PIN at all. In ancient times, when European retailers could not go online to verify PINs due to the anticompetitive pricing of the monopoly public telephone providers, it made sense to verify the PIN locally (ie, offline). But this is 2019. We have smart phones and laser beams and holiday snaps of Ultima Thule. We can probably think about verifying PINs online again, or even replacing PINs with fingerprints or DNA or whatever.
Smartphones in particular mean change and, as I have bored people on Twitter senseless by repeatedly tagging “#appandpay rather than #tapandpay”, this will take us forward to a new retail payment environment in which the retail payment experience will converge across channels to the app. As payments shift in-app, so the whole dynamic of the industry will change. Introducing a new payment mechanism faces the well-known “two-sided market” problem: retailers won’t implement the new payment mechanism until lots of consumers use it, and consumers won’t use it until they see lots of retailers accepting it. This gives EMV a huge lock-in, since the cost of adding new terminals is too great to justify speculative investment.
When you go in-app, however, the economics change vastly. For Tesco to accept DavePay in store is a big investment in terminals, staff training, management and so on. But for the Tesco app to accept DavePay is… nothing, really. Just a bit of software. However traditional we might be, the marginal cost of adding new payment mechanisms is falling (particularly direct-to-account mechanisms because of open banking) and our industry needs to think about what that means.
I’m not saying that cards and PINs are going to go away any time soon, but what I am saying is that it’s time to start thinking about what might come next. Right now, that looks like smartphones with biometric authentication, but who knows what technologies are lurking around the corner to link identification and continuous passive authentication to create an ambient payments environment in which cards (and, for that matter, terminals) are present only in a very limited number of use cases.
The Paris FinTech Forum this year was a superb event. I take my hat off to Laurent Nizri for pulling it all together and especially for his terrific first day panel with Christine Lagarde (who is Managing Director of the IMF and is therefore the woman in charge of money), Stefan Ingves (the governor of the Bank of Sweden), Carlos Torres Vila (Group Executive Chairman BBVA) and Kathryn Petralia (President of Kabbage) [video].
At one point, the conversation shifted to data. Carlos said that we should treat ownership of data as a human right, which I have to say I am not entirely sure about, and that “we should have regulation that forces data to flow” rather than the limited prescriptions of the 2nd Payment Services Directive (PSD2) “so that all sectors have to share their data, with consent, as banks have to do”.
(The reason that I’m not sure about the data ownership thing is that, as discussed in the MIT Technology Review recently, it may be a counterproductive way of thinking that “not only does not fix existing problems; it creates new ones”. Instead, as that article says, we need a framework that gives people the ability to stipulate how their data is used without requiring them to take ownership of it.)
That is a very interesting perspective on a very important issue.
What Carlos was talking about is the asymmetry at the heart of PSD2, an asymmetry that the regulators created and which, if left to its own devices, means an uncomfortable future for banks. I wrote about this back in 2017 for Wired, pointing out that the winner in this new environment will not be innovative startups across Europe but the people who already have all the data in the world and can use data from the financial system to obtain even greater leverage from it. In other words, the GAFA-BAT data-industrial complex.
In Prospect (August 2018) there was a debate between Vince Cable, the former chief economist at Shell, and the economist John Kay. The issue was whether the internet giants should be broken up. Mr. Cable felt that the new data-industrial complexes (the DICs, as I call them, of course) need regulatory taming and that competition authorities should take a wider view of social welfare rather than focus solely on price, while Mr. Kay felt that regulators should focus elsewhere on higher priorities and let internet competition sort itself out. He has a point, because regulators have so far failed in this respect. As The Economist (Antitrust theatre, 21st July 2018) noted, despite headline grabbing fines and other antitrust actions, the European Commission has done little to strengthen competition.
So what to do? Do we sit back and allow the DICs to form unassailable oligarchies or should there be, as Carlos clearly thinks, a regulatory response? And if so, what response?
Mr. Cable’s call for some form of regulatory response is hardly unique. Last year I had the honour of chairing Professor Scott Galloway at a conference in Washington, DC. Scott is the author of “The Four”, a book about the power of internet giants (specifically Google, Apple, Facebook and Amazon). In his speech, and his book, he sets out a convincing case for regulatory intervention to manage the power of these platform businesses. Just as the US government had to step in with the anti-trust act in the late 19th century and deal with AT&T in the late 20th century, so Scott argues that they will have to step in again to save capitalism. His argument centres on the breaking up of the internet giants, as Mr. Cable called for, but I cannot help but wonder if this is an already outdated response to changing economic dynamics in a world where data is the new oil (and personal data is the new toxic waste). Perhaps there is a post-industrial alternative to replace that industrial age regulatory recipe for healthy competition in a future capitalist framework. As Viktor Mayer-Schönberger and Thomas Ramge note in Foreign Affairs (A Big Choice for Big Tech, Sep. 2018), a better solution is a “progressive data sharing mandate”. They suggest sharing anonymised subsets of data to boost competition, but I think there might be an alternative.
The Banking Example
To see what this might look like, consider the example of the UK’s banking sector, where regulation at both the UK and European levels has turned it into a laboratory for what is called “open banking”. Here, a “perfect storm” of the combination of the Competition and Markets Authority (CMA) “remedies”, the European Commission’s Second Payment Services Directive (PSD2) “XS2A” (weird euro-shorthand for access to accounts) provisions and the Treasury’s push for competition in retail banking means that new business models, never mind new products and services, will be developed and explored here first.
(The rest of Europe will move to open banking in September 2019, when PSD2 comes into force, and other jurisdictions such as Australia are bringing in similar regimes — more on this later.)
Under the open banking regime, the banks are required by the regulator to install sockets in customer accounts so that anyone can plug in and access those accounts (with the customers’ permission, of course). Who knows what new businesses will be created by companies using these standard plugs to access your bank account? Who knows what new services will be delivered through the wires? It is an earthquake in the finance world and no-one can be completely sure as to what the competitive landscape will look like when the shocks have settled.
At the heart of the new regime, which began in January of this year, is the requirement for banks to implement these sockets, technically known as Application Programming Interfaces (APIs), for third-parties to obtain direct access to bank accounts. Just as apps on your smartphone can use map data through the Google Maps API or post to your Twitter stream using the Twitter API, open banking means that apps will be able to pull your statement out through an HSBC API and tell your bank to send money through a Barclays API.
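To give a flavour of what “pulling your statement out through an API” means in practice, here is a short Python sketch. The payload is invented, though it is loosely shaped on the style of the UK Open Banking Read/Write data model (nested `Data.Transaction` resources with `Amount` and `CreditDebitIndicator` fields) — consult the published standard for the real resource definitions:

```python
import json

# Hypothetical response from an account-information API, loosely shaped on
# the UK Open Banking Read/Write style (check the real spec before relying on it).
sample = json.loads("""
{
  "Data": {
    "Transaction": [
      {"TransactionId": "t1", "CreditDebitIndicator": "Debit",
       "Amount": {"Amount": "12.50", "Currency": "GBP"}},
      {"TransactionId": "t2", "CreditDebitIndicator": "Credit",
       "Amount": {"Amount": "100.00", "Currency": "GBP"}}
    ]
  }
}
""")

def net_movement(payload):
    """Sum the transactions in a statement: credits in, debits out."""
    total = 0.0
    for txn in payload["Data"]["Transaction"]:
        amount = float(txn["Amount"]["Amount"])
        total += amount if txn["CreditDebitIndicator"] == "Credit" else -amount
    return total

print(net_movement(sample))   # 87.5
```

A third party that can fetch this JSON (with the customer’s consent and the appropriate OAuth credentials) can build balance forecasting, sweeping, lending decisions and so on — which is the whole point of the regime.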
Thus there is a genuinely new financial services environment coming into existence. But who will take maximum advantage of it? The incumbent banks or fintech startups? Financial services innovators or entrepreneurs who want to harness the banking infrastructure for social good? Customers taking control or challenger banks able to deliver better services to them?
I don’t think it’s any of these. Deutsche Bank Research published a note PSD 2, open banking and the value of personal data (June 2018) noting that while the new, free interfaces open up opportunities with respect to payment services, retail financing and other tailored products for fintechs who can “seamlessly attach their innovative services to the existing (banking) infrastructure”, there are others who can similarly take advantage. Retailers with large customer bases, for example. And of course the internet giants and, somewhat surprisingly perhaps, the existing retail banks. As Deutsche Bank point out, the incumbents could also benefit and act as third-party providers “vis-à-vis other account servicing banks” and offer an array of new or extended services to their customers, which will intensify competition among all providers.
We already see these responses out in the market. Deutsche Bank themselves have announced a project with IATA and there is great work being done by other incumbents (see for example, my Barclays mobile app) as well as challengers. Of particular interest I think is Starling Bank’s strategy to create a platform for new players. But… as I have said before, I think the regulators have made a miscalculation in their entirely laudable effort to increase competition in the banking sector. In brief, forcing the banks to open up their treasure trove of customer transaction data to third parties is not going to mean a thousand fintech flowers blooming, precisely because of the advantages it affords the incumbents vs. incomers. And while some big retailers will take advantage, the overall impact will be to tip the balance of power to a new, different and potentially more problematic oligarchy (to use Vince’s label).
What is going wrong?
Back in 2016, I said about the regulators demanding that banks open up their APIs that “if this argument applies to banks, that they are required to open up their APIs because they have a special responsibility to society, then why shouldn’t this principle also apply to Facebook?”. My point was, I thought, rather obvious. If regulators think that banks’ hoarding of customers’ data gives them an unfair advantage in the marketplace and undermines competition, then why isn’t it true for other organisations in general and the “internet giants” in particular? As Diane Coyle, Bennett Professor of Public Policy at the University of Cambridge, pointed out in the Financial Times a year ago (Digital platforms force a rethink in competition policy, 17th Aug. 2017), economies of scale and insurmountable network effects mean that it will be very difficult for fintech startups to obtain significant market traction when they are competing with these giants.
Now, of course, when I wrote about this last year for Wired magazine’s The Wired World in 2018, no-one paid any attention because I’m just some tech guy. But when someone like Ana Botin (Executive Chairman of Santander) started talking about it, the regulators, law makers and policy wonks began to sit up and take notice. In the Financial Times earlier this year (Santander chair calls EU rules on payments unfair, 16th April 2018) she remarked on precisely that asymmetry in the new regulatory landscape. In short, the banks are required to open up their customer data to the internet giants but there is no reciprocal requirement for those giants to open up their customer data to the banks. Amazon gets Santander’s data, but Santander doesn’t get Amazon’s data. Therefore, as Ana (and many others) suspect, the banks will be pushed into being heavily regulated, low-margin pipes while the power and control of the giants will become entrenched (broadly speaking, the distribution of financial services has a better return on equity than the manufacturing of them).
It boils down to this: If Facebook can persuade me that it’s in my interest to give them access to my bank account, I can press the button to give it to them and that’s that. They can use the PSD2 APIs to get to my data. On the other hand, if a financial services provider can persuade me to give them access to my Facebook data… well, hard luck. Carlos said, rather elegantly, that one of the nice things about data as a resource is that it doesn’t get used up.
What is to be done?
Ms. Botin suggested that organisations holding the accounts of more than (for example) 50,000 people ought to be subject to some regulation to give API access to the consumer data. Not only banks, but everyone else should provide open APIs for access to customer data with the customer’s permission. This is what is being planned in Australia, where open banking is part of a wider approach to consumer data rights and there will indeed be a form of symmetry imposed by rules that prevent organisations from taking banking data without sharing their own data. If a social media company (for example) wants access to Australians’ banking data, it must make its data available in a format determined by a Consumer Data Standards Body. (Note that these standards do not yet exist, and as I understand things the hope is that the industry will come forward with candidates.)
This sharing approach creates more of a level playing field by making it possible for banks to access the customer social graph, but it would also encourage alternatives to services such as Instagram and Facebook to emerge. If I decide I like another chat service better than WhatsApp but all of my friends are on WhatsApp, it will never get off the ground. On the other hand, if I can give it access to my WhatsApp contacts and messages then WhatsApp will have real competition.
This approach would not stop Facebook and Google and the others from storing my data, but it would stop them from hoarding it to the exclusion of competitors. As Jeni Tennison wrote for the ODI in June, a good outcome would be for “data portability to encourage and facilitate competition at a layer above these data stewards, amongst the applications that provide direct value to people”, just as the regulators hope customer-focused fintechs will do using the resource of data from the banks (who are, I think, a good example of data stewards). Making this data accessible via API would be an excellent way to obtain such an outcome.
It seems to me that this might kill two birds with one stone: it would make it easier for competitors to the internet giants to emerge and might lead to a creative rebalancing of the relationship between the financial sector and the internet sector. Instead of turning back to the 19th and 20th century anti-trust remedies against monopolies in railroads and steel and telecoms, perhaps open banking adumbrates a model for the 21st century anti-trust remedy against all oligopolies in data, relationships and reputation.
I don’t think that a digital ID card is quite the solution though, because I prefer a more sophisticated solution that is based on digital identities for everything and multiple personae for transactional purposes, but that’s splitting hairs at a high level. I am right behind Mr. Carney on the need for a solution, although I think he was wrong when he went on to say that such a scheme could also prove controversial and could “only be introduced by the Government rather than the Bank of England”. In my opinion he is mixing up the controversial idea of a national digital identity card of some kind (and he may well be unaware of the government’s decision to stop funding their GOV.UK Verify online identity scheme) with the uncontroversial notion of some form of secure and convenient identity management for the purposes of interacting with regulated financial institutions.
Only a day after Mr. Carney’s remarks, the Emerging Payments Association (EPA) released its report on money laundering and payments-related financial crime, calling for UK financial institutions and payment processors to create a “national digital identity scheme to tackle these threats”. So let’s take this national digital identity for financial services and digital ID card for online identity checking in Mr. Carney’s terms and call the concept, for the sake of brevity, the Financial Services Passport, or FSP.
I don’t know if Mr. Carney has read my 2014 book Identity is the New Money (still available from all good bookshops and Amazon), but in there I wrote that one very specific use of a digital identity infrastructure “should be to greatly reduce the cost and complexity of executing transactions in the UK by explicitly recognising that reputation will be the basis of trust and therefore transaction costs. The regulators should therefore set in motion plans for a Financial Services Passport”.
A few years ago, I spent some time as co-chair (with Ian Jenkins of Deloitte) of the techUK Financial Services Passport Working Group, working on the concept of a financial services passport with a bunch of smart people. No-one took the slightest interest in this obviously sensible concept, and I do not remember observing any inclination by the UK’s banks to work together on it.
That techUK Working Group, incidentally, was created because of recommendations of an earlier techUK report “Towards a New Financial Services” developed through 2013. Section 3 of this report is actually called “Identity and Authentication: Time for a Digital Financial Services Passport”. The conclusion of that section was:
There is clearly a need to look again at identity authentication in financial services. In addition to creating inconvenience for consumers, the current approach is expensive to maintain and inadequate in serving an increasingly digital financial services industry. As trusted authenticators of identity, a new standardised approach by financial services organisation could enable wider societal benefits, while also unlocking new opportunities for the industry. However, moving from the current fragmented identity infrastructure to a standardised financial services passport would require overcoming several challenges; from the competitive dynamics in financial services, to the extent and scope of liability, whilst simultaneously maintaining KYC and AML compliance.
In the first instance, the scope of a financial services passport needs to be more clearly defined. This requires a technology roadmap that can match objectives and requirements in managing digital identities in financial services with technical solutions and provide a feel for how trends may already be shaping the market in this space.
So what would a practical financial services passport actually look like? In the techUK discussions, we explored three broad architectures using the technology roadmap referred to above.
A centralised solution, some sort of KYC utility funded by the banks. This was seen as being the cheapest solution, but with some problems of governance and control. It could also be a single point of failure for the financial system and therefore unwise given that we are now in a cyberwar without end.
A decentralised “blockchain” (it wouldn’t really be a blockchain, of course, it would be some form of shared ledger) where financial institutions (and regulators) would operate the nodes and all of the identity CRUD (“create, read, update and delete”) operations would be recorded permanently.
A federated solution where each bank would be responsible for managing the identities of its own customers and providing relevant information to other banks as and when required.
At the time, I thought that the third option was probably best, but I’m open to rational debate around the topic. The way that I envisage this working is straightforward: my bank creates a financial services passport using the KYC data that it already has and “stamps” the passport with a minimum set of attributes needed to enable transactions. So Barclays would create an FSP for me. Then, when I go to Nationwide to apply for a mortgage, I could present that FSP to Nationwide and save them (and me) the time, trouble and cost of KYC. Instead of asking me for my bank account details, home address and inside leg measurement, Nationwide can use the stamps in my passport.
As I recall, the technology bit of this was easy but there were two discussions about this that were difficult. One was about liability (I advocate the “Identrust model” of transaction liability) and the other was about payment (I advocate an interchange model where the organisation using the passport pays the passport originator).
Let’s just say for the sake of argument, though, that in response to Mr. Carney’s comments, the FCA decided on a federated solution using the three-domain identity (3DID) model, with its separate identification, authentication and authorisation domains.
All of the standards and technologies needed to make this happen already exist except in one area. The banks already do the KYC in the Identification Domain, we have FIDO and biometrics and mandatory Strong Customer Authentication (SCA) in the Authentication Domain, and we have the tools that we need in the Authorisation Domain.
Let’s imagine that the digital identity is, basically, a key pair. In this case, the virtual identity is then a public key certificate that carries the attributes – the data about a person – that are necessary to enable transactions. The attributes are digitally signed by organisations that are trusted. This is where we need some standardisation to define attributes (eg, IS_A_PERSON, IS_OVER_18, HAS_OVERDRAFT_AGREEMENT or whatever). Were the Bank of England to make the banks get their act together and start doing something about this, maybe they could do what they did for Open Banking and set up a Financial Passport Implementation Entity (FPIE) to draw up the formats and standards for personae that can be used by developers to start work right away.
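A minimal sketch of that structure in Python may help. Everything here is illustrative — the attribute names, the issuer and the credential format are mine, not a standard — and, to keep the sketch dependency-free, an HMAC with the issuer’s secret stands in for the real asymmetric digital signature that a production FSP would use:

```python
import hashlib
import hmac
import json

# Illustrative sketch only: attribute names and credential shape are invented,
# and an HMAC stands in for a real public-key digital signature.

ISSUER_SECRET = b"barclays-demo-signing-key"   # hypothetical issuer signing key

def sign_attributes(attributes):
    """Issuer binds a set of attributes into a 'virtual identity' credential."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "issuer": "Barclays (demo)", "sig": signature}

def verify(credential):
    """Relying party (say, Nationwide) checks the issuer's stamp."""
    payload = json.dumps(credential["attributes"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

passport = sign_attributes({"IS_A_PERSON": True, "IS_OVER_18": True,
                            "HAS_OVERDRAFT_AGREEMENT": False})
assert verify(passport)                       # the stamps check out
passport["attributes"]["IS_OVER_18"] = False  # tampering breaks the signature
assert not verify(passport)
```

The relying party never sees the underlying KYC file, only the signed claims — which is exactly the point of the “stamps in the passport” idea.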
Note that this special case, where the virtual identity is the same as the “real” identity, is only one case. Barclays and others might well give me (or charge me for) other virtual identities, with the most obvious example being an “adult” identity that does not contain any personally identifiable information, for use in internet dating and so on.