The CBDC privacy paradox

It seems to me that there is something of a paradox around cash, digital cash and anonymity. The average consumer wants anonymity for their own payments because they are not crooks (and their purchasing decisions are no-one’s business except theirs and the merchant’s). On the other hand, the average consumer (not to mention the average law enforcement agent) doesn’t want anonymity for terrorists, lobbyists or fraudsters.

The Bank of England’s fintech director Tom Mutton said in a speech that privacy was “a non-negotiable” for a retail CBDC. Meanwhile, the Bank of Canada (just to pick one recent example) published a staff analytical note on the risks associated with CBDCs stating that central banks should mitigate risks such as the anonymity present in digital currencies. Note the formulation of anonymity as a “risk”. Stricter rules on the holding and exchange of cryptocurrencies are coming into place around the globe. Just to give one example, South Korea’s Financial Services Commission has announced new rules to come into force in 2022, banning all anonymous digital currencies “that possess a high-risk of money laundering” (which, as far as I can see, is all anonymous digital currencies).

There is a payments privacy paradox, and cryptocurrency brings it into sharp relief. Good people should be allowed anonymous cash, but bad people should not.

How can we resolve this? Well, I think that we can, if we spend a little time to think about what anonymity and privacy actually mean.

The Clinton Paradox

This is a special case of a more general paradox. Let me explain and illustrate. A few years ago, I was invited along to “an event” in London to enjoy a morning of serious thinking about some key issues in information security. They had some pretty impressive speakers as I recall: Mike Lynch, the founder of Autonomy, was one of them. Alec Ross, who was Senior Advisor for Innovation and Technology to the Secretary of State Hillary Clinton, gave the keynote address on “The promise and peril of our networked world”. Alec was a good speaker, as you’d expect from someone with a background in diplomacy, and he gave some entertaining and illustrative examples of using security to help defeat Mexican drug cartels and Syrian assassins. He also spent part of the talk warning against an over-reaction to “Snowden” leading to a web Balkanisation that helps no-one.

A decade back, I wrote about what I called the “Clinton Paradox”. This came about because I read a piece by Bob Gourley, the former CTO of the U.S. Defense Intelligence Agency, who framed a fundamental and important question about the future identity infrastructure when analysing Hillary Clinton’s noted speech on Internet freedom.

We must have ways to protect anonymity of good people, but not allow anonymity of bad people.

Mrs. Clinton had said that we need an infrastructure that stops crime but allows free assembly. I have no idea how to square that circle, except to say that prevention and detection of crime ought to be feasible even with anonymity, which is the most obvious and basic way to protect free speech, free assembly and whistleblowers: it means doing more police work, naturally, but it can be done. By comparison, “knee jerk” reactions, attempting to force the physical world’s limited and simplistic identity model into cyberspace, will certainly have unintended consequences. Hence, I had suggested, it might be better to develop an infrastructure that uses a persistent pseudonymous identity. I was looking to mobile operators to do this, because they had a mechanism to interact face-to-face (they had retail shops at the time) and remotely, as well as access to tamper-resistant secure hardware (ie, the SIM) for key storage and authentication. It never happened, of course.

Why am I remembering this? Well, I challenged Alec about the Clinton Paradox —slightly mischievously, to be honest, because I suspected he may have had a hand in the speech that I referred to in that blog post—and he said that people should be free to access the internet but not free to break the law, which is a politician’s non-answer (if “the law” could be written out in predicate calculus, he might have had a point, but until then…). He said that he thought that citizens should be able to communicate in private even if that means that they can send each other unauthorised copies of “Game of Thrones” as well as battle plans for Syrian insurgents.

I think I probably agree, but the key here is the use of the phrase “in private”. I wonder if he meant “anonymously”? I’m a technologist, so “anonymous” and “private” mean entirely different things and each can be implemented in a variety of ways.

The Payments Paradox

How will the Bank of Canada mitigate the risk of anonymity and South Korea maintain a ban on “privacy coins” when faced with a Bank of England digital currency that has non-negotiable privacy? Well, the way to resolve this apparent paradox is to note the distinction above between privacy and anonymity.

In the world of cryptography and cryptocurrency, anonymity is unconditional: it means that it is computationally infeasible to discover the link between a person in the real world and value online. Privacy is conditional: it means that the link is hidden by some third party (eg, a bank) and not disclosed unless certain criteria are met.
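To make that distinction concrete, here is a minimal sketch of *conditional* privacy. It is a toy model, not any real bank’s or central bank’s API: the `Custodian` class, the `warrant` flag and all the names are hypothetical, chosen only to illustrate that the link between person and pseudonym exists but is disclosed only when a stated criterion is met.

```python
import secrets

class Custodian:
    """Toy third party (eg, a bank) holding the link between a real-world
    identity and an online pseudonym. Privacy here is conditional: the
    link exists, but is disclosed only when the criterion (here, a
    warrant) is met. Anonymity, by contrast, would mean no such link
    could be computed at all."""

    def __init__(self):
        self._links = {}  # pseudonym -> real identity (held privately)

    def register(self, real_identity: str) -> str:
        pseudonym = secrets.token_hex(8)  # opaque handle used in payments
        self._links[pseudonym] = real_identity
        return pseudonym

    def disclose(self, pseudonym: str, warrant: bool) -> str:
        if not warrant:
            raise PermissionError("disclosure criterion not met")
        return self._links[pseudonym]

custodian = Custodian()
alias = custodian.register("Alice")
assert alias != "Alice"  # counterparties see only the pseudonym...
assert custodian.disclose(alias, warrant=True) == "Alice"  # ...a warrant unmasks it
```

The design choice being illustrated is simply where the link lives: with unconditional anonymity there is no `_links` table anywhere; with conditional privacy there is, but access to it is governed.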



Surveying the landscape as of now, I think we can see these concepts bounding an expanding privacy spectrum. There will undoubtedly be anonymous cryptocurrencies out there, but I think it is fair to observe that they will incur high transaction costs. At the other end of the spectrum, the drive for techfins and embedded finance will mean even less privacy (for the obvious reason, as discussed before, that their payment business models are built around data). One might argue, with some justification I think, that central banks are better positioned than banks or other intermediaries when it comes to safeguarding data, because a central bank has no profit motive to exploit payments data.

(I could go further and argue that if the central bank were to place transaction data into some form of data trust that would facilitate data sharing to the benefit of citizens, we might see some real disruption in the retail payments space. In a data trust structure, data stewards and guardians would look after the data or data rights of groups of individuals, with a legal duty to act in the interest of the data subjects or their representatives. In 2017, the UK government first proposed them as a way to make larger data sets available for training artificial intelligence, and a European Commission proposal in early 2020 floated data trusts as a way to make more data available for research and innovation. And in July 2020, India’s government came out with a plan that prominently featured them as a mechanism to give communities greater control over their data.)

Digital Currency, Digital Privacy

As The Economist once noted on the topic of central bank digital currency, people might well be “uncomfortable with accounts that give governments detailed information about transactions, particularly if they hasten the decline of good old anonymous cash”. And, indeed, I am. But the corollary, that anonymous digital currency should be allowed because anonymous physical cash is allowed, is plain wrong.

No-one, not the Bank of England nor any other regulator, central bank, financial institution, law enforcement agency, legislator or, for that matter, sane citizen of any democracy, wants anonymous digital currency whether from the central bank or anyone else. The idea of giving criminals and corrupt politicians, child pornographers and conmen a free pass with payments is thoroughly unappealing. On the other hand, the Bank of England and all responsible legislators should demand privacy.

I think the way forward is obvious, and relies on distinguishing between the currency and the wallets that it is stored in. Some years ago, when head of the IMF, Christine Lagarde spoke about CBDCs, noting that digital currencies “could be issued one-for-one for dollars, or a stable basket of currencies”. Why that speech was reported in some outlets as being somewhat supportive of cryptocurrencies was puzzling, especially since in this speech she specifically said she remained unconvinced about the “trust = technology” (“code is law”) view of cryptocurrencies. But the key point of that speech about digital fiat that I want to highlight is that she said:

Central banks might design digital currency so that users’ identities would be authenticated through customer due diligence procedures and transactions recorded. But identities would not be disclosed to third parties or governments unless required by law.

As a fan of practical pseudonymity as a means to raise the bar on both privacy and security, I am very much in favour of exploring this line of thinking. Technology gives us ways to deliver appropriate levels of privacy into this kind of transactional system and to do it securely and efficiently within a democratic framework. In particular, new cryptographic technology gives us the apparently paradoxical ability to keep private data on a shared or public ledger, which I think will form the basis of new financial institutions (the “glass bank” that I am fond of using as the key image) that work in new kinds of markets.
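The simplest way to see how private data can sit on a public ledger is a commit-and-reveal scheme. The sketch below is a toy using a salted hash, not any particular ledger’s design: everyone can see the commitment, no-one can read the transaction behind it, yet the committer can later prove exactly what was committed.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, salt). The salted hash hides the value
    (the salt prevents guessing common messages) while binding the
    committer to it."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + value).digest(), salt

def verify(commitment: bytes, salt: bytes, value: bytes) -> bool:
    """Anyone can check a revealed (salt, value) pair against the
    public commitment."""
    return hashlib.sha256(salt + value).digest() == commitment

payment = b"pay 10 digital dollars to wallet 42"
c, s = commit(payment)
ledger = [c]  # only the opaque commitment is public

assert verify(ledger[0], s, payment)        # honest reveal checks out
assert not verify(ledger[0], s, b"pay 99")  # a doctored claim does not
```

Real systems go much further (zero-knowledge proofs can establish properties of the hidden value without revealing it at all), but the paradox-dissolving idea is the same: public verifiability without public visibility.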

So, if I send ten digital dollars from my digital wallet to your digital wallet, that’s no-one’s business but ours. If, however, law enforcement agencies obtain a warrant to require the wallet providers to disclose the identity of the owners, then that information should be readily available. There is no paradox around privacy in payments, but there is an imperative for practical pseudonymity.

[An edited version of this article first appeared on Forbes, 6th April 2021.]

Covering up and COVID-19

The current pandemic has thrown up a particularly interesting case where conventional thinking doesn’t help us to understand how things could work in the future. We’ve all read with interest the accounts coming from Asia, and now Israel, of the use of mobile phone location data to tackle the dread virus. In the UK, the government has used some aggregate and anonymised mobile phone location data to see whether people were following social distancing guidelines, but it can actually play a much bigger role in tackling pandemics.

China got the virus under control with lockdowns in areas where it was endemic and apps to stop it from getting a foothold where it wasn’t. In Shanghai, which has seen few deaths, QR codes were used to authorise entry to buildings and to collect a detailed contact history so that control could be targeted in the case of infection. The Economist (21st March 2020) reported that the use of these codes was pervasive, to the point where each individual carriage on a subway train had its own code so that if someone tested positive only their fellow passengers need be contacted rather than everyone on the train.

South Korea, a country of roughly 50 million people, appears to have dealt with the pandemic pretty effectively. By mid-March it was seeing fewer than a hundred new cases per day. It did so without locking down cities or using the kind of authoritarian methods that China had used. What it did was to test over a quarter of a million people and then use contact tracing and strict quarantine (with heavy fines and jail as punishment). They were able to do this because legislation enacted as a result of the Middle East Respiratory Syndrome (MERS) epidemic in 2015 meant that the authorities could collect location data from mobile phones (along with payment data, such as credit card use) from the people who test positive. This data is used to track the physical path of the person and that data, with personally-identifiable information removed, is then shared via social media to alert other people that they need to go and be tested. At the time of writing, South Korea has seen a hundred deaths, while Italy (with a similar population) has seen more than thirty times as many.

Infrastructure and Emergency

Why does this make me think about the future? Well, it’s really easy to design a digital identity infrastructure for most of us for most of the time. Trying to figure out how to help a law-abiding citizen with a passport or driving licence to open a digital bank account or to login remotely to make an insurance claim or to book a tennis court at a local facility is all really easy. It doesn’t provide any sort of stress test of an identity infrastructure and it doesn’t tell us anything about the technological and architectural choices we should be making to construct that infrastructure. That’s why I’m always interested in the hard cases, the edge effects and the elephants in the room. If we are going to develop a working digital identity infrastructure for the always-on and always-connected society that we find ourselves in, then it must work for everybody and in all circumstances. We need an infrastructure that is inclusive and incorruptible.

This is why whenever somebody talks to me about an idea they have for how to solve the “identity problem” (let’s not get sidetracked into what that problem is, for the moment) then I’ll always reach into my back pocket for some basic examples of hard cases that must be dealt with.

(In conference rhetoric, I used to call these the “3Ws”: whistleblowing, witness protection and adult services. In fact, I was thinking about whistleblowing many, many years ago when I was asked to be part of a working group on privacy for the Royal Academy of Engineering. Their report on “Dilemmas of Privacy and Surveillance” has stood the test of time very well in my opinion.)

My general reaction to a new proposal for a digital identity infrastructure is then “tell me how your solution is going to deal with whistleblowers or witness protection and then I will listen to how it will help me pay my taxes or give third-party access to my bank account under the provisions of the second Payment Services Directive (PSD2) Strong Customer Authentication (SCA) for Account Information Service Providers (AISPs)…”. Or whatever.

Healthy Data

The pandemic has given me another “hard case” to add to my thinking. Now I have 4Ws, because I can add “wellbeing” to the list. A new question will be: how does your proposed digital identity infrastructure help in the case of a public health emergency?

Whatever we as a society might think about privacy in normal circumstances, it makes complete sense to me that in exceptional circumstances the government should be able to track the location of infectious people and warn others in their vicinity to take whatever might be the appropriate action. Stopping the spread of the virus clearly saves lives and none of us (with a few exceptions, I’m sure) would be against temporarily giving up some of our privacy for this purpose. In fact, in general, I am sure that most people would not object at all to opening their kimonos, as I believe the saying goes, in society’s wider interests. If the police are tracking down a murderer and they ask Transport for London to hand over the identities of everybody who went through a ticket barrier at a certain time in order to solve the crime, I would not object at all.

(Transport for London in fact provides a very interesting use case because they retain data concerning the identity of individuals using the network for six weeks after which time the data is anonymized and retained for the purposes of traffic analysis and network improvement. This strikes me as a reasonable trade-off. If a murder is committed or some other criminal investigation is of sufficient seriousness to warrant the disclosure of location data, fair enough. If after six weeks no murders or serious crimes have come to light, then there’s no need to leave members of the public vulnerable to future despotic access.)

It seems to me that the same is true of mobile location data. In the general case, the data should be held for a reasonable time and then anonymized. And it’s not only location data. In the US, there is already evidence that smart (ie, IoT) thermometers can spot the outbreak of an epidemic more effectively than conventional Centers for Disease Control (CDC) tracking that relies on reports coming back from medical facilities. Massively distributed sensor networks produce vast quantities of data that they can deliver for the public good.

It is very interesting to think how these kinds of technologies might help in managing the relationship between identity, attributes (such as location) and reputation in such a way as to simultaneously deliver the levels of privacy that we expect in Western democracies and the levels of security that we expect from our governments. Mobile is a good case study. At a very basic level, of course, there is no need for a mobile operator to know who you are at all. They don’t need to know who you are to send a text message to your phone that tells you you were in close contact with a coronavirus carrier and that you should take precautions or get tested or whatever. Or to take another example, Bill Gates has been talking about issuing digital certificates to show “who has recovered or been tested recently or when we have a vaccine who has received it”. But there’s no reason why your certificate to show you have recovered from COVID-19 should give up any other personal information.
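The point that nobody needs to know who you are in order to warn you can be sketched with rotating random tokens, in the spirit of the decentralised contact-tracing proposals. This is a deliberately simplified toy model, not the actual Apple/Google or DP-3T protocol: phones broadcast short-lived random identifiers, a positive case publishes only its own tokens, and matching happens on each handset.

```python
import secrets

class Phone:
    """Each phone broadcasts short-lived random tokens and remembers
    the tokens it has heard nearby. No name, number or account is
    ever exchanged."""

    def __init__(self):
        self.sent = []   # tokens this phone has broadcast
        self.heard = []  # tokens overheard from nearby phones

    def broadcast(self) -> str:
        token = secrets.token_hex(8)  # rotates; unlinkable to the owner
        self.sent.append(token)
        return token

    def receive(self, token: str) -> None:
        self.heard.append(token)

def exposed(phone: Phone, published: set) -> bool:
    # Matching happens on the handset; the server that hosts the
    # published tokens never learns who met whom.
    return any(t in published for t in phone.heard)

alice, bob, carol = Phone(), Phone(), Phone()
bob.receive(alice.broadcast())      # bob was near alice

published = set(alice.sent)         # alice tests positive, publishes her tokens
assert exposed(bob, published)      # bob gets a warning
assert not exposed(carol, published)  # carol, never nearby, does not
```

The real protocols derive tokens from rotating keys and add measures against replay and linkage attacks, but the privacy structure is the same: the warning reaches you without the operator, or anyone else, knowing who you are.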

I think that through the miracles of cryptographic blinding, differential privacy and all sorts of other techniques that are actually quite simple to implement in the virtual world (but have no conventional analogues) we ought to be able to find ways to provide privacy that is a defence against surveillance capitalism or state invasion but also flexible enough to come to our aid in the case of national emergency.

(Many thanks to Erica Stanford for her helpful comments on an earlier draft of this post.)


I tend to agree with people who see privacy as a function of control over personal information. Not a thing, more like a trade-off. It’s a big problem, though, that the trade-offs in any particular situation are multi-dimensional and nothing like as explicit as they should be. And what if you have no possibility of control? The always interesting Wendy Grossman made me think about this in her recent net.wars column about her neighbour’s doorbell camera.

As Wendy puts it “we have yet to develop social norms around these choices”. Indeed.

Whether it is neighbours putting up doorbell cameras or municipalities installing cameras for our comfort and safety, the infrastructure of cameras (much more cost-effective and useful than the one imagined by George Orwell) and pervasive always-on networks is going to create a decentralised surveillance environment that is going to throw up no end of interesting ethical and privacy issues.

Here’s an example. What happens if you set up a camera trap to photograph badgers but accidentally capture a picture of someone doing something they shouldn’t be doing? This is called “human bycatch”, apparently. According to a 2018 University of Cambridge study, a survey of 235 scientists across 65 countries found that 90% of them had captured human bycatch. I’d never heard the word before but I rather like it. Bycatch, meaning collateral damage in surveillance operations.

The concept, if not the word, has of course been around for a while. I remember thinking about it a while back when I came across a story about some Austrian wildlife photographers who had set up cameras in a forest in order to capture exotic forest creatures going about their business, but instead caught an Austrian politician “enjoying an explicit sexual encounter” (as Spiegel Online put it). This was big news although (as one comment I saw had it) “if it had been with his wife it would have been even bigger news”. Amusing, indeed. But the story does raise some interesting points about mundane privacy in a camera-infested world.

I don’t know whether, in a world of smartphones and social media, one might have a reasonable expectation of privacy when having sex out in the woods somewhere. I would have thought not, but I am not a lawyer (or a wildlife photographer). It’s getting really hard to think about privacy and what we want from it, and cases like this one remind us that privacy is not a static thing. It is not an inherent property of any particular information or setting. It might even be described as a process by which people seek to control a social situation by managing information and context.

In order to obtain privacy online we can use cryptography. In order to obtain privacy offline we are stuck with ethics and ombudsmen and GDPR and such like. This makes me think that people will start to move more and more of their interactions online, where privacy can be managed – I can choose which identity I want when I present to an online shop, but I can hardly walk into an offline shop wearing a Mexican wrestling mask and affecting a limp to evade gait detection.

Jackie No

“The Law of the Telephone” by Herbert Kellogg in The Yale Law Journal 4(6) (June 1895) is a fantastic read. It begins by establishing that the basis of the law of the telephone is the law of the telegraph:

Like all common carriers the telephone company may establish reasonable conditions which applicants must comply with; and the use of profane or obscene language over a telephone may justify a company in refusing further service, on the same ground that a telegraph is not liable for a failure to send immoral or gambling messages.

Thus the new medium inherits from the old one. But is this true in social terms? Whole books were written to set out an etiquette for the telephone and to explain to the person in the street how to use the new technology in a civilised manner. I predict we are weeks, perhaps hours, away from a similar book for new Google Glasses users. I can see that there has already been plenty of thinking about the ethics of wearable computing, so we should probably start there rather than wait for new regulation to evolve to govern us.

He also said that in deference to social expectations, he puts his wearable glasses around his neck, rather than on his head, when he enters private places like a restroom.

[From Privacy Challenges of Wearable Computing]

I remember reading something about memes once. I can’t remember where it was and couldn’t find it through superficial googling, but I remember the example that was given, which was the way that women started to wear sunglasses pushed up on the top of their heads, apparently in emulation of Jackie Kennedy, wife of the noted philanderer Jack Kennedy. I’ve no idea whether this is true or not and I’m sure someone else will send me a picture of a woman wearing sunglasses on the top of her head before Jackie Kennedy was born, but the example stuck with me and returns whenever I think about the spread of memes within a population, evolving social norms and the role of media. So it is with great pleasure that I announce the first new meme for Google Glasses. I call it the “Jackie No” rule. It is this: when you go into a public restroom, you should push your Google Glasses to the top of your head, Jackie Kennedy style, to signal to anyone you might meet that you are not a pervert. I imagine that there are many circumstances where merely wearing Google Glasses will arouse suspicion that you are not entirely normal, but here is one case where the inherent boundaries that make a civilised society possible must be made explicit for the safe functioning of civil society.

In the future, everyone will be famous for fifteen megabytes.