Covering up and COVID-19

The current pandemic has thrown up a particularly interesting case where conventional thinking doesn’t help us to understand how things could work in the future. We’ve all read with interest the accounts coming from Asia, and now Israel, of the use of mobile phone location data to tackle the dread virus. In the UK, the government has used some aggregate and anonymised mobile phone location data to see whether people were following social distancing guidelines, but it can actually play a much bigger role in tackling pandemics.

China got the virus under control with lockdowns in areas where it was endemic and apps to stop it from getting a foothold where it wasn't. In Shanghai, which has seen few deaths, QR codes were used to authorise entry to buildings and to collect a detailed contact history so that control could be targeted in the case of infection. The Economist (21st March 2020) reported that the use of these codes was pervasive, to the point where each individual carriage on a subway train had its own code, so that if someone tested positive only their fellow passengers needed to be contacted rather than everyone on the train.

South Korea, a country of roughly 50 million people, appears to have dealt with the pandemic pretty effectively. By mid-March it was seeing fewer than a hundred new cases per day. It did so without locking down cities or using the kind of authoritarian methods that China had used. What it did was to test over a quarter of a million people and then use contact tracing and strict quarantine (with heavy fines and jail as punishment). It was able to do this because legislation enacted as a result of the Middle East Respiratory Syndrome (MERS) epidemic in 2015 meant that the authorities could collect location data from mobile phones (along with payment data, such as credit card use) from people who tested positive. This data is used to trace the physical path of the person and that path, with personally identifiable information removed, is then shared via social media to alert other people that they need to go and be tested. At the time of writing, South Korea has seen a hundred deaths; Italy (with a similar population) has seen more than thirty times as many.

Infrastructure and Emergency

Why does this make me think about the future? Well, it’s really easy to design a digital identity infrastructure for most of us for most of the time. Trying to figure out how to help a law-abiding citizen with a passport or driving licence to open a digital bank account or to log in remotely to make an insurance claim or to book a tennis court at a local facility is all really easy. It doesn’t provide any sort of stress test of an identity infrastructure and it doesn’t tell us anything about the technological and architectural choices we should be making to construct that infrastructure. That’s why I’m always interested in the hard cases, the edge effects and the elephants in the room. If we are going to develop a working digital identity infrastructure for the always-on and always-connected society that we find ourselves in, then it must work for everybody and in all circumstances. We need an infrastructure that is inclusive and incorruptible.

This is why whenever somebody talks to me about an idea they have for how to solve the “identity problem” (let’s not get sidetracked into what that problem is, for the moment) then I’ll always reach into my back pocket for some basic examples of hard cases that must be dealt with.

(In conference rhetoric, I used to call these the “3Ws”: whistleblowing, witness protection and adult services. In fact, I first started thinking about whistleblowing many, many years ago, when I was asked to be part of a working group on privacy for the Royal Academy of Engineering. Their report on “Dilemmas of Privacy and Surveillance” has stood the test of time very well in my opinion.)

My general reaction to a new proposal for a digital identity infrastructure is then “tell me how your solution is going to deal with whistleblowers or witness protection and then I will listen to how it will help me pay my taxes or give third-party access to my bank account under the provisions of the second Payment Services Directive (PSD2) Strong Customer Authentication (SCA) for Account Information Service Providers (AISPs)…”. Or whatever.

Healthy Data

The pandemic has given me another “hard case” to add to my thinking. Now I have 4Ws, because I can add “wellbeing” to the list. A new question will be: how does your proposed digital identity infrastructure help in the case of a public health emergency?

Whatever we as a society might think about privacy in normal circumstances, it makes complete sense to me that in exceptional circumstances the government should be able to track the location of infectious people and warn others in their vicinity to take whatever might be the appropriate action. Stopping the spread of the virus clearly saves lives and none of us (with a few exceptions, I’m sure) would be against temporarily giving up some of our privacy for this purpose. In fact, in general, I am sure that most people would not object at all to opening their kimonos, as I believe the saying goes, in society’s wider interests. If the police are tracking down a murderer and they ask Transport for London to hand over the identities of everybody who went through a ticket barrier at a certain time in order to solve the crime, I would not object at all.

(Transport for London in fact provides a very interesting use case because they retain data concerning the identity of individuals using the network for six weeks, after which time the data is anonymized and retained for the purposes of traffic analysis and network improvement. This strikes me as a reasonable trade-off. If a murder is committed or some other criminal investigation is of sufficient seriousness to warrant the disclosure of location data, fair enough. If after six weeks no murders or serious crimes have come to light, then there’s no need to leave members of the public vulnerable to future despotic access.)
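That retention policy is simple enough to sketch in a few lines. The record layout here (a “card_id” identifier and a “tapped_at” timestamp) is entirely hypothetical, just to illustrate the idea of keeping the journey data while discarding the identity after the window closes:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(weeks=6)

def anonymise_expired(journeys, now):
    """Strip the passenger identifier from journey records older than
    the retention window, keeping the journey data for traffic analysis."""
    for journey in journeys:
        if now - journey["tapped_at"] > RETENTION:
            journey.pop("card_id", None)  # identity gone, journey kept
    return journeys

now = datetime(2020, 3, 21, tzinfo=timezone.utc)
journeys = [
    {"card_id": "A1", "station": "Waterloo", "tapped_at": now - timedelta(weeks=8)},
    {"card_id": "B2", "station": "Bank", "tapped_at": now - timedelta(days=3)},
]
anonymise_expired(journeys, now)
print([("card_id" in j) for j in journeys])  # [False, True]
```

The point of the sketch is that the trade-off is a parameter, not an architecture: change RETENTION and you change the balance between investigative access and despotic access.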

It seems to me that the same is true of mobile location data. In the general case, the data should be held for a reasonable time and then anonymized. And it’s not only location data. In the US, there is already evidence that smart (ie, IoT) thermometers can spot the outbreak of an epidemic more effectively than conventional Centers for Disease Control and Prevention (CDC) tracking that relies on reports coming back from medical facilities. Massively distributed sensor networks produce vast quantities of data that they can deliver for the public good.
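The underlying signal processing is not exotic. A minimal sketch, assuming the thermometer fleet reports only the daily percentage of readings showing fever, is to flag days that sit well above the historical baseline (the numbers below are made up for illustration):

```python
import statistics

def outbreak_days(observed, baseline, threshold=2.0):
    """Return indices of days whose fever rate sits more than
    `threshold` standard deviations above the historical baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, rate in enumerate(observed)
            if (rate - mu) / sigma > threshold]

# Historical % of readings showing fever, then this week's readings
baseline = [2.0, 2.1, 1.9, 2.0, 2.2]
this_week = [2.0, 2.1, 3.5]
print(outbreak_days(this_week, baseline))  # [2]
```

Note that nothing identifying ever needs to leave the device for this to work: an aggregate fever rate per postcode is enough to raise the alarm days before clinic reports arrive.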

It is very interesting to think how these kinds of technologies might help in managing the relationship between identity, attributes (such as location) and reputation in such a way as to simultaneously deliver the levels of privacy that we expect in Western democracies and the levels of security that we expect from our governments. Mobile is a good case study. At a very basic level, of course, there is no need for a mobile operator to know who you are at all. They don’t need to know who you are to send a message to your phone that tells you that you were in close contact with a coronavirus carrier and that you should take precautions or get tested or whatever. Or to take another example, Bill Gates has been talking about issuing digital certificates to show “who has recovered or been tested recently or when we have a vaccine who has received it”. But there’s no reason why your certificate to show you are recovered from COVID-19 should give up any other personal information.
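To see how a phone can deliver that warning without the operator (or anyone else) knowing who you are, here is a toy sketch of the rotating-token idea used in decentralised exposure-notification designs. This is my own illustration, not any particular scheme’s specification; the key sizes and derivation are simplified for clarity:

```python
import hashlib
import os

def rolling_token(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast token from a secret daily key."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rotating tokens over Bluetooth; phone B just
# records the tokens it hears, with no idea who sent them.
key_a = os.urandom(16)
heard_by_b = {rolling_token(key_a, i) for i in range(3)}

# If A tests positive, A publishes only its daily key. B re-derives
# the tokens locally and checks for overlap: a match means "you were
# near a carrier", with no names or locations exchanged at any point.
matches = {rolling_token(key_a, i) for i in range(3)} & heard_by_b
print(len(matches))  # 3
```

The matching happens entirely on the listener’s phone, which is exactly the property we want: the infrastructure carries attributes (“was nearby”) without ever carrying identity.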

I think that through the miracles of cryptographic blinding, differential privacy and all sorts of other techniques that are actually quite simple to implement in the virtual world (but have no conventional analogues) we ought to be able to find ways to provide privacy that is a defence against surveillance capitalism or state invasion but also flexible enough to come to our aid in the case of national emergency.
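Differential privacy, at least, really is simple to implement. A minimal sketch of the classic Laplace mechanism, releasing a population count (say, “how many phones were near this hotspot today?”) without exposing any individual’s presence:

```python
import math
import random

def dp_count(values, epsilon):
    """Release a count with Laplace noise scaled to sensitivity 1,
    so no single person's presence changes the answer detectably."""
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(values) + noise

# 10,000 hypothetical phones, each near the hotspot with probability 0.1
visits = [random.random() < 0.1 for _ in range(10_000)]
noisy = dp_count(visits, epsilon=0.5)
print(round(noisy))  # close to the true count, never exact
```

The epsilon parameter is the knob: small epsilon means more noise and more privacy, large epsilon means an accurate answer and less privacy. It is, again, a trade-off expressed as a parameter rather than a policy document.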

(Many thanks to Erica Stanford for her helpful comments on an earlier draft of this post.)


I tend to agree with people who see privacy as a function of control over personal information. Not a thing, more like a trade-off. It’s a big problem, though, that the trade-offs in any particular situation are multi-dimensional and nothing like as explicit as they should be. And what if you have no possibility of control? The always interesting Wendy Grossman made me think about this in her recent net.wars column about her neighbour’s doorbell camera.

As Wendy puts it “we have yet to develop social norms around these choices”. Indeed.

Whether it is neighbours putting up doorbell cameras or municipalities installing cameras for our comfort and safety, the infrastructure of cameras (much more cost-effective and useful than the one imagined by George Orwell) and pervasive always-on networks is going to create a decentralised surveillance environment that will throw up no end of interesting ethical and privacy issues.

Here’s an example. What happens if you set up a camera trap to photograph badgers but accidentally capture a picture of someone doing something they shouldn’t be doing? This is called “human bycatch”, apparently. According to a 2018 University of Cambridge study, a survey of 235 scientists across 65 countries found that 90% of them had captured human bycatch. I’d never heard the word before but I rather like it. Bycatch, meaning collateral damage in surveillance operations.

The concept, if not the word, has of course been around for a while. I remember thinking about it a while back when I came across a story about some Austrian wildlife photographers who had set up cameras in a forest in order to capture exotic forest creatures going about their business, but instead caught an Austrian politician “enjoying an explicit sexual encounter” (as Spiegel Online put it). This was big news although (as one comment I saw had it) “if it had been with his wife it would have been even bigger news”. Amusing, indeed. But the story does raise some interesting points about mundane privacy in a camera-infested world.

I don’t know whether, in a world of smartphones and social media, one might have a reasonable expectation of privacy when having sex out in the woods somewhere. I would have thought not, but I am not a lawyer (or a wildlife photographer). It’s getting really hard to think about privacy and what we want from it, and cases like this one remind us that privacy is not a static thing. It is not an inherent property of any particular information or setting. It might even be described as a process by which people seek to control a social situation by managing information and context.

In order to obtain privacy online we can use cryptography. In order to obtain privacy offline we are stuck with ethics and ombudsmen and GDPR and suchlike. This makes me think that people will start to move more and more of their interactions online, where privacy can be managed – I can choose which identity I want to present to an online shop, but I can hardly walk into an offline shop wearing a Mexican wrestling mask and affecting a limp to evade gait detection.

Jackie No

“The Law of the Telephone” by Herbert Kellogg, in The Yale Law Journal 4(6) (June 1895), is a fantastic read. It begins by establishing that the basis of the law of the telephone is the law of the telegraph:

Like all common carriers the telephone company may establish reasonable conditions which applicants must comply with; and the use of profane or obscene language over a telephone may justify a company in refusing further service, on the same ground that a telegraph is not liable for a failure to send immoral or gambling messages.

Thus the new medium inherits from the old one. But is this true in social terms? Whole books were written to set out an etiquette for the telephone and to explain to the person in the street how to use the new technology in a civilised manner. I predict we are weeks, perhaps hours, away from a similar book for new Google Glass users. I can see that there has already been plenty of thinking about the ethics of wearable computing, so we should probably start there rather than wait for new regulation to evolve to govern us.

He also said that in deference to social expectations, he puts his wearable glasses around his neck, rather than on his head, when he enters private places like a restroom.

[From Privacy Challenges of Wearable Computing –]

I remember reading something about memes once. I can’t remember where it was, and I couldn’t find it through superficial googling, but I remember the example that was given, which was the way that women started to wear sunglasses pushed up on the top of their heads, apparently in emulation of Jackie Kennedy, wife of the noted philanderer Jack Kennedy. I’ve no idea whether this is true or not, and I’m sure someone will send me a picture of a woman wearing sunglasses on the top of her head from before Jackie Kennedy was born, but the example stuck with me and returns whenever I think about the spread of memes within a population, evolving social norms and the role of media. So it is with great pleasure that I announce the first new meme for Google Glass. I call it the “Jackie No” rule. It is this: when you go into a public restroom, you should push your Google Glass to the top of your head, Jackie Kennedy style, to signal to anyone you might meet that you are not a pervert. I imagine that there are many circumstances where merely wearing Google Glass will arouse the suspicion that you are not entirely normal, but here is one case where the inherent boundaries that make a civilised society possible must be made explicit for the safe functioning of civil society.

In the future, everyone will be famous for fifteen megabytes.