Crypto crimes and the risk of anonymity

I have written before that governments will never allow anonymous digital currencies and my comments attracted a certain amount of controversy. And I understand why. But to those who say that uncensorable, untraceable digital cash would be a shield against dictators, a force for the oppressed and a boon to free men everywhere… I say be careful what you wish for. The issue of anonymity in payments is complex and crucial, and it deserves informed, calm, strategic thinking because digital currency touches on so many aspects of society.

One obvious and important aspect is crime. Would digital currency change crime? If I hire thugs to lure a cryptobaron to a hotel room and then beat him up to get $1m in bitcoins from him (as actually happened in Japan), is that a crypto-crime or just boring old extortion? If I use Craigslist to lure a HODLer to a street corner and then pull a gun on him and force him to transfer his bitcoins to me (as actually happened in New York), is that a crypto-crime or just boring old mugging? If I get hold of someone’s login details and transfer their cryptocurrency to myself (as has just happened in Springfield), is that a crypto-crime or just boring old fraud? If I kidnap the CEO of a cryptocurrency exchange and then release him after the payment of a $1 million bitcoin ransom, is that, as the Ukrainian interior minister said at the time, “bitcoin kidnapping” or just boring old extortion?

Cash or charge? NFT available direct from the artist at TheOfficeMuse (CC-BY-ND 4.0)

These are just crimes, surely? And not very good ones at that, because they are recorded in perpetuity on an immutable public ledger. Personally, if I were to kidnap a cryptocurrency exchange CEO I would ask for the ransom to be paid in some more privacy-protecting cryptocurrency, because as I explained in the FT some years ago, Bitcoin is not a very good choice for this sort of cyber-criminality. It’s just not anonymous enough for really decent crimes or the darkest darknets. Hence my scepticism about claims that Bitcoin’s long-term value will be determined by its use for crime.

Untraceable

But what if there were an actually untraceable cryptocurrency out there and it wasn’t up to governments to allow it or not? Would an aspiring cryptocriminal mastermind be able to use it for something more innovative than the physically-demanding felony of kidnapping? I’m sure the Mafia would be delighted to have anonymous digital cash to zip around the world, but what would they use it for? Might they come up with some dastardly enterprise that is not a virtual shadow of a crime that has been around since year zero, but a wholly new crime for the virtual world? What if they could find one with the potential to take over from drug dealing (currently approximately 40% of organised crime revenues) as the best option for the criminal entrepreneur?

Ransomware is one interesting candidate. It is certainly a major problem. Criminals seize control of organisations’ computer networks, encrypting their data and demanding payment to deliver the decryption keys. Companies paralysed by the attacks paid hackers an average of more than $300K in 2020 (triple the average of the year before). A cyber security survey last year revealed that more than two-thirds of organisations in the United States had experienced a ransomware attack and had paid a ransom as a result! That’s a pretty decent business for criminals and it certainly was a driver for Bitcoin, although ransomware operators have been moving away from it for some time.

(Once again demonstrating the impending explicit pricing of privacy, the Sodinokibi payment website last year began charging 10% more for ransoms paid in Bitcoin than for those paid in the more private Monero cryptocurrency.)

On the whole, given the basic nature of most organisations’ cyber-defences (more than half of all ransomware attacks stem from spam e-mails), one might expect the ransomware rewards to continue to grow. Apart from anything else, the ransomware raiders are reinvesting their profits in increasingly efficient operations, making for even bigger and bolder attacks.

Assassinate and Win

So, ransomware. But what about a more sinister candidate for large-scale criminality? Is it time for the “assassination market”? It’s not a new idea. A few years ago, Andy Greenberg wrote a great piece about this here on Forbes. He was exploring the specific case of “Kuwabatake Sanjuro”, who had set up a Bitcoin-powered market for political assassinations, but in general an assassination market is a form of prediction market where any party can place a bet on the date of death of a given individual, and collect a payoff if they “guess” the date accurately. This would incentivise the assassination of individuals because the assassin, knowing when the action would take place, could profit by making an accurate bet on the time of the subject’s death.

This idea originated, to the best of my knowledge, with Jim Bell. Way back in 1995 he set it out in an essay on “assassination politics”. I suppose it was inevitable that the advent of digital cash would stimulate thought experiments in this area, and it was interesting to me then (and now) because it showed the potential for innovation around digital money even in the field of criminality.

Here’s how the market works and why the incentive works, as I explained in my book “Before Babylon, Beyond Bitcoin“. Someone runs a public book on the anticipated death dates of public figures. If I hate some tech CEO (for example), I place a bet on when they will die. When the CEO dies, whoever had the closest guess to their date and time of death wins all of the money staked, less a cut for the house. Let’s say I bet $5 (using anonymous digital cash through the TOR network) that a specific tech CEO is going to die at 9am on April Fool’s Day 2022. Other people hate this person too and they put down bets as well. The more hated the person is, the more bets there will be.

April Fool’s Day 2022 comes around. There’s now ten million dollars staked on this particular CEO dying at 9am. I pay a hitman five million dollars to murder the CEO. Hurrah! I’ve won the bet, so I get the ten million dollars sent to me in anonymous digital cash and give half to the hitman. No-one can pin the crime on me because I paid the hitman in untraceable anonymous digital cash as well.

I’m just the lucky winner of the lottery.

But better than that is that if I can get enough bets put on someone, then I don’t even have to take the risk of hiring the hitman. If I use some anonymous bots or friendly trolls to coordinate a social media campaign to get a million people to put a $5 bet on the date of the tech CEO’s death, then some enterprising hitman will make their own bet and kill them. If the general public had bet five million bucks on 31st March and some enterprising cryptopsycho had murdered the CEO themselves the day before, then it would only have cost me $5, and I would have regarded that as $5 well spent, as would (presumably) everyone else who bet $5!

(This is an edited version of an article first published on Forbes, 14th April 2021.)

All the news that’s fit to ID

I came across an interesting story via my old chum Charles Arthur’s consistently interesting “Overspill” blog. The story concerns one Oliver Taylor, a student at England’s University of Birmingham. From his picture, he appears to be a normal-looking twenty-something. From his profile he appears to be a coffee-loving politics junkie with an interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

Why is this interesting? For two reasons. First of all, because I was involved in an interesting Twitter debate with two thoughtful identity commentators, Tim Bouma and Jonathan Williams, during which this issue of “anonymous” contributions to newspapers happened to come into the conversation and it made me think about the same issues as Charles’ story. Tim had mentioned writing for a newspaper that had kept his real name off of his stories, and I responded that if they knew who you were, then you were not anonymous.

Secondly, because Oliver’s picture was created by an AI. It’s a fake face that doesn’t belong to any living human being. It was composed to be a human face that any of us would be able to recognise and distinguish, but it is entirely synthetic.

Oh, and Oliver doesn’t exist.

Charles notes that “two newspapers that published his work say they have tried and failed to confirm his identity”. But wait. Shouldn’t newspapers try and fail to confirm someone’s identity before they publish a story?


Well, no. That doesn’t work. What about whistleblowers? What about privacy in general? If the newspaper knows who Tim Bouma is then his personal data is at risk should the newspaper be compromised or co-opted. There seems to be a conflict between newspapers wanting honest opinions and newspapers needing to know identities, even if they are hopeless at telling a real identity from a fake one.

The way out of this dead end is to understand that what the newspaper should be checking for this kind of story is not the identity of the correspondent but their credentials. It doesn’t matter who Oliver Taylor is, it matters what Oliver Taylor is. It ought to be part of our national digital identity strategy (which we don’t have) to create a National Entitlement Scheme (NES) instead of some daft 1950s throwback digitised version of a national identity card. In the NES, it then becomes part of the warp and weft of everyday life for a correspondent with something interesting to say to use his persistent pseudonym “Oliver” to post his comments along with his anonymous IS_A_PERSON credential and his anonymous IS_A_STUDENT (BIRMINGHAM) credential.
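
To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical issuer names and a placeholder signature check standing in for the real cryptography) of what the newspaper’s side of such a scheme might look like: it verifies the claims bound to the persistent pseudonym and never asks for, or stores, a real name.

```python
# A sketch only: hypothetical issuers and a placeholder signature check,
# not a real credential-verification API.
from dataclasses import dataclass

@dataclass
class Credential:
    claim: str       # e.g. "IS_A_PERSON" or "IS_A_STUDENT(BIRMINGHAM)"
    subject: str     # the persistent pseudonym the credential is bound to
    issuer: str      # e.g. "bank.example" or "university.example"
    signature: str   # issuer's signature over (claim, subject)

TRUSTED_ISSUERS = {"bank.example", "university.example"}  # hypothetical

def signature_is_valid(cred: Credential) -> bool:
    # Placeholder: in practice this would be a cryptographic check of the
    # issuer's signature (possibly a blinded or zero-knowledge credential).
    return bool(cred.signature)

def accept_submission(pseudonym: str, credentials: list[Credential],
                      required_claims: set[str]) -> bool:
    """Accept a piece if every required claim is attested for this pseudonym
    by a trusted issuer. No real name is requested or stored."""
    presented = {
        c.claim for c in credentials
        if c.subject == pseudonym
        and c.issuer in TRUSTED_ISSUERS
        and signature_is_valid(c)
    }
    return required_claims.issubset(presented)

# The newspaper asks only for the claims it actually needs:
creds = [
    Credential("IS_A_PERSON", "Oliver", "bank.example", "sig1"),
    Credential("IS_A_STUDENT(BIRMINGHAM)", "Oliver", "university.example", "sig2"),
]
print(accept_submission("Oliver", creds,
                        {"IS_A_PERSON", "IS_A_STUDENT(BIRMINGHAM)"}))  # True
```

The point is the shape of the check, not the particular code: the only things that ever reach the newspaper are a pseudonym and the claims attached to it.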

That way, the newspaper gets the information it needs to obtain a story of interest and perhaps worth publishing, while even if they are socially-engineered by genius hackers, they cannot disclose the real identity of the correspondent because they don’t know it. The mention of social engineering, by the way, brings into focus the recent Twitter hack. What’s generally true for newspapers is generally true for Twitter: who I am is none of their business, something I have written about at exhausting length before.

Incidentally, it doesn’t take hackers to obtain personal information from a platform because, as I am sure you will recall, two of Twitter’s former employees have been charged in the US with spying for Saudi Arabia. The charges allege that Saudi agents sought personal information about Twitter users including known critics of the Saudi government. If Twitter doesn’t have your personal information, then it can’t be leaked, stolen or corrupted.

There is a way forward, and cryptography can deliver it using tried and tested (albeit counterintuitive) techniques.

Identity at the sharp end

There’s a bit of a row going on about Twitter, Facebook, social media in general and bots. It’s a serious issue. Democracy was invented before bots and doesn’t seem to work terribly well in their presence, so in order to restore peace, low taxes and the tolerable administration of justice we need to do something about one or the other. Many people seem to think that we should do something about bots. The noted entrepreneur Mark Cuban, for example, caused some debate recently by saying that…

He’s wrong about the real name, because anyone familiar with the topic of “real” names knows perfectly well that they make online problems worse rather than better. He’s right about the real person though. Let me use a specific and prosaic example to explain why this is and to suggest a much better solution to the bot problem. The example is internet dating, a topic on which I am a media commentator. Or at least I was once. 

A few years ago, I appeared on a programme about internet dating on one of the more obscure satellite TV channels. They wanted an “internet expert” to comment on the topic and since no-one else would do it, eventually the TV company called me. I agreed immediately and set off for, if memory serves, somewhere off the M4 in West London. The show turned out to be pretty interesting. I didn’t have much to say (I was there to comment on internet security, which no-one really cares about), and I can’t remember much of what was said, but I do remember very clearly that the psychologist at the heart of the show made a couple of predictions. While interviewing a couple who had met online, she said (and I am paraphrasing greatly through the imperfect prism of my memory) that in the future people would think that choosing a partner when drunk in a bar is the most ludicrous way of finding a soulmate, and that internet dating was a better mechanism for selecting partners for life. Now it seems that this prediction is being confirmed by the data, as the MIT Technology Review reports that “marriages created in a society with online dating tend to be stronger”.

The psychologist’s other prediction was that internet dating gave women a much wider range of potential mates to choose from and allowed them to review them in more detail before developing relationships. Of course, internet dating also increases the size of the pool for men, but I think that her thesis was that men don’t seem to make as much use of this as women do. Anyway, the general point about the wider pool now seems to be showing up in the data, assuming that interracial marriages are a reasonable proxy for the pool size. When researchers from the National Academy of Sciences looked at statistics from 1967 to 2013, they found “spikes” in interracial marriages that coincided with the launch of online matchmaking sites.

Why am I telling you all this? Well, it’s to make the point that internet dating is mainstream and that it is having a measurable impact on society. This is why it is such a good use case at the sharp end of digital identity. It is rife with fraud, it is a test case for issues around anonymity and pseudonymity, it is a mass market for identity providers and it is a better test of scale for an identity solution than logging on to do taxes once every year. Now, I am not the only person who thinks this and there are already companies exploring solutions. And you can see why they want to: online dating is a huge business. A third of the top 15 iOS apps (by revenue) were dating apps.

So. How to bring the benefits of digital identity to this world? One way not to do it is the Mark Cuban way of demanding “real” names. Last year, the dating platform OKCupid announced it would ask users to go by their real names when using its service (the idea was to control harassment and promote community on the platform) but after something of a backlash from the users, they had to relent. Why on Earth would you want people to know your “real” name? That should be for you to disclose when you want to and to whom you want to. In fact, the necessity to present a real name will actually prevent transactions from taking place at all, because the transaction enabler isn’t names, it’s reputations. And pretty basic reputations at that. Just knowing that the apple of your eye is a real person is probably the most important element of the reputational calculus central to online introductions, but after that? Your name? Your social media footprint? (Look at the approach of “Blue”, a dating service for Twitter-verified-users-only.)

I don’t think this is a solution, because if I were to be on an internet dating site, I would want the choice of whether to share my name, or Twitter identity, or anything else with a potential partner. I certainly would not want to log in with my “real” name or any information that might identify me. In fact, this is an interesting example of a market that does not need “real” names at all. “Real” names don’t fix any problem. Your “real” name is not an identifier, it is just an attribute, and it’s only one of the elements that would need to be collected to ascertain the identity of the corresponding real-world legal entity anyway. Frankly, presenting “real” names will actually make identity problems worse rather than better since the real name is essentially nobody’s business and is not necessary in order to engage in the kinds of transactions that are being discussed here. Forcing the use of real names will mean harassment, abuse and perhaps even worse.

What internet dating needs, and what will solve Mark Cuban’s social media problem as well, is the ability to determine whether you are a person or a bot (remember, in the famous case of the Ashley Madison hack, it turned out that almost all of the women on the site were actually bots). On Twitter it’s not quite that bad yet, because there are still many people posting there, but with bot networks of 500,000 machines tweeting and re-tweeting it is not in good shape. The way forward is surely not for Twitter to try and figure out who is a bot and whether they should be banned (after all, there are plenty of good bots out there) but for Twitter to give customers the choice. Why can’t I tell Twitter that I don’t want bot followers, that I want a warning if an account I follow is a bot, that I don’t want to see posts that originated from bots that I don’t follow, and so on? Just as with internet dating, the problem is not real names but real people.
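
As a sketch of how that choice might look (the labels and preference names here are my own invention, not anything Twitter actually offers), the filtering logic is almost trivial once accounts are labelled:

```python
# A sketch with invented field names: the platform labels accounts as bots
# (or lets them self-declare) and each user decides how bots appear in
# their own timeline.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_bot: bool   # asserted by the platform or self-declared

@dataclass
class BotPreferences:
    allow_bot_followers: bool = False
    warn_on_followed_bots: bool = True
    show_posts_from_unfollowed_bots: bool = False

def should_show_post(author: Account, i_follow_author: bool,
                     prefs: BotPreferences) -> bool:
    """Decide whether a post appears in my timeline under my bot preferences."""
    if not author.is_bot:
        return True
    if i_follow_author:
        return True   # I chose to follow this bot (perhaps shown with a warning)
    return prefs.show_posts_from_unfollowed_bots

prefs = BotPreferences()
print(should_show_post(Account("@newsbot", is_bot=True), False, prefs))   # False
print(should_show_post(Account("@a_human", is_bot=False), False, prefs))  # True
```

The hard part, of course, is knowing whether that is_bot flag is true, which is where the next step comes in.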

Now, working out whether I am a person or not is a difficult problem if you are going to go by reverse Turing tests or Captchas. It’s much easier to ask someone else who already knows whether I’m a bot or not. My bank, for example. So, when I go to sign up for an internet dating site, then instead of the dating site trying to work out whether I’m real or not, it can bounce me to my bank (where I can be strongly authenticated using existing infrastructure) and then the bank can send back a token that says “yes, this person is real and one of my customers”. It won’t say which customer, of course, because that’s none of the dating site’s business, and when the dating site gets hacked it won’t have any customer names or addresses: only tokens. This resolves the Cuban paradox: now you can set your preferences against bots if you want to, but the identity of individuals is protected.
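
Here is a minimal sketch of that flow, with made-up names and a shared-key HMAC standing in for what would really be the bank signing with a private key and the dating site verifying with the bank’s public key; the point is simply that the token attests IS_A_PERSON and carries no customer data at all.

```python
# A sketch of the "bounce to the bank" flow. Names are illustrative and a
# shared-key HMAC stands in for a proper public-key signature.
import hashlib, hmac, json, secrets

BANK_SIGNING_KEY = b"demo-key-known-only-to-the-bank"

def bank_issue_token(authenticated_customer_id: str) -> dict:
    """Bank side: the customer has already been strongly authenticated with
    the bank's existing infrastructure. The customer id is deliberately not
    put in the token: it never leaves the bank."""
    claim = {"claim": "IS_A_PERSON", "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(BANK_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim   # no name, address or account number anywhere in here

def dating_site_accept(token: dict) -> bool:
    """Dating-site side: check the bank's attestation, learn nothing else."""
    payload = json.dumps({"claim": token["claim"], "nonce": token["nonce"]},
                         sort_keys=True).encode()
    expected = hmac.new(BANK_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return token["claim"] == "IS_A_PERSON" and hmac.compare_digest(expected, token["sig"])

token = bank_issue_token("customer-42")   # which customer? Only the bank knows.
print(dating_site_accept(token))          # True: a real person, identity undisclosed
```

If the dating site is breached, the attackers get a pile of tokens that say “a real person was here” and nothing more.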

One of my acid tests of whether a digital identity infrastructure is fit for the modern world is whether it can offer this kind of strong pseudonymity (that is, pseudonyms capable of supporting reputations). If we can construct an infrastructure that works for the world of internet dating, then it can work for cryptocurrency, cars, children and all sorts of other things we want to manage securely in our new always-on environment. We have to fix this problem, and soon, because in the connected world, if you don’t know who IS_A_PERSON and who IS_A_DOG and who is neither, you cannot interact online in a functional way.