By Sean McDonald
COVID-19 is no time to forget history. This is not the public health sector's, the humanitarian response sector's, or the technology sector's first disaster — and we'd do well to avoid repeating the mistakes of the past.
As the world braces for the possibility of a prolonged pandemic, the governments managing cases of COVID-19 are ramping up their use of technology. Everyone from the World Health Organization (WHO) to the UK’s National Health Service (NHS) to the Government of Pakistan is promising to roll out a mobile app to help track the virus — or, really, the people with the virus. To do what, exactly, is less clear.
As someone who has built and deployed mobile technologies in disasters for 10 years, I can see clearly that this is the wrong idea, and the wrong approach. Most of the proposed apps use a combination of real-time location data and symptom tracking to try to calculate your risk of infection, and communicate that to the government and, sometimes, health researchers.
While a number of these technologies purport to ‘contact trace’ COVID-19 — they are, as often as not, used to measure and enforce public lockdowns and quarantines. And while containment is an important goal, the countries often held up as model examples — China, South Korea, and Singapore — have a number of characteristics that suggest… well, that success in containment wasn’t just down to the app.
Take the example of South Korea, largely credited as containing COVID-19. It used a combination of mass public testing, a world-class healthcare system, and an open data portal. Oh, and a patient-tracking mobile application.
“While a number of these technologies purport to ‘contact trace’ COVID-19 — they are, as often as not, used to measure and enforce public lockdowns and quarantines.”
So we don’t know what difference the app made in an otherwise robust national response, and we can’t judge whether the negative consequences were worth it.
Some South Koreans expressed discomfort with the way intrusive tracking and health messaging bullied and stigmatised them, and others were harassed after their identities were deduced from supposedly anonymous public data.
Technology will play an important — and often positive — role in this disaster response: it already is. But here are a few of the ways it can go wrong and, no, it’s not (just) because of privacy:
The basic “doing new things” problem. No need to linger here — technology products not only require specific problem definitions and deployment plans, but dedicated people, resources, and testing. Organisations that don’t normally develop or maintain large-scale technology deployments — or integrate them into public service delivery — typically make a lot of mistakes at each stage. In this case, the cost of mistakes is very high.
Snake oil. There are a lot of good intentions in disaster response but, as the proverb goes, they can pave the road to hell, or at least they can without considerably more work. A lot of technology proposals are described broadly as ‘public interest’ or ‘open’, instead of doing the harder work of demonstrating practical value to scientifically-critical institutions.
For example, a lot of these applications use location tracking data, under the presumption that location data is a good proxy for risk of transmission. It’s the kind of thing that sounds like it should be true. Unfortunately, it isn’t. One of the effects of the urgency of disasters is that we expedite the trial process for promising new tools, like vaccines. Unlike drug trials, however, technology systems don’t have institutional checks and balances to make sure they solve the intended problem. On top of that, disaster responders mid-crisis have even less ability to separate snake oil from quality products.
“It’s a disaster.” As it says on the tin, disasters are not the ideal time to do much of anything — certainly not the time to try to teach everyone to calmly do something nuanced, like comparatively self-assess their health risk during a global shutdown, or, you know, download and use an app.
Equality of need, not data. There is always a tension in disaster response between delivering aid based on the greatest need versus delivering aid based on where it is easiest to reach. Technology hasn’t changed much about that, but it’s made the digital divide all the more stark when it comes to using technology to monitor or mediate public services. For instance, if a government is using a smartphone application to measure movement or administer services, it’s only capable of reaching the half of the world with a smartphone, likely missing the most vulnerable communities.
Using technology to identify needs or access resources is inherently biased toward the users of that specific technology. Technology markets are so fragmented that it’s difficult to deliver relief equally, and pandemic response isn’t meant to skew towards the most fortunate. For example, Android is the world’s most widely deployed mobile operating system, with nearly three billion users — nowhere near enough to reach the world’s seven billion people. And, of course, there are a lot of different versions of Android. The reality in disaster response is that focusing on those with smartphone and internet access may end up helping those who need it least.
Shouting fire in a theatre — without exits. Public health system capacity is an issue with every disaster, and COVID-19 response is no different. A lot of proposed technologies are aimed at using non-traditional sources of data, like location, to estimate risk of infection. The main problem is that even if people are at risk — and most now are — the medical advice is to stay home unless you are clearly in need of hospitalisation. In other words, individual risk estimation isn’t helpful to systems known to be under extreme stress and, if it leads people to seek more testing than they might otherwise, it could actually contribute to overwhelming health systems — which may be the single largest cause of death from COVID-19.
The one-way ratchet. Call it what you want — Surveillance Capitalism, Digital Transformation, the Big Data Revolution — the obvious outcome of this is the medium-to-long-term increase of governmental and private intrusion into our lives. We often talk about surveillance as “harmful”, but really what we mean is that surveillance enables a significant number of harms. It’s easier to limit the power exercised through surveillance with institutional and legal tools — like sunset clauses, due process accountability, and dispute resolution systems — than to come to societal agreement on the contextual appropriateness of every surveillance power. After all, these powers, once granted, rarely go away — in the United States, the post-9/11 PATRIOT Act (a raft of counter-terrorism measures) is set for bipartisan renewal, 19 years later.
“For the technology industry — and governments — it will not only serve as a test of what they can do with data, but also, hopefully, of what they won’t do.”
This pandemic will certainly test our public health institutions’ capacity — and we will all suffer for the things they don’t have the resources or capacity to do. For the technology industry — and governments — it will not only serve as a test of what they can do with data, but also, hopefully, of what they won’t do.
Even in emergencies, we should expect governments to protect basic due process, focus on need, and only use extraordinary measures when they are absolutely necessary.
Companies, platforms, and communities are having to make difficult decisions about how to effectively and ethically wield — and hopefully one day wind down — the exceptional powers necessary to navigate the COVID-19 response. If they fail, it will be up to all of us, in every country, to mobilise so we can fix what’s wrong. I doubt we’ll need an app to diagnose it.