Corona apps as a site for techno-politics
I’m interested in the current debate around contact tracing and other digital interventions in the corona crisis because they represent a new generation of techno-political debates, compared to the last round of net-political surveillance and data protection critiques from about a decade ago.
The Dilemma of Digital Solutions to the Corona Crisis
The reason the proposed digital solutions to the corona crisis have become such a debated issue is not only that they risk doing harm or diverting resources during the crisis, but that there is a sense that they introduce norms and technological interventions that would also be used after the crisis, such as an increased tolerance for state surveillance and monitoring of the population, or the use of available but protected personal data such as mobile positioning.
An open letter that was sent to the Dutch government (PDF) at the height of their attempts to design a contact tracing app makes exactly this point of not letting the crisis become an excuse to usher in a new form of surveillance:
Particularly in times of crisis, very careful social and legal considerations must be made to determine whether one wants to take such a highly invasive measure.
But the problem is not necessarily that the apps themselves will live on after the crisis and start to be used for other purposes, similar to how anti-terrorist surveillance measures after 9/11, or the Swedish FRA law, were designed for a particular purpose but quickly expanded to cover less and less severe situations. The problem is therefore not solved by putting a clear end date on the measures taken during the corona crisis, as this open letter to the Belgian government suggests. Not only is there no clear end date, or even definition of the end, of the crisis, but we are not likely to go around using contact tracing apps afterwards.
Instead, the interventions lower the threshold for using emergency digital solutions for other problems in general, and position rapid development and deployment of digital solutions as the normal way to handle a crisis. The corona apps rather represent a cultural change in the way digital solutions and states of exception tie into one another. Digital apps during the corona crisis have been presented as a softer measure compared to lockdowns and social isolation: various social freedoms can be granted and societies opened up, but on the condition that the population is continuously monitored and tracked. This might be acceptable in the current situation, assuming for a second that the choice really stands between social isolation and digital tracking, but it sets a dangerous precedent for a new form of governance where freedoms and digital tracking are closely intertwined.

Being presented and perceived as a softer form of intervention than legal measures, or even strong social disciplinary norms, these solutions would be easier to gain acceptance for and implement in all sorts of emerging crises. The climate crisis in its various forms would be one possible destination for them. The benefit of a real, legally mandated lockdown or other hard emergency intervention is that it is very clear when it begins and when it ends (or if it turns out it doesn’t end). The use of digital tracking as a softer intervention then becomes a sort of digital gaslighting, where the line between state of exception and normality becomes blurred. The fact that it is a softer intervention that could possibly go on for longer than the hard lockdowns only makes it a stronger form of power.
A Desperate Attempt at Solving the Collingridge Dilemma
The problem with the debate that has arisen around the proposed corona apps is that it moves so fast that the critique precedes the thing it is critiquing, which often ends up as a scrapped proposal or, as in the case of the Singaporean Bluetooth contact tracing app that kick-started it all and that turned out not to have contributed much at all, risks only feeding a technology hype.
What we are dealing with here, and, I would argue, with other emerging technologies such as AI or autonomous vehicles, is a kind of accelerated Collingridge dilemma. The dilemma states that at the early stages of an emerging technology it is next to impossible to fully know its social consequences (the information problem), while at later stages, once you do know them, it is too late to do anything about it because of how established the technology has become (the power problem). As a desperate solution to this, technological critique has become a very aggressive, anything-goes speculation on all possible social and political downsides of emerging technologies, at the cost both of the quality of scholarship and of the ability to distinguish real threats from unrealizable techno-mirages.
Technological critique has now become so fast, skilled and populous that it often precedes the technologies themselves. Since every critical scholar now seems to have become involved in ethical AI, data science and other quickly emerging fields, the critique is immediate and ubiquitous (sadly, also often predictable). The risk is not only that large critical efforts are wasted on vapourware that never emerges (see self-driving cars), or that emerges on “dove’s feet”, to quote Nietzsche, with social consequences that are at once less spectacular than imagined and greater, because of how immersed the technology becomes in social systems. The greater risk is perhaps that the critique normalizes the technology itself and contributes to its hype cycle, especially when the discussion immediately shifts to how to regulate a technology or how to make it ethical, even before its realization is at all certain.
DP-3T as Discursive Device
The discussion of contact tracing apps and the critique of it (in which I personally, of course, have a somewhat conflicted stake) risks becoming exactly this. It is important not to lose sight of the limits of the technology itself before jumping into discussions of how it should be implemented in a fair and ethical way. This is a fine line to walk, especially when the true technology hype and solutionism today often comes from the public sector and policy rather than from the tech industry alone.
In this situation, it is interesting to revisit the case of DP-3T as a concrete technical intervention into the debate.
I just found this list of countries in a tweet and can’t vouch for its accuracy. But we can also ask what it means to say that a protocol has been adopted by these countries. DP-3T at this point consists of a protocol and some reference implementations. No fully developed software exists, and, more importantly, no connections to healthcare systems and epidemiological strategies exist, nor even strategies for making such connections. As pointed out in the previous post, it is not even certain that contact tracing apps will or should be used.
DP-3T didn’t come as a mere technical proposal; it came as an intervention into an already existing technology hype around contact tracing apps, where dreams of tech solutions as fast exits from lockdowns, and of data collection and surveillance as fast tracks to knowledge, were already prevalent. As such it successfully intervened by expanding the set of possible technical designs, showing that they didn’t need to be based on centralized data analysis, and thereby raised a bar below which no country can now go without facing scrutiny over why this alternative was not chosen.
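The decentralized principle that DP-3T demonstrated can be sketched in a few lines. The following is a simplified illustration, not the actual protocol: the function names and parameters are my own, and the real specification uses a proper PRF/PRG construction with careful key scheduling and Bluetooth-level details. But it conveys the core idea of why no central server ever needs to see the contact graph.

```python
import hashlib


def daily_key(prev_key: bytes) -> bytes:
    # The next day's secret key is a hash of the previous one (a ratchet),
    # so publishing one day's key reveals nothing about earlier days.
    return hashlib.sha256(prev_key).digest()


def ephemeral_ids(day_key: bytes, n: int = 96) -> list[bytes]:
    # Derive n short-lived broadcast identifiers for the day from the
    # day key; each is rotated every few minutes on the device.
    return [hashlib.sha256(day_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]


def exposed(heard: set[bytes], published_day_keys: list[bytes]) -> bool:
    # Each phone broadcasts its ephemeral IDs over Bluetooth and records
    # the IDs it hears. When a user tests positive, only their day keys
    # are published; every other phone re-derives the corresponding IDs
    # locally and checks for matches against what it overheard.
    for key in published_day_keys:
        if any(eid in heard for eid in ephemeral_ids(key)):
            return True
    return False
```

The point of the design is visible even in this toy version: the only data that ever leaves a device is a list of day keys from diagnosed users, and all matching happens on the phones themselves.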
The question here is how to engage with ongoing techno-political developments in their own language in order to steer them onto another path, while at the same time making that path open up an even broader discussion of the handling of the crisis and technology’s role in it, rather than becoming a slightly better techno-solutionism. I do believe that shifting the discussion of contact tracing towards the concrete proposal of DP-3T has helped here, and that there are efforts, both in how the protocol is structured and in the discussion now taking place around it on GitHub, Twitter and other arenas, to also open up questions about the politics of crisis management.
Interventions such as DP-3T are thus at once concrete technical interventions and sites for opening up political discussions.
This model of the concrete technical intervention as discursive device is one from which many lessons can be drawn going forward. In one sense, it is reminiscent of the hack as discursive device, for example the classic 2006 Chaos Computer Club hack of e-voting machines as a way of critiquing the practice of electronic voting. What they have in common is that, unlike standard forms of critique, they become impossible to ignore. Impossible to ignore, because they enter into the domain of what they critique and mirror it.
This also exposes the failure of technological alternatives as a political strategy. The interventions above are something very different from the decentralized or open alternatives to social media or proprietary platforms. The difference is that they have the potential to actually become replacements. With just a little stretch and a shift in political priorities, these could be the paths chosen. DP-3T is possible under the current circumstances, given only a slight shift in focus, and this shift in turn opens up a technocratic process to further challenge.
The role of technological alternatives is different. They work as discursive devices precisely through their impossibility. It is impossible that Twitter and Facebook would be replaced by their decentralized alternatives. It is impossible that the speed of content on the internet would be replaced by more reflexive modes of engagement. It is impossible that the computer would really become a “bicycle for the mind” rather than a cybernetic system of control. And it is precisely in this impossibility of the unrealized alternatives that we can see a critique of the state of things. This is technopolitics as martyrdom in the graveyard of utopian free software alternatives. Or, in the best case, the communitarianism of the chosen few in marginalized software cults (hey, I’m part of them too!).
Distinguishing these two ways in which technologies can become discursive instruments should expand the space of possible techno-political actions.