Review: Louise Amoore, ‘Cloud Ethics’
Reviewed by Suryansu Guha
Political geographer Louise Amoore's Cloud Ethics takes up the ethico-political questions surrounding machine learning and deep neural network algorithms, and how they have become arbiters governing significant spheres and spaces of human involvement. Algorithms now play an influential role in decision-making across a wide range of human activities, from law enforcement, medicine, border-crossing and finance to, especially, military warfare. Needless to say, these are crucial processes that, once executed, have the potential to cause irreparable damage, even the loss of human life. For Amoore, therefore, the question of ethics assumes the utmost importance in the way we shape algorithms and the way algorithms shape us. Her work builds on a body of humanities scholarship that has been asking these questions over the last five years, including Safiya Noble's Algorithms of Oppression (2018), Tung-Hui Hu's A Prehistory of the Cloud (2015) and Wendy Chun's Updating to Remain the Same (2016).
At first glance, Amoore's object of study seems too expansive and diverse, mainly because there are so many different types of algorithms performing so many minute functions in our daily lives (functions of which we are often unaware) with very different stakes. But as one delves deeper into the book, her case studies gradually give a clearer picture of how she studies the functioning of machine learning algorithms, particularly those where the stakes are understood to be highest. Most of the case studies she takes up concern algorithms used in mass surveillance, immigration and deportation, creditworthiness and medicine: aspects of our lives where the question of ethics and morality in decision-making plays a very sensitive role.
The extensive scholarship that precedes Amoore's work has already dismantled the idea that search engines and algorithms are value-neutral, objective, rational and free from human bias (Noble 2018). By and large, however, tech companies have responded to algorithmic bias by treating it as a 'glitch' that can be 'fixed' or 'tweaked'. Safiya Noble wrote in an opinion piece in Time magazine in March 2018 that when Google learned that its search engine returned objectified images of African American women, it denied any intent to harm and worked to modify such undeniably sexist and racist search results. Yet while the search string 'black girls' no longer yields the sexist results it once did, one can try the same exercise with other racial subsets, such as 'Asian girls' or 'Latinas', and the algorithm will still respond with sexually explicit results. Amoore therefore challenges the basic presumption that algorithmic biases or value judgements can simply be removed, and points to how this presumption misses a very elementary point about how these algorithms operate in the first place.
The assumption that biases can simply be removed persists because we still tend to understand algorithms as a "logical series" (11) constituted of a number of steps, so that removing the 'irrational', biased step in that series would fix the algorithm. But machine learning algorithms and neural nets are "characterized less by the series of steps in a calculation than by the relations among functions" (11). The removal of a step is therefore not tantamount to a reorientation of the overall arrangement. On the contrary, algorithms "categorically require bias and assumption to function in the world" (74); bias and assumption are "intrinsic to the calculative arrangements – and therefore also to the ethicopolitics of algorithms" (75). To locate the ethicopolitics of algorithms is thus to acknowledge the plurality of entangled human and non-human attributes, and their intimate relationships, which condense multiple potentials into a single actionable output.
In the first part of the book, Amoore traces a transition in the principal form of reasoning that governs an algorithm: from a deductive form of reasoning to a correlative one, by virtue of which error and failure are no longer conceived as problems but as an essential condition that allows the algorithm to operate. As a result, the "principal ethicopolitical problem does not arise from machines breaching the imagined limits of human control but emerges instead from a machine learning that generates new limits and thresholds of what it means to be human" (65). The ensuing legal and ethical concern is then simply this: if an algorithm makes an 'error' that prompts punitive action, who is to be held responsible?
Amoore explores this question of accountability for 'unforeseen' algorithmic madness in the second part of the book. Who is to be held accountable if an algorithm-generated drone strike kills civilians, or if an algorithm botches an intricate surgical operation and the patient dies? Who is the author of this madness? Where is the "unified locus of responsibility" (66)? Though the Algorithmic Accountability Bill of 2017 attributes accountability to the source code, Amoore points out that since machine learning algorithms are self-generating, the source code is never a complete and unified whole, and the source of potential harm can also lie in the process of "iterative and heterogenous writing of a likelihood ratio" (96). She proposes instead to locate the ethicopolitics in the "act of writing itself" (87) and in the "forks" (98), the alternatives or bends in the path to a decision. Her argument is that these forks, bends, paths, iterations and contingencies that condense the outcome ought to become the focal point of ethical concern. Through a 2017 case study of United States v. Kevin Johnson (95-96), she shows how an analysis of the source code did not reveal any racial prejudice. This, she argues, is because the algorithm is constantly modifying and adjusting itself at these forks, bends, paths, iterations and contingencies in the act of writing itself, which cannot be reduced to the imagined singularity of a source code.
Just as it performs a self-generating act of writing, the algorithm also generates its own limits and lines of rational output. In what is perhaps the book's most riveting chapter, 'The Madness of Algorithms', Amoore shows how 'mad' or 'errant' algorithms only appear to be mad; they are in fact displaying an indispensable part of their rationality. Machine learning algorithms are meant to perform amid uncertainty; their very rationality is therefore preconditioned by irrationality. By labelling the madness of algorithms as accident or error, we fail to recognize how unreason is always present and haunts our ideas of moral reason. She writes: "[t]he madness of algorithms does not reside in the moral failure of the designer or the statistician, but it is an expression of the forms of unreason folded into a calculative rationality, reducing the multiplicity of potentials into a single output" (123). In the final part of the book, Amoore accordingly distinguishes ethics from morality (165), drawing on Deleuze and his reading of Spinoza (1988): morality is associated with a pre-given set of codes, whereas ethics resides in the specificities of encounters and relationships between events and entities. The ethics she proposes thus begins from opacity; its task is not to demand transparency but to acknowledge the partiality of the knowing self.
As the central question of an algorithm's accountability becomes ever more problematized, one cannot but rethink ethical and legal questions that have nothing to do with algorithms. Amoore helps us locate the attributes of ourselves in algorithms, but can the inverse question be asked: are there attributes of algorithms in us? For instance, how does one think of the convict as a unified site of accountability, punishment, rehabilitation or pardon if the very nature of ethicality is subject to contingencies? At a time when opinion on the use of algorithms is completely polarised between paranoia and celebration, Amoore's book gives the reader much-needed clarity about their intricate operations, albeit through hefty philosophical concepts, debates and ideas. Through new disciplines like the Digital Humanities, algorithms have found a way of intervening in existing fields of literature, history and archival studies. This has largely resulted in a growing scepticism in departments across the world, stemming partly from a notion that algorithms may become either a benchmark for testing conventional humanities scholarship or a competitor that will eventually replace it. A humanist interrogation of machine learning algorithms and cloud ethics can thus go a long way towards allaying these suspicions and fears of obsolescence. Throughout the book, Amoore draws for her analysis on philosophical frameworks from Foucault (1997) as well as literary-theoretical tools borrowed from Fowles (1998) and Mantel (2017). Although she gives us only an approximation of what these new strategies of interrogation may look like (166-167), her work opens up many new questions and lends fresh currency to existing ones.
References
Amoore, Louise (2020) Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.
Chun, Wendy (2016) Updating to Remain the Same: Habitual New Media. Cambridge: MIT Press.
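Deleuze, Gilles (1988) Spinoza: Practical Philosophy. Translated by Robert Hurley. San Francisco: City Lights Books.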
Foucault, Michel (1997) Ethics: Essential Works of Foucault, 1954-1984. London: Penguin.
Fowles, John (1998) Notes on an Unfinished Novel. In Jan Relf (ed.) Wormholes: Essays and Occasional Writings, London: Jonathan Cape, 343-362.
Hu, Tung-Hui (2015) A Prehistory of the Cloud. Cambridge: MIT Press.
Mantel, Hilary (2017) The Day is for the Living: Hilary Mantel on Writing Historical Fiction, BBC Reith Lectures. Medium. Available at: https://medium.com/@bbcradiofour/hilary-mantel-bbc-reith-lectures-2017-aeff8935ab33 (Accessed 13 July 2020).
Noble, Safiya (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
Noble, Safiya (2018) Google Has a Striking History of Bias Against Black Girls. Time. Available at: https://time.com/5209144/google-search-engine-algorithm-bias-racism/ (Accessed 13 July 2020).
Suryansu Guha is a graduate student in the Department of Film, Television and Digital Media at UCLA. Born in India, he received his BA from Calcutta University in 2012 and his MA from Jawaharlal Nehru University, New Delhi, in 2014. His areas of interest include electronic cultures, new media, television industry studies, streaming and materialities.
Email: suryan18@ucla.edu
Twitter: @Offendheimer