Reframing Algorithmic Transparency

While transparency has emerged as a dominant principle over several decades, its instrumental promise of fostering accountability often goes unrealised. The use of machine learning further complicates the delivery of algorithmic transparency, and highlights the need to create conceptual models of AI.

October 28, 2022

Examining the Transparency Ideal

In The End of Secrecy, Ann Florini described the transparency ideal as one end of a continuum, the polar opposite of the secrecy ideal, with international consensus rapidly shifting towards it. Florini makes a convincing case for transparency emerging as a normative default across a range of initiatives: international inspection programmes for weapons, freedom of information regulations to demystify politics, and the need for timely information in free markets. Written in 1998, Florini’s essay came a few years after the end of the Cold War and in the first decade of the Internet Age. Her declaration of transparency as the inevitable default in governance reflected the popular thinking of the 1990s that neoliberal politics and economics were the established norm towards which global thinking would gravitate. Her essay is, in many ways, an ideological successor to Fukuyama’s early-1990s pronouncements on the end of history and the inevitable triumph of liberal capitalist democracy. [1] Accounts of human rights which view autonomy as central to the exercise of rights hold that information is a prerequisite for an individual to make ‘real’ choices and be autonomous. While this may hold true, transparency has also been presented as a panacea for the ‘delinquencies of the public man and institutional inefficiency.’

In the last two decades, there have been several critiques of the ‘end of history’ narrative, including attempts by Fukuyama himself to reframe it. In parallel, we also see critiques of the framing of transparency as a normative default, though without any clear connections being drawn to the ‘end of the end of history’ debate. At the core of the transparency ideal is the ‘regulation by revelation’ discourse, which rests on the presumption that transparency will necessarily lead to accountability. This presumption has been severely tested in several contexts: the capture of transparency by corporate interests and states alike in environmental regulation, and the failure of privacy self-management based on privacy notices in data protection regulation, are two clear examples from unconnected domains. These critiques argue that transparency has only instrumental value and achieves little on its own if it is not accompanied by effective accountability and redressal mechanisms.

Defining transparency

Before moving forward, we need to define clearly what a transparency initiative entails. If we go back to Florini, she provides a simple definition: transparency is the act of deliberately revealing one’s actions. This element of deliberateness, or volition, is critical, particularly in a networked and quantified world which necessitates a resigned surrender to intrusive technologies. Therefore, when Mark Zuckerberg describes an inevitable transparency to which we, as people, are to be subjected as a result of a slew of new online services, including Facebook, it is disingenuous: it fashions as ‘transparency’ that which is, in fact, intrusion. Yet Florini’s definition, and her article as a whole, which often erroneously juxtaposes transparency against privacy, is perhaps too simple.

If we believed that transparency was an instrumental value designed to achieve public participation and accountability from the powerful, we would see no real value in making more visible the private lives of individuals who wield no special power. This brings us to the second element: power. Transparency initiatives must illuminate or make accessible the particulars which enable the exercise of public or private power, particularly the power exercised by institutions.

One of the most comprehensive definitions of transparency is provided by Fisher; it has four parts, one of which follows. What is being made visible by a transparency initiative is a resource that those within an institution are drawing on for their power. Such resources are often frameworks (legal, organisational, technical), information or expertise, or the normative values on which decisions are based. Often, new resources may have to be created to achieve transparency.

Transparency initiatives are also marked by diversity in their structural nature. There is no single appropriate way to deliver transparency. It may take the form of allowing access on request, requiring the active distribution of material, or even requiring participation in decision-making. [2]

What is transparency for?

David Pozen doubles down on transparency’s normative value, or lack thereof, and persuasively uses MacIntyre’s distinction between primary and secondary virtues. The first relate directly to the goals we pursue, while the second are concerned with the way in which we go about them. Pozen argues vehemently that transparency belongs squarely in the second category, as it has no normative coherence and could as easily lead to negative outcomes as positive ones. More importantly, he declares that there is no “straightforward instrumental relationship [between transparency and] any primary goals of governance.” The counter-arguments against the normative virtue of transparency ironically rely on Florini’s framing of it as the polar opposite of secrecy. Pozen, and other scholars like Fenster and Schudson who look at transparency critically, resort to making the case for the limited situations in which government secrecy is a good thing.

It may perhaps be more accurate to consider transparency as occupying a space somewhere between the primary and secondary virtues. Schudson touches on this without really elaborating on what he means. As a secondary virtue, transparency is key to the public achieving greater autonomy and to facilitating greater accountability of institutions. However, relegating it entirely to a secondary virtue may translate into a burden, in each instance, of demonstrating that transparency will lead to some policy goal. This may not be advisable, as even Pozen, the most vocal of critics, is left to acknowledge that it would be “reasonable to presume that more publicly available information is preferable to less publicly available information, all else being equal.” However, the space it currently occupies, as a panacea or magical concept that is a worthy, and consequently sufficient, policy goal in itself, has led to a situation where we privilege mere access to information over the comprehensibility that can foster accountability.

This fetishisation of transparency paints the citizen as a consumer [3] within a political market who, when empowered with information, can take matters into their own hands and act both to protect themselves and to correct market trends. This belief that individuals will exercise rational choice after examining information is not supported by data on how individuals actually behave. The simultaneous love and dread of transparency, witnessed in its interplay with values of secrecy, privacy and non-disclosure, and the constant debates on the extent of desirable transparency, further complicate its implementation.

The other aspect, the delivery of transparency solutions and the structural challenges it faces, is largely ignored by transparency’s champions. The mere opening up of processes, resources and information does not mean that its recipients are able to comprehend it. While transparency has a normative commitment to the production of accurate information, less attention is paid to how that information is constructed and to the cognitive problems this creates. These gaps in transparency theory and practice limit its utility towards the overall goals of enabling the autonomy of individuals and demanding accountability from institutions.

The complex domain of algorithmic transparency

The myriad problems with the transparency ideal are exacerbated many times over with the introduction of inherently opaque technology such as machine learning in general, and neural networks and deep learning in particular. [4] From being an abstruse mathematical term used primarily by computer scientists, the word ‘algorithm’ has quickly become part of mainstream discourse. Sandvig points out that in 1960, computer scientists at TRW wrote about the ambiguity in its meaning and whether it was different from a mathematical formula. Donald Knuth responded by defining it as not a formula, but rather “a word computer science needed to describe a strategy or an abstract method for accomplishing a task with a computer.” An algorithm is, in effect, a sequence of instructions that tells a computer what to do, typically by switching billions of tiny transistors on and off. Algorithms are meant to be an exacting standard; they have to be extremely precise and unambiguous recipes. A conventional algorithm takes an input and processes that input data to produce an output. In very crude terms, a machine learning algorithm takes data and the desired outputs as its input, and its output is the algorithm (the model) that can turn such inputs into the desired outputs.
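
As a crude illustration of this inversion, consider the sketch below (Python, using scikit-learn; the spam-filter rule and the toy data are purely hypothetical). In the conventional case the decision rule is written by hand and is visible in the source; in the machine learning case, the fitted model is itself the resulting ‘recipe’.

```python
# A minimal sketch of the two paradigms, assuming scikit-learn is available.

# Conventional algorithm: the rule is written by hand; data simply flows through it.
def is_spam_rule(message: str) -> bool:
    # The decision logic is fully visible in the source code.
    return "win a prize" in message.lower()

# Machine learning: labelled examples (data plus desired outputs) go in,
# and the fitted model that comes out *is* the resulting algorithm.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["Win a prize now", "Meeting at 10am", "Claim your prize", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data, purely illustrative)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)                    # the 'recipe' is learned, not written
print(model.predict(["Free prize inside"]))    # the learned rule applied to new input
```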

We deal with opacity on account of multiple factors: intentional secrecy; the black-box nature of the model; and the specialised, high-level skill set required to understand the model. Machine learning also poses an inherent trade-off between interpretability and accuracy. Linear regression, for example, produces models that are considered more interpretable but lower-performing, compared with methods like deep learning, which produce models that are high-performing but very opaque.
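
The trade-off can be made concrete with a small, hedged sketch (Python with scikit-learn and NumPy; the data is synthetic and purely illustrative): a fitted linear model exposes one readable coefficient per input feature, whereas even a modest neural network spreads its ‘explanation’ across thousands of entangled weights.

```python
# A hedged sketch of the interpretability gap, on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                   # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

linear = LinearRegression().fit(X, y)
print("linear coefficients:", linear.coef_)                     # one readable weight per feature

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
print("MLP weight entries:", sum(w.size for w in mlp.coefs_))   # thousands of entangled weights
```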

Jenna Burrell demonstrated the futility of exercises such as code audits: the number of auditor hours needed to untangle the logic of the algorithms in a complicated software system would be huge. She looked specifically at socially consequential classification and ranking systems, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring, which involve personal and trace data and increasingly rely on machine learning. The problems of scale and complexity here are peculiar and distinctive. They are not characterised simply by a greater number of lines of code, the size of the team, or the linkages between modules. It is not merely a matter of comprehending the code, but of being able to understand how the algorithm operates on data, in action. While it may be possible to implement machine learning algorithms in such a way that they are comprehensible, such algorithms may not be of much use. For models to have ‘accuracy of classification’, they must be accompanied by a degree of inherent complexity. Famously, in his DPhil thesis, Malte Ziewitz wrote about Google’s search algorithm that even if you had “Larry [Page] and Sergey [Brin] at this table, they would not be able to give you a recipe for how a specific search results page comes about.” The route to arrive at a particular conclusion would be too circuitous.

As mentioned above, machine learning algorithms build upon themselves: the internal decision logic of the model evolves as it ‘learns’ from input data. Handling a huge number of properties of the data, especially heterogeneous ones, adds complexity to the code. Machine learning techniques also quickly face computational resource limits as they scale, and managing this relies on further techniques written into the code that add to its opacity.
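
A small sketch of this evolving decision logic, again hedged and on synthetic data (Python with scikit-learn; the ‘drifting’ relationship between features and labels is contrived for illustration): an incrementally trained classifier’s learned weights, in effect its internal recipe, are different after every batch of new data.

```python
# A minimal sketch, using scikit-learn's incremental SGDClassifier, of how a
# model's internal decision logic shifts as it keeps learning from new data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
clf = SGDClassifier(random_state=0)

for step in range(3):
    X_batch = rng.normal(size=(100, 2))
    # A contrived, drifting relationship between the two features and the label.
    y_batch = (X_batch[:, 0] + 0.5 * step * X_batch[:, 1] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])
    print(f"after batch {step}: learned weights = {clf.coef_.round(2)}")
```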

The sufficiently-informed self

The account of human rights that perhaps places transparency at its core is James Griffin’s conception of personhood. The justification of a human rights framework needs to meet a very high burden: it must be able to satisfy the different characteristics considered integral to human rights, including both their universality and their high priority. Most theories of human rights over the last few centuries, as well as the idea of certain inalienable rights, take a top-down approach. This approach always begins with the identification of an overarching principle or authoritative procedure, such as the principle of utility (John Stuart Mill) or the categorical imperative (Immanuel Kant), to which rights owe their existence. This kind of approach usually traces the existence of human rights to human agreement, such as a law or a constitution, and has been criticised for its insistence that the concept of human rights applies only where there is a state system.

On the other hand, the bottom-up approach starts with the idea of human rights as used in our actual social life by politicians, lawyers, social campaigners, as well as theorists of various sorts, and then sees what higher principles one must resort to in order to explain their moral weight. In On Human Rights, Griffin defined normative agency as our capacity to choose and to pursue our conception of a worthwhile life. It is important to recognise that by agency, he does not mean merely the ability to perform actions. This kind of agency involves not just the conception of a worthwhile life but also active autonomy. 

Griffin’s framework, which treats human rights as protectors of human agency, speaks of them in three different stages. In this discussion, we are interested mainly in the second stage, which comprises those elements that make possible the pursuit of this conception of the good life: the skills, resources and support we need in order to exercise our autonomy, that is, welfare provisions above some minimal level. It is in this stage that we can trace a right to minimum information. Information is a prerequisite for an individual to make real choices and be autonomous. Take the example of patient autonomy in medical practice, where the prevailing standard of autonomy is too low: patients are often too stressed, and the explanations given by medical practitioners too brief or too technical, for a real choice to be made. On the other hand, autonomous action as understood in a Kantian framework requires a very high degree of rationality, often beyond the capacity of most individuals. In a Kantian framework, a decision is autonomous if, and only if, the person deciding fully appreciates the weight of all the relevant reasons, makes faultless inferences, and is not influenced in a decisive way by anything but these reasons and inferences. A standard of acceptable autonomy must include the ability of individuals to identify goals and ends. It is in furtherance of this idea that I argue that the ability to make autonomous decisions hinges upon having access to sufficient information and, further, on being able to act on that information.

Our capacity to make autonomous choices depends on our capacity to come to some bare minimum understanding of the environment we engage with while making those choices. This argument recognises that autonomy is relative, and that certain kinds of losses of autonomy do not amount to an abnegation of human agency. [5] Griffin himself argued that one could make a rational choice to rely on others in many circumstances without compromising one’s agency in any real sense. Similarly, complex algorithms could be relied upon to help an individual make decisions, say, about their investments.

Seeking a conceptual model of AI 

In her essay and talk, Saskia Sassen draws the picture of a smart, quantified city relying on Big Data and networked technology. She points out that, in the past, great cities have always evolved through constant engagement by their residents, and have thus remained in existence for so long. However, the proprietary technology being used in building new-age smart cities limits the ability of a city’s inhabitants to engage with and change it. According to Sassen, the answer could be the use of open source technologies, which allow those interested to tinker with them. However, as pointed out by Burrell, proprietary technology is only a part of the problem. Even with the use of open source technology and open standards, the very nature of machine learning endows it with extreme complexity far beyond the capacity of most people.

The discipline of Explainable AI (XAI), which has emerged in the last few years in response to this problem, now produces several hundred papers annually. The surge in publications is itself an obstacle to understanding explainable AI, and several taxonomies of algorithmic transparency approaches have emerged. Aside from this problem of plenty, almost all of the literature remains overbearingly technical, leaving little room for the vast majority of non-technical stakeholders to engage with it. The overall lack of academic consensus, often on the most basic descriptors of the approaches to XAI, prevents both their adoption and any significant discourse around them. For most of the actors involved, the need is for meaningful transparency solutions which privilege understanding over mere access to information. One could argue that there has always been technology that the layperson barely understood. However, while we may not have had the wherewithal to engage with the minute aspects of technology, we have had the rough knowledge required to use it. Donald Norman, cognitive scientist and usability engineer, referred to this understanding as the conceptual model, [6] defining it as “an explanation, usually highly simplified, of how something works. It doesn’t have to be complete or even accurate as long as it is useful.” The trouble with machine learning algorithms is that we are not creating transparency solutions which provide this conceptual model to those who need to work with them.
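
One common family of XAI techniques gestures at exactly this kind of conceptual model: fitting a small, human-readable ‘surrogate’ to mimic a more opaque model. The sketch below (Python with scikit-learn, on synthetic data; it is not any particular library’s method, only an illustration) produces a handful of readable rules that approximate a random forest: simplified and not fully faithful, but, in Norman’s sense, potentially useful.

```python
# A hedged illustration of a global surrogate model: a shallow decision tree
# trained to mimic an opaque ensemble, yielding a simplified 'conceptual model'.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a depth-2 tree to the opaque model's *predictions*, not to the true labels.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, opaque.predict(X))
print(export_text(surrogate))  # a few human-readable rules approximating the forest
```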

[1] Five years later, Florini published a book, The Coming Democracy: New Rules for Running a New World, which is more provocative and far more enamoured of the potential of transparency. She presented a new paradigm for transnational governance that incorporates public and private, national and transnational actors into agglomerated representative bodies. This form of governance, she argues, could be based on the idea that the free flow of information provides powerful ways to hold decision makers accountable and to give people a meaningful voice.

[2] The delivery of transparency, as we discuss later, is a critical problem that needs to be addressed in the context of algorithmic transparency.

[3] More critical readings on the citizen-consumer paradox and the role of transparency in furthering it can be found here and here.

[4] The name ‘neural networks’ is an example of contrived attempts to liken machine learning to human brain functions. Unlike conventional algorithms, which have only an input and an output layer, neural network algorithms also have additional internal layers, and each unit in these layers has weighted connections to the units in the previous and subsequent layers. The comparisons between neuroscience and the development of the discipline of artificial intelligence will be explored in future essays.

[5] The failures of transparency in data protection regulation have led to some experiments with multiple formats and contexts, for instance the pop-up notice Apple provides when iOS apps request iPhone location data. This is a transparency delivery solution in which contextual, timely and specific information is shared with users.

[6] Conceptual models were devised as tools for the understanding and teaching of physical systems. This is an example of building a transparency resource into the system so that it appears graspable and coherent to the user.