October 26, 2022

Introducing Knowing without Seeing

In January 2019, I co-presented a regulatory proposal called post facto adequation at FAT/Asia. I argued that where decisions are made by an AI system, the system must offer sufficient opportunity for human supervision, such that any stakeholder in the lifecycle can demand an account of how a human analysis adequates to the insights of a machine learning algorithm. Much of our focus so far has been on opening the black box. What I propose instead is to sidestep the black box and strive not for complete transparency, but for a meaningful level of it. My standard for sufficient opportunity for human supervision required that the AI system provide enough information about the model and the data analysed for a human supervisor to apply analogue modes of analysis to it and conduct an independent assessment.

Since then, I have had some opportunity to develop this thesis further and to contemplate how it might be applied. An expanded technical version of the FAT/Asia presentation was published in the Economic and Political Weekly. Alongside it, I worked with colleagues to explore how this regulatory standard could operate as part of a larger governance framework. Over the next three years, as other work responsibilities took over, I was unable to pursue this thread of research, though it was interesting to follow the explosion of work in the Explainable AI discipline in the meantime.

This year, with the support of the Mozilla Foundation, I have been able to continue this research. The project, Knowing without Seeing, is an attempt to critically question the transparency ideal in the context of AI systems that employ opaque algorithms. Its name is a hat tip to Mike Ananny and Kate Crawford’s 2016 paper titled “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability”. The project treats transparency as an instrumental value, one designed to achieve accountability of systems by empowering individuals. To that end, the research will centre on a single question: what is the meaningful level of transparency needed to form a conceptual model of an algorithmic system, such that we can make enough sense of it to hold it to account? I hope to present this research through a series of long-form essays, supported by regular blog posts and resources, culminating in an open access book.