Translating Algorithms


As a sociologist, and now a privacy practitioner, I am interested in the ethical issues we face in using new technologies like AI and machine learning. These issues are, of course, many, ranging from privacy and fairness to accountability and transparency. But how does a social scientist without a computer science background engage with these topics when the algorithms are so complex and their interactions in society even more so? The best answer I can come up with for now is that we need a more concerted effort at interdisciplinarity. I am convinced that social scientists and humanities scholars are essential voices in the public debate about responsible AI because the challenges we face will require both algorithmic and social solutions. This requires social scientists to understand the technology that is causing concern (and vice versa) before social, legal, and technical solutions can be proposed. This is no small feat, but we’re slowly moving in the right direction with symposiums and books that come from a place of disciplinary humility and create space for mutual understanding.

This week I had the privilege of listening to some keynotes and paper presentations at a conference — Foundations of Responsible Computing (FORC) — predominantly intended for and organized by theoretical computer scientists. Being so disciplinarily removed from this field, I felt a bit like an outsider peeking in, but I was grateful for the opportunity to listen, learn, and try to make sense of what, to me, is still largely a foreign language. One keynote, however, given by the renowned legal and critical race scholar Patricia J. Williams, hit close to home. Although the beautifully literary and emotionally gripping delivery of her legal analysis of algorithmic decision-making was still a little foreign to a more structuralist-leaning sociologist like myself, it helped me find my footing. We were at least in the same ballpark. Professor Williams spoke convincingly of the need for translation project(s) that allow us to understand our respective conceptual frameworks so that we can talk to each other rather than past each other, and this resonated with me throughout the conference.

My difficulties understanding complex mathematical topics, however, left me energized rather than dejected. For example, I was intrigued by Adrian Weller’s keynote, “Beyond Group Statistical Parity”. Having recently read Michael Kearns and Aaron Roth’s The Ethical Algorithm, I was excited to find that I could follow some of the general concepts of the talk, like “statistical parity”, group versus individual fairness, and the general problem of how to apply fairness “constraints” without sacrificing accuracy. But the “beyond” part, the interesting part offering up a potential solution, was, well, beyond me (for now). I fared a little better with Jon Kleinberg’s fascinating keynote about interpretability and how we can use it to audit algorithms for bias. I was aided by my familiarity with the social science literature on structural disadvantage, but I got a bit lost when it came time to translate this concept into an algorithm. Likewise, my fluency in Boolean analysis is limited, but, conceptually, I was fascinated by how simplifying a complex algorithm can either improve equity and accuracy or produce the unintended consequence of incentivizing bias. In the end, and as Kleinberg acknowledged, it will be practitioners who use interpretability to ensure that algorithms are fair, but they need to be part of the conversation to shape how interpretability is conceptualized.
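For readers who, like me, find these terms easier to grasp with something concrete: statistical parity simply asks whether an algorithm hands out positive decisions at the same rate across groups. Below is a minimal illustrative sketch in Python; the group labels and toy predictions are invented purely for the example and don’t come from any of the talks.

```python
# Illustrative sketch of statistical (demographic) parity: a classifier
# satisfies it when each group receives positive decisions at the same rate.
# The data below is made up solely for illustration.

def positive_rate(predictions, groups, group):
    """Share of members of `group` who received a positive decision (1)."""
    member_predictions = [p for p, g in zip(predictions, groups) if g == group]
    return sum(member_predictions) / len(member_predictions)

predictions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = approved, 0 = denied
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")

# The gap between the two rates is the "statistical parity difference";
# 0 would mean perfect parity between groups A and B.
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}")
print(f"Parity gap: {abs(rate_a - rate_b):.2f}")
```

The “beyond” in Weller’s title is precisely that this group-level check can look fine while individuals within each group are still treated very differently, which is where the group-versus-individual fairness distinction comes in.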

The barriers to having this productive conversation are high because we face challenges even within disciplines. For example, I was trained in statistics as it is applied in the social sciences, not in machine learning. When I say things like independent and dependent variables and coefficients, my computer science friends reply with parameters, features, and labels. Huh? But I’m not alone. As the disciplinary distance grows, so does the rate of misunderstandings and mistranslations. For example, in the paper session following Patricia Williams’s talk there was a valiant effort, prompted by Cynthia Dwork, to relate Williams’s personal story about racial bias in algorithmic osteoporosis diagnoses to a PhD student’s research on disparate impacts on subgroups. There was some common acknowledgement of the issue, but the dialogue quickly hit a wall, seemingly because of the difficulty of translating terms and concepts into each other’s languages, still largely trapped in silos. I could relate. To a sociologist, the computer science language around social values and ethics is a bit strange. The focus on algorithmic “fairness” seems trivial and myopic: a parent giving one child a cupcake and not the other is unfair. Social problems, on the other hand, are described in weightier terms like unequal, inequitable, or unjust. Likewise, algorithmic “bias” seems to locate the discrimination in the internal, individualistic prejudice of the code itself. Sociologists would instead use words like structural and systemic inequality or institutionalized racism to talk about the discriminatory impacts of algorithms and the sociocultural environment that produces them. Which “object” of analysis — algorithms or institutions — to emphasize will necessarily be influenced by one’s disciplinary lens, but we must recognize that each framing misses something and that only together do they provide a fuller picture.

It is our struggle to translate each other’s words, framings, and perspectives that causes friction in identifying not only the problems that need to be solved but also which solutions to advocate. A sociologist may have a knee-jerk reaction against algorithmic solutions to problems that algorithms caused in the first place. As Kearns and Roth put it, tweaking algorithms to be more fair feels like rearranging the deck furniture on the Titanic. Indeed, my instinct is to scream from the bow, “Just regulate the ship so it steers away from the iceberg!” But even I can admit that regulation is just one (imperfect) tool for dealing with the problem. We will surely need legal interventions to prevent harms and offer redress, but we shouldn’t eschew technical solutions just because they are hard to understand or because, as non-technical social experts, we may have an (unconscious) bias against technical mediations of human experience more broadly. It will take all of us coming together to brainstorm solutions, and we have to start now. The algorithmic genie is out of the bottle, and to take full advantage of it (rather than the other way around) we have to be able to speak each other’s language. The more we engage with each other, the more likely we are to cast off our disciplinary blinders and discover creative paths forward.
