Montreal AI Ethics Institute & Sociology of AI

Reproduced from MAIEI

Were you wondering why I’ve been so quiet in the New Year? I thought so.

I’ve teamed up with Abhishek Gupta from the Montreal AI Ethics Institute (MAIEI) and Nga Than, sociology doctoral candidate at CUNY, to produce a weekly blog column on the Sociology of AI.

Each week we tackle a sociology paper or book and summarize the research in a way that makes it more accessible to a wider audience. We also speak directly to AI practitioners by extending the sociological work into practical guidance on how to develop AI more responsibly. Read more about the concept here.

I will be cross-posting the blog posts here. But you can also sign up for the MAIEI newsletter and get them right in your inbox each week.

Social Justice and Tech Policy: What To Expect in 2021

Image by herbinisaac from Pixabay

As we say goodbye to 2020 (good riddance!), what are the key policy debates to watch in 2021 that implicate technology and social justice? Because 2020 was a turbulent year that brought many equity issues to the fore, I want to focus on the tech policies that will have the greatest impact on the most vulnerable communities: people of color, immigrants, rural and low-income students, and consumers of all stripes. I believe the most pressing social justice tech issues in 2021 will be facial recognition, privacy, and broadband access.

Continue reading “Social Justice and Tech Policy: What To Expect in 2021”

What is Artificial Intelligence for the Sociologist?

Source: firstpalette.com

In my last blog post, I discussed the need for a Sociology of AI, sidestepping the thorny issue of what AI is. Is it a field of study? Is it a technology or collection of technologies? Is it a product? Is it nothing more than a marketing strategy?

Continue reading “What is Artificial Intelligence for the Sociologist?”

Sociology for or of AI? Let’s invest more in both.

Source: The New Yorker

As a sociologist in AI, I often wonder: what can sociology do for tech? How can the development of responsible or ethical AI benefit from sociologists’ insights, perspectives, and research? To answer that question, like any decent researcher, I turned to the sociology of AI literature and quickly realized that there isn’t one. Why not? And why should you care?

Continue reading “Sociology for or of AI? Let’s invest more in both.”

My grandfather: communist, feminist, technologist.

My grandfather was a communist, feminist, and technologist, though he would probably contest each of those labels. He passed away on August 16, 2020.

Continue reading “My grandfather: communist, feminist, technologist.”

Facial Recognition Bans and Moratoriums

Two gray bullet security cameras. Source: Pexels, Scott Webb

To what extent should we limit facial recognition technologies? In my February blog post, “Facial Recognition in 2020”, I asked what the “man in the street” thinks about facial recognition, ending with, “Maybe in 2020 we’ll find out.” Well, I think we have. After the police killings of Ahmaud Arbery, Breonna Taylor, and George Floyd, the public made its voice heard in months of nationwide marches and protests for Black lives. At the same time, privacy, racial justice, and civil liberties organizations have called on government leaders to pass moratoriums on the use of facial recognition in policing. There is now overwhelming evidence of the racial bias in facial recognition, as well as growing awareness that even if the algorithms were completely unbiased, their use by law enforcement would still lead to unacceptable infractions on constitutional rights, especially in communities of color that are already oversurveilled and overpoliced. Respectable facial recognition companies concede that some regulation is needed, while privacy and racial justice advocates prefer outright moratoriums and bans, ranging from law enforcement only, to all government agencies, to use in all public spaces, to all uses, period. How do we tackle this problem? To borrow from Microsoft President Brad Smith: cleaver or scalpel?

Continue reading “Facial Recognition Bans and Moratoriums”

Schrems II

European Court of Justice. Source: Wikipedia

The long-anticipated Schrems II decision is here! But so far it’s hard to tell whether it’s a game changer. The European Court of Justice judgment invalidated the Privacy Shield program that most companies used to legally transfer personal data from the European Union to the United States. However, it ruled that Standard Contractual Clauses (SCCs) remain a valid data transfer mechanism. Why was this decision made? What are the impacts on industry? And what does it mean for consumers?

Continue reading “Schrems II”

Algorithmic Unfairness and Racial Injustice in America’s Criminal Legal System

Photo by Francesco Ungaro on Pexels.com

In America, we are currently living through a moment of heightened awareness of racialized police brutality and, in response, the reawakening of a racial justice movement. In this same period, we’ve seen an incredible expansion in the use of data analytics and AI technologies in almost all industries, including the criminal legal system. No one asked us, but many of these technologies are already in use and have been for the last two decades. From predictive policing to bail and recidivism prediction to pretrial and parole/probation electronic monitoring, these technologies claim to reduce costs and take bias out of human decision making. But how valid are these claims? A growing body of research and real-world examples shows that, far from reducing unfair outcomes, these technologies serve primarily to replicate existing biases. When we have a criminal legal system as racially biased as the one described in The New Jim Crow or Just Mercy, how can we possibly expect anything different?

Continue reading “Algorithmic Unfairness and Racial Injustice in America’s Criminal Legal System”