Facial Recognition Bans and Moratoriums

Photo by Scott Webb on Pexels.com

To what extent should we limit facial recognition technologies? In my February blog post, “Facial Recognition in 2020”, I asked what the “man in the street” thinks about facial recognition, ending with, “Maybe in 2020 we’ll find out.” Well, I think we have. After the police killings of Ahmaud Arbery, Breonna Taylor, and George Floyd, the public has made its voice heard in months of nationwide marches and protests for Black lives. At the same time, privacy, racial justice, and civil liberties organizations have called on government leaders to pass moratoriums on the use of facial recognition in policing. There is now overwhelming evidence not only of the racial bias in current facial recognition systems, but also growing awareness that even if the algorithms were completely unbiased, their use by law enforcement would still lead to unacceptable infractions on constitutional rights, especially in communities of color that are already oversurveilled and overpoliced. Reputable facial recognition companies concede that some sort of regulation is needed, while privacy and racial justice advocates prefer outright moratoriums and bans, ranging in scope from law enforcement only, to all government agencies, to use in all public spaces, to all uses period. How do we tackle this problem? To borrow from Microsoft President Brad Smith: cleaver or scalpel?

Continue reading “Facial Recognition Bans and Moratoriums”

Schrems II

Source: Wikipedia

The long-anticipated Schrems II decision is here! But it’s hard to tell yet whether it’s a game changer. The European Court of Justice judgment invalidated the Privacy Shield program that most companies used to legally transfer personal data from the European Union to the United States. However, it also ruled that Standard Contractual Clauses (SCCs) remain a valid data transfer mechanism. Why was this decision made? What are the impacts on industry? And what does it mean for consumers?

Continue reading “Schrems II”

Algorithmic Unfairness and Racial Injustice in America’s Criminal Legal System

Photo by Francesco Ungaro on Pexels.com

In America, we are currently living through a moment of heightened awareness of racialized police brutality and, in response, the reawakening of a racial justice movement. In this same period, we’ve seen an incredible expansion in the use of data analytics and AI technologies in almost all industries, including the criminal legal system. No one asked us, but many of these technologies are already being used and have been for the last two decades. From predictive policing to bail and recidivism prediction to pretrial and parole/probation electronic monitoring, these technologies claim to reduce costs and take bias out of human decision making. But how valid are these claims? A growing body of research and recent examples shows that, far from reducing unfair outcomes, these technologies serve primarily to replicate existing biases. When we have a criminal legal system as racially biased as that described in The New Jim Crow or Just Mercy, how can we possibly expect anything different?

Continue reading “Algorithmic Unfairness and Racial Injustice in America’s Criminal Legal System”

Translating Algorithms

Photo by Retha Ferguson on Pexels.com

As a sociologist, and now a privacy practitioner, I am interested in the ethical issues we face in using new technologies like AI and machine learning. There are of course many, ranging from privacy and fairness to accountability and transparency. But how does a social scientist without a computer science background engage with these topics when the algorithms are so complex and their interactions in society even more so? The best answer I can come up with for now is that we need a more concerted effort at interdisciplinarity. I am convinced that social scientists and humanities scholars are essential voices in the public debate about responsible AI, because the challenges we face will require both algorithmic and social solutions. This requires social scientists to understand the technology that is causing concern (and vice versa) before social, legal, and technical solutions can be proposed. This is no small feat, but we’re slowly moving in the right direction with symposiums and books that come from a place of disciplinary humility and create space for mutual understanding.

Continue reading “Translating Algorithms”

COVID19, Technology, and Social Inequality

Photo by CDC on Pexels.com

The COVID19 pandemic is an unprecedented global public health challenge. It’s additionally troubling that its impact, and our response to it, throw into sharp relief many social inequalities in areas like healthcare, education, and employment that existed long before the pandemic. Technology, as it’s used in each of these sectors, has historically co-created these inequalities as well as strived to mitigate them. Now, as we reach for technology to respond to the COVID19 crisis, we need to do so in a way that doesn’t exacerbate these inequities and preserves fundamental rights like privacy, fairness, and human dignity. This challenge is daunting.

Continue reading “COVID19, Technology, and Social Inequality”

Europe’s Data Strategy

Photo by Lukas on Pexels.com

The European Commission’s Communication from February of this year describes Europe’s “strategy for data”. It is interesting to see the EU, a leader in data protection, strategize on how to make data more available in order to stimulate the European single market. Can the EU be just as successful in making data available to all sectors of the economy, to drive innovation, competition, and other benefits for consumers, as it was in protecting personal data?

Continue reading “Europe’s Data Strategy”

Online Content Moderation and the German Network Enforcement Act

Photo by Pixabay on Pexels.com

The spread of the COVID19 global pandemic has recently brought to the fore a tech policy issue that has long been causing heated debate, namely online misinformation. In recent weeks, tech companies have been working together to figure out how to prevent the spread of COVID19 misinformation as the world tackles the pandemic, with social media platforms taking a tougher stance against lies and falsehoods that undermine the public health response. For example, in the last month, Twitter alone has removed over 2,000 misleading COVID19 posts. It is surprising, then, that simultaneous attempts to improve the only national social media content moderation law in the world have not received more attention.

Continue reading “Online Content Moderation and the German Network Enforcement Act”

Washington’s Landmark Facial Recognition Law

Source: pixabay.com

Toward the end of its legislative session, Washington state managed to pass a facial recognition bill, the first of its kind in the United States. Welcomed by some and criticized by others, the act constitutes a compromise between privacy and civil liberties advocacy groups like the ACLU, which favor moratoriums, and the status quo, which would permit the continued, unhindered use of the technology without any legal protections at all. But what does the law actually mean, and does it go far enough?

Continue reading “Washington’s Landmark Facial Recognition Law”

AI Ethics

Photo by Alex Knight on Pexels.com

The conversation about responsible AI is in full swing among industry, governments, international organizations, and academic institutions. Last year in particular saw a flurry of pronouncements about ethical AI, with the OECD issuing AI principles and the G20 adopting those same principles. Meanwhile, in 2018 the European Commission set up an independent High-Level Expert Group on AI (AI HLEG), which this time last year issued its “Ethics Guidelines for Trustworthy AI”. This is the most in-depth major government report we have on how to think about AI ethics, so let’s take a closer look.

Continue reading “AI Ethics”

Europe: a Leader in AI Regulation?

Photo by Wallace Chuck on Pexels.com

In the past couple of months, EU institutions have taken some preliminary steps to propose a new framework for AI regulation. The idea is to promote innovative AI technologies by creating consumer and societal trust in AI, while at the same time preventing potential consumer harms and risks to fundamental human rights. While American leadership chastises any and all regulation for hampering innovation, can Europe gain a competitive edge in AI precisely through sensible AI law?

Continue reading “Europe: a Leader in AI Regulation?”
