This year, government action on facial recognition is heating up. While most agree that some form of facial recognition regulation would be better than today's "anything goes" status quo, the debate rages on between two radically opposed positions: regulate the technology, or ban it altogether.
This month the European Commission released its White Paper on Artificial Intelligence. The paper was significant not for what it said about AI but for what it omitted: the five-year moratorium on facial recognition use in public spaces that an earlier leaked draft had proposed. Presumably removed under industry and member-state pressure over concerns about stifling innovation and hampering national security, the moratorium gave way to recommendations for regulating only high-risk uses of facial recognition, or uses in high-risk sectors (e.g. healthcare). High-risk uses, broadly defined, are those likely to harm individuals or companies, particularly where fundamental rights are violated; non-high-risk uses, the paper argues, already fall within the scope of existing laws like the GDPR.
On the other side of the Atlantic, President Trump signed Executive Order 13859 in February 2019, unveiling the American strategy for AI. No surprise, the American approach concerns itself more with competitive innovation in AI than with proposing an acceptable regulatory framework. In fact, the executive order prioritizes removing "regulatory barriers to AI innovation" and "overly restrictive government regulations" and focuses on "limit[ing] regulatory overreach" rather than on mitigating the risks of AI use. This preoccupation with limiting regulation is hard to understand given that no significant AI regulation exists in the US or anywhere else. The principles of responsible or ethical AI are an afterthought.
Meanwhile, at the local level, cities are banning the use of facial recognition in public spaces, and state legislatures around the country are considering facial recognition bills. At the same time, the use of facial recognition by corporations and by government agencies, including law enforcement, continues to spread unabated even though its risks are well documented. Bills like the Washington Privacy Act seek to mitigate the risks facial recognition poses to privacy, civil liberties, non-discrimination, and safety, and to address questions of liability. Commonly proposed mitigations include mandating independent third-party testing of the technology for bias, providing transparency around the datasets and algorithms used, and requiring human oversight. Washington has also seen a separate bill aimed at the unique risks that arise when local and state government agencies use facial recognition. Still, there is no comprehensive federal approach to the variable uses of facial recognition across sectors.
The debate is likely to rage on in 2020 even as adoption of facial recognition continues. While academics, civil liberties advocates, and privacy advocates push for moratoriums until the technology can be improved, industry is charging ahead and, at most, proposing lightweight regulation that would legitimize the AI enterprise without substantively changing the AI marketplace. But what does "the man in the street" have to say about all this? How does the average citizen feel about facial recognition surveillance in public spaces? How does the average shopper feel about price discrimination driven by facial recognition in stores? Maybe in 2020 we'll find out.