Just toward the end of the legislative session, Washington state managed to pass a facial recognition bill, the first of its kind in the United States. Welcomed by some and criticized by others, the act constitutes a compromise between privacy and civil liberties advocacy groups such as the ACLU, which favor moratoriums, and the status quo, under which the technology could continue to be used without any legal protections at all. But what does the law actually mean, and does it go far enough?
The bill covers the use of facial recognition only by state and local governments, including law enforcement; it does not pertain to commercial uses of the technology. Here are the major provisions of the law:

1. The act calls for an “accountability report” and a 90-day public comment period prior to deployment of the technology.
2. Decisions that have “legal effects” must be subject to human review.
3. Law enforcement needs a warrant or court order to use the technology.
4. The technology must be tested by an independent third party for fairness on several grounds, including race and gender, before deployment.
5. A task force is to be created to monitor the use of facial recognition and propose recommendations for improvement.

Sounds pretty good, but let’s examine the details.
At first glance, the requirement for an “accountability report” is laudable. The report is essentially a form of public notice about how the technology operates and how it will be used. The items that must be included in the report are well enumerated and require a level of detail that makes the notice more akin to a data protection impact assessment, as required by the GDPR, than a mere privacy notice. The latter tells consumers how their data will be used, while the former requires a risk assessment of the data processing and balances risks to fundamental human rights against appropriate mitigations. The bill even compels the government agency to describe how it will “address” error rates in facial recognition greater than 1%, as identified by an independent party.
While that seems specific and rigorous, one way to “address” errors is simply to accept the risks they produce; there is no obligation to actually fix the issue. The “accountability report” would be better named the “transparency report”: it gives the public information about the technology it would not otherwise have had, but it by no means holds the government accountable for misuse. The report has to be filed with a “legislative authority,” but it’s unclear what this body is, who participates in it, and how it will review and accept or reject these reports, if it has the power to do so at all. Beyond the “authority” to post the report on its public website, it’s unclear what power this body has at all; transparency is elevated, but accountability is not.
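To make the 1% figure concrete, here is a minimal sketch of what an independent tester’s threshold check might look like. Everything in it is an assumption for illustration: the act does not specify which error metric counts, so false match rate (FMR) and false non-match rate (FNMR) are used here, on made-up audit data.

```python
# Illustrative sketch only: the act does not define the error metric,
# so false match rate (FMR) and false non-match rate (FNMR) are assumed.

def error_rates(trials):
    """trials: (is_same_person, predicted_match) pairs from an audit set."""
    impostor = [pred for same, pred in trials if not same]
    genuine = [pred for same, pred in trials if same]
    fmr = sum(impostor) / len(impostor)                 # impostors wrongly matched
    fnmr = sum(not p for p in genuine) / len(genuine)   # genuine pairs missed
    return fmr, fnmr

THRESHOLD = 0.01  # the act's 1% figure

# Hypothetical audit results: 1,000 genuine and 1,000 impostor comparisons.
audit = ([(True, True)] * 980 + [(True, False)] * 20 +
         [(False, False)] * 985 + [(False, True)] * 15)

for name, rate in zip(("FMR", "FNMR"), error_rates(audit)):
    if rate > THRESHOLD:
        # The law only requires the agency to describe how it will
        # "address" this -- and accepting the risk apparently qualifies.
        print(f"{name} = {rate:.1%} exceeds 1% and must be 'addressed'")
```

Note that the sketch ends where the law does: at a print statement. What happens after the threshold is crossed is left entirely to the agency.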
Prior to finalizing the accountability report, the agency must hold a 90-day public comment period and “three community consultation meetings,” and must “consider the issues raised by the public.” While designed to create more transparency and require public input, these requirements do little to actually enfranchise community voices. Marginalized communities, which are the most likely to be negatively impacted by the technology, have historically had unequal access to government channels like public comment. There is no definition of who should be at these “community” consultations, nor of how disadvantaged communities will be included in the process. Encouragement to listen is better than nothing, but ultimately there is no obligation on the government agency to make material changes to its application of the technology based on public input.
Second, human oversight of facial recognition systems whose decisions have “legal effects” or “similarly significant effects” in areas like housing, insurance, health, and education is important, but it’s not enough. Although the terms are lifted straight out of the GDPR, it remains to be seen how legal and significant effects will be interpreted and how high the bar will be set. Moreover, while human oversight can be implemented, the requirement says nothing about what that oversight is supposed to achieve; does it actually have to reduce negative effects in some quantifiable way?
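The gap is easy to see as a thought experiment: a human-review gate that satisfies the letter of the requirement can be bolted on without constraining any outcome. The sketch below is hypothetical throughout; the statute prescribes no such design, and the names and structure are invented.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    action: str             # e.g. "deny_housing_application"
    has_legal_effect: bool  # how "legal effects" is interpreted is left open

review_queue: list[Decision] = []

def route(decision: Decision) -> None:
    if decision.has_legal_effect:
        # Letter of the law satisfied: a human will look at it.
        # Nothing specifies what the reviewer must check, measure, or change.
        review_queue.append(decision)
    else:
        print(f"auto-applied: {decision.action} for {decision.subject_id}")

route(Decision("applicant-42", "deny_housing_application", has_legal_effect=True))
print(f"{len(review_queue)} decision(s) awaiting human review")
```

The point is not that agencies would build it this way, but that “human review” on its own constrains nothing measurable.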
Third, for law enforcement specifically, the outright prohibition on using the technology for “ongoing surveillance,” “real-time or near real-time identification,” and “persistent tracking” is strong, and it applies to body cameras and beyond. However, there are exceptions for “exigent circumstances,” so it remains to be seen how those will be defined and how easily law enforcement will be able to get a warrant for what could amount to surveillance purposes. It is important to note that real-time body cameras may still be used (as detailed in other WA laws) as long as they don’t use identification technology. In addition, facial recognition cannot be applied on the basis of protected legal categories like race, sex, and age, nor on the basis of characteristics like “social views or activities” and “participation in a particular noncriminal organization or lawful event,” nor an individual’s exercise of First Amendment rights. Though these provisions are designed to prevent mass public surveillance, it remains to be seen how difficult it will be for agencies to claim that they were not targeting any protected group or that a person was not exercising their First Amendment rights.
Fourth, the requirement for independent third-party testing of facial recognition technology supplied to government agencies is a bold step in the right direction, as facial recognition companies have never before had to make their models and datasets available to independent third parties for audit. However, it remains to be seen how “material unfair performance differences across subpopulations” will be defined and what will count as sufficient “mitigation” of those differences. For example, will it be enough to reduce the bias by some margin, or will it have to be eliminated completely?
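For a sense of what such an audit could compute, here is a minimal sketch of a per-subpopulation comparison. The grouping, the metric, and the disparity threshold are all invented for illustration, since the law defines none of them.

```python
from collections import defaultdict

def subgroup_fmr(trials):
    """trials: (subgroup, is_same_person, predicted_match) tuples."""
    stats = defaultdict(lambda: [0, 0])  # subgroup -> [false matches, impostor trials]
    for group, same, pred in trials:
        if not same:                     # only impostor comparisons count toward FMR
            stats[group][1] += 1
            stats[group][0] += pred
    return {g: fm / n for g, (fm, n) in stats.items() if n}

# Hypothetical audit data: 1,000 impostor comparisons per subgroup.
trials = ([("group_a", False, False)] * 990 + [("group_a", False, True)] * 10 +
          [("group_b", False, False)] * 965 + [("group_b", False, True)] * 35)

rates = subgroup_fmr(trials)
ratio = max(rates.values()) / min(rates.values())
print(rates, f"disparity ratio = {ratio:.1f}x")
if ratio > 1.5:  # placeholder for "material" -- the act never defines it
    print("material performance difference found; 'mitigation' required, extent unspecified")
```

Whether a 3.5x gap like the one above counts as “material,” and whether merely halving it counts as “mitigation,” is exactly what the statute leaves open.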
Finally, the task force, whose composition is described in fair detail to ensure it includes all relevant stakeholders and is not dominated by government or corporate representatives, is a great idea at this early stage of facial recognition use by government agencies. While studies and recommendations are needed, the task force is limited by having no enforcement power. But it’s a moot point now: Governor Inslee, when signing the bill into law, vetoed the task force, citing insufficient funding. Thus the only semblance of oversight in the bill was eliminated; there remains no enforcement mechanism whatsoever. While citizens can always sue the state for personal injury or sue a state employee for violations of federal rights, such suits are notoriously difficult to win due to state sovereign immunity. The law seems to leave citizens on their own to protect themselves when its requirements are not met.
While the law certainly raises the bar for the ethical government use of facial recognition, it remains to be seen how meaningful and impactful it will be in protecting the civil liberties of the most vulnerable among us. Technology, including facial recognition, should work for all people, and when the rights and freedoms of some are eroded through its use, we all forfeit a bit of our human dignity.