
In the past couple of months, EU institutions have taken preliminary steps toward a new framework for AI regulation. The idea is to promote innovative AI technologies by building consumer and societal trust in AI while preventing potential consumer harms and risks to fundamental human rights. While American leadership dismisses any and all regulation as hampering innovation, can Europe gain a competitive edge in AI precisely through sensible AI law?
The European Commission’s White Paper “On Artificial Intelligence – A European approach to excellence and trust,” released last month, lays out this strategy. The paper describes ways in which Europe can become an “ecosystem of excellence” by leveraging its strengths in industrial and professional services and the B2B data economy while investing in public-private partnerships, workforce re-skilling, and research and development. More importantly for this discussion, the paper describes an “ecosystem of trust” that will drive a “human-centric approach” to AI. The Commission is of the opinion that a “strict legal framework” already in place in the EU is fully applicable to new AI technologies. However, a new regulatory framework will be needed to cover uniquely challenging facets of AI that existing EU law does not address.
What current legal framework applies to AI, and what are its shortcomings? The areas of law most commonly recognized as uniquely impacted by AI are data privacy, non-discrimination, and product safety and liability. Let’s address each in turn.

For privacy, the GDPR regulates “automated decision-making” and “profiling,” which apply to algorithms that make predictions and decisions about human behavior. In a letter responding to MEP Sophie in ‘t Veld’s question about the extent to which the GDPR applies to AI, the European Data Protection Board (EDPB) highlighted the ways in which GDPR principles like transparency, data minimization, accountability, and data protection by design and by default apply to algorithms, and stated that at this point it does not foresee the need for an AI-specific data protection regulation. Many privacy advocates would say this analysis falls short because algorithms are often built on non-personal data, which falls outside the scope of the GDPR. The GDPR regulates any processing of personal data, including anonymization, and sets that bar high, but it is unclear how anonymized data used for machine learning can be regulated by the GDPR once it is out of scope. The GDPR also prohibits automated decision-making, but only if it has “legal effects” or similarly significant effects on the data subject. A person can object to automated processing of personal data (e.g. facial recognition), but there are broad exceptions for when the public interest, or even the legitimate interest of a company (e.g. preventing fraud or loss), overrides the interests of the individual. Data used in facial recognition specifically is considered biometric data, a sensitive data type whose use the GDPR prohibits, subject to some broad exceptions such as “substantial public interest.” These exceptions are already being used to deploy surveillance technologies, for example to monitor soccer stadiums. There is room for debate over whether the GDPR adequately protects the public’s rights and freedoms.
For non-discrimination, the white paper rattles off a long list of existing laws meant to ensure that AI is not used, intentionally or unintentionally, to discriminate based on legally protected categories like sex, race, age, ethnicity, disability, religion, and sexual orientation. These include the Race Equality Directive and two directives on equal treatment in employment. These rules are merely directives, meaning that unlike a regulation they do not apply directly across member states. Member states transpose a directive into national legislation as they see fit, and historically, as with the GDPR’s precursor, the Data Protection Directive, this has produced substantial differences in outcome and a lack of harmonization across the EU. This problem is not specific to AI, but it does mean that if the EU wants to build trust in a technology that does not respect national borders, the rules on non-discrimination have to be transnational as well.
For product safety and liability, the white paper concludes most confidently that there is legal uncertainty around how safety rules can be enforced against AI technologies and who can be held liable for damage to property or harm to consumers. While existing product safety laws apply, specific challenges posed by AI, like lack of transparency, complex supply chains, and AI learning and changing over its lifetime, are not addressed in existing law, which is focused on more traditional conceptions of products, e.g. a defect in a car that leads to an accident causing bodily injury or death. For a more detailed analysis of this regulatory framework and proposed solutions, see the “Report on the safety and liability implications of Artificial Intelligence” that accompanied the white paper, as well as the January 21, 2020 draft motion for a resolution on automated decision-making processes from the European Parliament’s Committee on the Internal Market and Consumer Protection.
Across all of these policy areas and across all industry sectors, AI poses entirely new challenges, unforeseen at a time when all impactful decision-making was made by humans. Some unique characteristics of AI pose special challenges to existing regulatory frameworks. First, common AI methods like machine learning rely on vast corpora of data, amounts that no human, or even traditional program, could have made sense of before. This introduces new questions around whether we want virtually all human activity to be digitized, fed into algorithms, and used by an array of stakeholders for an unbounded variety of reasons, producing in effect a surveillance society. Second, AI is self-learning, which means that the AI “product” that is “shipped” will change over time depending on the data that gets fed into it and the ecosystem in which it operates. This raises questions around how to avoid the unintended consequences of algorithms “learning” the “wrong” things. And when those wrong things result in real harm to the rights and freedoms of individuals, who is to be held liable: the algorithm developers, the algorithm users, the data providers, or other third-party service providers? Because the product is actually a complex set of services in a supply chain, it is harder to assign fault and hold all parties accountable. Finally, algorithms are complex and often have a low level of interpretability, meaning that it is difficult for scientists, let alone the average consumer, to tell how and why an algorithm makes a given decision. This “black box” problem not only keeps consumers from understanding how algorithmic decision-making could impact them, and thus from making an informed choice about whether to use a given product or service, but also complicates accountability.
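To make the self-learning point concrete, here is a minimal sketch, using synthetic data and a generic scikit-learn classifier purely as an illustration (not any particular deployed system), of how a model that keeps updating itself in production drifts away from the version that was originally validated:

```python
# Illustrative only: synthetic data and a generic classifier stand in for a real
# "shipped" AI product. The point is that the decision logic changes after release
# as new data arrives, so the audited version and the live version differ.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Data the vendor validated before releasing the product.
X_release = rng.normal(size=(1000, 5))
y_release = (X_release[:, 0] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.fit(X_release, y_release)
weights_at_release = model.coef_.copy()

# Data encountered in production, drawn from a shifted distribution
# (e.g. a new user population behaving differently).
X_live = rng.normal(loc=0.5, size=(1000, 5))
y_live = (X_live[:, 1] > 0.5).astype(int)

# Incremental update in production: the "self-learning" step.
model.partial_fit(X_live, y_live)

# The model's parameters, and therefore its decisions, have drifted
# from the version that was originally assessed.
print("Total weight change since release:",
      float(np.abs(model.coef_ - weights_at_release).sum()))
```

In this scenario, any safety or liability assessment done at release time describes a system that no longer exists a few data batches later, which is precisely the kind of gap the white paper says existing product law does not cover.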
Perhaps less recognized for innovative AI technologies, Europe is more likely to be applauded for innovative regulatory solutions, the GDPR being the best example of recent years. Can Europe become a leader in both AI innovation and AI regulation? It is too soon to tell, but the opportunity to prove that the two are not mutually exclusive, and in fact reinforce one another for the common good, is certainly there. Europe just has to have the political will to seize it.