AI Ethics

The conversation about responsible AI is in full swing among industry, governments, international organizations, and academic institutions. Last year in particular saw a flurry of pronouncements about ethical AI: the OECD issued its AI Principles, the G20 adopted those same principles, and the independent High-Level Expert Group on AI (AI HLEG), set up by the European Commission in 2018, issued its “Ethics Guidelines for Trustworthy AI” this time last year. It is the most in-depth major government report we have on how to think about AI ethics, so let’s take a closer look.

The AI HLEG is a coalition of 52 AI experts from civil society, academia, and industry, supported by yet independent from the European Commission, and tasked with developing a European framework for trustworthy AI. Its first major publication, “Ethics Guidelines for Trustworthy AI”, defines what is meant by “trustworthy” or “human-centered” AI, identifies four foundational ethical principles, and proposes an “assessment list” to guide AI practitioners in achieving trustworthy AI. After a variety of stakeholders take the assessment instrument for a test run, the AI HLEG is expected to submit a revised version to the European Commission this year.

The paper defines trustworthy AI as AI that is legal, ethical, and robust. Like other EU pronouncements (see my 3/12/20 blog post), this framework starts from the premise that legal instruments for regulating AI already exist in EU primary law (the EU treaties), EU secondary law (the GDPR, the Product Liability Directive, anti-discrimination directives, etc.), and international law (UN and Council of Europe treaties). These laws are not sufficient on their own, however, so ethical guidelines are needed to supplement laws that are outdated or ill-suited to AI technologies. Finally, robustness rounds out the high-level definition of trustworthy AI, although it is unclear why it is treated as a third, stand-alone component since it largely overlaps with the prevention of harm (discussed below).

Digging deeper, the paper homes in on four ethical principles that are most significant for human-centered AI: (1) respect for human autonomy, (2) prevention of harm, (3) fairness, and (4) explicability. Respect for human autonomy is about human self-determination, freedom of conscience, and the ability to think and deliberate without manipulation or coercion by automated systems. Prevention of harm relates to safeguarding human physical and mental integrity, and in particular to ameliorating, or at least not exacerbating, existing societal inequalities. Fairness has both a “substantive” and a “procedural” dimension; the former concerns the avoidance of biased or discriminatory outcomes resulting from automated decision-making, while the latter refers to access to fair redress for unfair effects. Finally, explicability refers to the transparency and interpretability of AI systems, which can be notoriously difficult to understand even for experts (the “black-box” phenomenon), let alone lay stakeholders. The four ethical principles are then distilled into seven requirements for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental well-being, and (7) accountability.

The assessment list is perhaps the most useful part of this analysis. Meant as a practical checklist for AI developers, users, and other stakeholders, it poses more granular questions to be asked across the AI lifecycle, based on the aforementioned requirements for trustworthy AI. The current version is organized by requirement; it would be interesting to see the final version reworked to trace the software development lifecycle, which more closely approximates how AI is actually built. While all of the questions are important, some are more relevant at particular phases, for example early-stage research as opposed to post-launch monitoring and evaluation.

The question remains: is this enough? First, while meant to be merely a set of ethical guidelines, the document could take a stronger stance in acknowledging the gaps in existing law. Voluntary adherence to ethical guidelines should be encouraged, but without broad, enforceable legal instruments backing these recommendations, we will fall short of a human-centered AI future. The EU first needs a comprehensive, in-depth gap analysis of existing laws applicable to AI, identifying where they fall short and making legal recommendations, before attempting to fill those gaps with voluntary measures.

Second, the analysis acknowledges that trade-offs between values will be necessary because they often conflict, and that a rational, evidence-based decision should be made and documented. It remains silent, however, on how to conduct a risk-based harm analysis and weigh the interests of multiple stakeholder groups. That is understandably complex and difficult to spell out in one document, but there needs to be a stronger acknowledgement that more prescriptive requirements will be needed. Otherwise, a voluntary legitimate-interest analysis turns into a meaningless exercise in which the interests of private-sector AI developers always come out on top – and simply documenting that does nothing for consumers.

Finally, while the assessment list will be useful to many AI practitioners, focusing primarily on voluntary checklists promotes the misguided idea that any single AI innovator can safeguard all, or even some, of the human-centered values that the analysis lauds. Take accountability in particular: what does it mean for an AI developer to ask, “Did you establish an adequate set of mechanisms that allows for redress in case of the occurrence of any harm or adverse impact?” Within the current legal structure, the private sector has little to no incentive (especially in the short term, when everyone is vying for market share) to ask itself this question, and frankly limited ability to actually change the existing power imbalance between businesses and consumers. Putting that accountability on AI developers alone is unrealistic. For that we need stronger consumer protection laws that lower the bar for proving harm, which AI can make difficult; a shift from the classic liberal understanding of harm as individual-based to group- and society-based harm as ubiquitous computing proliferates; and broader judicial reform to increase citizens’ access to legal remedy, which is already asymmetrically distributed with or without AI.

The AI HLEG has made a praiseworthy effort on a complex topic and taken a step in the right direction, but the trustworthy AI journey must not end here. Without legal and regulatory solutions, we risk human-centered AI remaining a theoretical ideal.
