This market research report was originally published on Tractica’s website. It is reprinted here with the permission of Tractica.
One of the core benefits of artificial intelligence (AI) is its ability to remove humans from specific tasks and decisions. But if something goes wrong, who or what assumes responsibility? Who or what is culpable? What punishment would represent an effective deterrent? With no clear-cut answers, this accountability gap, and the looming question of how to govern it, represents one of the greatest potential market barriers to the growth of AI.
The Maltese Falcon
Dashiell Hammett’s legendary book-turned-movie, The Maltese Falcon, is widely considered one of the greatest pieces of detective literature. The plot is convoluted: it starts with a single crime to solve (a murder) but expands to take in several other connected crimes. Its brilliance is that everyone is potentially guilty of something. The same plot lines could describe AI’s accountability gap: there are plenty of potential suspects, but the issues and the claims quickly become confusing and difficult to prove.
In an article published in October 2019, lawyer Michael Camilleri tackled the question of product liability in a hypothetical case of an accident involving an autonomous vehicle (AV) in Malta. Camilleri asked, “Who do I sue?” He examined four potential defendants: the AI itself, the vehicle owner, the AV software vendor, and the vehicle manufacturer. But he did not come up with a strong candidate to sue. Consider the following:
- Can the AI be found responsible?
No. Maltese tort law was written with people in mind:
… every person has a duty of care, to make use of his rights within the proper limits. A product is not a person … and to say that an AI (e.g., an AV) has a duty of care even if granted legal personality would be to stretch current interpretations too far.
- Can the vehicle owner be found responsible?
No:
… it is arguable that few would be willing to risk buying AVs if vicarious liability were attached to them.
- Can the AV software vendor be found responsible?
No. Under contract law, vendors warrant against defects that exist at the time the contract was made. Camilleri also points to AI’s ability to learn as a challenge to liability:
Considering that AVs learn as they drive on the road, how can a vendor be made to answer for latent defects or lack of conformity which AVs learn AFTER the time the contract was concluded? This question would already complicate the task of attributing responsibility to the vendor under contract law.
- Can the vehicle manufacturer be found responsible?
Unlikely:
… how can I prove they are responsible for the damages suffered?
Causality, Justice, and Compensation
Camilleri’s example covers only one legal issue: causality. According to experts, there are two other areas in which the AI accountability gap will cause legal problems: justice and compensation. New Zealand-based attorney Matt Bartlett described the challenges as follows:
Justice
The penalties and remedies of the legal system are for the most part built to be levelled against humans, not autonomous computer programs. Far from being an appendix to the legal process, the ability to impose effective penalties on those who break the law is absolutely crucial to the underlying justice of the system. In order to be credible, it’s vital that the legal system can punish wrongdoing proportionately and effectively.
Compensation
Our legal system struggles to apply a general principle (in this case, victim compensation) where the harm is inflicted by an autonomous agent. It’s obviously impossible for a court to compel an AI – a computer program – to pay thousands of dollars to a victim to cover their medical costs. So in order for victims to have any recourse in the legal system, they have to be able to pursue a human (or, at least, a business) with the capacity to pay compensation. Consequently, the “accountability gap” adds serious difficulty to victims looking to receive compensation through the legal system.
AI Governance and Ethics: A Leadership Vacuum
In today’s ecosystem of AI governance and ethics, the most active players have been academia and European governmental organizations (the EU, the Council of Europe, and the Atomium European Institute’s AI4People). The tech industry has been slow to organize itself, though there are initiatives underway that could become significant voices, among them the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Partnership on AI. Between these groups, ideas about how to close the AI accountability gap are starting to form. But when issues of causality, justice, and compensation arise in real-life cases, legal systems will have no choice but to find solutions and set precedents, and the resulting patchwork of uneven government standards could have a chilling effect on the AI business. The solution to this challenge is for the global tech industry to become a proactive partner with governments in closing the AI accountability gap. In the meantime, piecemeal solution ideas are bubbling up.
Piecemeal Solutions
Attorney Bartlett suggested that, in the case of AVs, governments could adopt no-fault coverage policies to pay out AI-related claims:
Victims would lose ability to sue individual developers but would receive acknowledgement of harm and medical costs covered by the government … (governments could) reserve for themselves the ability to prosecute AI developers whose creations cause harm. This might be a more efficient system for taking the worst offenders to court. Victims of autonomous decisions can work through a dedicated body to bring developers to account, similar to how the police work with victims in a criminal context. This helps prevent the floodgates being opened, and helps stop garage-based AI coders from being drowned in civil lawsuits.
Some believe technical solutions might help. The idea of algorithmic auditing is described in the Council of Europe’s paper, Responsibility and AI:
Algorithmic auditing is emerging as a field of applied technical research, that draws upon a suite of emerging research tools and techniques for detecting, investigating and diagnosing unwanted adverse effects of algorithmic systems. It has been proposed that techniques of this kind might be formalized and institutionalized within a legally mandated regulatory governance framework, through which algorithmic systems … are subject to periodic review and oversight by an external authority staffed by suitably qualified technical specialists. Cukier and Mayer-Schönberger suggest that a new group of professionals are needed (“algorithmists”) to take on this role, which may constitute a profession akin to that of law, medicine, accounting and engineering and who can be relied upon to undertake the task of algorithmic auditing either as independent and external algorithmists to monitor algorithms from the outside, or by “internal” algorithmists employed by organizations to monitor those developed and deployed by the organization, which can then be subjected to external review.
AI4People’s Ethical Framework for a Good AI Society makes a recommendation that could provide a good middle ground for AI advocates and governments: collectively agree on the decisions that AI should not make:
Assess which tasks and decision-making functionalities should not be delegated to AI systems, through the use of participatory mechanisms to ensure alignment with societal values and understanding of public opinion. This assessment should take into account existing legislation and be supported by ongoing dialogue between all stakeholders (including government, industry, and civil society) to debate how AI will impact society.
While these three ideas are good starts, they are not cure-all solutions.
What Will the Industry Do?
As mentioned earlier, legal systems around the world will deal with AI accountability one way or another as real-life cases arise. Instead of forging ahead on their own with best practices and codes of ethics that are neither legally binding nor enforceable, tech companies and industry collectives should aggressively seek to work cooperatively with governments to find solutions to the AI accountability gap.
Mark Beccue
Principal Analyst, Tractica