
First, Do No Harm: Algorithms, AI, and Digital Product Liability

Marc Pfeiffer / Sep 21, 2023

Marc Pfeiffer is the Assistant Director of the Bloustein Local Government Research Center at the Rutgers Bloustein School of Planning and Public Policy.

Some lawmakers and policy advocates in the United States have proposed creating a standalone federal agency to conduct safety reviews of artificial intelligence (AI) and other algorithmic systems, which industry would submit for “approval.” In reality, such an approach is destined to fail. Technological innovation moves far more quickly than government policy development, and the time needed to establish licensing, regulatory, and permitting procedures would slow innovation, along with its potential economic and societal benefits, to a crawl. A new federal agency also needs time to coalesce; staffing it and establishing its practices would be a speculative, drawn-out undertaking.

Of course, assigning such mechanisms to existing agencies is less risky, but it faces similar challenges plus the added risk of inter-agency conflict. Either way, regulation that foresees every possible risk is unattainable, and no government agency will have answers to all the potential harms of a technology poised to change so much of how we live and work. Instead, we need new procedures to manage risk. One approach would be to reimagine liability laws, updating them for the age of AI.

Enhancing current U.S. liability laws to address algorithmic harms would force developers to consider and manage, as standard practice, the full range of risks their products could create. This approach would pair market-based incentives with an expanded set of liability laws serving as guardrails. Together, they would address the harm that poorly designed algorithmic systems could cause.

Existing liability theory and law have not evolved to address the complexities that algorithm-based products introduce. To correct this, a supplemental legal regime will need to strengthen negligence and product liability practices. That means expanding the duty-of-care principle: developers of digital products must be required to foresee and prevent harms caused by the algorithms used to deliver goods and services, and regulators and courts must recognize algorithmic harms as a type of product defect, injury, or harm.

In a new report for the Center for Urban Policy Research at the Bloustein School of Planning and Public Policy at Rutgers University, I propose a range of updates to liability law to bring it up to date for the age of AI. These include:

  • Federal and state regulatory and justice agencies should be authorized to accept and bring liability complaints alleging algorithmic harm caused by developers who, in breach of their duty of care, offer defective products.
  • Third parties should be permitted to bring class action lawsuits on behalf of groups or society at large, and judges should have the authority needed to manage and consolidate similar cases.
  • Federal definitions will likely need to preempt individual state policies. At the same time, provisions may be necessary to allow new laws or regulations to be negotiated and enacted to address the potential harms of new products as they develop.
  • Matrices of harms and penalties must be developed that address the range of liability, from incidental to substantial and from individual to societal. At the extreme of substantial societal harm, penalties must be significant enough to discourage undue liability risk-taking.

Ultimately, the developers of AI and other algorithmic systems must be incentivized by their liability insurers to engage in harm prevention during development and deployment. Liability insurers would require that sound harm mitigation standards be met to secure and maintain coverage.

This approach motivates developers to identify and mitigate potential algorithmic harms before new products are released and to remediate existing ones when harms are discovered. It reflects current trends in cybersecurity, where developers are expected to build security into products and quickly remediate existing products when new risks are found. It also requires carefully defining digital products and algorithmic harms; there is a rich trove of academic, non-profit, and corporate research discussing the range of such harms. Defined harms must be serious enough to affect the public interest.

There must also be a mechanism to permit developers accused of creating unanticipated harm to have a safe harbor if they prove they used contemporary best practices to mitigate any foreseeable potential harms. This process will likely slow development of some digital products. It requires developers to ensure that products are thoroughly tested and that potential adverse outcomes are mitigated before deployment. That may delay or limit returns on investment or extend development cycles. In some cases, application creators may decide to abandon products mid-development if harms cannot be sufficiently managed.

This is as it should be. Society needs sound, algorithm-focused public policies that incentivize harm prevention. Enhancing liability law is one way to do so. We should slow down a bit and stop breaking things.
