On September 28th, Professor Ben Sundholm’s forthcoming paper, Navigating the Frontiers of MedTech, was workshopped at the Seventh Junior Faculty Forum for Law and STEM at Penn Carey Law School. The paper, which analyzes the doctrinal reforms needed in response to the use of adaptive and opaque artificial intelligence systems in medicine, will be published in the Arizona State Law Journal. Here is the abstract:
Recently, the medical community has been buzzing about the emergence of new types of artificial intelligence tools that are adaptive and opaque. These systems are adaptive insofar as they can learn to improve their capabilities over time. They are opaque because they produce outputs by making cascades of complex calculations that cannot be fully understood by humans. Although the adaptiveness and opacity of these tools have many promising implications, these same features also pose difficult challenges for traditional legal doctrines. Unfortunately, to date, proposals to address these doctrinal difficulties have been fragmented and limited in scope. As a result of these partial and disjointed reform efforts, the challenges posed by adaptive and opaque artificial intelligence systems remain unresolved and their exciting medical potential unrealized.
This Article charts a path beyond these piecemeal reform efforts. Specifically, I propose a more comprehensive blend of reforms to doctrines in both public and private law to unleash the exciting potential of these technologies in a way that is safe and effective.
With respect to public law, I suggest that the U.S. Food and Drug Administration revise its existing framework for regulating medical technology. Traditionally, the agency approves medical tools based on a review of their past performance, a snapshot that cannot account for how a system behaves once it continues to learn in the field. Because this ex ante regulatory model is ill-suited to systems that evolve over time, I suggest that the agency adopt a more forward-looking and flexible regulatory framework.
If the agency adopts my proposal, reforms to doctrines in private law will be needed. This is so because, pursuant to Supreme Court precedent, protection from civil liability is not guaranteed for medical tools approved under the sort of forward-looking regulatory approach I recommend. The specter of civil liability is a problem because traditional tort doctrines are ill-suited to address harms arising from the use of adaptive and opaque artificial intelligence systems. To overcome these challenges, I propose leveraging a particular form of enterprise liability known as common enterprise liability. This doctrine can supplement the forward-looking regulatory approach I propose by filling doctrinal gaps to ensure compensation is available for those injured by adaptive and opaque artificial intelligence systems.