The Sedona Conference WG13 on AI & Law: Part 3

In my previous two blog posts, I covered the first six panels of the inaugural Sedona WG13. If you haven’t read them yet, you can do so here and here!
Patenting and Copyrighting of AI-Generated Content
This panel featured Dr. Ryan Abbott, a Professor of Law and Health Sciences at the University of Surrey School of Law, Adjunct Associate Professor of Medicine at the David Geffen School of Medicine at UCLA, partner at Brown, Neri, Smith & Khan, LLP, and a mediator and arbitrator with JAMS, Inc.; Scott M. Kelley, a partner at Banner Witcoff; Lestin Kenton, a director in Sterne Kessler’s Electronics Practice Group; Matthew D. Powers, the founding partner of Tensegrity Law Group; and Kathi Vidal, the former Under Secretary of Commerce for IP and former Director of the USPTO.
The panel kicked off with a discussion of the patentability of AI-assisted inventions. The USPTO has issued clear guidance that an AI model cannot be named as a sole inventor on a patent application, and the Federal Circuit has upheld this position. Therefore, it’s apparent that some amount of human contribution is necessary for a patent to be awarded. The thorniest and most ambiguous patent issues arise when AI is used as a sort of “co-inventor,” i.e., as a tool that contributes to the inventive process. To that end, the USPTO has issued guidance to inventors and examiners to help them assess whether the human contributions to an AI-assisted invention are significant enough to warrant patent protection. Specifically, it provides five guiding principles that can be used to apply the Pannu factors to an AI-assisted patent application. They are as follows:
- A natural person’s use of an AI system in creating an AI-assisted invention does not negate the person’s contributions as an inventor.
- Merely recognizing a problem or having a general goal or research plan to pursue does not rise to the level of conception.
- Reducing an invention to practice alone is not a significant contribution that rises to the level of inventorship.
- A natural person who develops an essential building block from which the claimed invention is derived may be considered to have provided a significant contribution to the conception of the claimed invention even though the person was not present for or a participant in each activity that led to the conception of the claimed invention.
- Maintaining “intellectual domination” over an AI system does not, on its own, make a person an inventor of any inventions created through the use of the AI system.
While it will likely take some time for the boundaries of AI-assisted patentability to be drawn as cases make their way through the courts, this guidance should prove helpful for the most common instances in which AI is used to assist invention. To that end, the panelists discussed how our patent system is grounded in economic considerations, and how policymakers will need to be somewhat liberal in granting patents to AI-assisted inventions if they wish to encourage the use of AI in innovation. Of course, this policy will need to be balanced against other considerations, such as discouraging the abuse of our patent system by foreign adversaries who might seek to flood it with patents for wholly AI-generated inventions.
Another issue that policymakers will need to address is how to deal with inventors who hide their use of AI in the inventive process. Panelists pointed out that in the future, courts might need to “depose” an AI model by subjecting it to some sort of post-hoc forensic analysis in order to determine whether it contributed to an invention. While this would currently be extremely difficult to implement on a large scale, especially since so many AI models are open source, advancements in technology or AI model licensing frameworks may make this possible in the future.
Many of the same issues that exist with regard to patentability also persist in the copyright context, perhaps even more so. AI models can already generate relatively creative short-form content such as poems, short stories, and images, and they are rapidly improving to the point where they will soon be able to generate longer outputs, such as novels or full-length feature films. What amount of human contribution is necessary to grant an AI-assisted creative work copyright protection? This question has not been conclusively answered, although the Copyright Office has again provided us with some guidance. Much as with patents, the office has been adamant that wholly AI-generated creative works will not be granted copyright protection, but things are much less clear with respect to AI-generated works that have been modified or contributed to by human authors.
While the US has been relatively forward-thinking in its approach to protecting AI-assisted invention and creation, not all jurisdictions have taken the same approach. For example, the panelists mentioned that France has eschewed most copyright protections for AI-assisted works because French copyright law exists to protect the moral rights of human authors. At the other end of the spectrum, the UK is one of the few jurisdictions that grants copyright protection to entirely AI-generated works.
AI and the Courts: Judicial Perspectives
The next panel tackled how widespread use of AI tools is affecting the judiciary. The panelists were the Honorable Allison Goddard, U.S. District Court, Southern District of California; Professor Maura Grossman, Research Professor at the University of Waterloo; Justice Jill R. Presser of the Ontario Superior Court of Justice; the Honorable Xavier Rodriguez, U.S. District Court for the Western District of Texas; and the Honorable Sam A. Thumma, Arizona Court of Appeals. The judges on the panel raised a variety of concerns based on how they have already seen, or anticipate seeing, AI used as a confounder in litigation. For example, many voiced concerns about the potential impact of generative AI and deepfakes on family law cases. Using any one of the numerous publicly available tools, litigants can now generate fake evidence, such as audio or video, that is damaging to the other party. The panelists expressed concern about how difficult it is becoming to detect such deepfakes and how they could be used to muddy the facts of cases and serve as vehicles for perjury. One particularly strong worry shared by many was that ex parte temporary restraining orders might be issued based on AI-falsified evidence, thereby depriving litigants of their due process rights.
Another area in which the panelists foresaw AI potentially having negative impacts is as a tool for predictive policing, in which machine learning models might be used to determine which “high-risk” areas police should patrol. While this could improve police efficiency and lead to safer communities, it could also introduce extensive bias if the models are not appropriately tuned and de-risked. Panelists also expressed concern that AI might be applied to sentencing and bail determinations. While this practice could remove the biases of individual judges from the equation, it could also introduce widespread racial or economic biases of its own.
One issue that the panelists universally would like to see addressed is the education of the judiciary. While the Ninth Circuit, in particular, has been extremely proactive in addressing the impact of AI on the judiciary and has even taken the time to test individual AI tools, this level of AI education is not representative of the judiciary as a whole. The panelists repeatedly stressed the need for both state and federal judges to keep abreast of AI technology trends and expressed hope that incoming law clerks could serve as front-line educators for judges on emerging AI tools and uses.
While there is currently no national standard on AI for the federal judiciary, the panelists encouraged judges to begin adopting AI search and discovery tools, combined with appropriate post-hoc fact-checking, to help streamline legal research. While they cautioned that judges should avoid using AI as a dictionary to define the plain meaning of words for statutory interpretation and related purposes, they posited a number of other scenarios in which they foresee AI being useful to judges, e.g., summarizing long articles, generating transcripts of depositions or trials (look out, stenographers!), coming up with analogies, toning down letters to attorneys, drafting internal chambers guidelines, generating chronologies of the events in a case (for use in Rule 16 conferences), and perhaps even outlining and generating first drafts of judicial opinions.
What the Future of AI Holds
The final panel of the conference was, fittingly, future-focused, tackling a broad swath of emerging AI uses and trends. The panelists included Serge D. Jorgensen, a principal in the consulting group at Crowe LLP; Professor Gary Marchant of the ASU Sandra Day O’Connor College of Law; Robyn Shepard of Digital Mountain, Inc.; and Lucky Vidmar, Associate General Counsel at Microsoft. The panelists once again emphasized the need to prioritize unbiased data curation, address model biases and limitations, advance research into explainable AI techniques, and grapple with data drift. They looked to the distant horizon of AI use and explored a number of intersections with other developing fields, such as neuroscience. Unfortunately, I took far fewer notes on this panel than the others, probably because I was so engrossed by it, so I can’t speak to many of the specifics.
Moving Forward
The Sedona Conference WG13 on AI and the Law was an incredible experience, and I’m grateful to Kenneth Withers and Professor Gary Marchant for making my attendance possible. I encourage technologists, policymakers, and legal practitioners to get involved with Sedona in whatever way they can, to consider attending future working groups, and perhaps even to contribute to future Sedona Conference publications!