The Promise and Perils of Using AI at Law Reviews

The legal world is no stranger to technological advancements, but the surge in artificial intelligence (AI) capabilities is now reaching even law reviews. Every submission cycle, Scholastica – the largest platform for law-review submissions – releases advice from outgoing student editors, and the latest insights reveal a startling trend: law review editors are not just considering using AI to review articles – some are hinting they are already doing so.

One editor noted that “AI will likely increase the efficiency of the selection and editing processes.” Another editor said, “I’m sure that selecting articles will no doubt use the same technology that we are discussing in our articles to also assist in choosing articles.” These statements reflect a growing acceptance of AI as an editorial tool, yet they gloss over the attendant risks and the precautions those risks demand.

In a recent article, we delve into the ethics of using AI not only for writing but also for reviewing legal articles. Our research uncovered several concerning issues that demand attention from the legal community.

First and foremost is the problem of accuracy and accountability. AI systems, particularly large language models like ChatGPT, are prone to hallucinations – generating false or misleading information that appears plausible but has no basis in fact. In the context of legal scholarship – including the selection of articles for law reviews – where precision and accuracy are paramount, this tendency could have serious consequences. Among other things, it may perpetuate existing biases. Imagine an AI algorithm trained on all law review articles submitted and accepted over the past few years (insofar as such data is available). Any pattern in that data that reflects bias – political, gender-based, prestige-based, or otherwise – will likely lead the algorithm to select articles exhibiting the same bias.

Moreover, there are significant issues surrounding transparency and an algorithm’s ability to explain to humans how it reached its conclusion. Many AI systems operate as black boxes, making decisions through processes that are opaque even to their creators. This lack of transparency is particularly problematic in the legal field, where the ability to understand and scrutinize reasoning is fundamental to the practice of law. The same is true for the law-review selection process: For authors, the selection process is already something of a black box, as they often receive boilerplate rejection decisions. Delegating selection to AI could make this worse, as editors might not be able to explain why a manuscript was rejected beyond “the AI recommended it.” Of course, this is the pessimistic view. AI might also eliminate some bias or even help editors provide feedback to authors with less effort.

In another recent article, Hadar Jabotinsky and Michal Lavi argue that, beyond accuracy issues, there are also privacy and intellectual property considerations to contend with. Commercial AI firms often use the texts uploaded to their models to enhance training. While this does not necessarily mean the manuscript becomes publicly available, it does raise two significant issues. First, the AI firm has access to the file, potentially creating a risk of misuse or unauthorized distribution. Second, if another scholar innocently asks the AI for advice on a related topic, the result might closely resemble the text on which it was trained, potentially leading to unintentional plagiarism or idea theft. These concerns underscore the need for informed consent from authors before their manuscripts are uploaded to any AI system. Failing to obtain such consent not only raises ethical issues but could also potentially infringe on copyrights.

In our article, we argue for the development of explainable AI systems and the implementation of robust oversight and auditing mechanisms to ensure fairness and accountability. However, at a bare minimum, law reviews should disclose how AI is used in their processes and obtain informed consent from authors.

Law review editors would do well to exercise caution when incorporating AI into their work. Publishers already follow varying guidelines on how to use it. For instance, Springer asks its peer reviewers not to upload manuscripts to generative AI models and to disclose any use of AI in evaluating a manuscript. Such guidelines recognize the potential pitfalls of relying too heavily on AI in the peer review process. Conversely, and perhaps alarmingly, law reviews seem to lack any guidelines at all – whether for submission or for review.

A possible model for compromise can be found in the treatment of AI in adjudication. The UK government, for example, recently released guidelines on AI use in the courts. While cautiously approving the use of AI for writing legal opinions, the guidelines discourage its use for legal research or analysis, because of AI’s potential to fabricate information and provide misleading, inaccurate, or biased content. This measured approach underscores the delicate balance between leveraging AI’s capabilities and maintaining the integrity of legal processes.

In any event, it is important that we approach AI with both enthusiasm and caution. The potential benefits are immense, but so too are the pitfalls. By fostering open dialogue, implementing thoughtful guidelines, and always prioritizing the core values of legal scholarship, we can ensure that AI enhances rather than undermines the vital work of law reviews.

This post comes to us from Hadar Y. Jabotinsky at the Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets, Crises and Technology and at Zefat Academic College, and from Professor Roee Sarel at the Institute of Law and Economics, University of Hamburg. It is based on their recent article, “Co-Authoring with an AI? Ethical Dilemmas and Artificial Intelligence,” available here.
