Draft:Ethics and Accountability in Legal AI: Lessons from the Mata v. Avianca Case


Background


The Mata v. Avianca case, arising from a legal dispute in New York, not only sheds light on the complexities of AI implementation in the legal profession but also underscores the critical importance of ethics and accountability in AI-driven decision-making.[1] The case revolves around allegations of misconduct by the plaintiff's attorneys, who used an AI program for legal research. The tool generated fictitious cases and fabricated legal citations, including nonexistent case law. These errors led to severe repercussions: the New York federal judge sanctioned the attorneys for their negligent conduct.[2]

An attorney should understand the risks and benefits of the technology used in connection with providing legal services. How these obligations apply will depend on a host of factors, including the client, the matter, the practice area, the firm size, and the tools themselves, which range from free and readily available to custom-built, proprietary formats.[3]


Disclosure and Accountability

An attorney should consider disclosing to their client that they intend to use generative AI in the representation, including how the technology will be used and the benefits and risks of such use.[4]

In Pennsylvania, a federal court issued a standing order requiring each counsel (or self-represented party) to disclose whether he or she has used generative artificial intelligence ("AI") in the preparation of any complaint, answer, motion, brief, or other paper filed with the court, including in correspondence with the court. The order directed counsel to disclose, in a clear and plain factual statement, that generative AI has been used in any way in the preparation of the filing or correspondence, and to certify that each and every citation to the law or the record in the filing has been verified as authentic and accurate.

In Ohio, a federal district court prohibited attorneys from using artificial intelligence in the preparation of any filing to be submitted to the court.[5]

The Mata case highlights the issue of accountability in AI utilization. When confronted with the discovery of the fabricated content, the attorneys responded with delay and evasion, exacerbating the gravity of their misconduct. The judge's decision to sanction the attorneys underscores the principle that legal professionals must be held accountable for their actions, particularly where artificial intelligence is involved. Just as a lawyer must make reasonable efforts to ensure that a law firm has policies reasonably assuring that the conduct of a nonlawyer assistant is compatible with the lawyer's own professional obligations, a lawyer must do the same for generative AI. Lawyers who rely on generative AI for research, drafting, communication, and client intake risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants.[6]

Confidentiality

References
  1. ^ Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023)
  2. ^ Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023)
  3. ^ Practice Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, 2023 WL 11054756, at *1 (Nov. 16, 2023)
  4. ^ Practice Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, 2023 WL 11054756, at *2 (Nov. 16, 2023)
  5. ^ Use of Generative AI, U.S. Dist. Ct. Rules N.D. Ohio, Boyko-AI
  6. ^ Florida State Bar Association Committee on Professional Ethics, FL Eth. Op. 2024