Draft:Mata v. Avianca
Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), was a personal injury case heard by the United States District Court for the Southern District of New York and decided in June 2023.[1] It became widely known for the lawyers' use of ChatGPT for legal research: the AI tool "hallucinated" a series of non-existent cases and provided fictitious summaries of them. The case was dismissed, and the plaintiff's lawyers were sanctioned.
Facts
The case was brought by Roberto Mata, who sought damages against Avianca for injuries he sustained during a 2019 flight from El Salvador to New York City, when a metal food serving cart struck his left knee.[2] The suit was initially filed in state court, but Avianca removed it to the federal district court.[3]
In federal court, Avianca filed a motion to dismiss the lawsuit as time-barred by the two-year limitation period contained in the Montreal Convention, a multilateral treaty governing liability in international air travel to which both the United States and El Salvador are parties.
In response, the plaintiff's lawyers, Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman, filed an opposition brief citing a number of cases in support of the proposition that the Montreal Convention limitation period could be disregarded in favor of New York state law, which provides a more generous three-year limitation period. They also argued that Avianca's bankruptcy had tolled the limitation period.[3]
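The difference between the two periods was dispositive: measured from a 2019 injury, a two-year period expired in 2021, before the suit was filed, while a three-year period ran into 2022. A minimal sketch of that deadline arithmetic, using hypothetical dates (the exact injury and filing dates below are assumed for illustration, not taken from the court record):

```python
from datetime import date

# Illustrative dates only (assumed, not drawn from the record):
# the injury occurred on a 2019 flight; the suit was filed in early 2022.
injury_date = date(2019, 8, 27)
filing_date = date(2022, 2, 2)

# Montreal Convention, Art. 35: the right to damages is extinguished
# two years after the date of arrival at the destination.
montreal_deadline = injury_date.replace(year=injury_date.year + 2)

# New York's personal-injury limitation period is three years.
ny_deadline = injury_date.replace(year=injury_date.year + 3)

print(filing_date <= montreal_deadline)  # False: barred under the Convention
print(filing_date <= ny_deadline)        # True: timely under New York law
```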
The defense filed a letter stating that they could not locate many of the cases and authorities cited by the plaintiff. The plaintiff's lawyers did not immediately withdraw the affirmation in opposition or otherwise address the apparent non-existence of these cases.
Sanctions
The court issued a supplemental order directing Mr. Schwartz to show cause why he ought not be sanctioned under Rule 11(b)(2) of the Federal Rules of Civil Procedure (as well as the court's inherent powers) for citing non-existent cases and submitting copies of non-existent judicial opinions as annexes.[4]
In response to the court's order to show cause, the plaintiff's lawyers filed a memorandum admitting that they had used ChatGPT to research the cases cited. Mr. Schwartz stated that he had asked the chatbot whether one case in particular, Varghese, was real, and ChatGPT assured him that it "is a real case".[2] When he asked for the source, the chatbot stated that the case "does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis".[2]
The plaintiff's lawyers acknowledged their mistake and stated that they had no intention of defrauding the court.[5]
Rule 11 of the Federal Rules of Civil Procedure requires that parties in litigation certify "that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances, the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law".[6]
Judge Castel found that the lawyers had violated Rule 11 of the Federal Rules of Civil Procedure by submitting false information and fake legal arguments, failing to read the cases they cited, and swearing to the truth of their affidavit with no basis for doing so.[7]
The court found that the attorneys acted in bad faith: had they apologized and withdrawn their brief after opposing counsel questioned it, sanctions would probably not have been at issue.[8]
The plaintiff's lawyers were fined $5,000, and the court required them to send letters enclosing copies of the opinion and sanctions order both to their client and to the six judges whom ChatGPT had falsely identified as authors of the fake authorities.
Aside from the sanctions issue, the court considered and rejected the plaintiff's assertion that it could apply the limitation period specified by state law instead of the Montreal Convention, and dismissed the case as time-barred.
Reception and implications
Judge Castel's decision warned legal professionals against adopting artificial intelligence tools without verifying the accuracy and reliability of their results:
In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings...[9]
The court affirmed the role of technology and the use of artificial intelligence in legal practice, while holding that lawyers must comply with the existing rules to ensure that their filings are accurate. This position aligns with the duty of competence in Rule 1.1 of the ABA Model Rules of Professional Conduct, which requires lawyers to maintain the requisite knowledge and skill and to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.[10]

The Mata case was notable both for the brazenness of the reliance on ChatGPT and because it is believed to be the first instance of sanctions for improper reliance on generative artificial intelligence.[11]
References
- ^ Mata v. Avianca, Inc., 678 F. Supp. 3d 443, No. 22-cv-1461 (PKC), 2023 WL 4114965, at 449–450 (S.D.N.Y. June 22, 2023)
- ^ a b c Maruf, Ramishah (2023-05-27). "Lawyer apologizes for fake court citations from ChatGPT". CNN. Retrieved 2024-05-14.
- ^ a b Brodkin, Jon (2023-06-23). "Lawyers have real bad day in court after citing fake cases made up by ChatGPT". Ars Technica. Retrieved 2024-05-14.
- ^ Memorandum of Law re: Order to Show Cause (Doc. 33), filed by Levidow, Levidow & Oberman, P.C., and Steven Schwartz, at 9.
- ^ Memorandum of Law re: Order to Show Cause (Doc. 33), filed by Levidow, Levidow & Oberman, P.C., and Steven Schwartz, at 5.
- ^ Fed. R. Civ. P. 11
- ^ Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), slip op. at 34 (S.D.N.Y. June 22, 2023) (Castel, J.), available at https://law.justia.com/cases/federal/districtcourts/newyork/nysdce/1:2022cv01461/575368/54/
- ^ Professional Liability Reporter, 48 No. 9, NL 6 (September 2023).
- ^ Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), slip op. at 1 (S.D.N.Y. June 22, 2023) (Castel, J.), available at https://law.justia.com/cases/federal/districtcourts/newyork/nysdce/1:2022cv01461/575368/54/
- ^ American Bar Association, Center for Professional Responsibility, Model Rules of Professional Conduct (2013), http://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct.html
- ^ Samuel Mallick, Chapter 9. Generative AI in the Law, 2023 Advanced Consumer & Com. L. 9-IV (State Bar of Texas ed., 19th Annual TXCLE Advanced Consumer & Commercial Law 2023).