It is essential to always check the sources of information we rely on, as Steven Schwartz, a New York attorney, recently discovered to his detriment. Schwartz, his associate Peter LoDuca, and their law firm, Levidow, Levidow & Oberman, were fined $5,000 by a judge for submitting a court filing riddled with fake citations generated by ChatGPT, an artificial intelligence language model. The incident, reported by The Guardian, arose from a case in which a man sued Avianca, a Colombian airline, over injuries sustained on a flight to New York City.
In this case, ChatGPT produced six supposed legal precedents, including “Martinez v. Delta Airlines” and “Miller v. United Airlines.” As Engadget reported, the cases turned out to be either riddled with inaccuracies or entirely fictional. Relying on ChatGPT’s fabricated citations had serious consequences for Schwartz and his colleagues, prompting the judge, P. Kevin Castel, to impose the fine. The judge emphasized that while technological advances, including the use of reliable artificial intelligence tools, are acceptable, attorneys remain responsible for ensuring the accuracy of their submissions.
The core failure in this case was a lack of due diligence: Schwartz and his team never verified the accuracy of the citations ChatGPT generated, disregarding their ethical and professional responsibilities as lawyers. Worse, even when the court questioned the legitimacy of the cited cases, they initially stood by the fabrications, compounding their misconduct.
This incident involving ChatGPT is not an isolated case; inaccuracies from AI chatbots have been widely reported. The National Eating Disorders Association’s chatbot, for instance, gave people recovering from eating disorders harmful dieting tips, as Engadget highlighted. And The Washington Post reported that ChatGPT falsely accused a law professor of sexual harassment, citing a non-existent Washington Post article as its source.
While the development and use of AI tools like ChatGPT can bring significant benefits, relying on them uncritically is dangerous. These tools should serve as aids to legal research and analysis, not substitutes for human discernment and verification. Attorneys have a duty to scrupulously check the sources and accuracy of any information they present in court.
Legal professionals must maintain a strong gatekeeping role in ensuring the integrity and accuracy of their filings. That means comprehensively vetting sources, particularly when relying on new technologies such as AI language models. However tempting it may be to lean on the speed and apparent fluency of these tools, they are not infallible; human judgment and critical thinking remain vital in the legal profession.
This case also underscores the need for ongoing evaluation and improvement of AI technologies. Organizations that develop AI chatbots should prioritize rigorous testing and quality-control mechanisms to minimize misinformation and false claims. As the fines imposed on Schwartz and his associates demonstrate, the consequences of relying on flawed AI-generated information can be far-reaching.
Ultimately, AI tools should complement and enhance legal professionals’ work, helping them navigate an increasingly complex landscape of legal information. But attorneys must remain vigilant about the limitations and risks these technologies carry. Balancing the advantages of AI against a firm commitment to accuracy and responsibility is essential if the legal profession is to preserve its integrity and uphold the principles of justice.