Lawyer Who Used ChatGPT for Court Filing Faces Sanctions

A New York attorney's use of OpenAI's ChatGPT for legal research has led to the dismissal of a case after the filing was found to contain fabricated information. Despite the attorney's apology, he, his colleagues, and their firm were held accountable for neglecting their professional responsibilities: submitting a filing containing false citations and then standing by it when the court asked for an explanation. Using AI for legal research is not inherently improper, but attorneys remain responsible for verifying the accuracy of their filings. This case may set a precedent for how AI-generated material must be authenticated in legal proceedings.

Technological advances have made AI-assisted legal research an attractive tool for attorneys, but the reliability of AI-generated information remains a concern. In this instance, the plaintiff's legal team filed a document that cited non-existent legal cases generated by ChatGPT. Upon discovering the fictitious citations, the court ordered the plaintiff's legal team to explain the bogus sources. The offending lawyer claims he was unaware of the system's limitations, but after 30 years in the profession, his willingness to accept an AI's assurances of accuracy at face value is troubling.

The punishment was mild: the two lawyers involved incurred a $5,000 fine and were required to notify their clients and the judges cited in the fabricated opinions of the misstep. Even so, the episode serves as a warning to the legal industry about the need to authenticate AI-generated material in legal proceedings.
