Remember earlier this year when two legal scholars and scientists had GPT-3.5 take the multiple-choice portion of the bar exam and it failed to pass? Well, the second time’s a charm, as GPT has now passed not only the multiple-choice portion, but also the essay portion, scoring in roughly the top 10% of test takers.
Only this time, it was the just-released GPT-4 taking the exam, the latest AI model from OpenAI that the company says “exhibits human-level performance on various professional and academic benchmarks.”
For this latest test, those two legal scholars and scientists, Daniel Martin Katz and Michael Bommarito, collaborated with the legal AI company Casetext, whose recently launched AI legal assistant, CoCounsel, the company has now confirmed is powered by GPT-4.
“GPT-4 leaps past the power of earlier language models,” said Pablo Arredondo, co-founder and chief innovation officer for Casetext. “The model’s ability not just to generate text, but to interpret it, heralds nothing short of a new age in the practice of law.”
In their prior paper, “GPT Takes the Bar Exam,” Katz and Bommarito described how they had put the GPT version released in late 2022 to the test of the bar exam, only to have it fall short of passing.
In a forthcoming paper, they will detail how GPT-4 passed the multiple-choice portion and both components of the written portion, exceeding not only all prior large language models’ scores, but also the average score of real-life bar exam test takers.
The implications of all this, Casetext said in a press release today, “go far beyond passing the bar exam.”
Casetext’s CoCounsel product, as I explained when it launched March 1, performs seven core functions, or “skills,” using GPT-4. Lawyers can use it to search a database, review documents, summarize documents, and review contracts for policy compliance.
They can also use it to extract data from contracts, draft a legal research memo, and prepare for a deposition.
Watch for a LawNext interview next week with Casetext about this latest news.