A recent survey by DeepL, an AI translation service, reveals a risk of continued hallucinations and inaccuracies with the use of AI. Spoiler alert: 96% of those surveyed are using AI, and 71% are using it without the approval of their organization (aka shadow use), mainly to deliver work faster.

Why It Hit Home

The survey resonated with me for several reasons. I recently wrote an article about the pressures on lawyers and legal professionals to use AI without spending the time to check the results. My concern was, of course, the propensity for hallucinations and inaccuracies in AI outputs.

My point was that merely saying, sanctimoniously, that lawyers who don’t check cites are reprobates with no excuse ignores some of the real-life pressures on lawyers and legal professionals. Perhaps, in addition to focusing on penalties and blame, we should as a profession also be concerned with and address those underlying pressures. As I put it in the article, “The ability of LLMs to shortcut work is having its impact on expectations and what clients will pay for and, in turn, even what senior partners may demand of associates. And that impact may lead to even greater hallucinations and inaccuracy problems down the road if they go unrecognized.”

Some Examples

One example I mentioned was the local counsel who receives a 50-page brief from renowned national counsel containing literally hundreds of cites. Is it realistic to demand that local counsel take the time, likely non-billable in the eyes of national counsel, to review all the cites in depth? That’s not an insignificant amount of time: finding and reading 100 cases can’t be done in an hour or two, or even in 20 or 30. Another situation: the associate under intense billable-hour pressure who is told not to bother checking the cites because the client won’t pay for it.

Neither situation is an excuse in and of itself, but both are realities that should not be ignored.

It Better Be Billable. And Collectible

On top of this came the recent report of a law firm that now demands 2,400 hours of work per year from associates, most of which (2,000) must be billable. It goes without saying that the 2,000 hours must also be collectible to really count. So while AI may help get the work done, there’s a temptation either to skip things like cite checking or at least not to do it thoroughly enough to ensure there are no inaccuracies.

The DeepL Study

The DeepL survey data confirms the pressures that could create exactly the problems I referenced. DeepL surveyed 1,000 US legal professionals spanning law firms, in-house legal departments, independent practitioners, and public sector teams to understand how AI adoption is transforming the industry.

First, as set out above, DeepL found that the use of AI tools is ubiquitous: 96% of the legal professionals surveyed reported using AI tools, and nearly half (47%) said it’s essential to their daily workflows. A full 59% say AI helps them complete routine work faster.

So legal professionals are using AI frequently, and it’s naïve to think otherwise.

But here’s a bit of a surprise: despite widespread adoption, the vast majority (71%) of legal professionals admit to using AI without formal approval from their organization, with 35% doing so frequently. That unapproved use is concerning in itself.

But the reasons for shadow use are even more concerning and in line with my conclusions. The main drivers are pressure to deliver work faster (35%), insufficient functionality in approved tools (32%), and recommendations from managers or senior colleagues (30%). Unclear policy is not the leading cause of this unapproved use; it was mentioned by only 24% of respondents.

Then, on the heels of this study, came the article by Paul Hodkinson and Krishnan Nair in Law.com entitled Inflating Hours is Widespread, Lawyers Say After Associate’s Ban. The article referenced a UK lawyer who was banned by regulators for overbilling. According to the article, “several lawyers have responded to the punishment by saying the practice of inflating hours is widely accepted and that firms place unreasonable demands on their workers.”

Both unauthorized AI use and time inflation represent predictable responses to the same underlying problem: unrealistic expectations about what can be accomplished within billable constraints.

Why Is This Worrisome?

So lawyers and legal professionals are using AI for all sorts of things, whether or not their organizations have approved or allowed it. And they are using it to save time. If they are using it to save time, that’s probably because they are under pressure to get work done faster. And that, in turn, leads to a temptation to skip the extra time needed to check outputs carefully for accuracy.

The other interesting revelation is the recommendation from managers and senior colleagues to use the tools without approval. In other words, higher-ups are saying to ignore the guidelines to get the work done. How likely is it that these higher-ups will say, on the one hand, use the tools even if they aren’t organizationally approved, and on the other, make sure you take the time to check everything carefully? Explicitly or implicitly, that may very well be the suggestion. And where does that leave junior personnel?

Add to this the intense pressure to bill for work that can be collected, the billable-hour quotas, and the ready ability to fudge, and you have a recipe for disaster. If you can’t bill for it, how likely are you to do it? And if you do take the non-billable time to do it, how will you make that up and reach your quotas?

What’s the Answer?

Rather than just blaming the lawyers, we need to address the issues across the board and ensure recognition and accountability both among senior lawyers and, as I pointed out in my previous article, among clients who demand the use of AI but refuse to pay for things like robust cite checking. We can’t leave younger lawyers out on a limb, deciding whether to do the necessary work and perhaps not bill for it, while at the same time imposing billable quotas and work demands.

The responsibility for hallucinations and inaccuracies does not rest with the individual lawyer alone. It also rests with the senior partners and clients who expect and demand AI use. They must recognize their accountability in creating the demands and pressures that discourage the time-consuming work of checking cites.

Just blaming the individual lawyer is too easy. The DeepL survey confirms this very point.