The always astute Kyle Dumont of Morgan Lewis made an observation to me the other day: “They’re so busy that our practitioners need to realize not a 10% improvement but a 10x improvement in productivity before they will take the time to investigate, let alone implement and incorporate, a new tool.”

Kyle’s insight reminded me of one of Jason Barnwell’s most quotable lines: “If capacity must increase by 10x, our current approach breaks, as the option of a 10x increase in hiring is simply off the table.” (btw, congrats to Jason on his recent appointment as Microsoft Legal’s first-ever General Manager for Digital Transformation—a development worth noting)

Bruce MacEwen introduced his own 10x into the discourse in the conclusion to his excellent post on our scalability problem:

Some years ago the head of “Google X”–the name at the time for its totally out-there incubator for new projects–described their ambitions with an analogy: “If you tell me to build a car that gets 50 mpg, I can do it with off-the-shelf stuff put together with that express end-goal in mind; if the goal is 500 mpg, I need to forget everything I know and leave it behind me.”  (Google X is now named “The X Company,” and they call themselves “the moonshot factory.”)

I concur that the threshold for investing in change is high (Kyle), yet the need for material change is inevitable (Jason/Thanos), and that such change requires a fundamental rethinking (Bruce). But to avoid being too agreeable (#boring), permit me to suggest that, maybe, the way we think about change is rather incomplete—in part, because we underestimate the impact of seemingly incremental changes.

Simple Math, Hard To Intuit. I’ll take advantage of Bruce’s mpg example as a jumping-off point (for our friends on the metric system, think km/L). According to the EPA, the average new car sold in the United States is rated at 25 mpg. As noted, already available, conventional methods can improve this to 50 mpg. You would, however, need to achieve Emmett Brown levels of inventiveness to ramp up to the 500-mpg moonshot.

The objective is to consume fewer gallons of gasoline (the constraint). But what if I told you improving mileage from 25 mpg to 50 mpg (2x, +25 mpg) conserves more gas than improving mileage from 50 mpg to 500 mpg (10x, +450 mpg)? For most of us, this violates our intuition—yet it is correct, nonetheless.
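The claim checks out with simple gallons-per-distance arithmetic. A minimal sketch in Python (the 1,000-mile trip is just an illustrative distance; the comparison holds for any fixed distance, since both quantities scale linearly with it):

```python
def gallons_used(mpg: float, miles: float = 1000) -> float:
    """Gallons of gas consumed to drive `miles` at a given mpg rating."""
    return miles / mpg

# Improving 25 -> 50 mpg over a 1,000-mile trip:
saved_conventional = gallons_used(25) - gallons_used(50)   # 40 - 20 = 20 gallons

# Improving 50 -> 500 mpg over the same trip:
saved_moonshot = gallons_used(50) - gallons_used(500)      # 20 - 2 = 18 gallons

print(saved_conventional)  # 20.0
print(saved_moonshot)      # 18.0
```

The 2x improvement at the low end saves more fuel than the 10x improvement at the high end because consumption is the reciprocal of mileage.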


A slightly different frame may enhance clarity. Once we reduce baseline resource costs by 50%, there is no further improvement we can make—save eliminating the cost entirely (e.g., go electric)—that can ever have an equivalent impact.

That is, many forms of productivity improvements are subject to diminishing returns when solving for specific constraints. Most of the benefits are realized at the low, unsexy end of the spectrum. Thus, improving from 1 mpg to 2 mpg (2x, +1 mpg) saves 500 gallons while improving from 100 mpg to 500 mpg (5x, +400 mpg) only saves 8 incremental gallons on the same 1000-mile trip. The 2x leap from 1-2 mpg at the inefficient end of the spectrum is therefore 62.5x more impactful than the 5x leap from 100-500 mpg at the efficient end of the spectrum. This is an area where our intuitions let us down.

No Time To Save Time. Let’s apply the same calculations to something closer to home.

What if I told you improving productivity from 1 contract per hour (“cph”) to 2 cph (2x, +1 cph) saves more time than improving from 2 cph to 50 cph (25x, +48 cph)?

I presume you already updated your priors. But if seeing the arithmetic helps:
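The same reciprocal arithmetic applies. A quick sketch, assuming an illustrative workload of 100 contracts (the comparison holds for any fixed workload):

```python
def hours_needed(cph: float, contracts: int = 100) -> float:
    """Hours to complete `contracts` at a rate of `cph` contracts per hour."""
    return contracts / cph

# 1 cph -> 2 cph on a 100-contract workload:
saved_low_end = hours_needed(1) - hours_needed(2)    # 100 - 50 = 50 hours

# 2 cph -> 50 cph on the same workload:
saved_high_end = hours_needed(2) - hours_needed(50)  # 50 - 2 = 48 hours

print(saved_low_end)   # 50.0
print(saved_high_end)  # 48.0
```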


Feel free to substitute any legal unit of production for “contract.” The math holds where time is the constraint.

And let us not kid ourselves about the centrality of time as a constraint. Despite our decades of debate as to whether time is a useful proxy for value, time remains, indelibly, a resource cost and rate-limiting factor. As Drucker writes, “Time is the scarcest resource and unless it is managed nothing else can be managed… Everything requires time. It is the only truly universal condition. All work takes place in time and uses up time. Yet most people take for granted this unique, irreplaceable, and necessary resource.”

We are time constrained even where we are not money constrained. One of the better talk tracks I’ve encountered recently is Kira co-founder Noah Weisberg discussing the concept of total diligence. Noah notes that the standard due diligence approach on even the least price-sensitive megadeals results in only a small percentage of potentially relevant contracts being reviewed. Not necessarily because of worries about accumulating too many billable hours. Rather, everyone involved is invested in maintaining deal velocity, which limits the time available to conduct diligence. Yet there can be material issues lurking in the presumptively non-key contracts (Noah shares some striking examples of these “deep holes”). Certainly, AI can be used to review the typical small percentage of contracts faster (and it is). But Noah is keenly interested in using AI to augment the review process so that 100% of contracts can be reviewed in some fashion with minimal additional time—i.e., total diligence.

Time is not the only constraint. But time is a key constraint, even where money is not. There is an underappreciated interplay between better, faster, and cheaper—in part, because a narrow view of, and overemphasis on, “cheaper” often induces a counterproductive myopia.


We rarely recognize the outsized impact of reducing low-end friction. Less eloquently than Noah, I have long ranted and raved that my obsession with legal professionals improving their facility with the core technology tools of their trade (Word, Excel, Email, PDF) is not about lawyers using such tools more but, rather, about being able to use them less (because more efficient). This is decidedly unsexy. But it is a simple means to reduce low-end friction—i.e., the type of minor improvement that can deliver massive time savings when starting from a low baseline (e.g., that small but significant leap from 1 contract per hour to 2 contracts per hour).

I share Kyle’s assessment of stakeholders’ demonstrable, 10x improvement threshold for adoption. Having spent much of the last decade, including in my current role, engaged in these conversations, I am confident that most decisionmakers think about the 10x improvement as the leap from the 50-mpg conventional vehicle to the 500-mpg moonshot vehicle, rather than the counterintuitive understanding that the more impactful 10x can be the smaller steps getting from 1 mpg to 10 mpg (depending on what we are solving for).

I am confident most decisionmakers think this way, in part, because most of us think this way about most things, and are mostly correct to do so. Indeed, remaining acutely aware of the unavoidable implementation dip, there is wisdom in demanding fairly substantial ROI on any improvement initiative that consumes finite time and attention, especially in an environment of significant opportunity costs. Most marginal improvements are, in fact, marginal. If you are already driving the 400-mpg vehicle, the modest gas savings of upgrading to the 500-mpg vehicle is unlikely to be cost-effective—better to spend that energy investigating going fully electric. But this can go too far. We encounter too many instances of professionals stuck in a 5-mpg antiquated vehicle unwilling to upgrade to the available, if conventional, 50-mpg alternative because, as they correctly point out but too heavily weight, it is not in fact a 500-mpg moonshot.

Our intuitions are mostly reliable. But we remain subject to some predictable irrationality where they fail us. We frequently fail to recognize sources of low-end friction, let alone understand the outsized impact this friction has on the allocation of our finite resources.

We don’t need to do it all at once. The other day, I committed the minor sin of straining a sportsball metaphor (apparently, I’m a “big metaphor guy”). In my defense, he started it.

I was speaking to a formidable in-house leader who made an observation similar to Kyle’s. With respect to expectations around innovation, he insisted, “Our stakeholders will not be content with us just hitting singles.” (I’m paraphrasing)

I pushed back, respectfully, “If the singles are in separate innings, probably not. You will just strand runners on base. But if you string together singles in the same inning, you put runs on the board, which is key to winning the game.”

There are precious few grand slam opportunities. But there are many opportunities to put runs on the board. If we make potential grand slams our threshold for taking a swing, our strikeout percentage will be high, and we will miss out on many wins.

Just like the steps from 5 mpg to 50 mpg, the leap from 50 mpg to 500 mpg would not be the result of an isolated grand-slam innovation but the combinatorial result of many complementary innovations (cumulative innovation and the expansion of the adjacent possible). The aggregate impact of marginal gains can be significant when they compound.
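One way to see the compounding: successive improvements multiply rather than add. The figures below are illustrative assumptions, not from the post:

```python
# Ten successive 10% productivity improvements compound multiplicatively,
# not additively: the result is well above the naive sum of 100% + 10*10%.
gain_per_step = 1.10
steps = 10
cumulative = gain_per_step ** steps

print(round(cumulative, 2))  # 2.59 -- roughly a 2.6x gain, no single moonshot required
```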

As Alex Hamilton writes in his new, must-read book Sign Here, “We need to recognize that there is no sweeping fix that will make everything alright and that instead, we will have to make lots of small changes to keep improving how we work…so, while it is very human and understandable to wish it weren’t so, there is no silver bullet that will solve everything.”

Alex consoles us, “You might find it depressing to discover that there is no single solution…but there is good news here, too: because many changes can be made as relatively small tweaks, they can also be cheap, fast, and low risk.”

Indeed, many of the successes we see are not the wholesale replacement of an entire process/system (though, sometimes, this is simply unavoidable—for example, a legacy DMS or CLM) but, rather, successes building on each other as teams re-engineer pieces of their process/system until, eventually, they have developed something entirely new without any single, iterative improvement making it feel completely different (the Ship of Theseus effect).

There are many interconnected pieces in our processes. We should consider all of them, and prioritize the limiting factors—i.e., the key constraints—in constructing optimal, integrated operating environments.

In service of this shift toward thinking in integrated processes, systems, and, ultimately, platforms, I commend to you Rob Saccone’s exceptional exploration of interoperability.

Indeed, let me conclude with a sentence from Rob that made me smile so much I stole it for the title of this post, “Succinctly stated, we need to advance our thinking about how humans and technology can better work together, as humans alone are not going to be able to compete against humans + technology….Let me repeat the key part: we need to advance our thinking.”