I am a policy skeptic. Which I think is a bit funny, because I am (mostly) a rule follower. This can lead to unintended consequences when a rule leads me to play silly buggers; if you want a different outcome, think before you start rule-making. After working at the ABA and then at a lawyer regulator, I have seen too much policy-making revolve around reacting to a specific, narrow case when the policy, if needed at all, should address a much broader scope.

Comment 8 to Rule 1.1 of the Model Rules of Professional Conduct is a great example of this, though hardly the only one. “You have to be competent,” but also “you have to be competent with technology.” The original rule was already broad enough to encompass the goal: it covered a failure to be competent in communications, in the use of legal research materials (using a technology known as paper and ink), and so on. Comment 8 is a political policy for people who want to say something even if it has no impact.

This is top of mind as people start developing artificial intelligence policies. It has already gone in really goofy directions. A teaching resource site at Duke suggests that each faculty member craft their own, which I think would be the worst approach but is probably what is essentially happening at any given university. There is a public Google Sheet with syllabus suggestions for AI “policies”. It is a great resource, but most of the entries are not policies so much as guidance, suggestions, or reminders of best practice (similar to netiquette).

Maybe we should start with a more concrete idea of what a policy is:

2a: a definite course or method of action selected from among alternatives and in light of given conditions to guide and determine present and future decisions
b: a high-level overall plan embracing the general goals and acceptable procedures especially of a governmental body

Definition of “policy”, Merriam-Webster

To that extent, a policy needs to sit at a higher level than a single faculty member, either at the college level or at the university level. As little as I know about academia, I could definitely see the ABA or some other accreditor horning in and specifying that every college have an AI policy. That would be overreach similar to the ABA Model Rule 1.1 comment.

To Policy Or Not To Policy

I do not support AI policies. I think the idea that AI needs a special policy, separate from other tools that rarely get their own (Policy on Books, Policy on Databases, Policy on Word Processors, Policy on Spreadsheets) even though some of them can be misused toward similarly dishonorable or worse outcomes, is starting with the wrong question.

I think most academic honor codes probably already cover the use of artificial intelligence, just the same as the rules of civil procedure and of professional conduct already fully cover the lawyer’s use of artificial intelligence. A good policy is going to look at outcomes, not the methods used to reach those outcomes.

If the existing code or rules do not provide guidance for students or lawyers, then the code or rules should be clarified. I don’t think any law student would be surprised to be disciplined for outsourcing their memo or brief writing to a third party, whether it was an AI tool or a meatspace ghostwriter. We aren’t surprised when lawyers are sanctioned for delegating legal research to a guessing machine or to a paralegal and having them get it wrong. It’s not the tool, it’s the delegation.

The hype train says that artificial intelligence is transformative. Is it, though? It may be in a commercial sense, in that you can build or do anything you want with it, whether constructive or destructive. But a lawyer can’t use AI without reference to the rules of professional conduct. A bank or hospital can’t use AI without considering its compliance requirements. When AI is used to violate rules—as in this story about a company called Nota violating both copyright law and journalism standards—there are consequences, like a lawsuit or having a website taken offline. Whatever transformation artificial intelligence enables remains constrained by rules.

And even if it is considered transformative, we have not created policies for every transformative technology that has come along. Or at least not successfully. How about those bans on laptops in law school classrooms? We haven’t created a professional rule change or interpretation for cloud computing or even for the descendants of the iPhone, two pretty remarkable technology developments of the last 20 years.

But if we are going to go that direction, let’s at least do it with wisdom.

What Do We Want To Do

The goal seems to be to give clarity to students (and perhaps faculty) about when AI can or can’t be used. But already, we’re off the rails. We are probably talking only about generative AI, not all of the other variants of AI we have been using for decades without a policy and without notable downside. You Boolean search purists are probably safe, but anyone running natural language searches or following Amazon’s buying suggestions is probably in a pickle.

For me, this would be an immediate terminus. If we are talking only about one flavor of AI, the generative tools, then surely this is too narrow a technology to need its own policy? But, for the sake of riding this train further along the line, let’s say we do want to police a very specific iteration of a technology.

Another reason I’m not thrilled about these sorts of technology-specific policies is that the technology moves far faster than policy-making and decision-making bodies. Lawyer regulators have shown themselves entirely unable to keep up with technology changes in their rule-making, and I have long suspected Comment 8 was a “welp, it’s something” approach. A technology-specific policy will become out of date quickly, either creating new uncertainty or becoming entirely irrelevant as the technology changes or is eclipsed.

The goal of an academic law school generative AI policy would seem to be to ensure (a) that students learn specific professional skills and (b) that the work product they create is in fact created by them, not a delegate. To achieve that, the options seem to be to (a) ban the use of generative AI entirely, (b) allow it in some instances but not in others, or (c) allow it in any and all situations.

The challenge of (a) and (b) is that there is no good way to determine if generative AI was used in a given instance unless it makes an obvious error. A careful law student or lawyer will catch these errors, like hallucinated cases or misrepresented case holdings, whether or not they use generative AI. Someone who is careless will create work product that contains obvious errors.

Even without a policy, then, we could identify a certain number of people who resorted to generative AI merely by looking at their output. Even more importantly, we could treat everyone the same, whether they used generative AI or not, by judging what they created. It’s the same mindset that managers apply when thinking about hybrid work: are you trying to ensure your staff works a certain number of hours in a certain mode, or are you just trying to see what they’re accomplishing? If the performance is unsatisfactory, it doesn’t matter whether they put in the hours (or whether they used a particular technology or not).

My guess is this is where courts will fall out. There will be cases where they may think generative AI was used but they don’t have the time to care. If the law is correct and the citations are right, it doesn’t matter, in the same way it wouldn’t matter if someone had used a book-technology instead of a computer-technology to do research. I still strongly suspect that we are seeing a lot of sanctions for generative AI use because parties and judges just weren’t really careful about validating the law they were presented with. In a go-along-get-along, rah-rah-civility legal profession, where lawyers laugh when judges make stale jokes, participants could easily decide to assume the best of their colleagues, even when in opposition. Sanctions and discipline referrals (which I think are less than even a slap on the wrist, given the namby-pamby stance of most regulators) are easy to place on people who have clearly violated the rules.

If we are willing to allow generative AI use in all situations, we no longer need a policy. We might state that somewhere (“Feel free to use generative AI”), but you don’t need a policy for that unless you really like policies. That’s a bit like “Call me Professor (or not)”; I don’t need a policy to ask the students to call me something specific.

How Will We Know

We really only have two policy cases, then: a complete or a partial ban. Both have the same problem: you may not know if AI has been used unless there is an obvious outcome issue. We already have tools to measure those outcomes, whether a grading rubric that looks at properly cited cases or a civil procedure rule that says a lawyer will only state actual law in a pleading. But let’s say we still want to create a policy.

I’m assuming a complete ban is not realistic. As I mentioned in a recent meeting with colleagues, my goal in exposing law students to generative AI is to enable them to speak to employers about it. I do not want a student to lose a work opportunity because they were not able to speak to generative artificial intelligence use in research and writing. In any case, it’s a bit out of our hands, as I’m sure it is in most places. Law firms recognize the competitive need to use generative AI in some form, and universities are already jumping feet-first into the “generative AI literacy for all students” pool (although the bloom may already be coming off that rose a bit). At most, we’re looking at constraints on use, not a full-on ban.

Then we need to identify the technology. We will need to be specific that we mean generative AI like chatbots, and perhaps also the generative tools embedded in our legal research platforms. We may or may not mean those built into Microsoft Word and Grammarly. As we identify which products or types of products we include—Lexis Protégé Ask is okay, Lexis Protégé Draft is forbidden, &c.—we then need to think about how we’ll police that use outside of outcomes.

We also need to identify how much generative AI is allowed if it’s not a complete ban. If I write 1,000 words but Grammarly changes 20, is that too much? What if a chatbot creates an 800-word memorandum but I then edit it into a 1,200-word memo and add links, so that, between new text and rewrites, more than 50% of the final product was manually written and 100% of it was reviewed? Is that too much? I think a partial ban would still not want a final product that is 100% generated by AI, but how would you measure that?

Honestly, I think this is where every generative AI policy falls apart. There is no 100% foolproof way of determining if AI was used in legal work. There is also no way to measure how much was used or what impact it had. It’s the same as plagiarism, although at least with plagiarism you may be able to compare the original and the submitted work. Courts don’t care about plagiarism; a smart lawyer is copying winning arguments from pleadings and building out new precedents and templates for their client’s benefit. Legal documents contain a substantial amount of re-use, quoting courts and other legal sources ad nauseam. No law student is going to get dinged for paraphrasing a court’s holding, even substantially re-using the court’s language, unless they get the interpretation wrong.

Because if you have any sort of ban, and then you have a violation, you must have a consequence. Otherwise, the policy is toothless and unnecessary. In a world bound by such a basic concept as due process, I don’t know how you have a generative AI policy that carries consequences when you have less than 100% confidence in a violation.

I have already heard enough about honor code violations to know that enforcement can become a time suck for the faculty involved. It can be disastrous for students who are wrongly accused. I’m not sure who wins in a situation where the outcomes—the memos, the briefs, etc.—didn’t show evidence of errors themselves. The uncertainty taints the entire process no matter the resolution. And that is before calculating the hours instructors and administrators will divert from their primary roles into enforcing the policy.

We haven’t even considered agentic AI, which goes beyond generative AI into new potential hazards. It was interesting to me that a learning management system vendor decided against an agentic AI checking tool for the same reasons that so many people are struggling with generative AI checking:

We did a ton of testing ourselves very early on on the accuracy of AI-detection technologies. We came to the conclusion that actually detecting the use of AI can really not be done in an accurate way. There are lots of stories of students being accused of using AI where they haven’t used it, of students with a particular disability or students that have English as a second language being flagged more often.

Our research is coming to the conclusion that trying to detect agentic usage reliably and ethically is currently impossible. We didn’t feel we could bring something that was reliable and responsible enough to really go in that direction.

Nicolaas Matthijs, Blackboard’s Chief Product Officer
“Blackboard Executives Say Catching AI Cheating Is a Lost Cause. This One Isn’t Worried.”, Sonel Cutler, The Chronicle of Higher Education, March 16, 2026

Perhaps I am similarly unworried because generative AI use in law school and law practice already has consequences. No policy needed. A law student who has successfully cleared all of their course assessments using generative AI will still need to cross the barrier of the bar examination. We know that generative AI use has an impact on executive function; I am guessing that law students who rely on it will find themselves lacking key knowledge or skills when the bar exam rolls around. If they are able to clear that hurdle too, there is always practice and the rules that provide boundaries for their professional future.

It can also have consequences earlier, if assessments are designed to force outcomes into compliance with expectations. Require something on paper, in handwriting. Set a task that can only be completed in a timed, in-class setting, where generative AI use (and validating its output) isn’t possible. A colleague mentioned how they use in-person, one-on-one discussions with students to provide a checkpoint that does not allow for generative AI use. This can all be accomplished not by creating a policy but by instructors assessing student progress in ways that are human: face to face, in spoken or handwritten communication.

I want law students and lawyers to use generative AI. They should be as fluid with it as they are with natural language searching in legal research. They should be able to draft documents and help their clients as efficiently and economically as possible. They don’t need, as I have heard so often recently, to be “scared” into partly or entirely avoiding AI. None of that suggests I want them to be sloppy and unprofessional.

And I won’t want them to be sloppy or unprofessional when we get the next transformational technology. Or the next. We have ways to prepare people for practice and to actually engage in ethical practice using “technology”. You don’t need a policy to achieve that.