I am back in the classroom this semester. As I put together my syllabus, I faced the artificial intelligence section. It is, for me, relatively inconsequential even though I am teaching a research and writing class. I point students towards Grammarly, which the university (not the law school) has a site license for, and I suggest that they use AI in compliance with the honor code. But I am not going to push too much more on this issue. If anything, it feels like we are about to circle back through the information literacy challenge we revisit any time a new technology is visited upon us.
Wikipedia, good or bad. What if it gets better? Google and Bing, good or bad. What if they get worse? Electronic and print formats, which is preferable? Artificial intelligence seems to be the latest tool that we end up focusing on at a level of specificity that is unhelpful. The attempt at nuance–do we use OpenAI’s latest version, which has been pronounced “bullshit”, do we chase down every amalgam of Anthropic and Perplexity, etc.–is like chasing shadows.
It Is Easy–and Pointless–To Ban Technology
The broader perspective might be to, again, focus on information literacy. As I have posted before, I think of this term in a generic way: the ability to navigate information systems with a set of skills that can be applied regardless of the vendor. It’s like financial literacy: understanding expenses and revenue, income and outflow, budgeting and debt is a skill set you can use whether you track them in a spreadsheet or an accounting app. Information literacy should be an operable set of skills that work regardless of the technology.
So the academic discussions about the range of responses, up to and including attempting to ban artificial intelligence (or even the technologies, like laptops and phones, on which it can be accessed), land badly for me. This may be because I maintain a highly adversarial posture towards all technology and believe strongly that bans do not work. In light of this, the best approach is to understand the tool, or the enemy, and identify how you will manage the risk it creates.
Country-blocking will not reliably keep people from a particular country out of your network. Age-related bans on social media sites will not inhibit under-age use (in part because geo-blocking doesn’t work either). Sometimes, as with age bans, it’s not the information seeker that wants the ban so much as others: parents who want control (over devices that they may themselves have provided to the child) or governments attempting to implement scientific research through poorly considered legal tools.
Moreover, age-tiered safeguards only work if the service knows a user’s real age. However, only two of the 50 services in the OECD benchmarking study routinely assure age upon account creation. Most still rely on self-declaration or only assure age in specific cases – such as when suspicious activity is detected or for access to certain features. Some platforms do not assure age at all.
Jeremy West and Lisa Robinson, Too young to scroll? Why governments are cracking down on social media age limits, OECD Blog, June 12, 2025.
There is a rush to try to stop the technology when we see harms it causes, or harms we think it causes. Politicians have initiated “violent” video game bans to try to forestall the daily mass firearms killings in the United States, rather than focusing on controlling the firearms themselves, even through something as simple as the stricter licensing and ownership requirements we have for cars. We know that people who watch a lot of short video are also causing themselves harm, in particular altering their loss-aversion sensitivity (“Individuals with higher short-video addiction symptoms made faster decisions, often at the expense of thorough risk evaluation”). There have not yet been calls for Instagram Reels and TikTok to be shut down.
We focus on the technology that is creating risk rather than focusing on risk mitigation outside of any specific technology. We do that because it’s the easiest approach. This piece over at The Atlantic is a good example: a laundry list of “activity” rather than functional action. But it means we are only looking at one end of the challenge and ignoring the likelihood of the outcomes.
This is a bit funny to me because anyone who has led an organization will already have dealt with this issue. You can make a policy about anything. We can ban laptops in the classroom and disable wireless antennas or use signal blockers to remove some distractions from student agency. We can ban the use of AI for any school work and make it subject to the honor code. We can even ban video games. Inevitably, someone will find a way around a technology barrier. Then what?
An effective ban needs enforcement, and enforcement will result in consequences for violating the ban. This means that you can’t just ban AI or just ban laptops. If there are no consequences, the ban is toothless. But any consequences will require enforcement, and now you have added overhead costs to this problem. If you are going to achieve your goals, you will have to burn some resources.
Many of the laws related to technology shift the burden to the technology provider because governments know they do not have the ability to manage the ban. A provider can nominally comply—like Bluesky did with Mississippi, citing U.S. Supreme Court precedent (which itself suggests the age ban is unconstitutional)—but that just shifts the work to the information seeker in Mississippi. It takes very little skill or resources to appear as though you are using the internet from somewhere you aren’t.
As Sean Connery’s character, Jim Malone, says in The Untouchables: “Now what are you prepared to do?”
Start With The Consequences
Law library managers will have faced this in any number of ways. Let’s start with one of the most basic policies, that regarding attendance. Your work day is 9am to 5pm. What do you do if someone’s late? How do you know they’re late? What’s the enforcement and consequence?
The beauty of the workplace shift the pandemic wrought is that more people realized that we often don’t need an attendance policy. This is particularly true in places with knowledge workers. If I focus on outputs rather than presence, then any potential attendance issue goes away.
I don’t need to have anyone watch a clock and a door and see who comes through when, or delegate that to some app or card-and-clock option. I don’t need to have an enforcement regime, a process for workplace investigations, or even worry about fairness: why can Amelia run late some days but Bethany can’t?
That’s all well and good, but what if people are late? What then? What about a reference desk or people who have to open or close the law library? Those are outputs, specific tasks that happen to have a specific time attached, and so can be measured and, if necessary, dealt with as a performance issue. The focus should be on the goal: what are we trying to accomplish?
I will be frank. When I see people having to get late slips when they get off the train to excuse their tardiness at work, I feel awful for them. Workplaces that measure minutes are exchanging outcomes for managing time (see also: the billable hour). If someone who works for me is late, I don’t care unless it impacts their work. I have had to explain to people that they do not need to reassure me that they’ll make up the 10 minutes or whatever. I really don’t care. The 40- or 35-hour work week is a guideline that helps us know how much work someone can probably get done in a week. It’s not a measure of their success.
With artificial intelligence, the goal is the end point, and that is exactly what we are not focusing on. We are trying to inhibit use of artificial intelligence that leads to sub-optimal outcomes. In academia this may be called “cheating,” but really we are trying to keep automation from replacing skill development, eroding both acquired skills and future ones (for more on de-skilling with AI, see this RIPS-SIS blog post). We should be attempting to guide lawyers to avoid de-skilling as well, since the failure to evaluate and validate information presented to a court has real-world consequences like sanctions and professional discipline. But AI isn’t the sole cause of those sub-optimal outcomes. It’s merely the latest tool to enable them.
So the question then becomes, how do we avoid those sub-optimal outcomes?
We focus on the outputs and outcomes we do want. This is particularly true with artificial intelligence, but academics have been dealing with it, under the guise of plagiarism, for years. If we focus on how the outputs or outcomes are arrived at, we are focusing on the tools and resources, and we are setting ourselves up for failure.
If I am going to expel a student for an honor code violation, like improper use of AI, how certain do I need to be? If I were to fire someone who worked for me, you can be damn sure I’d be as close to 100% sure as I could be. Short of that, I would almost certainly find some other way to reach an outcome. There is no way that I would be willing to accuse a student of improper conduct without an extremely high degree of certainty.
When we look at AI as a tool, I don’t think we can ever have that certainty. Unless I am looking over someone’s shoulder when they invoke it, I can’t tell if they did or didn’t. This is further complicated as universities make a drive for “AI literacy,” which requires that students be exposed to the tools. Or when, in the legal world, courts start to use AI-infused tools. Even if there wasn’t this overt challenge, it is creeping into the core tools used by lawyers and law students: Microsoft Word and its suggested words and reviewing tools. Law students and lawyers will be hard pressed to avoid AI.
We were talking over a case in class the other day and we hit upon authenticity. I was commending Wikipedia as a source but also, in light of the Scots Wikipedia scandal, noting it should never be a single source. I made the observation that, in general, you can rely on a resource you find in a commercial legal publisher database to be authentic (and single-source): a case from a court, a statute from a legislature. Now that I consider it, though, if they are using the publisher’s AI tools, databases like Westlaw and Lexis are no longer as authoritative and will require just as much validation as any other AI-infused output.
It’s these outcome-oriented skills that we should be working on:
- sourcing. Rely on multiple sources or an individual definitive source (like the court or legislature that created the document, or their re-published content that has been marked as “official”);
- validation. If I get a confabulated reference in a legal memo, or a court gets one in a brief, that is evidence of AI usage. But it doesn’t really matter if it was AI or not. It’s 100% wrong and so can be subject to whatever consequences apply. In this case, a confabulation may be a new way to create a bad citation, but lawyers have been incorrectly citing information for decades;
- verification. It may be that the citation is incorrect but the law isn’t. Someone can read that and verify. Similarly, they can attempt to verify and determine that, in addition to the cite being wrong, the cited law does not exist.
None of this requires us to dismantle or prohibit a technology. The consequences for a bad memo or brief are unchanged and as ascertainable as ever.
You know what else? These skills really should apply no matter what technology we use: books or databases, AI or no, Westlaw or Lexis. In fact, Westlaw and Lexis are a great example of how we teach tools (like AI) rather than focusing on skills that are portable to any commercial provider. I would love to eventually get to the point where we could license one product and focus on teaching those universal skills, rather than trying to resell commercial products by highlighting their nuances and nuisances.
The upside of avoiding a ban and focusing on outcomes is that immediately your approach is streamlined and, dare I say it, positive. You immediately eliminate any surveillance required—I have heard of one organization where they version check documents to identify whether additions “look like” AI—and all of the “Do not…” sentences that might be required to convey the ban. This is what works in parenting and it works in management too. You can create an operational perimeter within which someone can be successful without a bunch of proscriptions.
But what if they “cheat”? What if they take advantage?
So what if they do? Bans and prohibitions on technologies can be bypassed by those determined to do so. You cannot eliminate cheating. I bypass paywalls and authentication mechanisms all the time. Anyone who uses a VPN to watch a pirated show or to catch a sports event has found a way to get around technology prohibitions.
I would much rather let the consequences fall where they will. If I can communicate the need for skills development, I can leave the choice to use AI up to them. I can craft their coursework so that their writing is reviewed for effort and not always for a mark so that, even if they could use AI, there’s no real advantage to doing so. I can be unforgiving in my grading rubric for confabulations and false statements of law. None of these approaches are AI-specific. They might have copied someone else’s memo or had someone else write for them in the past. I’m not sure I would have known in those cases either.
The difficulty, which students and then lawyers will own, is that they may choose to use automation and shortcuts. As we are seeing with lawyers already, consequences will come eventually. But I do not believe there is value in creating an enforcement mechanism with constraints and barriers on a technology. From strictly a resource perspective, I think that’s a non-starter. It also does not assure you that you will achieve the optimal outcomes that you want people to have.
The answer is not to do nothing. It is to do something with outcomes rather than inputs and in a way that applies beyond any one technology. That will serve everyone better than bans.