It is pretty apparent that we are in a super hype cycle when it comes to AI tools like ChatGPT, but for many of us in the legal profession, we’re not used to reaching this point of the cycle at the same time as the rest of the world. Because things are happening so fast, we wanted to bring in someone like Colin Lachance from Jurisage to talk about how they are integrating generative AI tools into their products.
Greg was going down an AI rabbit hole on Twitter this week when Colin mentioned his own project he was launching. Jurisage’s tool, MyJr (pronounced “My Junior”), is part of a joint venture between Jurisage and AltaML, and is designed to change how researchers access information by allowing the AI tool to synthesize and read cases as the researchers search and analyze the information. Rather than opening up web browser tab after tab and scanning cited cases for relevant information, the idea behind MyJr is to have it quickly answer that information for you. If you need to know what the relevant arguments are from each side in Smith v. Jones, ask MyJr to pass that along to you. Ask it a plain-language question, get a quick, plain-language answer.
Lachance is working to use the GPT-3.5 tool to pass along cases and create what he calls “guardrails” around them, so that the prompt and the results limit themselves to the case itself. This protects the researcher from the AI “creating” the answer from all the non-relevant information the large language model absorbed during training. Lachance has additional goals for using AI within Jurisage’s data, but he’s focused on tools like MyJr establishing trust with those using it to research Canadian, and soon US, case law.
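Conceptually, the “guardrails” Lachance describes amount to constraining the prompt so the model may answer only from the supplied case text. Here is a minimal sketch of that idea in Python; every name is hypothetical, since Jurisage has not published its implementation:

```python
# Hypothetical sketch of case-constrained prompting: the instruction tells
# the model to answer only from the case text below, and to fail closed
# when the question cannot be answered from that text alone.
CASE_QA_TEMPLATE = """Answer the question using ONLY the case text below.
If the answer is not found in the case text, reply exactly: NOT IN CASE.

Case text:
{case_text}

Question: {question}
Answer:"""

def build_case_prompt(case_text: str, question: str) -> str:
    """Wrap a user question in a prompt that confines answers to one case."""
    return CASE_QA_TEMPLATE.format(case_text=case_text, question=question)

if __name__ == "__main__":
    prompt = build_case_prompt(
        "Smith argued the contract was void for duress.",
        "What did Smith argue?",
    )
    print(prompt)
```

The resulting string would then be sent to whatever completion API the product uses; the point is that the document, not the model’s general training data, bounds the answer.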
The MyJr product works as a browser extension and identifies Canadian and US case law citations on any web page. It delivers a preview of key details about the cited case, and a link to a free full-text version, in a popup when the user hovers over the citation. Clicking through to a “more insights” dashboard reveals additional detail, as well as access to the upcoming “Chat with a case” feature (February 20th for Canadian cases, a month later for US). While the paid version of the dashboard won’t officially launch until late March, users can get unlimited pre-sale access today, as well as secure a future 50% discount option, for a one-time payment of $7.
Listen on mobile platforms: Apple Podcasts | Spotify
More information on Jurisage and MyJr can be found here:
Contact Us:
Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: 
Jerry David DeCicca
Transcript

Greg Lambert 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Greg Lambert, and Marlene is traveling this week, so you’ve got me again, going solo, but I think we’ve got a really exciting guest on with us. And we kind of pulled him in at just the last minute, because there’s so much going on with companies integrating some of the GPT-3.5 tools, or ChatGPT, into their products. I went on a little rabbit hole adventure this morning, tracking down some things, including something that Dennis Kennedy had mentioned on his podcast with Tom Mighell. There was a site that basically took GPT and allowed it to digest books, and then it allowed for a chat interface with the book itself. So I was having all kinds of fun with that. I posted some stuff on Twitter, and I said, hey, wouldn’t it be great if Lexis and Westlaw were to take their treatises, or their monographs, or their practice guides, and put this type of interface on them, so that you’re not just doing Boolean-type searching, but you’re actually doing something that librarians were trying to get established 20 years ago, and that is to just use plain-language searching. And I’m drawing a blank on the term: natural language searching. So, natural language searching. It seems like after 20-some-odd years, we’ve kind of figured out how to do that through AI. I’m joined by Colin Lachance, who is the co-founder and CEO of Jurisage AI in Ottawa, Ontario, although he is joining us from beautiful Salt Lake City today. So Colin, thank you for coming on here. I know we’re going to talk about some of the stuff that you’re doing as well, and it’s really exciting.

Colin Lachance 2:10
Greg, thanks very much. Big fan of the show.

Greg Lambert 2:12
And thanks. Well, you were on, I believe, my In Seclusion podcast years ago, right at the beginning of the pandemic. So this is not your first rodeo here.

Colin Lachance 2:24
So it is not. Although I do miss your Brady Bunch backdrop

Greg Lambert 2:30
That was fun. Well, first of all, Colin, before we dive into some of the really cool stuff that you’re doing there at Jurisage, give us a little background on how you found yourself co-founding yet another legal tech company, there in Canada, this time Jurisage. So give us some background.

Colin Lachance 2:42
Sure, thanks, Greg. So Jurisage is looking to deal with what we call the second biggest legal research problem. If search is the first, then engagement with the content, and synthesis and understanding of the content, is the second. We get at this in a roundabout but logical way. Historically, I’ve been working in legal information and tech since 2011. I was CEO at CanLII, the Canadian Legal Information Institute. Then in 2015, I left and started working in different aspects of legal tech. In 2016, I started a company called Compass, which was built around a case law database that a small independent publisher in Canada, who was shutting down, was making available. That gave us a chance to start working with Canadian case law with a view to doing what it is we’re doing right now. So I’m only six years off schedule from when we acquired this database and the relationship with the courts to continue to receive content. That company had built a key number system, a really robust taxonomy, over its decades of operation: 150 topics, detailed case digests, and so on. It was modeled, you know, going back decades, off the West Key Number system, but its own made-in-Canada version. So our view back then was, this is a great foundation for natural language processing; you have these beautifully, editorially tagged items. So I’m going to skip over a bit of the history, which we may have an opportunity to come back to, up to 2020. I started a nonprofit with a view of saying, what happens if I let more people play with this data, if I start putting it in the hands of academics and, you know, other companies and so on, and let them play and explore for non-commercial purposes? And this is where I first met my partners in the founding of Jurisage. It was an applied AI company called AltaML. They do AI and machine learning proofs of concept for governments and large companies and so on, and they wanted to explore legal.
So after a little bit of exploration, they said, oh, this is amazing, we want to do something with it. But how are we going to go about getting the data and the domain expertise? That led to conversations that ultimately led to a joint venture between Compass and AltaML; we created the Jurisage joint venture. So that was a concept, but a concept without a purpose, other than we knew there was stuff to happen in legal. We came up with the name first, before we even decided what we were going to do, casting about for a good name for something new in legal intelligence. In terms of figuring out our focus, we spent a lot of time talking with knowledge directors, law librarians, law partners, students, associates, and so on, just to explore the range of challenges they had in dealing with legal research generally, and case law specifically, in order to find an area and opportunity for us to engage. And so that’s what led us to the point of believing that not only could we create something with the chocolate-and-peanut-butter aspects of what we could bring together, their machine learning capabilities, our data, my domain expertise, but we were getting some strong signals from the market as to where there were some gaps that we thought we could work on. Okay. That’s how we started.

Greg Lambert 6:09
Okay. And I brought you on here because, as I was raving about some of the possibilities of what the AI could be doing, you actually pointed out to me, you know, hey, we’re already doing it here. So tell us a little bit. You have a subset of Jurisage that you call My Junior, MyJr. And rather than me trying to explain it, I’m going to let you explain what it does and what the inspiration was behind your creation of it.

Colin Lachance 6:41
Thanks. So MyJr came out of the idea of asking, well, what does a junior associate do? A significant function of a junior associate is to bring forward their research output to the partner. The partner might be the one going to argue the motion, the junior associate has done the work, but who’s helping the junior associate? So what it does, as a group of capabilities, is it analyzes the case law in the background, so that when you get exposed to case law, whether you’re on an existing legal research site, like Westlaw, Fastcase, Casetext, what have you, or on Google Scholar or Justia, or in Canada, CanLII, when you see a citation, all of those tools will offer you the ability to click the link and open a new tab, or right-click the link and open a new tab. So you can imagine, if you’re reading a court opinion and it cites 10 more court opinions, you run the risk of opening many, many, many tabs. So what we do within MyJr, having analyzed the case by using, you know, various machine learning and NLP magical things, and not-so-magical things, is we create a view into the insights behind that link. So you can hover over the citation on your page and get a little popup that explains what’s there. If you want more information without actually reading the case, you can go to a more robust dashboard, where we’ve broken down more information, including recommendations on related cases by content, related cases by citation graph, and so on. And what we’re adding, going live for what we call our presale users next week, is that conversation, the chat with a case. So essentially, what you’ve described at the top of this call, engaging with a book, that’s what we’re introducing. It’s having MyJr serve in place of your classmate or colleague, to ask the question. So in the matter of Smith v. Jones, you might ask your colleague who’s read the case before, what did Smith argue? Your colleague would tell you what Smith said. Okay, what did Jones argue? This is what the MyJr function inside our offering does. You just ask: What did Smith argue? What did Jones argue? Explain this to me like I’m a 10-year-old.

Greg Lambert 8:58
You just give it plain language, and it gives you, hopefully, a plain answer. Right?

Colin Lachance 9:05
That’s right. That’s right. And it does use the ChatGPT technology; in this case, it’s the Davinci-003 API in GPT-3.5. Although, the way we’ve structured it, having done a lot of the underlying analysis of the content organization, we put what you might call guardrails, or context, around the engagement. So this means when the question is being asked, the answer can only come from the content and not from the universe of language. What these language models are great at is summarization; they’re great at classification. What they’re not great at is answering general questions when specific knowledge is important to the answer. So we try to address that by limiting the context. Your question is about the content of the case, but the properties of language models allow you to ask it in whatever way makes sense for you, and it comes back in a very clear and digestible way.

Greg Lambert 10:07
Okay, just to make sure I understood that. It’s basically the GPT version interpreting just the case, correct? And then applying the answer based on that. So it’s not doing any type of general knowledge that it’s bringing back in, which it’s really good at making stories, but it’s not necessarily really good at creating the truth based on actual facts. Right? It will make stuff up if you don’t limit it, like the way that you’re doing.

Colin Lachance 10:40
Yeah, that’s exactly right. So the way we’ve tested the quality of its responses, and the accuracy of its responses, is just lots and lots and lots and lots of prompts where the only correct answer has to be something very specific. That would include things like: what was the legal test, and what is the authority for that test? Okay, there’s only one way to answer that correctly inside the scope of the document. Now, there may be many ways to say it, but the answer has to be: the test is X, and the supporting authority is Y. That’s the letter Y, not the question “why.” Right. But the nature of how it comes back allows us to validate it; we can go back into the body and say, is this correct? Was this correct? And so we try to do a variety of things. Actually, we’re currently hiring prompt engineers to really push the limits of this, to both find what great questions will result in great answers, and to find out what kinds of things can result in incomplete answers.

Greg Lambert 11:45
Okay. Yeah, I know there are complete Reddit groups out there that talk about prompting, some of them good, some of them evil, but it is how you structure and ask the question of the AI that really influences the answer that you may get back. Let me dive in just a little deeper, if you don’t mind. Are you setting it up to train the GPT version, or are you doing it kind of on the fly? Are you inserting the case as the question is being asked, or are you setting up a model on the back end that’s actually training on your data?

Colin Lachance 12:29
It’s a bit of both. The model was trained on our data. So we’ve been looking at the different generations of language models, probably going back two years now, from that first initial experimentation from AltaML working with it. With the early versions, the results were uninspiring. Yeah, but we could find areas where there was potential. So we’ve been working with, like, a transformer-based approach to accelerate classification. For example, I mentioned that we have, in our historic collection, a lot of great editorially tagged items. So we’re training, using transformer-based approaches, classifiers on those tags. So when we are handing one of our documents over to the language model to answer the question, we’re doing two things. We’re saying to the model: answer the question based on this document. The interface for engagement right now is through our dashboard; down the road, the interface will be through our browser extension, the popup, so anytime you see a citation, you can hover over it and start talking to that case, but right now it’s through the dashboard. So that gives you the context that the question about to come is about this document. Now, for that document, in our back end, we’ve already indexed it; we’re not grabbing it on the fly. We already have a copy of it, we’ve already broken it down into fragments so the system can engage and talk to it, and we’ve already labeled that file with other things that we know about that case. So we’ve labeled that file with the fact that we happen to know this one might deal with a particular procedural matter, or this one might deal with a particular tort, and so on. It’s not intended to influence the result, but it’s intended to constrain the imagination around the document.
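The pre-indexing Lachance describes here (cases split into fragments ahead of time, labeled with known metadata, and assembled into the model’s context at question time) can be sketched roughly as follows. All names are hypothetical illustrations, not Jurisage’s actual code:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each case is indexed in advance rather than fetched
# on the fly. Fragments are pre-split passages of the opinion; labels are
# editorial tags (procedural matters, torts, etc.) known about the case.
@dataclass
class IndexedCase:
    citation: str
    fragments: list[str]                             # pre-split passages
    labels: list[str] = field(default_factory=list)  # editorial tags

    def to_context(self) -> str:
        """Assemble the constrained context handed to the language model."""
        parts = []
        if self.labels:
            parts.append("Known topics: " + ", ".join(self.labels))
        parts.extend(self.fragments)
        return "\n".join(parts)

case = IndexedCase(
    citation="Smith v. Jones",
    fragments=["[1] The plaintiff alleges duress.", "[2] The test for duress is..."],
    labels=["contracts", "duress"],
)
```

The assembled context is what gets placed in front of the question, so the labels can inform the answer without, as Lachance puts it, letting the model’s imagination range beyond the document.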

Greg Lambert 14:15
So essentially, you’re passing through some of the metadata that you’ve generated over the years as part of it, so that it does kind of put it into a framework, rather than just giving it an open text to ingest and digest itself. Right?

Colin Lachance 14:33
That’s right. So it will have the ability to draw on the available context, should it be relevant to how the answer comes back. The idea with these language models is they can be influenced by what you let them look at. So GPT is not restricted by a database; the book version that you’re talking about is restricted. The answer has to come from within that book. The answer, for us, has to come from within the case document. The other thing that we’re doing with this, and it’s going to be a later iteration, is to pass it back. So a question comes in: tell me the relevant test and how this matter was argued by the plaintiff. The answer comes back, and then we use everything else that we know, including the breakdown of the document, and we’re going to map it back to where in the document we think the support for the statement came from. That’s a good idea. So that’s not part of the first iteration that’s out right now, but that is something we’re incorporating into the next. By having spent so much time analyzing these documents prior to the incorporation of a GPT-type tool, it’s easier for us to work our way back. An example of that is one of our classification models, a sentence-level classifier, where every sentence of every opinion is classified as one or more of: facts, issues, analysis, law, and conclusion.

Greg Lambert 15:52
Wow. That’s pretty granular.

Colin Lachance 15:54
It is pretty granular. And we can group that into paragraphs, and so on. So it allows us to take other NLP techniques, like semantic matching, and say, okay, here’s what the court was doing as it was making a point in its paragraph; we can see that it’s saying fact, fact, analysis, law, analysis. So it allows us to map it back. This is a fun time for us, with all these things that we’re working towards. The power of this particular model kind of caught us off guard, but everything that we had built, the structure of the browser extension, the dashboard, the analysis, meant that this was actually quite straightforward for us to plug in and begin using.

Greg Lambert 16:35
Yeah, yeah. Well, it sounds like you had a really good baseline of work that you’d already done. And you’re right, this new model, especially this Davinci-003, seems to be just leaps and bounds better than all the other models that have been out there. I know I’ve been significantly impressed by the whole thing, and I’m just doing some very basic stuff.

Colin Lachance 16:59
You’ve been doing some interesting stuff with your automated legal tech news. Yeah.

Greg Lambert 17:03
And comic book news and all kinds of stuff. I’ve been having all kinds of fun with this. And I think, you know, this is a geek’s dream, having a tool that can really kind of inspire people to come up with unique ways to play with information. And, like I said, when you told me that you were already working on this project, I was like, well, I have to learn more about this one.

Colin Lachance 17:30
One thing that we have going for us is my co-founder, his name is Juliano Rabelo, who has been working in AI in one capacity or another for the past 20 years. For the past four years, he’s been working in legal AI and explainable AI, and has also been working with universities and managing an annual competition on what’s called legal information extraction and entailment. So this is solidly, solidly in his domain. So when we build these models, we build them in a way that we’re not dependent on, in this case, the OpenAI GPT. It’s actually set up in a way that if we decide we like what Google comes out with next better, we swap out the Davinci API call and swap in something else. There’s an incredible company in Toronto called Cohere. It was founded by a guy named Aidan Gomez, who was one of the original authors of the transformer paper at Google, the one that led us to BERT and everything else, right? And so we’ve been working with their APIs as well. So we see this as something that can get better; it becomes a tool. Let’s just call it a really, really, really powerful wrench. But we don’t have to change our entire garage based on the wrench we choose. We can find the right wrench for the things that we’re trying to achieve.
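The swappable-wrench design Lachance describes is a standard abstraction pattern: code the product against one narrow completion interface, so the backing model (OpenAI, Cohere, or anything else) can be replaced without touching the rest of the system. A minimal hypothetical sketch, with a stub standing in for a real API client:

```python
from typing import Callable

# A completion backend is just "prompt in, text out"; everything else in
# the product is written against this one interface.
CompletionFn = Callable[[str], str]

def make_answerer(complete: CompletionFn) -> Callable[[str, str], str]:
    """Return a case question-answering function bound to one backend."""
    def answer(case_text: str, question: str) -> str:
        prompt = f"Case:\n{case_text}\n\nQuestion: {question}\nAnswer:"
        return complete(prompt)
    return answer

# A stub backend stands in for a real OpenAI or Cohere client here.
def stub_backend(prompt: str) -> str:
    return "stub: " + prompt.splitlines()[-1]

ask = make_answerer(stub_backend)
```

Swapping providers then means passing a different `complete` function to `make_answerer`; nothing downstream changes.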

Greg Lambert 18:48
Right? Yeah, it’s almost like an impact wrench, where, yeah, it does exactly what a regular wrench would do, only faster and stronger.

Colin Lachance 18:59
And also, it’s a point of sadness for me that I’ve never actually owned an impact wrench. I’ve had times where I’ve wanted one, but I’ve never

Greg Lambert 19:06
owned one. It’s too powerful for me. I’ll do the AI, but I’m not going to use an impact wrench. So let me ask you, who are you thinking is going to be the consumer for this type of product?

Colin Lachance 19:20
When we were putting it together, a little over a year ago, we had our proof-of-concept thing that we started bringing back to lawyers, law librarians, and others to say, here’s what we think we’re going to build, what do you think about it? And then, from that, it really became obvious to us that this is a tool where you almost have, like, a consumer-level engagement with the individual user, because they’re going to have a personalized experience with it. So a junior attorney is going to engage with this to explore new and unfamiliar areas more quickly; a senior attorney is going to refresh their knowledge. One of the key questions we ask people is: assume that if we build this, your law firm will not pay for it, your university will not pay for it, your government department or legal department will not pay for it. Would you reach into your own pocket, put in your credit card, and use it? Having heard enough yeses, we went ahead and built it. But from a marketing perspective, that’s who we think it’s going to be: the person who wants to solve their own problem. By making this a web-based overlay, it’s not something that needs to go through a purchasing process. It’s a very personal experience that we’re creating for individuals.

Greg Lambert 20:31
And so is the consumer going to be in the Canadian legal market, or do you see it expanding beyond that at some point?

Colin Lachance 20:39
Everything that we’ve built, we’ve ensured performs equally well on US data as well as Canadian data. Okay, so we have the advantage in Canada of having had a dataset; we have opportunities in the US built around the availability of different quality datasets. Our initial build was on top of the CourtListener dataset. So we’ve incorporated a lot of that in, and we’ve been doing our testing on that. Whether it becomes our permanent dataset, we’re chatting with different folks to figure out the right way to incorporate the US content in there. But it’s very much something that’s targeted to the entirety of North America: Canada, US, not Mexico yet. We’ve also done a lot of testing on UK and Australia and other Commonwealth judgments. But right now, for the first half of this year, we’re about 75-80% focused on Canada. The chat function is going to be available on the Canadian data next week, so roughly from the 20th, February 20. The US data will be available likely in March, through that model. We’re a finalist at ABA TECHSHOW’s Startup Alley, so we better not, you know, show up in that house and not have an offering for US attorneys.

Greg Lambert 21:56
Yeah, you’ve got to read the room there, and it’s going to be American lawyers looking for American law. So I’m glad you see that.

Colin Lachance 22:07
Yeah, no, we absolutely see it. But what’s fun is American lawyers occasionally get exposed to Canadian law, and vice versa. And by analyzing both, we can actually provide access to the other country’s content to people when they come across it in a judgment.

Greg Lambert 22:21
Yeah. Well, you’ve been working on this for, you said, well over a year, right? And, you know, I don’t want to take any intellectual property from you, so just limit this to what you can say, but what other kinds of concepts are you thinking about that you could use a tool like this to help you create, beyond, you know, kind of the junior attorney, interpret this case and give me a summary? What other concepts are out there that you’re thinking of?

Colin Lachance 22:57
From the perspective of serving the individual attorney, the natural evolution is into a Word add-in. Everything about the way we’re set up could quite easily be turned into a Word add-in, so the engagement becomes an analysis not of what you’ve been reading, but of what you’ve been writing, and an easier pathway into that. So you can expect that later this year. I don’t think we’re giving away any secrets in suggesting that we want to be where lawyers live, but we’re not jumping into Outlook or into email, because we want to be where they’re engaging with the content, right? And that is in their research tools and where they write. The other aspect is that this is all set up, and we’ve been building it all, as background APIs that feed our own services, which means they can feed other services. So everything that we do to analyze what we call a rhetorical legal document, rhetorical just meaning it’s not a contract, it’s something where someone’s actually arguing and making a case, so that includes briefs, legal memos, demand letters, and so on. Our models are capable of analyzing that material inside databases. So if integration partners want to point our models at the content already in their systems to enhance the metadata associated with it, that is an evolution of what our tools were built to accomplish. And then the other thing that sort of flows naturally from that: we’re having conversations with a couple of different integration partners who are already engaging with user content through browser extensions and Word add-ins, and we can layer in extra information for them, so that the user doesn’t have to rely on two separate browser extensions to accomplish something. We just piggyback our capabilities and feed them into an interface that they want. That’s our philosophy behind all of this.
It sort of goes to the same reason we started with this idea of synthesis and reading, rather than search: no one really wants to change the first step of the research process. If you like Westlaw, you’re going to stay with Westlaw. If you like Fastcase, you’re going to stay with Fastcase. If you like CanLII, you’re going to stay with CanLII. We believe the same thing applies to document management services, and so on. So in a lot of those places, it’s not the goal to replace the first step; it’s the goal to enhance the second step, when you start engaging. And so integration partners will be better suited to incorporate that than we will be to create a new interface.

Greg Lambert 25:22
Yeah, and that’s one of the things, I’ve been using the term, and it may not be the exact right term to use, but tools like this I see as more of an augmentation of what you’re doing. I saw Casey Flaherty had tweeted something out earlier, that we want to have a legitimate conversation around what this is doing. And when you start the conversation by saying this is going to replace lawyers, or, on the other end, this is a, you know, terrible thing, it will never, you know, never be able to even write documents, both of those extremes just really aren’t a starting point in the conversation. And so, using it as an augmentation, I love that you talked about using it for synthesis and reading, and not for search. That’s another way, you know, I think most people are like, well, it can’t do search, and they’re just like, yeah, that’s not what it’s set up to do. If you want search, there are great search tools out there. You know, this is much more about that synthesis and reading. I think you’re the first person I’ve heard say it straight like that.

Colin Lachance 26:32
Yeah, exactly. When Westlaw announced Precision, they really emphasized the amount of work they put into improving the quality of your search results. But the experience once you start opening up documents is relatively unchanged, and potentially worse. And this isn’t a knock against Westlaw; this is just the reality of the framing of how a legal research tool operates: they have a way of presenting. And frankly, it would be dangerous for them to change it. We have the benefit of starting with zero customers. Well, that’s not entirely true; we’ve actually had really positive growth in our presale. But can you imagine starting with hundreds of thousands of customers, and you change something?

Greg Lambert 27:13
because you know how much lawyers love change.

Colin Lachance 27:16
Oh, they all love change, but they’re not going to come up and pat us on the back saying, we’re really glad you changed the search interface, or you changed the next step; we’re really glad I have to learn something new.

Greg Lambert 27:27
And again, I think whether it’s what you’re doing, or another example would be what Docket Alarm is doing with giving the three bullet points on the docket sheet. That’s an add-in, an additional thing that doesn’t replace what the core part of the business is. That’s right. And maybe at some point it does, but for right now, I think, you know, it’s incremental change. It’s, you know, testing things out as you go, and kind of testing the limits of what a product like this will do.

Colin Lachance 27:58
It also represents a part of what people are looking for. Again, they don’t want to keep opening tab after tab after tab. They don’t want another document; they want the information inside the document, when they already know that the document is useful. The next step is, how do we get that information out? You mentioned Casey. Honestly, one of the guiding principles in setting this up was something from LexFusion: when they did their 2,600 interviews of clients and put out their initial report, I can’t get the quote exactly right, but it was something to the effect of, make it easier, make it better, but don’t make me learn. And so,

Greg Lambert 28:34
Casey simply said it in a 5,000-word document. That’s the same thing.

Colin Lachance 28:40
But that was his point; he was summarizing the feedback that he received. And the message that we took is, okay, that’s the standard we have to achieve. What can we do without actually requiring training? Can people hover over a link? Yes.

Greg Lambert 28:56
What are some of the limitations that you’re running into?

Colin Lachance 28:59
One of the nice things, and one of the reasons we didn’t pursue this earlier, was that cost was a significant limitation. In the fall of last year, OpenAI reduced the prices of its Davinci APIs, and that made a big difference in moving us down the path of making this available. Because it is a challenge: even if you’re just talking pennies an engagement, you still have to say, well, what if there are 100,000 engagements, or 200,000?

Greg Lambert 29:26
I was gonna say, anyone that has had to pay a large law firm’s PACER account knows that those payments add up.

Colin Lachance 29:33
That’s right. So we need enough confidence that we can plan for that and do this effectively. One of the ways we’re approaching that on the Canadian side is that the volume of published case law is sufficiently small that we actually pre-process, using MyJr, a half dozen summarization questions against every case that comes out of the courts, and then make it available as a mailing service. So: free daily case summaries; choose your court, choose your topic, choose your summary style. And then, inside our dashboard, those become recommended previews. So you can say, here are six different ways I could summarize this case, let me pick one, but it will now be a cached answer, because we’ve already processed it. So planning for costs is a big one. And then the next big challenge is, I think it was, you know, stand-up comics in the ’90s always saying that they would get offers for sitcoms: oh, you’re a great stand-up, but can you act, right? So we’re getting the equivalent of, this is really cool, can it do these next 10 things? Well, maybe at some point, if there’s a, you know, logical path for growth. So constraining our own enthusiasm, and that of our prospective customers, is actually a challenge, because everything we’ve done up to this point has been trying to concentrate on what we can do in a way that attorneys can trust the result. So that exuberance risks breaking the commitment that says you can trust the result because of what we put in. And frankly, I wish and I hope and I expect, you know, dozens of other companies are going to stick a ChatGPT interface on top of free law, and they’re going to do lots of amazing stuff, and it’s going to create a different level of expectations. But they’re not committing to that idea of trust.
So that is a challenge for us, too: to make sure that we keep pace with the market’s desire to go further, but without breaking that initial commitment that we’ll give you stuff that you know you can rely on. Yep. Well, I think the approach Docket Alarm has adopted is a good example of keeping that kind of commitment, because they’re saying, look, we’ll give you the three-point summarization of this to help you decide if you should check, right. That is something they can say with confidence: we’re delivering what we said we’re going to deliver, for the purpose that we said. So I think, as people approach it that way, that’s the right way to approach it.

Greg Lambert 32:01
I know things are changing very quickly. I mean, three months ago we weren’t talking about this, and now it’s all we’re talking about. But I want you to pull out your crystal ball here and let me know: where do you think the legal industry is headed when it comes to tools like this and resources like the ones you’re setting up?

Colin Lachance 32:26
I believe that by the end of this year, so I’m going to give you my short term and then a bit of a long term, but by the end of 2023, I do believe the idea of proceeding without some sort of tool that can serve as, not necessarily a colleague, but some form of check tool, an engagement tool that allows you to test ideas and test theories, first drafts, and so on, I think that’s going to become pretty commonplace. It’s going to move more quickly, I think, on the transactional side, not on whole documents, but on portions of documents, on first drafts. We’ve never seen this before, the legal community having the same level of enthusiasm for a piece of tech as the community at large, as we’re seeing here. We’ve never seen the complaints people offer online be, I’m disappointed that it didn’t do this tiny little narrow thing correctly, when it did all this amazing stuff correctly. And that kind of adoption of that kind of tool is not necessarily going to come from legal tech companies. If Microsoft is incorporating GPT into its core products, then it becomes an expectation that you’re going to have some ability to engage with information other than traditional search. And for the future beyond 2023, that really opens it up, it really opens up our collective imagination for the parts of our system that we can change. I met somebody this morning who believes that a significant portion of transactional work, anything built around forms to government bodies, has to move away from work done by lawyers. So I think a lot of that is going to happen, and I think the acceptance of it is going to surprise us. Yeah, like how ready people are for that to happen.

Greg Lambert 34:15
Okay. Anything longer term?

Colin Lachance 34:18
Longer term, I think about the movie Her, with Joaquin Phoenix.

Greg Lambert 34:23
Yes.

Colin Lachance 34:25
The movie Her, where eventually the machines are so smart, they get bored of us and they leave.

Greg Lambert 34:31
Well, you know, it’s kind of unfair for me to answer my own question here, but that’s never stopped me before. My guess is that somewhere along the line, and we saw just a glimpse of this last week when Google did their Bard demo, which was a canned demo, and they lost like 7% of their value, over $100 billion, in just a few minutes. I think we have to be careful. The worst thing that could happen is that somebody relies upon this and it causes a client to lose their business, or a law firm to go under. No one wants to be embarrassed, and no one wants to lose money. So I’d say those are the two things. And it’s always some event that happens; we’ve seen it, you know, a crash in the dot-com market, 9/11, the Great Recession, things like COVID. There may be other things that slow the process down. But I think we’re definitely, and you were right, I’ve never seen the legal industry jump on something as quickly as it’s jumped on this. So I think it’s got some traction.

Colin Lachance 35:48
It’s got traction, yeah. But I agree that there are some serious risks, and I think there’s a role for responsible, trusted players to show leadership. People ask us, oh, you’re running this as a startup, aren’t you concerned about what the larger guys will do? And I say, there’s nothing that we can do technically that they can’t, right? They absolutely can. So I really encourage them to find the elements that they want to showcase, that they want to help move the industry forward on. As a startup, we pick an entry point and we grow from there, but they ultimately are the ones who are going to turn the dial, them and government.

Greg Lambert 36:28
Yeah, that is government, and the Bar Associations. Don’t underestimate their ability to throw a monkey wrench into the works on that as well, I suppose. Well, Colin Lachance, CEO and co-founder of Jurisage AI, thank you very much for taking the time to come on and talk with us. I’m as excited, I think, as you are about the potential for this.

Colin Lachance 36:53
Great, thank you, Greg. Appreciate it.

Greg Lambert 36:54
All right. And of course, thanks to everyone listening for tuning in to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We would love to hear from you, so reach out to us on social media. I can be reached @glambert on Twitter, and Marlene can be reached @gebauerm on Twitter. Colin, how can we reach out to you?

Colin Lachance 37:15
I’m on Twitter. I spend too much time on Twitter. It’s @colinlachance, c-o-l-i-n, l-a-c-h-a-n-c-e. Same on now.

Greg Lambert 37:26
Yeah, we’ll make sure that we put that in the show notes, along with links out to both MyJr and Jurisage. Or, if you’re not into all this advanced AI and you want to pick up the phone and give us a call, you can leave us a message on The Geek in Review Hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. Thanks again, Colin.

Colin Lachance 37:54
Thanks so much.