In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer interview three guests from UK law firm Travers Smith about their work on AI: Chief Technology Officer Oliver Bethell, Director of Legal Technology Sean Curran, and AI Manager Sam Lansley. They discuss Travers Smith’s approach to testing and applying AI tools like generative models.

A key focus is finding ways to safely leverage AI while mitigating risks like copyright issues and hallucination. Travers Smith built an internal chatbot called YCNbot to experiment with generative AI through secure enterprise APIs. They are being cautious on the generative side but see more revolutionary impact from reasoning applications like analyzing documents.

Travers Smith has open sourced tools like YCNbot to spur responsible AI adoption. Collaboration with 273 Ventures helped build in multi-model support. The team is working on reducing dependence on manual prompting and increasing document analysis capabilities. They aim to be model-agnostic to hedge against reliance on a single vendor.

On model safety, Travers Smith emphasizes training data legitimacy, multi-model flexibility, and probing hallucination risks. They co-authored a paper on subtle errors in legal AI. Dedicated roles like prompt engineers are emerging to interface between law and technology. Travers Smith is exploring AI for tasks like contract review but not yet for work product.

When asked about the crystal ball for legal AI, the guests predicted the need for equitable distribution of benefits, growth in reasoning applications vs. generative ones, and movement toward more autonomous agents over manual prompting. Info providers may gain power over intermediaries applying their data.

This wide-ranging discussion provides an inside look at how one forward-thinking firm is advancing legal AI in a prudent and ethical manner. With an open source mindset, Travers Smith is exploring boundaries and sharing solutions to propel the responsible use of emerging technologies in law.


Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm, or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email:
Music: Jerry David DeCicca


Marlene Gebauer 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal profession. I’m Marlene Gebauer.

Greg Lambert 0:14
And I’m Greg Lambert. So Marlene, this week, once again, we find out why in legal,

Marlene Gebauer 0:21
we can’t have nice things

Greg Lambert 0:22
nice things. You may have seen that there was an appeal filing in the 10th Circuit. It was an ex parte case, but it was actually reviewed by an attorney — a 23-year practicing attorney — that had somewhere in the area of eight made-up citations. And I can tell you, I took that brief and I passed it through Lexis’s brief analyzer, and it immediately found those eight. So word of advice — I was in Dallas this week, talking AI with the attorneys there, and the one thing I told them is: this is not a pure legal research tool. Please do not use citations that are in there. And if you do, you’d better check them. Yeah, and if something sounds too good to be true, it probably is. So

Marlene Gebauer 1:24
it’s just trying to please you. It’s just trying to make you happy. Absolutely. Absolutely. We’d like to welcome three guests today: Oliver Bethell, Chief Technology Officer; Sean Curran, Director of Legal Technology; and Sam Lansley, Artificial Intelligence Manager — from Travers Smith, from their London office. Sean, Olly, and Sam, welcome to The Geek in Review. Thank you.

Greg Lambert 1:48
So Olly, I want to throw this to you first, because although we have a lot of audience in the UK and Australia, many may not be familiar with Travers Smith. So would you mind just giving us an overview of what Travers Smith does, and exactly what your team does to support the mission there?

Oliver Bethell 2:09
Absolutely. And look, Greg, Marlene, thank you very much for the invitation to come and join you today. I’ve been a long-time listener, but it’s great to be able to join you for the first time as a contributor. So, Travers Smith: we’re a UK law firm, but we’re a leading full-service UK law firm. We regularly conduct cutting-edge, industry-first work for clients across lots of industry sectors, but we’ve particularly carved out a market-leading reputation for our expertise in international asset management, cross-border M&A, and also global dispute resolution and investigations work. So really a full-service firm. And the team here are part of the wider technology team at Travers Smith, and I’m very lucky to work with Sean and Sam on our strategy for AI, which hopefully we’re going to get to touch on today.

Marlene Gebauer 2:53
I imagine the past year or so has been absolutely insane for the three of you, with the advancements in technology, the acceptance, and the demands made of APIs. And of course, the awakening of the public to generative AI. Sean, how have the expectations of attorneys and others there at Travers Smith changed the way you approach and launch technology at the firm?

Sean Curran 3:18
You know, I’ve been in the legal tech industry now for about 20 years, and I talk about this a lot: I was around before the digitization of legal services, when everything was paper-based, when we used to archive files physically and we didn’t have document management systems. I’ve been through many revolutions over the last 20 years — document management was one, the BlackBerry was one, and there have been a few others. But the improvements from the technology have always been quite marginal. Let’s say there was a process that was 90% people and process and 10% tech; a new piece of technology would come out — a new feature, a new product — that would maybe make that 12% or 15% tech, eating a little bit of the people and process. With a lot of the hype on AI over the last eight months, there is this expectation that the tech is going to eat everything a law firm does — that the equation is going to go to 100% tech and 0% people and process — and that has been quite challenging for people to operate in. So we took the strategy very early on that we needed to get our hands on generative AI, get it in front of lawyers, get them testing it, get them understanding what it’s good at and what it’s not so good at, to understand where there were opportunities for marginal efficiency gains and where there were opportunities for significant efficiency. And I think we’ll probably come on to talk about a couple that you’ve picked up on. So we deployed an enterprise chatbot about a month ago — I think we were one of the first firms to do that — and we blocked ChatGPT very early. We didn’t want to be granting an irrevocable, worldwide, royalty-free license to our data to big tech and to OpenAI.
And we’ve been testing and trialing these models across the business since then, trying to understand exactly where we should put our focus. I don’t know, Sam, if there’s anything you’d add?

Sam Lansley 5:34
Yeah, all I’d add to that is that we’ve been very open and transparent about what the model can and can’t do. I know sometimes more technical people can choose to exaggerate the technology, which has sometimes produced a bit of skepticism from lawyers. We’ve been very honest about hallucination problems, and we’ve explained to lawyers that we have to be really careful when using it to develop certain contracts or anything like that. So we’ve been very cautious with our approach. And we’ve also taken the approach of basically showing what it’s good at and what it’s bad at. To do that, we introduced a community feature into our YCNbot, which essentially allows lawyers to promote something it’s good at, and also flag what it’s bad at, and then they can have a bit of a discussion as to why it’s bad and why it’s good. The idea there is that we’re just trying to make lawyers really aware of how to use the technology and how not to use it. I think that’s been really useful for helping them set their expectations of what it can and can’t do. And they are using that community — it’s like a whole social network; they’re liking, they’re commenting, and we’re finding some really useful prompts from it.
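The community feature Sam describes amounts to a shared, ranked prompt library. A minimal Python sketch of that idea — class names, fields, and the ranking rule are illustrative, not YCNbot’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SharedPrompt:
    """A prompt a lawyer has shared, with community feedback attached."""
    text: str
    upvotes: int = 0
    downvotes: int = 0
    comments: list = field(default_factory=list)

def top_prompts(prompts, n=3):
    # Rank by net community score so the most useful prompts surface first.
    return sorted(prompts, key=lambda p: p.upvotes - p.downvotes, reverse=True)[:n]

library = [
    SharedPrompt("Summarise this clause in plain English", upvotes=5, downvotes=1),
    SharedPrompt("Draft a warranty clause from scratch", upvotes=1, downvotes=4),
]
best = top_prompts(library, n=1)
print(best[0].text)  # → Summarise this clause in plain English
```

Ranking by net score is one obvious choice; a real system might also weight discussion activity or recency.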

Greg Lambert 6:38
Sean, I want to elaborate on something that you said — you had blocked ChatGPT early on. Do you mean for your users to use it, or for OpenAI to come in and kind of scrape your materials, or —

Sean Curran 6:55
Yeah. So Olly and I were actually away at an event in November, I think, or December. And it was Olly who actually talked to me about this and said, have you seen this new technology called ChatGPT? And he talked about its sophistication. I thought, I’ve not heard of it, but clearly, if it’s got the ability to ask and answer questions, then it’s going to be phenomenal. But the concern is obviously going to be around making sure that you don’t improve the model by giving it data. It’s one thing asking the model to provide a change of control clause — you’re just getting a version of that from Wikipedia or lawteacher.net — and another thing providing it a change of control clause which is proprietary, valuable intellectual property for an organization, and asking it to provide some analysis on that. We recognized that that may be — not saying that it is, but it may be — a certain strategy for companies who are using data to train algorithms, to improve the quality of the algorithms. So I think it was January: we looked at the terms of ChatGPT, and we knew that all of the lawyers who might be dabbling with it, testing and trialing it, were agreeing to the consumer terms, because we didn’t have an enterprise relationship with OpenAI. And those consumer terms were very explicit in saying that your content will be used to improve our services. And we thought, no, that’s not acceptable for us. We blocked ChatGPT in January, February. And Sam and his team, in about two or three weeks, built a really thin chatbot layer that sat on top of the enterprise APIs of OpenAI. So we became a B2B customer of theirs. It sat on top of the enterprise APIs, which meant that we were not subject to the same consumer terms; we were subject to enhanced enterprise terms, so they couldn’t take our data to improve and develop their services. And we then looked to roll that out in a safe and secure way.
But we did so under the banner of experimentation and education — it wasn’t to be used in work product. We’re very concerned about hallucination. We’re very concerned about copyright. We’re concerned about an outlier risk — the potential illegality of the model having scraped the web, and the question marks around that. But we did want to allow ourselves to understand the sophistication of this technology. And given the hype that existed in the market, you couldn’t not do anything, right? You couldn’t just hold and wait; you had to give people something, but you had to do it in a safe and secure way. And that was effectively the strategy. So I think in March we rolled out an enterprise version of our own ChatGPT, which we’d developed internally, and we had to build logging and auditing capabilities on that too, so we can track how it’s been used.
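The "thin chatbot layer" Sean describes — a wrapper over the enterprise API that records every interaction for auditing — can be sketched as below. The class names and the stub backend are illustrative; a real deployment would call the vendor’s enterprise endpoint and persist the log:

```python
import datetime

class AuditLog:
    """In-memory audit trail; a real deployment would persist entries."""
    def __init__(self):
        self.entries = []

    def record(self, user, prompt, response):
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "response": response,
        })

class ChatbotLayer:
    """Thin layer over an enterprise chat API.

    `backend` is any callable prompt -> response; in production it would
    wrap the vendor's enterprise endpoint, whose terms prevent customer
    data being used to train the model.
    """
    def __init__(self, backend, log):
        self.backend = backend
        self.log = log

    def ask(self, user, prompt):
        response = self.backend(prompt)
        self.log.record(user, prompt, response)  # every use is tracked
        return response

log = AuditLog()
bot = ChatbotLayer(lambda p: f"[model reply to: {p}]", log)
answer = bot.ask("jsmith", "Summarise this clause")
print(answer)
print(len(log.entries))  # → 1
```

The point of the design is that the layer, not the lawyer, decides which backend and which terms apply — swapping the consumer service for the enterprise one is a one-line change.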

Greg Lambert 9:34
So Sam, I noticed — and I love — the fact that your title is AI Manager. And I’m assuming that’s a long-term title that you’ve had for multiple years, right?

Sam Lansley 9:47
Yeah, for 25 years. Just to be really clear on the AI Manager title — it does involve people; it’s not me just managing an artificial intelligence. Obviously, the title is a reflection of the unique demand that was put on our firm, and every firm, in the last three to six months. Prior to this, I was a software engineer inside the engineering team. We did focus heavily on AI as well — we actually won an award at CogX with our platform, which was a review and labeling platform for reviews that we would do internally. But we decided in March that it was probably a good thing to create a whole separate team which could really focus on this generative AI and how it can be leveraged inside the firm. And the decision there was made by the firm that I’d be the right person to lead that team. Now we’re looking to expand the team more and more — we’ve looked into hiring, you know, MLOps people, machine learning engineers, software engineers — and it has already produced a tremendous amount of value for the firm.

Greg Lambert 10:48
I know that there’s a big push for prompt engineers. Are you guys looking to hire someone specifically? Or how are you training people to kind of manufacture those prompts in a good way?

Sam Lansley 11:05
At the moment, the community feature we added into YCNbot has been tremendously helpful — that’s allowed lawyers to kind of come up with prompts themselves and share useful ones. I think we probably will look into prompt engineering in the future; you can kind of see the quality difference between a really sophisticated prompt and a really poor prompt. So I think it’s something we will definitely explore. Someone does maybe need to be dedicated to that, to get the best out of the models. It’s going to be a really interesting field. It’s crazy — no one would even have thought the prompt engineer role would exist, you know, six months ago, and all of a sudden now I think nearly everybody’s going to be hiring prompt engineers in the next couple of years.

Sean Curran 11:39
We’ve been working with this now for the last eight months, on an experimental and educational basis. But part of that experimentation and education is working with our clients and looking at whether there are ways for us to use this in certain instances. Six months ago we were B2B customers of OpenAI; now we are B2B customers of Microsoft — we’ve got access to the GPT models in Europe, in Amsterdam actually, and it’s just another Cognitive Services API, like the ones we use regularly for the work that we do. So we’ve been testing it a little bit more and engaging with clients on that. And we have been sampling certain types of use cases, particularly around the search for relevance in litigation and discovery. And what we’ve found already is that the way lawyers may instruct a human to look for relevance is completely different to the way that you architect and structure prompts to search for relevance. There’s a kind of computational law aspect — a computer science aspect and a legal aspect being sandwiched together. I’ve been involved in a couple of these, where I’ve really enjoyed understanding the facts of the case, the point at issue, the particular area that the legal team would like to hone in on in that set of data — and then the way to ask the question to get the right outcome and the right output from the models. Olly and I were talking about this yesterday: what that role is going to be — the next prompt engineer, whether it’s something like a legal intelligence engineer — but there’s certainly a gap between the law and the computational aspect of these models that needs to be filled with some form of skill set that’s going to need to evolve.

Marlene Gebauer 13:30
Well, Sean, that raises a really good point. You know, Olly, I know you’ve been working with Michael Bommarito — hi, Michael — and Dan Katz at 273 Ventures on gen AI resources like the YCNbot, which we’re going to dive into. But first, what sparked your interest in exploring gen AI? As Sean mentioned, there’s sort of this gap that needs to be filled between that and the law. What got you interested in delving into that area?

Oliver Bethell 14:03
Sure — and shout-out to Mike and Dan, because they’ve been great to work with so far on this. As Sean said, we were having one of our particularly in-depth one-to-one meetings one evening, and we had quite a long-standing strategy around the application of artificial intelligence that Sam was spearheading, mainly around extractive AI — similar to technologies such as Kira — which was going well and proceeding the way we’d expected. And being an avid technologist, reading about the capabilities of ChatGPT obviously set off many light bulbs in my head. I remember the conversation with Sean: imagine, instead of being able to extract a clause, you could say, write me a clause that sounds like this, that is buyer-friendly, that is competitive, that is off-market, however you choose to describe it. And we had this theoretical conversation back and forth, which then led to Sean and Sam and several members of the team starting to investigate what was possible. And really since then it’s been Sean and Sam spearheading what we’ve done around the open-source projects and what we’re doing latterly. I think what’s been most interesting: initially we were obviously looking at the generative use cases, but what’s impressed me is the reasoning potential. We’re going to talk quite a bit about hallucination today, and it’s obviously a concern for the legal industry. But there is so much capability there today around the ability to reason, and I think that’s where we’re starting to have some interesting experiments at this stage. We’re just uncovering and unearthing what the potential of this technology could be.

Marlene Gebauer 15:47
When you say reason — can you expound a little bit more on that?

Oliver Bethell 15:51
Yeah, absolutely — so, to slightly incorrectly answer your question: when we think about generative, we’re asking it to create new content. Write me a clause, or prepare a citation. It’s generating almost new content, which carries with it all of the risks of copyright and hallucination, which we’ll dive into. But if you were to ask a question such as, does this batch of emails contain any concerning language, or any derogatory language — that is more of a reasoning exercise. You’re not asking it to generate anything new; you’re asking it to carry out a task that was very difficult to describe programmatically using previous-generation technologies, because you would have to train the model on what you meant by all of those different things. But straight out of the box, we’re seeing this huge capability. And clearly, lots of legal tech vendors are seeing the same thing — hence all the announcements by the big platform providers saying they’re bringing these capabilities to market. Your previous podcast episodes talked about a lot of technology being launched by press release at the moment. But clearly the capability is there; it’s just about being able to harness it, I think.
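Oliver’s email example — asking the model a yes/no question about each document rather than asking it to generate text — follows a simple screening pattern. In this sketch the `classify` function is a trivial keyword stand-in for the real model call; only the prompt template reflects the reasoning pattern described:

```python
PROMPT = (
    "Does the following email contain any concerning or derogatory "
    "language? Answer YES or NO.\n\nEmail:\n{email}"
)

def classify(prompt: str) -> str:
    # Stand-in for the model call: a real deployment would send `prompt`
    # to the enterprise LLM endpoint and parse its YES/NO answer.
    flagged = ("cover this up", "idiot", "shred")
    body = prompt.lower()
    return "YES" if any(term in body for term in flagged) else "NO"

def screen_emails(emails):
    # Reasoning, not generation: each email gets a yes/no judgement,
    # and no new text is produced that could be hallucinated.
    return [e for e in emails if classify(PROMPT.format(email=e)) == "YES"]

batch = [
    "Please find the signed agreement attached.",
    "Delete the files before the audit, we need to cover this up.",
]
flagged_emails = screen_emails(batch)
print(flagged_emails)
```

Because the output is a judgement about existing text rather than new prose, the hallucination surface is much smaller — which is the point Oliver is making.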

Greg Lambert 17:08
That’s a very polite way to put it.

Oliver Bethell 17:13
Innovation by press release, I’ve called it.

Greg Lambert 17:17
Sean, we kind of touched on this, but I want to pick your brain a little bit more on some of the due diligence that you’ve done with generative AI. It sounds, just from the brief conversation that we’ve had right now, that you have both a passion for the technology, but also a passion for understanding how the legal process works as well. So what have you seen as some of the biggest obstacles preventing generative AI tools from accurately conducting legal work and processes? And how did coming to understand those obstacles help you in setting up a use case for the YCNbot there at Travers Smith?

Sean Curran 18:07
Yeah. So, to follow on from what Olly has said: we have a slide — we know we can’t show it on this — which is a bit of a spectrum between generative use cases and extractive use cases. On the generative side, we’ve got issues with hallucination and copyright. The more creative the model, the more risk there is of hallucination; the more verbatim the output, the higher the risk of copyright, because it’s taken verbatim, exactly, from a website that hadn’t permitted it — that said, don’t scrape me. So on that generative side, for us right now, it’s high risk, low value — we’re not too interested. The obstacle is figuring out whether we actually want to be providing legal advice which is a derivative of publicly available data sources, like lawteacher.net and all these websites. That needs to be figured out. We’re not going to come out and say we’re using generative AI to draft contracts, obviously. But on the extractive side, we haven’t really come across any obstacles, and to Olly’s point, that is where we see the most value immediately and over the shortest term. And if we’re to talk about the legality of the models — the sort of big problem that’s occurring — using the reasoning engine of the model is the furthest away from risk. Using generative, you’re creating content with copyright issues. You get it to draft a contract, we give that to a client, and we try to grant them right and title to the copyright of that output. The client’s then got contaminated content — coming from us, based on OpenAI, based on a website that was scraped — and there’s a kind of dodgy chain of copyright. That’s a bit of an issue. But on the extractive side, and the reasoning side —
On the reasoning side, yes, the models wouldn’t be as sophisticated as they are today if they didn’t ingest the internet, and in doing so they have potentially scraped websites they maybe weren’t supposed to, taken content they’re not supposed to. But we’re treating it like asking a human a question: it’s really smart, we’re giving it most of the data, and we’re just asking it to compute on a very specific point. So we don’t really see any obstacles on that side, and that’s the area that we’re going to focus on.

Greg Lambert 20:33
The discussion around generative AI, and just the hype — it reminds me of the interview that we had a few weeks ago with Tony Thai from HyperDraft: that there are certain tools that do very well at certain tasks, and right now there’s this kind of idea that generative AI can do it all. Have you been able to leverage the hype and excitement around generative AI to point people back to the tools that you’ve actually had, maybe for years, and say, you know, okay, this has AI in it too — why don’t you use this, because it’s much better at, you know, drafting documents or reviewing certain things? Has that helped?

Sean Curran 21:26
Yeah. This is why, you know, eight months ago, when law firms were coming out and saying that overnight they had decided — as far as it seemed — to pivot their entire document automation and document production strategy to an LLM that was trained on the internet, it seemed to me a bit counterintuitive. Because, as you say, these businesses — our business, and many law firms — have got clause banks, we’ve got document automation investment, we’ve got precedent banks, we’ve got content management systems with tags that are rich and useful to find documents or clauses that are relevant. You’ve got tons of information that almost definitely surpasses what the model can produce. So those are the kind of use cases where we didn’t jump in; actually, we think that we should continue to use the technologies that we have, because there isn’t a transformational difference there to jump on. On document review — we’ve been working on AI for the last four years, and it’s been around the entity extraction use case — Kira, Luminance: is it a restrictive covenant, is it a change of control clause. This natural language ability of the model, where you can explain much more context — give it the background of a case, tailor it to the contract it’s reviewing, stitch together amendments and variations across a large set, whatever it is — you can give the model that context, and it can then provide a natural language output. It’s not a hundred percent; sometimes it’s wrong — sometimes humans get it wrong. But the workflow that you get back from the model, in the form of an Excel spreadsheet with documents down the left and questions along the top — it would be almost impossible to discern whether that Excel spreadsheet was created by an AI bot or a human. Some answers will be wrong, some will be right, but it’s not a million miles away from Kira. I think that’s revolutionary.
I think that’s game-changing. I think it’s new. Some people will say, well, we’ve had semantic search for a while — true, the ability to cross-reference based on meaning and context, not text. But we’ve never really had contextual natural language input with contextual natural language output as an API. That is, in my view, new, and to a degree it’s revolutionary. I was on a roundtable with the Financial Times in March, with lots of other firms, and there was a question as to whether this was revolutionary or evolutionary — was it marginal efficiency gain or significant efficiency gain. My view is it’s certainly revolutionary, but it’s not going to solve all problems. But actually, it might solve a really big pain point for legal, which is that we are in the industry of assessing risk, and most of the time we don’t have enough time to read all of the information to assess the risk appropriately. That is now what’s going to change, which I think is going to be a game changer for the legal services industry.
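The review workflow Sean describes — documents down the left, questions along the top, one model answer per cell — can be sketched as a loop over documents and questions. Here `ask_model` is a keyword stand-in for the real LLM call, which would receive the document text plus the question and return a natural-language answer:

```python
def ask_model(document: str, question: str) -> str:
    # Keyword stand-in for the real LLM call; only the grid-building
    # structure around it reflects the workflow described.
    return "yes" if "change of control" in document.lower() else "no"

def review_grid(documents, questions):
    # Documents down the left, questions along the top: one answer per cell,
    # like the Excel spreadsheet a human review team would produce.
    return {
        name: {q: ask_model(text, q) for q in questions}
        for name, text in documents.items()
    }

docs = {
    "msa.txt": "This agreement terminates on a change of control.",
    "nda.txt": "Each party shall keep the other's information confidential.",
}
questions = ["Does it contain a change of control clause?"]
grid = review_grid(docs, questions)
print(grid["msa.txt"][questions[0]])  # → yes
```

A real pipeline would add chunking for long documents and a human-verification pass over the cells, since, as Sean notes, some answers will be wrong.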

Marlene Gebauer 24:25
I wanted to go back to your point about contaminated data. In that context, are you seeing any challenges in terms of clients saying, you know, I don’t want my data used in drafting things — I don’t want others, you know, competitors, to gain from that? Is that something that you’re seeing, and how are you addressing that?

Sean Curran 24:56
So we’re not seeing that, because we are not, at this point, offering any generative use cases of the model, and we are very clear with our lawyers that they shouldn’t be offering generative use cases of the model, in terms of copyright risk and hallucination risk. You know, we may come on to talk a little bit later about “Hallucination is the Last Thing You Need,” the paper that we wrote. Just because a case exists doesn’t necessarily mean that it’s been referenced or quoted correctly, and that the model hasn’t made up some fact that takes the case in a different direction or causes an issue. So on the generative side, we’re just not there yet in terms of having issues with our clients, because we’re not promoting that side. When we get a big job — a kind of analyze-lots-of-stuff, ask-and-answer-lots-of-questions job to get a view of risk — that’s where we are starting to say to the legal teams: look, your clients may be dabbling with ChatGPT, or an enterprise chatbot application; why don’t we start the conversation now to see whether they would like us to do a parallel review with AI? You do it on a human basis, we’ll have an AI review alongside, and we’ll just let them see what the output is. We can cross-reference that, see what we think, and just take the clients on that journey. A lot of our clients — we had breakfast with a lot of GCs — are very interested in AI. They want to understand how it can be applied to the businesses that they work in, but also their legal departments. And we’re trying to have that conversation early on, because it’s not going to go away. You know, I’ve got certain views about blockchain and smart contracts, and I think this is perhaps a lot more interesting. I don’t believe this innovation is going to go away.

Marlene Gebauer 26:47
Sam, I’m going to jump to you. There was an article in May which talked about how you were approaching YCNbot’s functionality, and you proposed multi-model as one solution. Can you walk us through how that approach would work and what impact it might have?

Sam Lansley 27:07
Of course. So one of the things we’re seeing is everybody becoming extremely dependent on OpenAI, or Azure OpenAI. They’re dependent on this one model that’s been created by this one company. For us, we thought that, first of all, was a massive risk — we would be completely dependent on one company for all the innovation in this space. Given how important we think generative AI could be to the firm, we didn’t think that would be the right approach. The risks being, obviously, that this company could get quite restrictive in their usage — they might start, you know, adding terms that we can’t comply with; they may start limiting access to it — which obviously would be a disaster if we’d built a large strategy on top of that. It was also about innovation: the development of nearly any technology has benefited from a wide array of companies and people working together to develop a product. So we looked at this multi-model approach and said, okay, for YCNbot, this chatbot interface is going to be crucial for the way we deploy this technology internally. And we looked at it and went, okay, we can’t just be dependent on OpenAI — we may want to integrate with other vendors like Google and use the Bard API; we may want to develop our own model and interact with it completely from scratch. So we said, okay, we need to keep using this technology, because they are the front-runners at the moment, but we need to have an option. What we did with YCNbot was make it extremely easy to plug in a brand new model, and you can just use it as if you were using, you know, OpenAI or Bard or any other model.
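The multi-model design Sam describes comes down to programming against an abstract provider interface rather than one vendor’s client. A minimal sketch of that idea — class and function names are illustrative, not YCNbot’s actual code:

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Abstract backend: OpenAI, Azure OpenAI, Bard, or an in-house model."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(ChatProvider):
    """Trivial stand-in for a real vendor client."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

# Supporting a new vendor means adding one entry to this registry;
# the rest of the chatbot never changes.
PROVIDERS = {"echo": EchoProvider}

def get_provider(name: str) -> ChatProvider:
    return PROVIDERS[name]()

bot = get_provider("echo")
reply = bot.complete("hello")
print(reply)  # → echo: hello
```

The hedge Sam describes falls out of the structure: if one vendor’s terms or access change, only that provider class is replaced.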

Marlene Gebauer 28:34
So Sean, in preparing for this interview, you brought up the idea of multi-level tokenization with AI and common law. Can you tell the listeners more about this, so that they get the benefit of our conversation from earlier?

Greg Lambert 28:49
So they can be as excited about it as we were when we heard it the first time?

Shawn Curren 28:54
Absolutely, we would encourage anybody who hasn’t already to look at to review. Elimination is all you need paper. And to summarize, this paper basically says that there’s subtle, non obvious errors in generated common law and case law from these models provide a significant create a significant risk for an industry, which is that those subtle non obvious errors may be fed into motions to the court, the judge may accept certain arguments, they may miss the fact that previous common law has been slightly misquoted because it sort of had the same semantic meaning or there was a subtle word difference because the model just generates an actual slightly word that may get fed into a new judgment. And we potentially contaminate case law. They continue to do that. So regulation and legislation, lower risk because that’s a very centralized process. And the chance of that happening is low. But when you’ve got a decentralized process, like common law, where you’ve got a circuit court in the foothills of Wisconsin with one judge and one local solicitor. The probability and perhaps uninformed on AI, the probability of them, double checking the case it exists, teaching the facts of the case using it to support the argument. And then that being wrong, is much higher. And that’s a risk that we have violated. And it was it was really concerning. So in your paper, we talked to how you could perhaps solve for that. And so when the model is providing outputs, on common law, and it sort of says the facts of the case of this, this is my the judge says the facts of the case of this, this is my opinion. And I’m making this decision based on inverted commas, the the decision that has come before it 1000 times, or 2000 times or 3000 times, it is possible that the model swaps out what’s when it’s when it’s rating, and say that quote, it’s possible that it puts in a different quote, we had examples in the paper, where the cause of action definition was switched. 
It was switched with a more technical legal version in one case, and with a more easy-to-understand version from a different set of common law in another. It read as if it were the same, as in, this is the cause of action definition, but actually it was a different cause of action. So we started talking about how we might solve for that, and one idea is a tokenization approach. Tokenizing is breaking up words: here’s an empty sequence, here’s the next word, and so on. When you get to a point where you go, this is a judge talking, and this is a quote that exists a million times in common law, you know, from Carbolic Smoke Ball Company, or Donoghue versus Stevenson, we’re not going to probabilistically guess the next word. For those quotes we’re going to take, verbatim, every single word. So when you’re building your models and you’re tokenizing, the idea is that when you tokenize that quote, you take the whole thing; you don’t break it up. The model isn’t guessing the next word inside it; it guesses the next word only up to the point at which it reaches a known quote, because we cannot manipulate in any way the spoken word of a judge. It’s just one idea to try to resolve this. The way that Lexis are dealing with it, which is really sensible, is to take the quote, search for it, make sure it exists, and rank it, and in law that’s important, so that’s smart. But our idea is that we could get this right at model creation, in the model architecture phase, rather than always checking after the fact whether the model is okay: make the model better at source. So it’s a theoretical approach to trying to resolve the issues of hallucination within the industry.
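As a rough illustration of the verbatim-quote idea Sean describes, a greedy tokenizer can treat any known judicial quote as a single atomic token that is never split into subwords. This is an editor’s sketch, not the paper’s implementation; the quote list and the word-level fallback are stand-ins:

```python
# Known judicial quotes are kept whole; everything else falls back to
# plain word splitting. A real system would use a subword tokenizer and
# a large quote database instead of this toy list.
KNOWN_QUOTES = {
    "the more onerous and unusual the term, "
    "the more notice has to be given to it",
}

def tokenize(text, known_quotes=KNOWN_QUOTES):
    """Greedy left-to-right pass: a known quote becomes ONE atomic token."""
    tokens, i = [], 0
    lowered = text.lower()
    while i < len(text):
        for quote in known_quotes:
            if lowered.startswith(quote, i):
                tokens.append(text[i:i + len(quote)])  # atomic, never split
                i += len(quote)
                break
        else:
            j = lowered.find(" ", i)
            j = len(text) if j == -1 else j
            if j > i:
                tokens.append(text[i:j])  # ordinary word token
            i = j + 1
    return tokens

sentence = ("Denning held that the more onerous and unusual the term, "
            "the more notice has to be given to it")
toks = tokenize(sentence)
```

The point of the sketch is the control flow: generation stays probabilistic between quotes, but a recognized quote is consumed and emitted as one indivisible unit.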

Oliver Bethell 32:53
Sean, in your research, did you come across examples where case law citations had been incorrectly summarized by judges, and therefore, if a model was to use a probabilistic outcome, it would just potentially regurgitate mistakes?

Marlene Gebauer 33:10
would regurgitate human error?

Sean Curran 33:12
Yeah, yeah. So there was one instance where a court in Belfast, in Northern Ireland, quoted Denning talking about the big red hand: the more onerous and unusual the term, the more notice has to be given to it. And the judge, or the clerk of the court, whoever writes up the judgment, misspelled the word “red”, and instead the word “read” was in the judgment. We were doing a comparison between what the actual court said and the model’s output, and we remember looking at that one and going, oh, interesting, the model has put in the word “red” instead. At first we thought the model had got it wrong, but it was the other way around: the model corrected it by putting in “red”, because probabilistically “red” was more relevant than “read”, which is fascinating. But that’s the exception to the rule; we don’t want people to think these models are rubbish at this. Most of the time, and you should look at the paper, it was the model that was off. Basically, what we did was take a judgment, remove a bit of text, and ask the model to fill it in. What the model suggested was semantically the same, but it wasn’t word for word. Which was amazing in its own way: the model understood the case law, understood the context, understood what it was meant to do. It explained the quote, so instead of “the more onerous and unusual the term, the more notice has to be given to it”, it may have said something like, the term has to be obvious. It gets the point, understands the point, but it’s not quoting it perfectly. That might be close enough semantically that it doesn’t really matter, or it may take the case off in a different direction, and the outcome may depend on that, and then that potentially causes contamination. So that’s the genesis of the paper.
And so, yeah, that’s how we articulated it to our lawyers. We talked them through it, showed them the cause of action definition swapped out, and said, would you notice that in practice? And they go, oh my god, yeah. Cool, we’re not going to touch this for legal research yet.
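The fill-in-the-blank experiment Sean describes can be approximated with a simple classifier that distinguishes a verbatim completion from a semantically close paraphrase. The difflib similarity measure and the threshold here are illustrative choices by the editor, not the paper’s methodology:

```python
import difflib

def verbatim_or_paraphrase(original_quote, completion, threshold=0.7):
    """Classify a model's fill-in against the true quote:
    exact match -> 'verbatim'; high surface similarity -> 'paraphrase';
    otherwise -> 'unrelated'."""
    if completion.strip() == original_quote.strip():
        return "verbatim"
    ratio = difflib.SequenceMatcher(
        None, original_quote.lower(), completion.lower()).ratio()
    return "paraphrase" if ratio >= threshold else "unrelated"

# The Denning formulation discussed above, used as the ground-truth quote.
QUOTE = ("the more onerous and unusual the term, "
         "the more notice has to be given to it")
```

A “paraphrase” result is exactly the dangerous case from the paper: semantically faithful, but not the judge’s words.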

Greg Lambert 35:17
You know, you probably made every legal geek lean over and turn up the volume on this last part, because it’s super, super interesting. I think it answers a lot of questions, and kind of settles a lot of doubt that people have on how we are going to solve the hallucination issue. How are we going to get past this eventually? And we will, but I think this is the clearest answer I’ve heard on how we eventually get past hallucinations. So thank you for that. And I did want to say we’re going to put a link to the paper, which is called “Hallucination is the Last Thing You Need”, on the site. Great, great read. I usually don’t read these types of articles, but I went through this one. I love how, essentially, you found that only about one in every twenty results was actually correct in quoting the sources. So if you’re a legal geek like I am, this is definitely a must-read. Thanks to the three of you for writing it; it was great. Olly, I want to turn back to you. We’ve mentioned the YCN bot a number of times, and I do want to clarify, and tell me if I’m wrong, that YCN stands for “your company name”, right? So would you mind giving us a high-level overview of what the YCN bot does? And more importantly, talk about why you decided there at Travers Smith, having dedicated the resources to building and developing this, to turn around and make it an open source project, so that basically anyone can go out and use the sweat of your labor and have access to it. What was the reasoning behind that?

Oliver Bethell 37:36
Yeah, so to start off at the top: YCN bot, “your company name” bot, is the product that Sean described earlier in the episode. It’s a chatbot interface that allows people to use and benefit from the capabilities of generative AI, specifically GPT models, through a safe enterprise environment. It was something we thought was necessary in order to give people the ability to use, experiment with, and learn from this technology, but with some safety around it. We provided that safety through the enterprise terms, initially with OpenAI and then Microsoft, but also with some additional terms of use that guide people. Then we were able to add some new features, such as person-name entity recognition, which is the ability to spot if a prompt contains an individual’s name, or if the results might contain somebody’s name. We’ve subsequently added an override for that, but these are governance and safety features that have allowed us to use this technology in as safe a way as possible. That gave us the ability to have really fantastic conversations internally, so people could experiment and we could have these great discussions. But we also recognized there was real potential for our clients and the community to use this as well. And we’ve got a long-standing history, which we’re really proud of, of open sourcing technologies. We did that with other products, such as MatMail, and also our original document labeling platform, Etatonna, which Sam was involved in, actually pioneered and wrote. We open sourced that and made it available. It’s been a fantastic way for us to demonstrate our capability and have great conversations with people in the community, but it’s also been really fantastic for attracting and retaining talent, so we see it as part of our people proposition. So YCN bot has been great, and we’ve obviously promoted it.
We’ve had some fantastic discussions with clients and also with other law firms, and we believe that several businesses have taken it and implemented it. We’ve done it under the least restrictive license, MIT, which is great, because we believe in that. It does mean, however, that we can’t see exactly who has deployed it. But, as I say, we understand several businesses have, and if we can play some small part in businesses being able to safely use this technology, then we’re really proud of that.
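The person-name guard Oliver mentions can be sketched as a simple prompt screen with an override, as described. The name list is a toy stand-in; the real YCN bot presumably uses proper named-entity recognition rather than a denylist:

```python
# Toy denylist; a production system would run an NER model over the prompt.
KNOWN_NAMES = {"jane doe", "john smith"}

def screen_prompt(prompt, override=False):
    """Block prompts containing a known person's name unless overridden,
    mirroring the governance feature described for YCN bot."""
    found = sorted(n for n in KNOWN_NAMES if n in prompt.lower())
    return {"allowed": override or not found, "names": found}
```

The override flag mirrors the feature Oliver says was added later, so a user can consciously proceed after the warning.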

Marlene Gebauer 39:55
So what was the collaboration like with 273 Ventures on the YCN bot?

Sam Lansley 40:01
The collaboration with Dan and Mike? They were a big part of us deciding to make the platform friendly to multiple different model types. Before we actually built the interface, we decided quite early on that it would be worth investing in an interface that could be user friendly and allow us to consume enterprise APIs. But what was really useful in talking to Dan and Mike is that they explained how useful it would be to open up the different GPT models, and based on that, we actually added those features. They also had a little look over the code and gave us some advice on it. But yeah, that’s probably the bulk of the collaboration between us and 273 Ventures.

Sean Curran 40:41
The one thing I’d just add to that is: we’ve now been open sourcing for four years. We open source enterprise technology that, you know, we all believe we could potentially sell. But what we’re finding is that a lot of law firms who want to take that technology don’t have the sophistication of an engineering team who could implement it. So we’re trying to create partners in the industry who have a bit of engineering capability, to be able to support taking that open source code and implementing it within law firms. Obviously we don’t get a benefit from that commercially, but it means that people can benefit from the product.

Greg Lambert 41:14
So I’ve been going around my firm giving these talks on generative AI uses. One of the common things that I both address and field questions on is the safety and security of the generative AI tools. And Ollie, you mentioned earlier the difference in licensing between a business-to-business license and the consumer license most of us use when we jump on and use these tools. So when it comes to the YCN bot itself, what are you doing to make it safe for users to go out and experiment with the generative AI tools that are out there?

Oliver Bethell 42:00
I think there are a couple of bits to touch on here. One is around the multi-model strategy component. YCN bot has been built so that it can connect by API to any model. And obviously, when you contract on enterprise terms, you’re specifying that you’re not prepared to allow your prompt data to be used to improve and train the model. The additional benefit is that everything that is prompted and then returned is encrypted, under all of the expectations that we would have. That then gets extended through to the Microsoft model as well, and those enterprise terms will be what we would expect when we integrate with any model. There’s an interesting point worth raising here, which is that it will be interesting to see how things unfold as these lawsuits get brought against OpenAI. Is there a possibility that in future we may see restrictions applied to some of these models, because they are generating content that is contaminated? And if that is the case, might products that are built on top of those models need to be restricted in some way? We’re obviously in the very early stages of experimenting with this technology, and, maybe getting on to a bit of a crystal ball question here, is it possible that even though the interaction with these models is secure and under the right terms, the training data that has been used could be brought into question? That might mean that the use of those models needs to be thought about more deeply. So having the ability to switch between different models is, I think, going to be necessary in order to mitigate that risk. We could even see a point where the most conservative of clients say: we only want you to use a very restricted model, where you can confirm that all of the training data is legitimate,
and we want you to run it on your own infrastructure. That is a potential extrapolation that we could see in the future, and it’s one that we’re thinking about quite deeply. We’ve obviously taken all of the steps you would expect of anyone working with enterprise software today, and that’s how we’ve engaged with Microsoft. But we can see that in the future there might be a need to think more deeply about the quality of the training data of these underlying models.
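The model-agnostic design Oliver argues for amounts to routing prompts through one interface with swappable providers behind it. A minimal sketch, with placeholder provider classes rather than YCN bot’s actual code:

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """One interface the chat UI talks to, whatever the provider."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIModel(ChatModel):
    # Stand-in for an enterprise API call; echoes for demonstration.
    def complete(self, prompt):
        return f"[azure] {prompt}"

class LocalRestrictedModel(ChatModel):
    # Stand-in for a self-hosted model with vetted training data.
    def complete(self, prompt):
        return f"[local] {prompt}"

def route(prompt, sensitivity, models):
    """Send sensitive prompts to the restricted model, everything else
    to the default provider; providers swap without touching the UI."""
    model = models["restricted"] if sensitivity == "high" else models["default"]
    return model.complete(prompt)

registry = {"default": AzureOpenAIModel(),
            "restricted": LocalRestrictedModel()}
```

The registry is the hedge: if a model’s training data is later called into question, only the registry entry changes, not the product built on top.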

Marlene Gebauer 44:16
And I’m wondering, when you’re talking about restrictive versus less restrictive models, is there also a pricing component to that, in terms of what firms will be charged to use them?

Oliver Bethell 44:30
Yeah, I don’t know enough yet about the cost of running these different models. But you had a guest on the HyperDraft podcast a couple of weeks ago who made an interesting point, I think talking about smaller models. One of the things we’re thinking about at the moment is: would we potentially have smaller, more restricted models, finely tuned to perform very narrow tasks, and perhaps they’re the ones that get used for the most sensitive of use cases? They could potentially be quite expensive to train, so there might be a cost element there. Anecdotally, when it comes to cost, what I would say is that the experiments we’ve run so far have been pretty low cost; the Azure technology that we’re using for the build is pretty minimal. So we know that at the moment the cost isn’t that great. But I can imagine it potentially increasing quite significantly as we start to use more GPU resource.

Shawn Curren 45:26
And yeah, I’d just add: if we allocate some monetary value to every word that exists on the internet, we’re talking like 0.00000001 cents, and if there is a mechanism for that to be returned to the copyright holder, the person who wrote the blog, the person who wrote the story, the person who wrote the academic research, whatever it is, then clearly the cost of inference on these models will change. Right now it’s fairly low; it’s next to nothing, it’s very low for us per month. I don’t know if that’s anticipated to last, but it’s very low. You can see how, if we crystal ball gaze a bit, if the authors who are seeing their work taken make a copyright issue of this, and the big tech vendors want to retain a lot of that value because it’s in the model, there might have to be some form of negotiation and remuneration back to the data owner. That might be in the form of pennies, but it may still be something that has to happen, similar to the music industry, where you now get royalties that are in the pennies and pounds. That kind of capability could evolve. Yes, these models take the data and aggregate it in the output, and it may be difficult to link the output exactly back to the training data, but it’s not impossible, and there might be a way to do it on some approximate basis. So if you think about the fact that we might have to remunerate the information creator, and if we don’t do that, I don’t know what the future of these models is, because people aren’t going to keep creating information on the internet for somebody else to benefit from. So I think the cost of inference has got to go up; it’s got to trend up quickly. We’re getting a really cheap, free version of it. I don’t want to use the word Napster, but, you know, it’s got to trend up.
Or, as Olly said, the models have just got to become more narrow and focused. If you go onto Hugging Face, which is a publicly available data asset place, there are datasets under all sorts of licenses for training; thousands of them, with over a thousand packages of data that are MIT licensed. So, you know, the sensible thing for any business to do is take all of that, train one model, add your own data that you want to incorporate, refine it, fine-tune it, and see if that model is any good, see if it can do some tasks. Then you’re hedged, you’re protected. But there’s also lots of data under more restrictive licenses, and it’s going to be more difficult to use that for commercial purposes. If you use data under a copyleft license that says you’ve got to continue to open source anything that you do, that would effectively mean, for Travers, that all of your data needs to be open sourced. I don’t know how we deal with that. So it gets very complicated, and I do think the cost is going to go up.

Greg Lambert 48:10
Yeah, we’re almost in the equivalent of a zero interest rate environment, where things worked really well when money was super cheap that don’t work well now that the interest rate is at 6%. So yeah.

Sean Curran 48:27
I mean, one of the things I think is fascinating just happened yesterday. Not to pick on certain companies, but there’s a leading audio-visual vendor who has been in the mainstream news for having in their terms and conditions a clause that says any conversation that happens on their platform, a platform similar to this one, will be retained and can be used to improve and develop the services. That boilerplate clause has been around for ages, around five years, and nobody paid any attention to it, because people didn’t think it was plausible: what do you mean, data can improve models? We didn’t really get it. And now, as of yesterday, it’s hit the mainstream. So there are going to be lots of people focusing on that and going: oh, is my data valuable? Why do they retain it? Do I need to get some return and some remuneration from it, if I’m going to allow algorithms to be trained to provide some form of service that automates what we used to pay a lot of money for? I think that’s going to be really interesting: a kind of consumerization of data labeling, and of the importance of data for algorithms and AI.

Marlene Gebauer 49:31
So clearly, there’s a lot to think about when it comes to AI models. What advice would you give to lawyers or other legal professionals who are interested in experimenting with AI in their practice? You know, how can they leverage the technology, both responsibly and ethically?

Greg Lambert 49:53
That’s for everybody, all at once.

Oliver Bethell 49:58
I’ll pick up on the conversation here. If I was a lawyer wanting to understand how I could leverage this capability, I would be investing my time in learning how to prompt as effectively as possible. Am I going to want to develop this technology myself? Probably not; hopefully I’ve got a great team, like I do at Travers Smith, that I can lean on for that. But my input into the development of the products would be around the prompting. And one of the things that Sean, Sam and I were talking about just this week was that there is going to be significant IP generated in sophisticated prompts on their own. So actually, in some of the products coming to market over the next six to twelve months, a lot of the investment is going into building sophisticated prompts that will be abstracted behind user interface controls. As a lawyer, investing your energy into understanding how you can leverage the technology with more sophisticated prompts will probably be to the benefit of your organization.

Greg Lambert 51:00
You know, we’ve seen the big players in legal information, Westlaw, Lexis, Bloomberg, lots of others, jumping in on the generative AI game. But as developers, as users, what would you like to see them working on that would help all of you in your duties and advance the technology there at Travers Smith? Sam, you want to take a stab at that one?

Sam Lansley 51:31
Yeah, I think, obviously, with these vendors, the key, if they’re going to invest time in legal research, is to focus on dealing with the hallucination problem. I think that’s the big focus, and I understand they’re already doing that to a degree. But for us, we can’t really consume those legal research tools properly without a better understanding of how they’re going to deal with the hallucination problem. We’ve obviously come up with one exciting mechanism for dealing with it, and people can read about it in our paper. But at the moment it’s that inaccuracy point, and it can be really subtle. It’s not as simple as just doing a case law name search; it’s much, much more complicated than that, because sometimes the facts can have such a small difference, just a couple of words here or there which aren’t accurate. I think this part is really key, because the judicial process will have a role to play here as well. This really could be scary if it goes badly: if everybody starts relying on these AI tools to summarize legal information, and people use those summaries in a legal argument, and the judges actually accept that argument, it becomes part of common law. At that point we would literally be allowing AI to write our legal system for us, write the laws that we will have to comply with. So what I’d love to see from Westlaw and organizations like that is some real serious thought given to any feature they offer. I think they are doing that, which is great, but just whatever else they can do to really stop this hallucination problem. And I’d probably encourage them, because of the risks associated with it, if they actually can’t deal with the hallucination problem, to think twice about even putting the feature out there in the first place. Because we’re already seeing it.
And so far we’ve seen examples where judges have caught it, which is great. What frightens me a lot more is the situation where judges don’t catch it, and then we’re trying to rewrite the last 20 years of legal precedent.
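One concrete shape for the post-hoc check Sam alludes to, beyond a simple case-name search, is to require every quoted span in a model’s answer to appear verbatim in a trusted corpus. A toy sketch, with an in-memory list standing in for a real case-law database:

```python
import re

# Toy trusted corpus; in practice this would be a full-text case-law index.
TRUSTED_CORPUS = [
    "the more onerous and unusual the term, "
    "the more notice has to be given to it",
]

def flag_unverified_quotes(answer):
    """Return the quoted spans in `answer` that do NOT occur verbatim
    in the trusted corpus (case-insensitive), i.e. likely hallucinations
    or subtle paraphrases."""
    quotes = re.findall(r'"([^"]+)"', answer)
    docs = [d.lower() for d in TRUSTED_CORPUS]
    return [q for q in quotes
            if not any(q.lower() in d for d in docs)]
```

An exact-match requirement is deliberately strict: a “close enough” paraphrase, the failure mode discussed throughout this episode, gets flagged rather than waved through.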

Sean Curran 53:21
One of the things I’m struggling with, and you mentioned, Greg, that you were involved in a librarian conference: there’s a librarian at the William and Mary Law School who wrote a very interesting article. It was written in 2019, before LLMs became a thing, and the article, which you could perhaps share, is “Who Owns the Law?” I think that’s a really interesting question when it comes to Westlaw and Lexis: who owns the law? Does the legislator own the law? And if you think about what LLMs and generative AI are doing to many other industries, investors right now aren’t looking at the intermediaries who take data and then resell it. For example, under the hood of Travelocity and Expedia and Booking and all these companies is a data provider who supplies them a lot of information about flights and hotels and all that sort of stuff. The data provider’s market cap is around 2 billion, while these intermediaries are like 50 or 60 billion, because they own the customer interface at the margin, and the data provider doesn’t want to engage in that directly. So if you think about that model, investors are now seeing the data providers as where the value creation is, and the intermediaries are perhaps struggling a little bit. And if you think about how you apply that to our industry: if there’s an algorithm that can take a problem and compute it against accurate, up-to-date legislation, regulation and common law to provide an answer that is as good as, if not better than, certainly as good as, what we could do ourselves, who gets the commercial benefit of that? Who gets the gain? It’s effectively the provider, the person who owns the law. This is maybe 10, 20, 30 years in the future, but it’s something I think we have to figure out. And so we believe the law should be open; there’s an open justice movement in the UK.
And we believe that the law should be open and accessible to everybody. That means any business could take it, feed it into a model, and use it to compute outcomes. I think that has to be the future. I’d be very interested to see, 20 or 30 years from now, whether large tech organizations could effectively own the law, with these algorithms able to ask and answer legal questions and automate more pieces. I’m not sure how equitable that’s going to be.

Greg Lambert 55:50
Yeah, and that paper, and we will put a link to it, is by Leslie Street and David Hansen. Leslie is still at William and Mary, and I think David is still at Duke. Great, great paper, and, as you said, ahead of its time. And I can tell you that everyone loves the fact that you cited a law librarian’s paper in your research.

Marlene Gebauer 56:15
So, given all that we’ve been talking about, what’s the future vision for developing, testing, and applying the YCN bot to help transform legal work over time? What’s the focus?

Sam Lansley 56:29
So we’ve renamed it TS bot; YCN bot obviously became TS bot because we’re Travers Smith. What we’ve observed is that it’s producing a tremendous amount of value, but it’s limited in the sense that it cannot consume documents. So we’re working on a bunch of other tools that can be used to consume documents instead of chat. That’s the biggest feedback we’ve had internally, and it’s one of the things we’re prioritizing now: opening up features where you can, for example, upload a series of documents, ask a series of questions about those documents, and then get the answers back in an automated response. For example, one of the things we’re doing is: you can upload 50 NDAs and ask a series of questions about those NDAs, and then these GPT models will iterate through every single document, ask those questions of every document, and you can get the answers back in the user interface or as an Excel export. I think that’s where we see the future of this product. We do see TS bot having a role; it will still be there for quick questions, for things where you don’t require a full-on document upload. But if we look at some of the processes we run, it’s crucial that we have a way to upload a large number of documents and then get questions answered of each of them.
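The batch review flow Sam describes, N documents by M questions exported as a grid, is easy to sketch. `ask_model` here is an editor’s placeholder for whatever enterprise chat API is in use, not a real SDK call:

```python
import csv, io

def ask_model(question, document):
    # Placeholder: a real implementation would call a GPT-style API here
    # with the document text in the prompt or via retrieval.
    return f"answer to {question!r} for {document['name']}"

def review(documents, questions):
    """Ask every question of every document; one row per document."""
    rows = []
    for doc in documents:
        row = {"document": doc["name"]}
        for q in questions:
            row[q] = ask_model(q, doc)
        rows.append(row)
    return rows

def to_csv(rows, questions):
    """Flatten the answer grid for an Excel-style export."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["document"] + questions)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

docs = [{"name": "nda_acme.pdf"}, {"name": "nda_globex.pdf"}]
questions = ["Is there a non-solicit clause?", "What is the governing law?"]
grid = review(docs, questions)
report = to_csv(grid, questions)
```

The loop structure is the whole point: the model is called once per document-question pair, and the grid maps directly onto the spreadsheet export Sam mentions.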

Greg Lambert 57:45
And I love the fact that you used TS bot to replace “your company name”, because that just drives home why it says “your company name”. So if we had one, it would be the JW bot, right? That’s exactly what we would call it.

Marlene Gebauer 58:00
Is there any thought to looking internally at the firm’s own documents and administrative things, as opposed to client-related documentation?

Sean Curran 58:14
So we do think about looking internally, to the extent that we’d take our data to tokenize it and fine-tune models. But I’m not certain we’ve got data in our document management system that we’re 100% confident we can incorporate, because not everything in your document management system is yours. Sometimes it’s the other side’s, sometimes it’s the client’s. So, you know, you might use it to improve and refine models. We had this question from one lawyer: can we just take a precedent bank now, train a model on it, and then ask the model questions to give us output from the precedent bank? And the point I made was that the benefit of the large language models is that they understand the concept of market, understand the question, understand buyer-friendly and seller-friendly. Just taking an untrained model and those precedents, it’s not going to know any of that, so it’s not going to be any more sophisticated than what you’ve got just now. And this is the point we talked about earlier, and that a previous contributor talked about, with existing tooling: is a natural language input to get a clause from a precedent bank better than fifty tags curated, maintained, and kept up to date by experienced knowledge lawyers thinking about common law developments, legislative developments, regulatory developments, and applying that to the precedent bank? I’m not 100% sure whether the natural language interface to get that output is a significant efficiency gain over just going into the precedent bank yourself and looking for it. But certainly on the training side, perhaps to complement an existing large model, that’s an area we might look at in the future.

Greg Lambert 59:56
Well, guys, we ask all of our guests the crystal ball question, and it is that time; otherwise, if we continue this conversation, we’ll have to split this podcast into two episodes. So I want all of you to pull out your crystal balls and peer into the future for us. You can answer individually or as a group: what are some of the challenges or changes that you see, specifically with AI and the law, in the next two to five years?

Oliver Bethell 1:00:30
I’ll go first, Greg. I think it’s what I touched on earlier, which is the adoption of these models whilst there are still potentially question marks over the training data. We’re at a stage where there’s clearly a rush to deploy these technologies as quickly as possible, and there’ll be a few bumps in the road. Clearly there’s enduring innovation here, so it’s going to have a fundamental impact in the long run, but there’s some figuring out to be done about how to safely apply the models. And that’s where I think being model-agnostic is going to be critical to the success of any business that’s going to leverage this technology.

Sean Curran 1:01:05
I would say that I feel like there’s a bit of a Napster, music-industry moment happening with these LLMs and the information-based industries. And I think that over the next five or ten years we’re going to have to figure out whether we want a Spotify and an Apple Music dominating all of those industries, or whether we want the fair and equitable distribution of AI, where the benefit is spread out more evenly across businesses. So we open source what we create, and that’s why we want the law to be public. I think that’s going to be a really interesting development to watch. You know, by the time my kids get to college age, I think there’s going to be so much change from where we are today. But it’s exciting, and I think it’s positive.

Sam Lansley 1:02:04
Yeah, in my opinion, I think it will kind of follow the progression of the general AI industry. And I think it’s really interesting there, because obviously, when I was first starting my career, chatbots were a thing back then. Obviously, they were nowhere near as sophisticated as they are now. It must have been 2012 when I was developing chatbots, and you can see how far it’s progressed since then, and that has been impressive. But one of the things I wonder now is if we’re going to move beyond that. What I mean by that is, there’s a limitation at the moment: if you use these chatbots, you’re still heavily dependent on a human agent going in there and developing the right prompts. If you go to a lawyer, for example, and say, you know, you have this case, they’re still going to have to break down that case into a series of prompts, which can then be asked of these GPT models. And that’s the same for most industries. And I kind of wonder about Auto GPT. This is an open source project where you can give it high level goals, and it can actually go out there and achieve those goals. For example, if you say you have a goal to grow your business’s marketing, it can actually go out there and take certain actions to do that. The idea being, for example, if you want to grow your online presence, it may actually go out there and start, you know, writing tweets for you, and maybe start posting on LinkedIn. And I kind of wonder if that’s going to be the next level in terms of AI, if there’ll be more focus on actual agents. Maybe this is not two or three years, maybe this is five or 10 years. But I do think that eventually, someone’s going to have to deal with the limitation of people being so involved in prompting the algorithm correctly.
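[Editor’s note: the agent pattern Sam describes, a high-level goal broken into subtasks that a human approves before anything is executed, can be illustrated with a short sketch. Everything here is an assumption for illustration: the task names and the hard-coded decomposition stand in for what would in practice be model calls, and none of this is Auto GPT’s actual code.]

```python
# Minimal sketch of the goal -> subtasks -> approve -> execute loop.
# decompose() is a hard-coded stand-in for an LLM planning call; the
# task strings it returns are purely illustrative.

def decompose(goal: str) -> list[str]:
    """Stand-in for a model call that breaks a goal into subtasks."""
    return [
        f"research: {goal}",
        f"draft plan for: {goal}",
        f"execute plan for: {goal}",
    ]

def run_agent(goal: str, approve) -> list[str]:
    """Plan, ask the user to approve each subtask, then 'execute' it."""
    completed = []
    for task in decompose(goal):
        if approve(task):           # human stays in the loop
            completed.append(task)  # a real agent would act here
    return completed

# Usage: auto-approve everything for the demo.
print(run_agent("grow our online presence", approve=lambda t: True))
```

The key design point is the `approve` callback: the human reviews each subtask before the agent acts, which is exactly the middle step Sam describes.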
And I think we could see something where these algorithms are so sophisticated that you can give it a high level goal, it can break down that goal into a series of subtasks, ask the user to approve the subtasks, then actually go out and do some of the actions that are required. I think it might happen in law, it might not. But I kind of see that’s where it’s going to go, to be honest.

Marlene Gebauer 1:04:01
We may have to have you guys back in a year or so, just to see how things turn out. So Sean Curran, Olly Bethell and Sam Lansley, thank you very much for taking the time to speak with us here at The Geek in Review. Thank you. And of course, thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We’d love to hear from you, so reach out to us on social media. I can be found primarily on LinkedIn, but also at @gebauerm on Twitter and @mgebauer66 on Threads.

Greg Lambert 1:04:36
And I can be reached on LinkedIn as well, and @glambert on X slash Twitter, whatever it’s called this week. But more and more, at @glambertpod on Threads. So gentlemen, if someone wants to learn more, reach out and find you online, where’s the best place to do that?

Sam Lansley 1:05:02
Yeah, the best place is on LinkedIn. So if you could please follow our Travers Smith AI page, that would be great.

Marlene Gebauer 1:05:07
And listeners, you can also leave us a voicemail on our Geek in Review Hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry.

Greg Lambert 1:05:19
Thanks, Jerry. All right, Marlene, I’ll talk to you later.
