AI is now a significant part of the legal industry, and technology companies such as LexisNexis are at the forefront of this shift. We sat down with Jeff Reihl, Executive Vice President and Chief Technology Officer at LexisNexis, to discuss the current state of AI and its relevance to the legal and research sector. A recent survey conducted by Lexis found that 39% of lawyers, 46% of law students, and 45% of consumers agreed that generative AI tools will significantly transform the practice of law. During Reihl’s sixteen years at LexisNexis, he has witnessed many innovations, such as the nearly universal adoption of the iPhone and other mobile products, cloud computing, and document automation, but the speed and acceleration around generative AI tools like GPT-4, Bing, and Bard is causing even the big players in the legal industry to quickly adjust to the demands of the market. Jeff highlighted the flexibility and benefits of LexisNexis’ technology, which can provide valuable insights and information to its users on demand. The organization generates and applies AI-enabled insights that help users find, evaluate, and curate content more quickly and effectively. Jeff went on to explain how AI technology is helping lawyers reduce research time and increase accuracy in creating legal documents. In conclusion, Jeff explained that LexisNexis is committed to promoting innovation in the legal field by using new technology solutions to advance research and meet growing research demand, thereby improving legal professionals’ efficiency and accuracy.

Of course, Lexis is not a new player in the AI field for the legal industry. They began using tools like Google’s BERT as early as 2018 and included AI functionality on the back end of many of their products. With the popularity of chatbot-style AI and the interaction that users are now demanding, however, a shift in Lexis’ approach will be required going forward. One point Reihl stresses is that, unlike the public AI chat tools, Lexis’ approach will take into account privacy, security, citation of sources, and the ability to understand how their tools arrive at the results users see. Fewer “black boxes” and more transparency is the goal.

Links:
Listen on mobile platforms: Apple Podcasts | Spotify

 

Contact Us:

Twitter: ⁠⁠@gebauerm⁠⁠, or ⁠⁠@glambert⁠⁠
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: ⁠⁠Jerry David DeCicca⁠

Transcript

Marlene Gebauer 0:07
Welcome to The Geek in Review. The podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer,

Greg Lambert 0:15
And I’m Greg Lambert. Marlene, you read the recent survey conducted by LexisNexis Legal & Professional, where 39% of lawyers, 46% of law students, and 45% of consumers agreed that generative AI tools will significantly transform the practice of law. The survey also revealed that the top five ways law firms and corporate counsel are currently using, or would like to use, generative AI tools include increasing efficiency, researching matters, drafting documents, streamlining work, and document analysis. However, the survey also highlighted some concerns about the ethical implications of using generative AI tools in the legal industry.

Marlene Gebauer 1:00
It’s a very interesting and timely survey, and we brought in one of the people behind the survey to come on and talk with us about it. So we’d like to welcome Jeff Reihl, Executive Vice President and Chief Technology Officer, the CTO for LexisNexis. Jeff, welcome to The Geek in Review.

Jeff Reihl 1:18
Well, it’s great to be here, Marlene and Greg. Glad to help out.

Marlene Gebauer 1:21
Jeff, before we get to the survey, we saw that you moderated a standing room only panel about generative AI on the Monday before Legalweek. So congratulations on that. Tell us a little bit about your discussion. What were some of the key takeaways of that panel, and why did you draw such big crowds?

Jeff Reihl 1:38
Sure, sure. Well, first of all, it was a standing room only crowd. When we got started, there were four on the panel plus myself, and you could look out into the audience and actually see people standing outside the auditorium.

Marlene Gebauer 1:55
Craning their necks?

Jeff Reihl 1:56
They were trying to get in, and about 30 minutes into the session, they actually stopped the session to remove a wall so that they could open up a second auditorium. It was extremely popular, and there were a lot of folks there that were very, very interested. You know, there’s been so much hype about ChatGPT. As we’ll talk a little bit about with the survey, a lot of people have been using it and they’re getting familiar with it. And so there was a ton of interest, and we had some interesting speakers there as well. The panelists included Danielle Benecke, who is the founder and Global Head of Machine Learning at Baker McKenzie, so large law firms are hiring people to lead up machine learning within their firms. Aaron Crews is SVP of Analytics and AI at UnitedLex. Foster Sayers is General Counsel at Pramata. And Ilona Logvinova is Associate General Counsel for McKinsey & Co. So really strong panelists who really know this space. We spent a bit of time in that session first talking about what generative AI is, including Large Language Models. We highlighted the differences between GPT-3, 3.5, and 4, and we talked about some other suppliers of Large Language Models, like Google and Facebook. We spent some time talking about how those Large Language Models work and got into a high-level description of neural networks and things like that. And then we transitioned into the legal use cases, and people were really, really interested in that. In fact, the people in the audience were offering up their views of use cases, and things like training came up. So there were a lot of different areas that were brought up. And then we spent some time talking about the ethical, legal, and privacy concerns. As you can imagine, there were a lot of different views on the good things about it, the potential challenges, and how it impacts the internal firm in terms of risk management and things like that.
And then we talked a little bit about the future and where we thought things would be going. So there was definitely a lot of interest in it. And, you know, some of the key takeaways that we had: number one, there are limitations with the technology. I think we’ve all heard about hallucinations, and that it can be very confident in the way it gives answers even when it’s wrong. There’s also the currency of some of these base models. For example, even GPT-4, which was announced just a couple of weeks ago: the data that was used for training that model cuts off at September of 2021. So that’s the level of currency; current information you’re not going to get from those models. They don’t always provide the source of their answers. So you get these answers and, you know, where did that come from? And can I trust it? And then there are also the legal and ethical concerns, bias, and other issues that come up with these kinds of models. So we spent some time talking about that. We also spent some time talking about whether or not law firms should be considering creating their own models, or, more importantly, whether they need to understand how this stuff works to make decisions about whether and how to use the technology. The other thing we did spend a lot of time talking about is that there are privacy concerns and ethical issues, and you need to wade through all of those to make decisions on how your firm or your legal department may use these new technologies as they’re coming out. But fascinating conversations that lasted a full three hours. So a lot of interest in that topic.

Greg Lambert 5:40
Sure. Yeah, you probably could have filled up eight hours of time.

Jeff Reihl 5:43
I think you’re right. You know, very few people had left by that time. But boy, there was a lot of interaction, for sure.

Marlene Gebauer 5:51
Yeah, I mean, I think a lot of people are seeking guidance. And so I can totally understand that they came to hear what other people had to say, because, you know, you do see some firms that are forging ahead, but I think there are a lot of folks and firms that aren’t quite sure what to do with it yet and are seeking that kind of advice.

Greg Lambert 6:12
Absolutely the case. Jeff, you’ve been at LexisNexis now for more than 15 or 16 years, and obviously this isn’t your first rodeo witnessing the introduction and implementation of some of the game-changing technologies that have worked their way into the business and practice of law. So how would you characterize the enthusiasm that we’re seeing for things like GPT and Large Language Models, compared to other major tech innovations over that same 16 years?

Jeff Reihl 6:45
Well, I think there have been a number of technologies, as you mentioned. I’ve been with the company 16 years, and public cloud is one example of a new opportunity for accelerating innovation. So that was game changing. We all use our iPhones, and we’re using them for all kinds of different purposes, so that’s another game-changing technology. But I do believe that these Large Language Models and generative AI are going to be game changing, particularly for the legal industry. So I think the enthusiasm is really, really justified. And, you know, going back to even 2010, when legal analytics came about with Lex Machina, a company that we acquired a number of years ago, that was a big change as well. Around 2018, we started using BERT, which was one of the first Large Language Models that Google put out. At that time we were kind of experimenting and learning about it. But now, with ChatGPT and all the work being done there, it’s truly transformative. So I think it’s going to have a significant impact on not only the legal industry, but all kinds of different industries going forward.

Marlene Gebauer 8:01
So we’re gonna get into the survey now. LexisNexis recently conducted a survey of lawyers, law students and consumers on the use of generative AI in the legal field. The survey shows that a majority of lawyers and law students believe generative AI will change the way law is taught and studied. How do you see this happening? And what impact do you think it will have on legal education?

Jeff Reihl 8:25
Yeah, so it’s interesting, because we did this workshop, and we asked people in the audience what they thought. And, you know, again, a lot of hands went up when we asked these questions, and it was pretty consistent that a lot of people do believe that this is going to change the way law is practiced, as well as how law students learn the law. Examples that were discussed included drafting, writing briefs, and doing legal research. So there’s no doubt it is going to change the way that law school students learn how to practice law. There was actually a question at the end of the workshop by a gentleman who said, well, aren’t you concerned that new students aren’t even going to know how to write, because this technology can do a lot of that for them? We did talk about that. There are lots of other examples of new technologies that have come out, and at first there were a lot of concerns about that technology, but over time people got more comfortable, the quality of the technology continued to evolve and change, and you saw much more adoption of those technologies in practice. So there was a lot of discussion about how it will, in fact, change the way law school students learn the law and the practice of law.

Greg Lambert 9:53
So what do you think are going to be some of the more practical use cases for Large Language Models like GPT? And more specifically, how do you think a vendor like LexisNexis is going to use this sort of technology in its offerings? Because as much fun as it is to work with ChatGPT and the new models that are coming out, they’re not trained on legal content. They’re trained on the internet and everything that that entails. And so there’s not that trust that you get when you work with a vendor like LexisNexis. So what is it that you’re going to do to make lawyers more comfortable using tools like this?

Jeff Reihl 10:42
Well, I think, in the end, lawyers, as an example, are responsible for the work product that they produce. And so I think you can use those tools to help accelerate and make you more effective and more productive, but in the end, you have to fact-check, you have to modify, you have to make it specific to your client. So in the end, it’s really up to the individual to make sure that whatever they’re producing is accurate, has the right information, and that a lot of the issues that we talked about around bias and things like that are not incorporated into the work product that is produced. A lot of the examples you gave earlier, around legal search, document drafting, document analysis, and summarization, those are clearly areas that we’re looking at. And, you know, as I mentioned earlier, we’ve been using this technology since BERT came out in 2018. We actually created Lexis Answers, a product offering that is based on embeddings and some of that technology. And we’ve continued to evolve. We’ve got products like Fact & Issue Finder, Brief Analysis, and Context, which use NLP and machine learning algorithms, the litigation analytics that Lex Machina offers, and different types of technologies like that. But one thing that we always start with is: what problem are we trying to solve? What is the opportunity to use these technologies? And sometimes you don’t need the latest and greatest ChatGPT or GPT-4. We found in many cases, when we do our testing, that GPT-3 or 3.5 is completely fine, or even BERT. So we start with the customer problem, we evaluate the different technologies, and oftentimes these Large Language Models are combined with other technologies, for example, the search engine or graph technologies, to really make sure that the output from those Large Language Models is checked, and that you can find the source of the answer and make sure that the answer is correct.
So we leverage the fact that we have all this content. We have over 144 billion documents in our repository, a lot of information from legal editors over the last 30 years, and the metadata that we’ve created through our technology over the years. That can help fine-tune these models so that they give more effective answers. So that’s the way that we think about it. We’ve got dozens of use cases that we’re evaluating and testing now, and you’re going to see a lot coming out in the near future, some of which just makes a lot of the things that we already have better, and then there will be new product offerings as well.
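The checking step Jeff describes, where an LLM’s output is verified against a trusted document repository so every answer can be traced back to a real source, can be sketched roughly as follows. The repository contents, IDs, and function here are hypothetical illustrations for the sake of the sketch, not LexisNexis code:

```python
# Illustrative sketch: only trust a model's draft answer if every source
# it cites resolves to a document in a known repository. A citation that
# does not resolve is treated like a hallucination.

REPOSITORY = {
    "smith-v-jones-2019": "Opinion text for Smith v. Jones ...",
    "doe-v-acme-2021": "Opinion text for Doe v. Acme ...",
}

def verify_citations(draft_answer, cited_ids):
    """Check every cited document ID against the repository before trusting the draft."""
    missing = [doc_id for doc_id in cited_ids if doc_id not in REPOSITORY]
    if missing:
        # At least one cited source does not exist; the draft cannot be surfaced as-is.
        return {"trusted": False, "missing_sources": missing}
    return {"trusted": True, "sources": {i: REPOSITORY[i] for i in cited_ids}}

good = verify_citations("draft answer ...", ["smith-v-jones-2019"])
bad = verify_citations("draft answer ...", ["smith-v-jones-2019", "made-up-case-2020"])
```

In a real system the repository lookup would be a search or retrieval call rather than a dictionary, but the gate is the same: no resolvable source, no answer.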

Greg Lambert 13:31
But Jeff, you know, over the past, I’d say 10 to 12 years now, we’ve had a number of AI tools that have been created and launched into legal. Some of those are through the major vendors, some of those are through startups. But I think the hype that we’ve seen over the past three or four months, since OpenAI came out with this ChatGPT model, comes from the fact that the AI we’ve dealt with in the past decade has been kind of behind the scenes. It’s not something that we’ve really interacted with; it’s something that we’re told is going on in the background. There’s not that interaction, there’s not that building of a conversation. And quite frankly, OpenAI’s creating it as a chatbot, essentially, was brilliant. I think that really engaged a lot of people. So how do you see, going forward, legal vendors like yourselves needing to implement that model, where there’s much more of an interaction going on between the end user, the lawyer, the legal professional, and the product that they’re using to do legal research or draft documents?

Jeff Reihl 14:58
Yeah, there’s no doubt about it. And, you know, to your point, it is extraordinarily clever. I mean, just the way it remembers and adds context, and you can feed it additional prompts to clarify the answer. And so I think with the tools that have been created to fine-tune the models, and the human interaction with the reinforcement learning, there’s a lot of opportunity to leverage those base models, add a lot of additional legal expertise to them, and train, or fine-tune, those models for legal purposes. But back to your original question around interaction: we absolutely see an opportunity for that in our space, for sure.

Marlene Gebauer 15:48
Jeff, The survey also highlights concerns about the ethical implications of using generative AI in the legal industry. How is Lexis addressing these concerns and ensuring responsible use of this technology?

Jeff Reihl 16:01
Yeah, that’s a really important question. And, you know, we’ve always taken security and privacy very seriously. We know our customers are very concerned about that. So, from a pure security and privacy perspective, we build in all the safeguards. When we looked at the vendors that we use for Large Language Models, we made sure that data is protected and safe, and that not only our IP is protected, but our customers’ IP is protected as well. But I’ll also highlight that we do have internal RELX responsible AI principles, and I’ll just quickly walk through those. First of all, we consider the real-world impact of our solutions on people. That’s not only our customers and end users, but also our internal staff. We’ve got legal editors, and so when you start thinking about automation, well, what does that mean for their jobs? How do you treat your internal staff as you’re creating these solutions? The second thing is we take action to prevent the creation or reinforcement of unfair bias. So we look at the outcomes of using these technologies and validate that we do not believe there’s any bias built into the results, and we have to continue to check those solutions to make sure bias is not introduced over time as well. The third thing is that we can explain how our solutions work. You know, when you start talking about neural networks, it’s a bit of a black box: there’s an answer, but you’re not sure exactly how that answer was derived. So one of the things that we’re really, really careful about is that we do have some traceability to be able to show how we came to a conclusion, and wherever possible, point back to the original case or the legislation to show that it is, in fact, cited and that there are real-world, specific examples of how that answer came about.
In all cases, whether we’re using our legal expertise to help train the models or to review the outcomes of the products, for example, search relevance or the answers that come out of a Q&A session, we make sure that there is human oversight; we’re not just completely relying on the technology to do this. And last, but not least, we respect privacy and we champion robust data governance, making sure we understand how the different data sets that we have come together, and that we’re maintaining accuracy and up-to-date information. So those are areas that we have invested in, and it’s built into our development processes. It’s not just writing the software; it’s also validating the outcomes of the solutions that we provide.

Marlene Gebauer 18:55
So I was going to ask a question, Jeff, when you were talking about using generative AI. I mean, you know, Lexis has probably been a tried and true partner for many firms for many years. And so the InfoSec testing is probably something that isn’t happening on a regular basis all the time, because it’s already been done. But now that you’re doing more of this generative AI, do you have firms asking the questions that you were talking about, in terms of your core pillars of how you approach generative AI and the ethics?

Jeff Reihl 19:33
Yeah, well, to answer your question: yes, we do. But we’re always testing. A lot of that testing is through automation, and a lot of that testing is through the legal experts that we have internal to our organization. There is this concept called drift in data models: over time, based on the data that feeds those models, your answers can start to stray and drift away from accuracy. So part of those ethics is going back and retesting these models continuously. We have expected results, and we retest those models, both in an automated way and through human experts, to make sure that they continue to provide the right answers. When we adopt new technology, we do the same thing. One example of that is what we call A/B testing. As we’re rolling out something new, we will start with a small set of customers to get their feedback along the way, and we check the outcomes of their interactions to see: is it really providing the answers, is it really helping them in their workflow, as we’re expecting it to? If we find that it isn’t, we can quickly pull it back. And if we find that it is, then we can start to expand it to a broader and broader customer base. So we start small, we test it with real customers, we get their feedback, and then we can expand it from there. But we are getting more and more of those questions as we launch these types of products. You know, like I said, we’ve been using AI for a number of years. Even 16 years ago, when I first joined, we were using NLP before anybody was really even calling it NLP; we were doing it to enrich our data to identify entities like attorneys and law firms, etc. So we’ve been doing this a long time, and I think our customers have built confidence in the solutions that we provide.
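The staged A/B rollout Jeff describes, starting with a small cohort and expanding only if measured outcomes hold up, might look something like this in outline. The bucketing scheme, percentages, and success margin below are invented for the sake of the sketch:

```python
# Illustrative sketch of a staged A/B rollout: users are deterministically
# bucketed so each one always sees the same variant, and the rollout only
# expands if the new model's measured success rate holds up against baseline.

def assign_cohort(user_id, rollout_pct):
    """Bucket a user into 'new_model' or 'baseline' based on a stable slice of their ID."""
    return "new_model" if (user_id % 100) < rollout_pct * 100 else "baseline"

def should_expand(success_new, success_baseline, margin=0.02):
    """Expand only if the new model is not worse than baseline by more than the margin."""
    return success_new >= success_baseline - margin
```

Guarding against the drift Jeff mentions is the same comparison run continuously: re-score the live model against the expected results and roll back when `should_expand`-style checks start failing.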

Greg Lambert 21:36
You know, I think at this point there’s an expectation that pretty much every product that lawyers and other legal professionals are going to use over the next few months to years is going to integrate this generative AI, and I imagine it’s going to be across the board. But at the same time, we need to make sure that everyone feels confident in their ability to use the tools, and that they’re not going to be sharing information with others that they don’t need to. So what do you see as the role that Lexis will play in helping evolve these tools and their uses into the future? I know we’ve kind of talked about this a little bit already, but what are some of the other potential uses you see for generative AI tools in the legal industry?

Jeff Reihl 22:41
Yeah, I think we talked about some of those already. But again, we’ve been in this industry a long time. We know our customers very well, and our development process includes discovery to understand what the real pain points are for the different users of our products. We have lots of great assets, including the 144 billion documents I mentioned before and all of the metadata that we’ve created over the years. All of that positions us very well to really take advantage of these technologies and to ensure that they’re producing the expected results. We’ve got a machine where we’re taking in new data, millions of documents a day, and we enrich that data. We’ve got legal expertise built internally, and we’ve created a knowledge graph that connects judges to cases, to attorneys, to law firms, to outcomes. So we’ve got this huge amount of information that we can use, along with these Large Language Models, to solve new problems as they develop: doing document analysis, coming up with new alternative clauses for a contract, helping summarize documents quickly so that when somebody is doing legal research, they get their answers as quickly as possible, and combining the different data sets that we have, including dockets and cases and legislative materials and expert witnesses, to provide the critical information that is similar to the case I’m researching. So we’re very well positioned to take advantage of these technologies and really combine them with our capabilities, our data, and our legal expertise to solve new use cases and make existing use cases even better.
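The knowledge graph Jeff mentions, connecting judges to cases to attorneys to outcomes, is at its core an adjacency structure over entities. A toy version, with invented entity names and nothing like the real schema, could look like this:

```python
from collections import defaultdict

# Toy knowledge graph: undirected edges between legal entities, queryable
# for everything directly connected to a given node. Names are invented.
graph = defaultdict(set)

def add_edge(a, b):
    """Connect two entities in both directions."""
    graph[a].add(b)
    graph[b].add(a)

add_edge("Judge Rivera", "Case 101")
add_edge("Case 101", "Attorney Chen")
add_edge("Case 101", "Outcome: dismissed")

def neighbors(entity):
    """Everything directly connected to an entity."""
    return graph[entity]
```

Starting from any case, one hop recovers its judge, attorneys, and outcome; that is the kind of structured context that can be handed to a language model alongside the raw documents.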

Greg Lambert 24:41
Jeff, I think we’re seeing a lot of significant movement from smaller companies jumping in and implementing a number of things with the AI tools that are now available to them. And so, for a larger company like yours, do you see that you are nimble enough to start implementing these things into your larger products? Or is it going to take a much longer timeline to make sure that you’re not undermining all of the faith and trust that you’ve built up over the years, in order to keep up with some of the smaller startups or smaller legal information providers out there that are just kind of jumping in with both feet when it comes to generative AI?

Jeff Reihl 25:42
Yeah, I think, first of all, I applaud innovation. You’ve got a lot of legal startups; Harvey is an example of a new tool that’s out there. And at Legalweek last week, almost every vendor had AI or ChatGPT, you know, they’re talking about it now. So that’s awesome; it’s great to see the interest and the innovation. Obviously, we think there’s a huge opportunity with this technology, and we’re looking at all kinds of different use cases. As I mentioned, we’ve already implemented solutions using Large Language Models, including BERT; we’ve used GPT-3, we use 3.5, and we’re testing 4.0 as we speak. But again, we have a very high expectation. We like to say that our services are professional grade, so we want to make sure they’re well tested. We want to make sure customers have the expectation that we’re worrying about security and privacy and making sure that those answers are correct. So we’re moving quickly; we’re looking at dozens of use cases. You know, I think larger companies can get complacent, but I can assure you that’s not the case with us. We’re really excited about this, and we’ve got most of our data scientists looking at not only this technology, but other technologies that are available to advance our product offerings.

Greg Lambert 27:13
I just want to go back to a question that I asked earlier and ask: do you see Lexis as being able to play a role in helping law firms kind of reduce that barrier to entry when it comes to using AI tools with their own information, or integrating it into the network that the law firm has itself, or into the practice of law related to their own attorneys? What kind of role do you see Lexis playing to reduce that barrier to entry?

Jeff Reihl 27:51
Absolutely. I think that’s certainly an opportunity for us to really help law firms. During the workshop, there was quite a bit of discussion about this, because these solutions are costly; it takes a lot of compute to work with these models, so it’s not inexpensive. And there were a lot of questions about: do I need to hire data scientists, do I need to have a machine learning expert on my staff? What I said was, I do believe that there are going to be providers like LexisNexis who are going to be not only providing services, like we do with a lot of the content that we have, but working with firms to help bring their content in to get the same benefit. To your point, for smaller law firms in particular, there’s just no way; but larger law firms have more capabilities. I do believe that there are going to be solutions that come out, and even these Large Language Models are going to get easier and easier to use. There’s been so much progress since ChatGPT was announced last November, and you can expect that to continue going forward. So using these tools, I believe, is going to get easier, and the costs will come down over time. But today it is very expensive, and so there is a role that we can play to help in this space.

Marlene Gebauer 29:19
So, Jeff, I know we’ve been asking you several future forward questions, but we’re going to ask you one more before we go. We’d like you to take out your crystal ball and peer into the future for us. And tell us how generative AI tools will be used and what effect they’ll have on the practice of law in the next two to five years.

Jeff Reihl 29:42
Yeah, that’s a tough question, because if you had asked me two years ago, I would have said we wouldn’t be where we are today, because it is changing so fast. I do believe this is truly transformative. I do believe it’s going to have a significant impact on the practice of law. I believe there’s going to be a lot of new use cases, but a lot of the areas that we already talked about are just developing now. So you’re going to see significant enhancements, significant innovations in this space. I do not think it’s going to replace lawyers. But it’s going to augment what they do. And it’ll...

Marlene Gebauer 30:21
I was going to ask, what do you think it will do to jobs? Like, you know, what types of new jobs might we be seeing? Not that attorneys are going away, but what types of new things would we see?

Jeff Reihl 30:32
We talked about this a little bit, but I do believe that you’re going to see law firms with more data scientists. I do think that data, and combining your internal data with other sources of data, is going to be an important factor going forward. So there are going to be new roles. And we’ve seen this with other technologies over time: people are always concerned about new technologies eliminating all these jobs, but it always seems like, on the other end of that, there are all these new roles that come out, new positions. Just going back five or six years ago, we had a few data scientists, but nothing like we have today. So a lot of that is shifting over time, and the same is going to be the case in law. What we see is that some of the mundane work that has been done in the past is going to be replaced; you won’t be doing that. What we see happening is that lawyers are going to be able to spend more time with their clients, working on more strategic work, and helping to build their businesses. So there will be certain tasks that get a lot easier or are replaced. One of the things that did come up in the discussion in the workshop is, I think somebody mentioned, that this technology isn’t going to replace lawyers, but lawyers that use this technology will replace other lawyers. So if you’re behind the curve and you’re not taking advantage of this, you’re not going to be as competitive with other firms or other corporate legal departments. So it’s a great opportunity to leverage these technologies, and a great opportunity to shift the work to be more strategic. That’s the way we see this evolving. But five years out, it’s going to be a new frontier, there’s no doubt about it, given the pace of change that we’re seeing here.

Greg Lambert 32:28
Yeah, I think you’re spot on. We’re going to see a number of data-centric jobs just explode in the legal industry. It’s going to be a little bit different than what we’ve had before, but I think some of that skill set will translate in. And again, you’re not the first to say that a lawyer plus AI will replace a lawyer without AI. I think that’s pretty much a common understanding now, and those that don’t get it may end up being left behind as we evolve. So I think a lot of it is going to be that people are just going to have to be very flexible in how they approach things going forward. Some of the things that you’re doing today may shift, as we apply AI, in how you have to do your job tomorrow.

Jeff Reihl 33:26
We’ve seen that internally at our company as well. Somebody who used to be a legal editor is now a subject matter expert as part of these agile teams that are creating these solutions. So, exactly the case that you just mentioned.

Greg Lambert 33:40
Well, Jeff Reihl, Executive Vice President and Chief Technology Officer for LexisNexis, thank you very much for coming on the show today. I’m sure we could have continued this conversation for another eight hours. It’s been a pleasure having you on.

Jeff Reihl 33:56
My absolute pleasure. I really enjoyed it. Thank you.

Marlene Gebauer 33:59
And of course, thanks to all of you for taking the time to listen to The Geek in Review podcast. If you enjoyed the show, share it with a colleague. We’d love to hear from you. So reach out to us on social media. I can be found at @gebauerm on Twitter,

Greg Lambert 34:12
And I can be reached @glambert on Twitter. Jeff, how about you? If someone wants to reach out to you, what’s the best way?

Jeff Reihl 34:18
They can search for me on LinkedIn: Jeff Reihl, R-E-I-H-L. Don’t hesitate to reach out.

Marlene Gebauer 34:24
And you can also leave us a voicemail on The Geek in Review hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry.

Greg Lambert 34:36
Thank you, Jerry. All right, Marlene, I will talk with you later.

Marlene Gebauer 34:39
All right, bye.