TCDI Talks with Altumatim | Episode 2: Checking In: A Unique Approach to Gen AI

About TCDI Talks with Altumatim: Episode 2

Welcome to the second episode of TCDI Talks, where our experts cover all things Generative AI (Gen AI). This week, David Gaskey and Vasu Mahavishnu from Altumatim, along with David York and Caragh Landry from TCDI, discuss why they approach Gen AI more like a human than a computer. In this 11-minute discussion, learn how Gen AI is similar to human reviewers and the important role processes play.

Episode 2 Transcript

0:17 – David Gaskey:

Hello, and welcome to the second installment of our summer series, TCDI Talks, where Altumatim and TCDI are talking all things Gen AI. I’m David Gaskey with Altumatim, and I have with me today David York and Caragh Landry from TCDI, along with Vasu Mahavishnu from Altumatim.

So, last time we spoke about Gen AI more generally, just approaching the question of what it is. Today we want to talk about the approach to it. Now, one thing we’ve realized in working with Gen AI over the last couple of years is that it really is a different technology.

It’s got different characteristics and different behaviors, so we realized that it requires a different approach. A phrase we often use when talking about our approach is that we treat Gen AI more like a human than a computer.

You might be wondering, what the heck are we talking about when we say that? Well, we’re happy to answer that question. So, I’ll open it up to our panel. Who would like to begin to explain what we mean when we say we treat Gen AI more like a human than a computer?

1:39 – Caragh Landry:

I vote Vasu. I vote you start.

1:43 – Vasu Mahavishnu:

So, in the traditional sense of using a computer, let’s take a calculator, for example. You punch in a few numbers, and you get back an exact result, right? It’s 100% accurate.

If you use an LLM to perform a three-digit multiplication, it’s going to get it wrong 70% of the time, or even more often than that, because it was not trained or built to do something like that. It’s trained on language data, and its purpose is not to calculate math. So, there are obvious accuracy and consistency issues with Large Language Models (LLMs).
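To make that contrast concrete, here is a minimal sketch. The `ask_llm` helper is a hypothetical placeholder for whatever LLM call you use; no specific model or API is implied by the discussion.

```python
# A calculator computes; an LLM predicts likely text. The same inputs
# always give a calculator the same exact answer, while a sampled LLM
# completion can vary from run to run and is sometimes simply wrong.

def calculator(a: int, b: int) -> int:
    """Deterministic arithmetic: 100% accurate, every time."""
    return a * b

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError("wire up your LLM provider here")

print(calculator(347, 862))              # always 299114
# print(ask_llm("What is 347 x 862?"))   # may differ run to run
```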

But we don’t interact with a calculator by speaking to it. We don’t say, “Hey, can you multiply these two numbers?” We have to actually punch the numbers in. So, the way we interact with a calculator is very different from the way we interact with Large Language Models, because we actually use human language to interact with an LLM.

So, because of these inaccuracies, LLMs are much like humans, and humans are not accurate either. When you ask a question of another fellow human, they may not have all the answers. They may be guessing, they may be retrieving the answer from memory, or they may be adding their own reasoning before answering you. There’s no certainty that the answer is correct.

3:32 – Caragh Landry:

Or, Vasu, more likely they’re basing their answer on the documents they’ve seen so far, right? They’re learning from what they’ve experienced so far. That’s a reason why we add QC and validation to all of our processes: our QC team has seen more, right? The reviewers are basing their answers on a very small percentage of the data.

3:54 – Vasu Mahavishnu:

Right. So, say you’re in a room of individuals, and I’m not talking about LLMs now. You’re just in a room of 20 people, and you ask a question of one individual. When they answer the question, well, you can get a response from the other 19 people as to whether the answer was right or wrong. And if it’s opinion-based, well, that may result in a room that’s divided over the answer.

So, that’s much like how a Large Language Model behaves. It’s not right all the time, so you will need to evaluate its answer.

4:37 – David Gaskey:

In terms of that evaluation, and what you were just saying, Caragh, it sounds to me like there’s a parallel, not to say that an LLM thinks like a human. So, let’s take the review process during eDiscovery. If you have humans reviewing documents, you tend to give them some documents to start with, along with a review protocol. But then, at some point early in the process, you want to check in on them. See how they’re doing. Are they understanding things correctly? And it sounds like we need to do a similar thing with LLMs.

5:14 – Caragh Landry:

Yeah, absolutely. I think we mentioned this in our first talk, but that was our biggest learning experience from employing Gen AI in the review process: it’s not just a one and done. You don’t create a prompt based on what you think, send it through, get a result back, and have the result be great. If you expect the result to be great, that’s really naive, right?

You have to look at the results. It’s going to get a lot of things right, but it’s also going to get a lot of things wrong, just like a review team would, right? In a traditional process, you get the attorney team, outside counsel, on with the TCDI review team, the MSMR team, and the attorney team trains us. We understand it one way. We’re applying that understanding to documents one way.

It may or may not be the way outside counsel needs us to interpret those documents, so they give us feedback. We retrain our team. We look at examples, and then we refine our understanding, and now we apply that understanding to the next set of documents.

It’s very similar using Gen AI, especially working with you and your team: it’s not one and done. It’s an iterative process. It’s about creating good prompts that are going to get you good results, then validating and QC’ing those results, and refining the understanding, or the analysis that your tool is able to do, so that we’re getting better results throughout the process.
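The loop Caragh describes (prompt, QC a sample, refine, rerun) can be sketched in code. This is a conceptual outline only, not TCDI’s or Altumatim’s actual workflow; every helper name and the 5% threshold are assumptions for illustration.

```python
import random

TARGET_ERROR_RATE = 0.05  # assumed QC threshold; set per matter

def classify(doc: str, prompt: str) -> bool:
    """Hypothetical LLM relevance call; replace with a real provider."""
    raise NotImplementedError

def refine_prompt(prompt: str, errors: list) -> str:
    """Fold QC findings back into the instructions (matter-specific)."""
    return prompt + "\nClarifications based on QC feedback: ..."

def review_with_genai(documents, prompt, qc_check, max_rounds=5):
    """Iterate: classify, QC a sample, refine the prompt, repeat."""
    results = {}
    for _ in range(max_rounds):
        results = {doc: classify(doc, prompt) for doc in documents}
        sample = random.sample(sorted(results.items()), min(50, len(results)))
        errors = [(doc, call) for doc, call in sample if not qc_check(doc, call)]
        if len(errors) / len(sample) <= TARGET_ERROR_RATE:
            break  # calls meet the QC bar; stop iterating
        prompt = refine_prompt(prompt, errors)
    return results
```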

6:57 – David Gaskey:

Right. So, when we say that we’re treating Gen AI like a human, we’re not really saying that Gen AI is human, or that it even has thinking or reasoning capabilities the way a human does. But there are similarities, and that’s why the approach you just described for human reviewers is also effective with Gen AI. People do refer to the reasoning capability of LLMs.

And it’s not pure reasoning in the sense of human reasoning. Like Vasu’s example of the other 19 people in the room: everybody would apply their own reasoning in giving their opinion on that question. But what an LLM does have, within its neural network, are layers, and it can analyze things and detect things like hierarchical relationships among data points.

So, if you guide it properly, its ability to connect those data points resembles a reasoning process, stitching those data points together. There are definitely enough similarities that when you take this approach, you do get much more effective results.

8:17 – David York:

Yeah, I think one interesting thing we’ve learned through the Altumatim-TCDI work is that it’s very much a process. It’s still a process. It’s not just throwing data and questions at something, getting back results, and moving on. Just like with human interactions, and I’ll use the people-in-a-room example, we wouldn’t walk into a room with 19 people, give them a review protocol and 100,000 documents, and say, “Have at it.” There’s a distinct process that is followed with that human interaction.

We sit there, we go through training with them, they ask questions, we give them feedback, because all 19 people coming into that room have a different base-level understanding of the content and of the different concepts being handled.

And so those inputs and the feedback we get are very human-like. A lot of us come from a world of database queries, where you’re building out very specific searches that are not how we talk: certain words within five words of other words, joined with AND/OR. You’re building out all these complex Boolean logic statements to find the documents you’re looking for.

Whereas with this, we can ask very conversational questions, and the process we go through drives the answers and results we get, just like it would with a human. A key difference is that computers can read 10,000 documents a minute; humans, not so much. So, you can get to answers quicker, and you can get to an understanding of why it made the decisions it made. To Caragh’s point earlier, I think that’s key compared to some of the legacy AI tools we’ve used over the years.
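Dave’s contrast between query syntax and conversational questions can be shown side by side. The proximity syntax below is generic and illustrative, not tied to any particular search platform, and the subject matter is invented.

```python
# Legacy approach: a Boolean/proximity query that is "not how we talk".
boolean_query = '("deliver*" W/5 "delay*") AND ("penalt*" OR "liquidated damages")'

# Gen AI approach: a conversational question in plain language.
conversational_prompt = (
    "Find documents where anyone discusses late deliveries and whether "
    "the company could owe penalties because of them."
)
```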

Even with Boolean logic searching, you can sort of dive in and say, okay, why did I get the results that I got? You can start analyzing and see where you need to tweak: add a wildcard here, additional terms there, to get the results that you want. But with traditional TAR and other tools over the years, you’re just relying on scores, as opposed to getting an explanation of why the machine made the decision it made, like you would with a human.

You can actually see its reasoning. With those 19 people in the room, you could sit down with each of them and ask, okay, why did you make the decision you made on this document? Why did you code it this way? Then you can make that correction and give them that feedback, so that if they’re wrong, they know how to be right going forward. And that’s very much how the LLM interaction works. It’s very process-driven.
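The score-versus-explanation point can also be illustrated. The field names below are invented for illustration; no specific TAR tool’s or Gen AI platform’s output format is implied.

```python
# Legacy TAR: a bare relevance score, with no stated rationale.
tar_output = {"doc_id": "DOC-00042", "relevance_score": 0.83}

# Gen AI: the same call plus an explanation a QC reviewer can evaluate
# and correct, much like giving feedback to a human reviewer.
genai_output = {
    "doc_id": "DOC-00042",
    "call": "responsive",
    "explanation": "Discusses shipment delays and proposed penalty "
                   "waivers, which fall under protocol issue 3.",
}
```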

11:09 – David Gaskey:

All right, well, hey Dave, Caragh, Vasu, thank you so much. We will be back. We have more issues and topics that we are very interested in discussing, but we want to hear from you.

If you have an issue or a question that you want us to cover, please reach out to us. We’ll be happy to cover it in a future episode. And for now, we’ll say goodbye.

Meet the Experts

Caragh Landry | Chief Legal Process Officer | TCDI

With over 25 years of experience in the legal services field, Caragh Landry serves as the Chief Legal Process Officer at TCDI. She is an expert in workflow design and continuous improvement programs, focusing on integrating technology and engineering processes for legal operations. A frequent industry speaker and thought leader, Caragh presents regularly on Technology Assisted Review (TAR), Gen AI, data privacy, and innovative lean process workflows.

In her role at TCDI, Caragh oversees workflow creation, service delivery, and development strategy for the managed document review team and other service offerings. She brings extensive expertise in building new platforms, implementing emerging technologies to enhance efficiency, and designing processes with an innovative, hands-on approach.

David York | Chief Client Officer | TCDI

David York oversees TCDI’s Litigation Services team, which handles projects and data relating to eDiscovery, litigation management, incident response, investigations, and special data projects. Since his start in the industry in 1998, Dave has made the rounds on the law firm, client, and now provider side, successfully supporting, executing, and managing all phases of diverse legal and technical projects and solutions.

During his career, he has been an NC State Bar Certified Paralegal, earned a certification in Records Management, become a Certified eDiscovery Specialist (ACEDS), and completed Black Belt Lean Six Sigma training.

David Gaskey | CEO and Co-Founder | Altumatim

David has been at the interface between law and technology for more than three decades. Specializing in intellectual property law, he has represented clients from all over the United States, Europe and Asia, including Fortune 50 companies, whose businesses involve a broad spectrum of technologies.

David has extensive experience litigating patent disputes at the trial and appellate court levels including the Arthrex v. Smith & Nephew case that received an “Impact Case of the Year” award in 2020 from IP Management. His litigation experience was a primary influence on how Altumatim naturally fits into the process of developing a case and why the platform is uniquely designed to help you win by finding the most important evidence to tell a compelling story.

Vasudeva Mahavishnu | CTO and Co-Founder | Altumatim

Vasu brings his natural curiosity and passion for using technology to improve access to justice and our quality of life to the Altumatim team as he architects and builds out the future of discovery. Vasu blends computer science and data science expertise from computational genomics with published work ranging from gene mapping to developing probabilistic models for protein interactions in humans.

As a result, he understands the importance of quality data modeling. His extensive experience with business modeling, code construction for front-end and back-end systems, and graphic presentation influenced the architecture of Altumatim. His creativity and commitment to excellence shine through the user experience that Altumatim’s customers enjoy.

In Case You Missed It

If you enjoyed this video, feel free to check out some of our other great content!