Video: ILTA Product Briefing: Unlock the Power of AI-Powered Semantic Search with Reveal's Ask | Duration: 3380s | Summary: ILTA Product Briefing: Unlock the Power of AI-Powered Semantic Search with Reveal's Ask
Transcript for "ILTA Product Briefing: Unlock the Power of AI-Powered Semantic Search with Reveal's Ask":
Hello, and welcome to today's ILTA product briefing. Today, we will unlock the power of Gen AI semantic search with Reveal's Ask. I'm the marketing campaign manager at Reveal, and today I will be your event producer. But before I turn this webinar over to our esteemed panelists, there are a few housekeeping items I'm going to share with you all. While the demonstration portion of this webinar was recorded yesterday, there will be a live Q&A at the end of the webinar. If you have any questions, please submit them in the Q&A section on the right side of your screen, and we'll try to answer as many as we can at the end of the webinar. Also, at the end of the webinar, there'll be a super quick three-question survey. We would absolutely love your feedback, so if you could please take 30 seconds to fill it out, that would be amazing. This webinar will also be available on demand. You will receive an email tomorrow with the on-demand version of this webinar, and it will also be available on Reveal's website. Lastly, if you would like to learn more about Ask or any of Reveal's products, I welcome you to request a demo by clicking the button on the right-hand side of your screen. And with that, it's my great pleasure to start this webinar.

Good day, and welcome to this ILTA product briefing. Today's topic is unlocking the power of Gen AI-powered semantic search with Reveal Ask. My name is Jeffrey Wolf. I'm principal strategic sales engineer here at Reveal. As for my background, I've got about 10 years in eDiscovery at this point, and prior to that another 20 years in sales engineering overall, primarily around search, which is really relevant to today's topic. We'll dig a bit more into that later. But I'm very excited to be with you today to talk a bit about search and Reveal Ask. Here's an agenda for today's session. We're going to level set a bit with the evolution of search to kick us off.
I always like to begin my presentations with a little bit of history, just to make sure everyone's comfortable with where we are before we move forward. We'll talk a bit about Reveal's generative AI strategy. I'll give a little overview of Ask and how it is best utilized. We'll dig a little bit under the covers and talk about the technical design of Ask and the technical implementation of the product, for those of you that are interested in that. And we'll finish up with a demonstration of the product: I'll do a live demonstration of Ask within the Reveal Enterprise environment. We will have a Q&A at the end of the presentation and demo, so feel free to go ahead and put your questions in as you have them throughout the presentation. They'll go in the question pod, and we'll answer them towards the end of the presentation.

So let's begin with the evolution of search. First of all, as we all know, about 80% of the data applicable to most businesses today is unstructured data. As opposed to your structured data sources, like your database applications, things like Salesforce and your Oracle enterprise solutions, unstructured data is found in email systems, chat communication systems, and enterprise content management systems like Google Workspace and Office 365. All of those documents are hard to find. They're typically difficult for most organizations to get a handle on, both from an information governance standpoint, but also, and more importantly to this audience, from a legal and compliance standpoint. A lot of effort over the years has gone into finding ways for organizations to locate the information they need in an efficient manner. The very first approach was Boolean search. Right? And so, in preparing for the presentation today, I wanted to go back and understand the origin of Boolean search.
We've all been using it for years for legal research, as well as in search engines and eDiscovery tools. Boolean search dates all the way back to 1854. It was a 19th-century English mathematician named George Boole, hence Boolean, who initially came up with this. What Boolean search allows you to do is combine the words and phrases that you're looking for with AND, OR, and NOT. Those are what we call Boolean operators. Those operators are going to limit, broaden, or define your search accordingly. Now, the limiting part, as you can see at the bottom-left of the slide, is the important issue here. Boolean search provides results, but you never quite know if you have all of the results that you're looking for. There's really no ranking of results. Obviously, relevancy ranking came along much later than 1854, using primarily a methodology called TF-IDF, or term frequency-inverse document frequency. Basically, what that means in a layman's sense is how often a term appears in your document versus how rarely it appears in normal language, and that's what controls relevancy. But it's still Boolean search under the covers. It's still AND, OR, NOT. Fast forward quite a bit to the early 2000s, and in the legal industry you have conceptual search, or concept search. Concept search is really designed to help you find responsive documents using the meaning of things. So instead of an exact keyword search, concept search looks for things that are similar in meaning based on its understanding of documents. The way it does that is it is trained against a large corpus of documents. Think of it like an expert system. Right? If you call a customer service rep and ask a tech troubleshooting question, they have a script in front of them that says: okay, the customer asked me this question, and so here are the possible answers.
Based on those answers, the customer service rep is going to give you information. Conceptual search works in a very similar capacity. It's trained on a whole bunch of documents to understand that, in this case, if you look at the slide, Apollo can be related to a number of things. It could be a lunar mission, it could be related to rockets and the space race, it could be related to the theater in New York, it could be related to the Greek gods. Conceptual search is going to expand and broaden your search, but it's going to bring back a whole lot more documents. Therein lies the problem with conceptual search: it's going to bring back too many results. That makes it time-consuming to analyze and review all those documents. But you are going to find more documents than you would have with a keyword or Boolean search, which brings us to the next level of search. And that's what we talk about here when we talk about Reveal Ask. This is generative AI-assisted search, and it really does combine the best of both worlds. Right? You take your Boolean search and concept search together, and what Ask delivers is the ability, in natural language, much like you would do a concept search, to query your documents or your project for information and have it bring back relevancy-ranked documents. Conceptual search introduced ranking; Gen AI search in the form of Ask adds summarization, putting those together to give you that best experience. It provides a much quicker understanding of the documents that you have access to in your dataset. What I really like is that second bullet: it turns novice users into search experts. Especially in the early days of Google search, but even in the early days of concept search, it typically required someone who really understood how to use the tool to actually get the most out of it. Gen AI is the great level setter.
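To make the conceptual-search trade-off concrete, here is a toy sketch in Python. The `concept_map` dictionary and its associations are invented for illustration; a real concept-search engine learns these relationships from a large training corpus rather than a hand-built map, but the effect is the same: recall goes up, and so does the number of documents you have to review.

```python
# Toy concept-expansion sketch (hypothetical; real concept search learns
# these associations from a trained corpus, not a hand-built dictionary).
concept_map = {
    "apollo": ["lunar mission", "rocket", "space race", "theater", "greek god"],
}

def expand_query(term):
    # Broaden the query with everything the model associates with the term.
    return [term] + concept_map.get(term.lower(), [])

def search(query, documents):
    terms = [t.lower() for t in expand_query(query)]
    # A document matches if it mentions the term or any related concept.
    return [d for d in documents if any(t in d.lower() for t in terms)]

docs = [
    "The Apollo 11 lunar mission landed in 1969.",
    "The rocket program accelerated the space race.",
    "The Apollo Theater hosted a show in Harlem.",
    "Quarterly earnings were flat.",
]
print(search("Apollo", docs))  # matches the first three, not the earnings memo
```

A plain keyword search for "Apollo" would have missed the rocket document entirely; the expansion finds it, at the cost of pulling in every theater document too.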
So Ask, by its nature, allows almost anyone to understand the data very quickly, even without a lot of technical knowledge about search and how it works. And we'll see more of that as we go through the cycle here. This is pretty interesting. I don't know if any of you are familiar with the Gartner Hype Cycles. They've been developing these for years now; I think it goes all the way back to 1995, when they first started their Hype Cycles. This one is the Legal Tech Hype Cycle from 2021. There was a reason I put an older one up there. For those of you that are not familiar with the Hype Cycle, this is a graphical representation of the maturity and adoption of new technology; specifically, in our case, we're talking about legal technology. It begins with the innovation trigger, where the technology first captures attention. Think of it like when full self driving was first announced by Tesla. That was an innovation trigger. That innovation trigger then rises swiftly to what they call the peak of inflated expectations, and then it plummets to the trough of disillusionment. I love their terms. And then it climbs back up through the slope of enlightenment, and finally plateaus at what they call the plateau of productivity. The reason I bring this up is that you can see that eDiscovery, now very much in its heyday, is climbing the slope of enlightenment towards an inevitable plateau of productivity. But what's over here at the farthest reach, here in July of 2020, is predictive analytics. Those tools that we've been using, continuous active learning, technology assisted review, natural language processing, those predictive analytics that we've been using in legal technology for a while now, are now plateaued. If you look at the hype cycle from 2023, you will see that generative AI is now just at the peak of inflated expectations.
So Gartner predicts that in about 2 to 4 years, we will plateau in generative AI. That's where we're going: we're about 2 to 4 years from a plateau of productivity with Gen AI. That means we are peaking right now in what we can do with it, and then we will drop back down before we climb back up to a plateau. What Reveal has done, through a very thoughtful process that I'm going to describe in a bit, is create our first generative AI offering, which we call Ask. Ask expands upon Reveal's already very well established AI capabilities. Reveal is considered the gold standard within the legal industry for artificial intelligence: our acquisitions of NexLP and Brainspace, and the integration of those platforms into Reveal Enterprise as a single homogeneous product. We've always been at the forefront of giving lawyers the tools they need to leverage advanced technology, especially AI technology. What Ask does is expand that to generative AI technology. It is really purpose-built AI for legal. Ask allows end users to interact with their project data using natural language questions, so they can easily ask and receive responses in narrative form. It augments human decision making. I think it's really important to make sure that, obviously, whenever you're using artificial intelligence, especially Gen AI, there is what we call a human in the loop. You'll see that phrase, and the abbreviation HITL, used a fair amount: a human in the loop. That means that every decision the AI makes, a human should validate. You should never take a generative AI response to anything and use it going forward without some form of validation. But the whole process of using Ask really elevates the user experience. This tool was developed with a design-driven approach based on the needs of lawyers specifically, and it provides a really easy-to-use interface. It's not a separate product. It's built right into the UX, and that makes it very easy to engage with.
Flexibility is important because, while we use a particular large language model today to power Ask, that doesn't mean it always has to be that way. In the future, it can be powered by a bring-your-own-large-language-model option. Not today, but in the future, we can do that. The Ask tool was designed to be flexible enough that you can plug and play other models down the road. We wanted this technology to be pervasive within the platform, and that's really important. So it interacts with all of the other artificial intelligence analytics that we've already established in our product: things like the visual dashboards, the emotional scoring, the scores by predictive model, the communication analysis. All of these tools that already exist and are being used by our customers, Ask can interact with. I'll show you a little bit of that in the demonstration. The last bullet there is responsibility, and that's obviously really important. It was so important I gave it its own slide. There are 3 things around responsibility. The first is making sure that our customers understand we never use their data to train artificial intelligence models, whether that's any of our in-house models or the large language models we work with. Unless we've specifically gotten permission from a customer, we never use your data for that purpose. It's simply not done. Second, we have an entire data science team whose only job is to build artificial intelligence models, curate them, and fine-tune them over time. We have very well trained staff that does only that job. It's very important to us at Reveal to make sure that we have the best people and the best minds in the legal profession working with our AI models. And third, of course, security. Security is at the forefront of all of our products, but very much so with our Gen AI tool.
We wanted to make sure that the best security measures are in place around every aspect of that chain. So what does that look like? This is the map of our deployment of Ask. It first went out to a select number of customers about this time last year; it seems like forever ago, but it was really only a year ago. We had a series of customers that we selected for the beta release of Ask, and they were able to test it out. We were able to solicit feedback from them and make changes to the environment. We first displayed Ask publicly at Legalweek early this year. Again, it seems like forever ago, but it was early this year, in 2024, at Legalweek, where we first showed Ask to the general public and went into an early access period. We were inviting customers then to turn Ask on in their projects or their environments. Just this summer, we went to full global availability of Ask for all Reveal customers. So it is available in production to all of our customers in all of our supported regions. What I will say is that it's been very well received by our customer community. Obviously, we have test and demo data, and when you see me demo Ask later in the presentation today, I'm going to be using Enron data, of course. I have some slightly different questions to ask that aren't the standard ones, but it is Enron data. What we see from our customers when they use Ask on their live data, although obviously we can't share it, is fascinating. They are very quickly able to get to narratives that they would not have gotten to without a significant amount of time and effort spent searching or reviewing documents. They can literally just ask questions of data they just received and, within minutes, know a lot more about the dataset than they did when they first got it. This is not the end. What you see today in Ask is, I would say, still a first iteration.
We have a number of things coming down the pike, and I'll talk a little bit about that inline as part of the demonstration, so I'll save it for later. The way Ask works is very straightforward: through the ability to ask questions. You're posing those questions in everyday language. You can literally ask any question you want of your data; I will talk a little more about which questions work better than others. But you can also follow up your question with a prompt. The prompt says: answer this in more detail; give me a timeline of the events here; answer this in Spanish. So it can do quite a bit. The generative part of generative AI is to generate content based on what it understands about your documents, so you're not just getting a document list back. I'll talk a bit about how Ask works, but it starts with a search. Everything begins with search. In this case, though, Ask employs a semantic search, which is different from a concept search because it is not based on pre-trained content. Remember, we talked about conceptual search being the system understanding the word you're searching for through how it relates to other documents or terms it's been trained on. Semantic search takes that a step further. It doesn't have to be based on pre-trained documents. It understands natural language and can then respond, picking out the terms in your question and bringing back documents that are very relevant to it. I'll show you how you can get more details. Obviously, citing sources is extremely important to us. That was part of our initial design, and we're going to enhance it next month. I'm very excited about that, and I'll show you what I mean; it will bring back much richer information faster. For the fellow nerds on the webinar today: I'm a technologist at heart, so understanding how Ask really works was important to me.
Obviously, from an end user experience it's great, but I really wanted to understand exactly what was happening behind the scenes. One of the things that's really unique about Reveal Enterprise, the product, not the company, is that we have not only integrated these artificial intelligence tools into our product, but we've done so seamlessly. If you go back 5 years, you might have had the best eDiscovery tools in front of you, but you typically needed someone who really understood setting up a technology assisted review project. That meant some technical experience. If you had a legal operations team or trained paralegals that could do that, they would set up the project to allow somebody to then go in there and start coding, and then the system could start scoring your documents for you. That's typically how predictive coding works. But there was a setup period to that. The reason I mention and belabor that is that what Reveal did was really advanced: they created what we refer to as an artificial intelligence pipeline as part of processing. When we process data in Reveal, the last step is called the AI pipeline. What it does is set up all of the indices and things in the back end that allow us to provide you with the analytics. The natural language processing analytics, the emotional intelligence analytics, the communication analysis, all of that is pre-populated by the AI pipeline. There's no setup work that needs to be done. That's an important point, and I wanted to make sure that's clear. What we've added with Ask is an expansion of that AI pipeline to include a semantic search index. We integrate that into the database. Basically, it's a next-generation version of a concept search index, and all of it is housed within our Elasticsearch database. It's extremely fast and extremely scalable. So with that understood, users now go ahead and type their search query into Ask.
And it is, in fact, a search, because what it does is take the question you ask and transform it into a semantic expression, which can then be processed by the Ask Search Service that we've created. Ask then goes ahead and probes that advanced index for the text excerpts that have the highest semantic similarity. So it's looking for the phrases and terms that appear with high similarity to what you're looking for. It extracts those as document sources and references within the documents, and it ranks them by similarity from 0 to 100%. All of that is done in your own Reveal instance. Step 1 is the search: the semantic search is done, and then we take a subset of the results. Those are further analyzed by an additional AI model, which is external to the Reveal instance but part of the Amazon Web Services Bedrock environment that we're hosting in. That is where the large language model, the generative model, comes into play. The selected results are sent to Bedrock. This whole path, both in and out of the generative model, is of course encrypted. Then the AI results are returned to the user, presenting the search results both in narrative form and as the cited source documents with the reference scoring. That will make more sense when you see it. But that's really how the process works. A lot of it happens in the local Reveal instance before a little bit of it is shipped out to the Bedrock service, primarily for summarization. There are a lot of ways that you can use this technology. Obviously, we had some in mind when the technology was designed. Our customers are now finding new ways to use it.
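Before we get to the use cases, the retrieve-then-generate flow just described can be sketched roughly as follows. This is a hedged illustration, not Reveal's implementation: the bag-of-words `embed` function stands in for the neural semantic index housed in Elasticsearch, and `call_llm` is a hypothetical stub standing in for the encrypted round trip to the Bedrock-hosted large language model.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a neural embedding: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, top_k=2):
    # Step 1, inside the local instance: rank excerpts by semantic
    # similarity to the question and keep only a small subset.
    q = embed(question)
    scored = sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)
    return scored[:top_k]

def call_llm(question, excerpts):
    # Hypothetical stub for step 2: the generative model sees only the
    # retrieved excerpts and writes a narrative answer citing them.
    cited = "; ".join(d for _, d in excerpts)
    return f"Narrative answer to {question!r}, citing: {cited}"

docs = [
    "Jeff McMahon served as treasurer of the finance group",
    "the construction schedule slipped by two months",
    "Andrew Fastow worked in finance as chief financial officer",
]
print(call_llm("who worked in finance", retrieve("who worked in finance", docs)))
```

The key property the sketch preserves is that the language model never sees the whole dataset, only the top-ranked excerpts, which is why the answer stays grounded in the documents.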
So, things like early data analysis, EDA, in both internal investigations and litigation: being able to ask questions of the dataset and very quickly uncover information that will enable a quick decision, whether it's a case strategy decision or an internal investigation decision. Locating key documents very quickly. Finding seed documents for a technology assisted review workflow, that's another way to use it. Identifying privilege, obviously. In a production set, if you receive a production from the other party and you need to quickly find important documents without starting a review, you can use Ask to kick-start that whole process; swiftly analyzing an incoming production is really important. Obviously, preparing for depositions or custodial interviews; these are great uses, and I'll talk a little more about that and actually show it to you in the demonstration. There's a large variety of use cases, and there's one massive one that I didn't mention, which of course is first pass review. That is the immediate future. I'll leave you with this: that is definitely where Ask is going, first pass review, being able to identify responsiveness automatically or at least give you a first cut at it. You'll see more about that coming out of Reveal early next year. But let's talk a little more in depth about the questions that are asked. I put up there who, what, when, where, and how, and you'll notice why is missing. I find that questions like why, why did Andy commit fraud, or broad questions like was there fraud, are not ideal for a tool like Ask. These searches are extremely challenging because, unless someone has documented the reason for an entire scheme of fraud in documents that are in your dataset, Ask is not going to be able to return a very good result.
Ultimately, in order to answer your question, Ask has to be able to find the information in the documents that are provided. So, the questions that you see there, who, what, when, where, how, those are effective questions. Identifying people, items, discussions that took place: these are all great things for Ask. Ask currently does not understand metadata questions. So dates, like creation dates of documents or send dates of emails: we avoid any kind of metadata question and focus on the text of the documents, the body of the email messages, or the chat messages. All of that can be searched using Ask. I just want everyone to keep that in mind as we go through this. Comparisons. I wanted to make sure that people understood the differences here. Right? Ask versus a traditional keyword search: we talked about the evolution of search at the beginning. Ask really excels in precision, leveraging that semantic search index to understand the natural expansion of words, and it offers targeted results. Keyword search does not do that, right? It will return all instances of a specified term, which may mean very time-consuming relevancy checks. Ask is going to help you narrow down the results and the pertinent documents through that contextual understanding of the search query. And I'm putting Ask versus ChatGPT, but when I say ChatGPT, I obviously mean Gemini and Copilot and all the other commercial generative AI tools out there, not limiting it just to ChatGPT. But whenever you say Gen AI, everyone's first thought is ChatGPT. To be clear, and this is a very important point, Ask is limited to just the documents in the project that you're working on, not even your entire Reveal instance. When you're asking questions, it only has access to the documents that are in the project you're currently in, which is extremely important.
Obviously, ChatGPT has access to a much wider set of information. It has access to the entirety of the Internet, so that's an important distinction. With Ask, there are not going to be hallucinations. With ChatGPT, unfortunately, we all know that not everything on the Internet is true. If you're asking ChatGPT to create content for you or answer a question, it might come across or utilize information that is not in fact true. Now, that doesn't mean that every document in a dataset you receive is entirely truthful, but you can at least rely on the fact that Ask is only using the information in the document set that you're working from. Ask is really optimized for searches in your database. It's also optimized for understanding documents in a legal capacity. ChatGPT is not. It is not tuned for that specific document set; ChatGPT is designed primarily for content generation. So it is less efficient at database searches, and as a result, it's going to bring back noise; it's going to bring back things that you don't want it to. ChatGPT can simulate conversations, and that's not what Ask is designed for. Ask is designed to do intelligent document search and retrieval. There are some important considerations when you compare the two. Everyone always asks, okay, that's great, but what about the commercials involved? I'm not going to go into pricing. I will say that Reveal is extremely flexible about how we license the Ask feature. All the other artificial intelligence analytics that we provide in the Reveal Enterprise platform are included with a subscription to Reveal Enterprise. Ask is a little bit different, because there is a significant cost element to the compute time that we use. The way we handle that is, if you want to try it out, we can enable it for you in a demo project at no cost. A reasonably sized demo project is not a problem.
But commercially, we have a transactional model where you can say, hey, I need it for this one case or a specific document set, or I'd like it on all of my projects going forward. We have the ability to enable it on an individual project, and we have the ability to enable it across an entire Reveal subscription if you'd like. All of those options are available, and if you're interested, you can talk with one of our account execs about that. So without further ado, I know we're moving along at a good clip, I want to show you the actual product. I'm going to switch out of my PowerPoint here and go over to Reveal. All right. Here we are within Reveal Enterprise, and as you can see, as I mentioned in the presentation, there are a lot of analytics built into Reveal out of the box whenever you ingest your data, because of that AI pipeline I was talking about: the ability to identify exact and near duplicates, documents by predictive scoring using our AI models, emotional intelligence, communications analysis, content clustering. All of that is out of the box, and all of it is populated by the AI pipeline. Where we're going now is the Ask interface, right up here: because Ask is a search technology, we felt it was most appropriate to put it in the search bar. So I can just click on Ask. You'll notice, first of all, that this is my Enron dataset. There are 1,100,000 documents, a lot of documents. Ask does have the ability to let you search across all of the documents in your project or just a subset that you've identified. So let's start out with a simple question. What's really funny is that I've noticed that when I ask questions of Ask, I'm still very formal about typing my questions out with proper punctuation and capitalization; none of that's really required, but it's just habit. The first question I asked is: who worked in finance at Enron?
Obviously, it's very easy to ask who was the CFO at Enron; back comes Andy Fastow. It's much harder to ask who worked in finance at Enron. Think about this from a keyword search standpoint. If you had just received this information about your client, or this was a third-party production to you, and you wanted to glean some information from it, how would you construct a keyword search that would hit on all the right terms to bring back the relevant information for a question like who worked in finance? It's very hard to do, actually, and you're going to get a bunch of superfluous information back. Here, I asked this simple question, and it went through those documents and found me all the people involved. You can see here that Jeff McMahon was the EVP and treasurer. It talks about Ben Glisan; it talks about Kevin Howard. It talks about Andrew Fastow, of course, because he was CFO, and it also goes on to mention those partnerships that he engaged in. And then it talks about other people that were mentioned in documents, and their titles. So all of this is a narrative response created by, first, a semantic search looking at my question, and then all the cited documents here. You can see that Ask will return up to 100 documents and references. This is the count of actual documents in the dataset; this is the count of references within those documents. Obviously, the reference count is higher, because one or more documents have multiple references in them. If I look at the documents themselves, I can see a scoring, a ranking: a semantic similarity score between the text of my question and the source document. And I can launch the source document right from here if I want to see it. It also shows me a snippet of where it got that information from. For the first three documents you're going to see snippets, and then I can open up any of the others if I choose to.
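As a toy sketch of how a 0 to 100% similarity score and a source snippet like the ones just shown might be derived: this is an assumption-laden illustration, not Reveal's actual scoring or snippet logic, and it uses bag-of-words cosine similarity where a production system would use neural embeddings.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a neural embedding: a bag-of-words vector.
    return Counter(text.lower().split())

def similarity_pct(question, passage):
    # Cosine similarity between question and passage, scaled to 0-100%.
    q, p = embed(question), embed(passage)
    dot = sum(q[t] * p[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return round(100 * dot / norm) if norm else 0

def best_snippet(question, document, window=8):
    # Slide a window over the document and keep the best-scoring excerpt.
    words = document.split()
    spans = [" ".join(words[i:i + window])
             for i in range(max(1, len(words) - window + 1))]
    return max(spans, key=lambda s: similarity_pct(question, s))

doc = ("The budget meeting was rescheduled. Jeff McMahon, executive vice "
       "president and treasurer, led the finance group discussion.")
print(similarity_pct("who worked in finance", doc))
print(best_snippet("who worked in finance", doc))
```

The snippet step is why the display can point you at the exact passage that drove the score, rather than just the document as a whole.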
This is an important thing: not only are we giving you the narrative response to your question, we're also showing you the cited sources. I hinted at a November update to this. What we're actually doing is adding citations: there will be footnotes at the end of every statement in your narrative response that are hyperlinks to the specific document where that information was gleaned. You'll be able to go directly to the most relevant source, or the most semantically similar source, which is probably more correct. Let's expand upon that whole concept of the partnerships that Andy Fastow created. Let's ask: explain the LJM partnerships. Now, I'm going to give it some more information. This is really the prompt portion. If you think about it, this is my search question, and here's the prompt: I want you to answer in detail and provide a timeline. What I've learned from working with attorneys for a very long time is that they love timelines. Timelines are extremely important to building cases and understanding case strategy. Having this look at the data and build a timeline for me, without having to do any document review ahead of time, is incredibly useful. Think about how much time and effort and money is saved by having a tool that can generate this in less than a minute. So I've asked it to explain what is meant by the LJM partnerships, pretending I have no idea what they're talking about, and my narrative response comes back and says these were private investment limited partnerships formed in 1999 by Fastow, who was CFO at the time. Here are the key events in timeline fashion. It literally breaks out every event that it found in the documents related to the LJM partnerships and plots them out for me. This time it used 54 documents and found 100 references. If I look again, I'll see that the top few documents here are extremely relevant. In fact, this first document had 4 references in and of itself.
So it found a lot of information there, but it also used eight references in the next document, and so forth. It used a number of these documents repeatedly to glean the information it needed to respond to those questions. Let's move away from Andrew Fastow for a moment to something I found recently that works really nicely. I was working with an organization that dealt specifically with construction eDiscovery. As it turns out, there are a fair number of documents in the Enron dataset that deal with construction projects. One of them is the EPE project. Imagine you were given a dataset along with the term EPE and you had no understanding of what that meant. I went ahead and typed in: what is the EPE project? The EPE project deals with a project in Brazil, hence why I asked it to respond in Portuguese. I don't speak Portuguese, by the way, so I do have Google Translate sitting here. It comes back and tells me that EPE was a plan to build an 80-megawatt thermoelectric power plant. It talks about all of the information that went into that project, the construction of the plant, and the pipeline to transport the natural gas that would be used as fuel for it. I can delve a little further into it (I actually built this one out in advance, so I can show it to you): there were construction delays on that project. We often focus so much on asking questions that we forget about the generative element of artificial intelligence. Obviously, it was able to convert its answer into Portuguese, and it can convert it into almost every language. In fact, it's able to convert into some languages that are not real. For the Lord of the Rings fans out there, it will work in Elvish; I've had it do that properly. I've tried a few other fun ones, and Klingon has worked too. But you can also ask it to generate content based on the documents.
In this case, there were obviously construction delays on the project; documents in the dataset allude to that. So I can ask what caused those construction delays, and more specifically, I'm asking it to suggest questions that could be asked of the project manager on that project if that person were deposed in a legal examination. Now not only am I asking for information, I'm asking it to generate work for me, to create new content that I can use. And in fact, yes, there were construction delays, they were caused by the AG's investigation, and here are some questions that could be asked of that project manager. This is the part that, to me at least, is the most exciting and interesting about technology like this, because it can not only speed up decision making in legal discovery, but more importantly it can speed up the entire process, and it could replace some human workers, or I would say make them more efficient. You're never going to take that human out of the loop I was talking about, but you can certainly make them a whole lot more efficient, much more quickly. Let's go a step further. Say I know nothing about construction projects, and by the way, I don't know a whole lot about construction projects. I'm now providing information to the application: I want it to bring back a list of 10 terms that would be good for a keyword search of documents about physical construction projects. So I gave it some information about what I mean by a term, and back come 10 terms. And these are fairly complex terms; they are not single words or simple keywords, they are actually terms used in construction projects. Here's what I mean by using Ask in conjunction with other tools within Reveal Enterprise.
If I grab these and go out of Ask for a moment, over to our term list feature, I can plop them in; I can literally just cut and paste. Now I switch from plain text to table and hit run. Within a matter of seconds, I've run all of those terms that were provided by Ask and gotten hit counts of documents. In fact, I can click Add to Search, run that search, and bring back just the 6,700 documents that came back as responsive. Then I can go back into Ask, and you'll notice that when I do, it offers me the option to limit the scope of my questions to just that dataset. That ability to focus Ask on a specific subset of documents is already there for you. Not to mention that you can look at previous conversations and download your interactions with Ask; there is a defensibility element. But this is first and foremost a search tool. You are able to use it to search for documents; it is just a more intelligent search tool because it leverages generative AI and large language models. All of the searches and conversations between the user and the Ask interface are tracked in the audit log of the project, so a project admin can go and download that information for any user of the project. From a defensibility standpoint, we have that covered. One thing I really like about this is that it is not currently able to go outside of its project; we made that very clear. I can say, hey, what teams are in the World Series this year? Well, it's not going to know the answer to that, because this is the Enron data. This is just an example of what I'm talking about; the results of this question are actually pretty funny, as you'll see in a second.
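The term-list step described here, taking the terms Ask generated, running them against the documents, and getting hit counts, can be sketched in a few lines. This is an illustrative stand-in for the term list feature, not Reveal's code; the sample documents and terms are made up.

```python
import re

def term_hit_counts(terms, documents):
    """Count how many documents contain each search term (case-insensitive)."""
    counts = {}
    for term in terms:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        counts[term] = sum(1 for doc in documents if pattern.search(doc))
    return counts

# Made-up documents and construction terms.
docs = [
    "The change order extended the construction schedule by two weeks.",
    "Attached is the punch list from the site inspection.",
    "Quarterly earnings call transcript.",
]
hits = term_hit_counts(["change order", "punch list", "critical path"], docs)
print(hits)  # {'change order': 1, 'punch list': 1, 'critical path': 0}
```

The point of the round trip is that a generative tool supplies the multi-word terms and a conventional keyword engine supplies the counts, so the two complement each other rather than compete.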
Obviously, it's not going to know who's in the World Series this year, but it does bring back some interesting information. It'll say, hey, based on the given text, there's no information about the teams playing in the World Series; there appears to be financial data and projections related to inflation rates, exchange rates, and tax depreciation. The answer can change. I've run this question before, and it once came back saying it found documents in the dataset about how the Mets complained that they should have won the last World Series they played against the Yankees. So someone talked about that in a document or two within the Enron dataset, but there's obviously no information there about who's playing now, because Ask is not connected to the Internet; it is limited to the documents in this project. Hopefully that was a good overview of Ask and how it works inside Reveal Enterprise, and hopefully it also expands your understanding of how generative AI tools in general can be leveraged in the legal industry. At this point, I think we're going to turn it over for question and answer. Awesome. Thank you, Jeffrey, for that great demonstration. Let's get into some Q&A. We have questions live from the audience, and we had some presubmitted as well. To kick us off, we have a question from Vanessa: is there a plan on the roadmap for Ask development to incorporate metadata information? For example, having the ability to use date information in a date field for a timeline would be ideal. Yeah, that's a great question, actually. The short answer is yes: I think in the future we will be able to incorporate metadata into the natural language search functions of Ask. However, even today, you can follow this example using the other analytics in Reveal together.
What I mean by that is, for example, you could use any of the date fields in the database to select your document set and then go to Ask and ask a question of just that subset of documents. So you can already use the metadata fields today, in many ways, to influence what you're providing to Ask in terms of a project set. But I agree, and I'm pretty sure that metadata will be part of the future of Ask directly. Awesome. Our next question, live from the audience, is from Josh. Josh wants to know: can Ask handle queries that span multiple matters? Yeah, that's a great question. Not today; today it is focused on individual projects. I don't know if that is the direction we'll take the Ask product in the future. It's definitely something being considered, but there are a lot of implications when you go across projects. It's a great question, though. Awesome. Our next question was a presubmitted one: how does Ask's semantic search perform when compared to keyword-based searches in complex investigation scenarios? Yes, so this goes back to what I was saying in the presentation regarding the different types of search and the evolution of search. Keyword search is great; however, you don't really know if it has returned all of the results you're looking for. The difference with semantic search is that it takes the best of both worlds, the keyword Boolean world and the concept search world, and brings them together. It can parse natural language queries like a concept search can, but it is not going to return too many results the way a concept search would. So I think semantic search is always going to be better than a keyword search in that respect, and even better than a concept search as well. Awesome. We have another presubmitted question.
What's the learning curve for integrating Ask into existing workflows, and are there resources to support onboarding for larger teams? Sure. There's not really a learning curve; that's the beauty of generative AI, and I think that's one of the reasons there's so much interest in the topic. eDiscovery companies rolled out technology-assisted review a long time ago, and the adoption of TAR took a very long time. Generative AI tools, on the other hand, are being adopted at a much faster rate, really because they allow that natural language approach to interrogating documents. I like to talk about it as interrogating your document set in natural language. So the learning curve is very short. As for how you deploy it into workflows, I could spend a whole other webinar on that, because there's a variety of ways Ask could be utilized, depending on whether you're talking about an internal investigation or a true litigation where you're using it for different elements of that litigation. Ask could be used on a production from the other side; it's a great way of quickly understanding what's been provided to you in the document set. It can also be used, as you will see early next year, as the tool for a first-pass review: instead of a traditional linear review, you give Ask some basic understanding of the case and have it apply responsiveness tags, citing reasons why it applied a tag or did not apply a tag to every particular document. I think as the workflows evolve and mature, we will see more and more use of it. But either way, I think it's going to be fairly easy for legal professionals to pick up. In terms of onboarding, yes, absolutely: we have a wonderful team here.
At Reveal, we refer to them as customer success managers, and they work with all of our customers to onboard new functionality; Ask very much falls into that category. Our CSM team is great at working with our customers to help them deploy Ask into their existing or new projects and suggesting the best ways to do that. Awesome. Our next question is: does Ask allow for customization or training specific to a firm's recurring types of cases or specialized legal needs? Not currently. I will say that every case is a little bit different, so every project is going to be a little bit different, and as such, the semantic search behind that case is going to vary. This is probably where Ask differs from some of the other tools within the Reveal enterprise platform, like our AI models, which really can be trained and reused from case to case, and which actually get better over time. When I say they get better, I mean that as they come to understand a firm and its work, the industry and practice area it's working in, they get better at that over time, and you're able to reuse that knowledge and those classifiers. That's probably a better tool to fit this question, I think. Ask is going to be very specific: it's going to run a semantic search and return a narrative. That narrative will improve over time, I will say that. The LLM, the large language model we're utilizing for summarization, is constantly refined and improves over time like everything else. So Ask is going to improve: even setting aside the functionality enhancements we make at Reveal, the LLM itself is going to get better over time as well. Awesome.
Our next question: how does Ask assist in minimizing irrelevant or extraneous data in large datasets to improve search efficiency? Yeah, that's a complicated one. Part of what Ask is doing goes back to that semantic search piece. That search is extremely important to answering this question, because it is designed to understand the importance of keywords and phrases. One of the things Reveal does really well is understand phrases, terms that are two, three, or four words, because we actually have patents around phrase detection within documents. We're able to understand the importance of documents using that technology and, conversely, to understand what is not relevant to a case, calling out that extraneous data as part of the semantic search so that the documents that are returned, and ultimately sent to the LLM, are not junk; the junk is not included as part of the set. So I would say our underlying technology for phrase detection and semantic analysis is what helps us eliminate extraneous data from the dataset. There is a whole set of other tools I could talk about that would help do that as well, but in terms of how it relates to Ask, I think that's the best answer. Great. Our next presubmitted question is: can Ask's natural language processing adapt to unique industry terminology, such as jargon or abbreviations specific to certain legal fields? It will over time, I suspect. Jargon, terms of art, abbreviations, acronyms, initialisms: yes, it can understand those. The LLM behind the scenes is trained on a large corpus of information and is not specific to the legal field. However, I've read about a lot of LLM training now being focused on specific industries, so we may see a case where you can unplug one LLM and plug a different one in its place.
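Phrase detection of the kind described here, spotting multi-word terms of two, three, or four words, can be illustrated with a toy sketch that counts word n-grams recurring across documents. This is emphatically not Reveal's patented approach, just a minimal illustration with made-up texts.

```python
from collections import Counter

def recurring_phrases(texts, n=2, min_count=2):
    """Find word n-grams that recur across a collection of texts."""
    grams = Counter()
    for text in texts:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            grams[" ".join(words[i:i + n])] += 1
    return {g: c for g, c in grams.items() if c >= min_count}

texts = [
    "the special purpose entity was formed last year",
    "auditors questioned the special purpose entity structure",
]
print(recurring_phrases(texts, n=3))
# {'the special purpose': 2, 'special purpose entity': 2}
```

Even this crude version shows the idea: a phrase like "special purpose entity" carries far more signal about a document's relevance than its individual words do, which is what lets the pipeline keep the meaningful documents and drop the noise before anything reaches the LLM.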
I talked about that in the preamble to this presentation, where we discussed the flexibility of Ask as a technology. So I would say yes in the future, but probably not today. Great. And we have one last presubmitted question: how does Ask handle and prioritize information from different data sources, such as emails, chat messages, and documents, when generating results for complex multi-source cases? So remember that Ask is working from all the documents; unless you intentionally scope it down, Ask is working from the entire document set of the project. That document set can combine all of the things mentioned here: email messages, loose documents, chat messages from collaborative tools like Slack or Teams, even mobile message data. It doesn't prioritize one data source over another, because what's happening is that it is using that semantic search to find the relevant documents, and those relevant documents, depending on the case, could be any of the above. If I'm having a conversation with you, Brendan, in Slack about a project, and then our Slack data is collected and becomes part of a case that goes to eDiscovery, that Slack data could be just as important, if not more important, than any email messages we might have sent or any documents we created. So I don't think it needs to prioritize one data type over another; it treats them all, I wouldn't say exactly equally, but in a very similar capacity. Awesome. That is the last question we have for today. Great. Which is perfect, because we just have a couple of minutes left. A couple of closing remarks: we would love all of your feedback, so I am launching a survey right now. It's super quick, only a couple of questions, but if you could take 30 seconds to answer it, that would be amazing.
Also, if you're interested in learning more about Ask or any of Reveal's products, I welcome you to click Request a Demo in the right-hand corner of your screen. And with that, thank you, Jeffrey, for your great demonstration and your time today, and thank you everyone for jumping on this webinar. We will conclude today's session. Thank you, everybody. Bye, everyone.