Episode #3
July 29, 2019
Predictive Analytics & Artificial Intelligence
With Stan Smith Of Gradient AI
Host James Benham is joined by Gradient AI’s Founder & CEO, Stan Smith. Learn about AI & predictive analytics as they apply to the insurance industry.
INTRO
On episode 3 of the InsurTech Geek Podcast, we are talking all things Predictive Analytics and Artificial Intelligence with the CEO and Founder of Gradient AI, Stan Smith.
The InsurTech Geek Podcast, powered by JBKnowledge, is all about technology that is transforming and disrupting the insurance world. We will be interviewing guests and doing deep dives with our own research and development team on technology that we see changing the industry. We are taking you on a journey through insurance tech, so enjoy the ride and geek out!
INTERVIEW
JAMES: Alright, greetings everybody! I hope you are doing well out there in InsurTech land. This is James Benham, the InsurTech Geek, and we are here to talk about predictive analytics with one of the guys who knows his stuff about predictive analytics, Mr. Stan Smith. He is wicked smart. He is from the Boston area, and Stan is joining us remotely; he is driving in his car right now, getting some windshield time while we do an interview. Stan, how are you doing today?
STAN: Hey James, thanks for having me here.
JAMES: Thanks for joining us. I appreciate it, and I certainly appreciate you taking some windshield time and dedicating it to talking about all things InsurTech, in particular predictive analytics. Now, look, I always like talking about tech. I like talking about the people behind the tech, and you are one of those people. So just tell all the listeners out there: where were you born and raised, and what got you into insurance and tech? What is your background?
STAN: So, I am a military brat, a Navy brat, born in Pensacola, Florida. Moved all around the country. My dad retired in New England, so we sort of settled near Boston when I was in high school. Went to school up here. This is my home now. I got into software a long time ago. This is my 6th startup, and I founded 3 of them, and I have done different things. But about 20 years ago, the first startup that I got involved in used what was called machine learning back then, now called AI, because it is sexy now. Back then it was the same stuff. But I saw what it could do. And what got me into insurance: I looked at the insurance space and realized insurance needed some changes, and there were some great opportunities to help companies do better. But it was also a challenge, because a lot of companies are not willing to share their data with anybody. They are very protective of their core asset, which is their data, which I understand. So, it makes it kind of challenging, but an opportunity as well.
JAMES: Awesome. And what were all the different startups? You were involved in 6 and founded 3. What were the different types of businesses that you built over the years?
STAN: Okay. So, there was a manufacturing software company, for computer-integrated manufacturing. There was a computer-aided design company, there were a couple of product lifecycle management companies, and then the machine learning-based company was in supply chain. It was a supplier management company for managing tens of thousands of global suppliers. A lot of big companies used it, and it used machine learning to help them predict the future performance and future bankruptcies of their supply chain, so they could manage their suppliers more effectively. And 2 out of those 6 went public. So, it has been kind of an interesting ride, both good days and bad days. All of the above.
JAMES: I mean, you have certainly been along for an economic ride with the economy, and the ride of having to be involved in an IPO. I could do a whole show with you just talking about entrepreneurship, startups, and the company life cycle, but that is not what this is about. But it is fascinating to me. I started my business in 2001. I have stayed with the same company, started JBKnowledge in 2001, then we started products, and we just did our first big exit about a year ago. And it has been interesting going through that entire life cycle of startup to growth, to ramp up, and then to exit. It certainly will put some more gray hair on your head, will it not?
STAN: If you still have a few hairs on your head, you know it is true.
JAMES: So, Stan, let us talk about Gradient AI. This was part of Milliman, which is a very well-known, well-respected firm, and then it spun out of Milliman. Is that correct?
STAN: That is correct.
JAMES: And you are using AI. You know just as well as I do, this is a buzz phrase. Putting the letter "i" in front of everything was 10 years ago. And we had 3 different names for cloud computing, starting with the application service provider model. Then we have got all these different names that things take on as marketers get more and more clever. AI is certainly one of those things, but the reality is machine learning is a subset of artificial intelligence. It has always been a subset. We just call it AI now. You are using true specific AI. I saw a wonderful image that I re-posted on my LinkedIn feed last week that had a guy with a hood on that said AI on it, and then the guy took the hood off and underneath was a bunch of if-then statements. And for those of us in the coding community, that is hilarious, because what a lot of people are doing is building what is basically an expert system that humans have built conditional statements into, to make it look like AI, but it is not. You are using a specific form of AI in this business. What was the origin story for Gradient AI, and why did it spin out of Milliman?
STAN: So the origin story was, I had never done anything in insurance, so I had to go find a business problem that was worth solving. And to me, a business problem is something that affects either the revenue or the profitability, or hopefully both, for a large number of companies in the same general industry. And what I discovered was that Work Comp, as a line in the insurance space, was performing poorly. So, I looked at it closer, because it was doing very, very poorly financially. And as I uncovered more and more about it, it turned out the claims side was performing much worse and was much harder for carriers to control than underwriting, so I thought I would start there. And I did what was called a proof of concept. I had a client that was willing to pay me some money to see if AI and machine learning could tell them things that they did not know already.
And the question that they wanted me to find an answer for was: can you identify for us what we call a creeping catastrophic claim, a claim that does not look catastrophic when it comes in, primarily soft tissue injuries. So, somebody looks like they sprained something, strained it, twisted it, not a major injury. Yet with these creeping catastrophic claims, 1 out of 10 of them will end up costing you about 60% of your total losses. So, for every 10 claims, you have got 1 that looks like no big deal, but the person never comes back to work, or does not come back for a year or 2, and it becomes a very bad, complicated, and expensive claim. So, they said, if we could identify these claims immediately, or very close to immediately, we could do a lot more about them instead of waiting for them to emerge on their own. That was the question. That was the business problem they wanted to solve, but we did not know how to do it.
So, we built a model, and it turned out we could identify a large portion of their high-cost claims early. The CEO is a smart guy, and he asked me to do 1 more piece of analysis. He said, okay Stan, at day 30, it looks like you have identified most of my high-risk claims, but I want to know, specifically for every claim that ever got to $100,000 or more in total spend over its whole life, how many of those you had identified as a high-risk claim on or before day 30. And then he wanted to compare against his adjusters: how many of those same claims had they already identified. If you have identified the same claims, there is nothing to do here. If you identified different claims, we should look into that. Well, James, it turned out that by day 30 we had identified over 90% of those $100,000-or-more claims as high risk, way before they looked risky. And his same adjusters, who are great people, do a great job, and are still a client of ours 6 years later, they had identified about 15% of those claims as high risk by day 90.
JAMES: That is a huge statistic. Claims adjusters are a rare breed. Good ones are hard to find. They have got to have knowledge, and experience matters with claims adjusters in a big way. You cannot just take an 18-year-old and have them adjusting claims like a pro in 6 months. This takes time and experience. So, they were experienced adjusters. They were identifying 15% of these largely soft tissue injuries, these creeping catastrophic claims, within 90 days. And the machine learning algorithm you built was identifying 90% of them in the same period, right, or immediately?
STAN: In 30 days, in 30 days.
JAMES: And you know that because you went back on historical data. You certainly did not want to have to wait 10 years to find out what all the creeping catastrophic claims were.
STAN: It is a heck of a back-test. You can take a set of claims that the models have never seen before, but that are closed, and you just show them the first 30 days of data and let the model tell you what is going to happen. Then you compare it against what did happen.
JAMES: Yeah. So, you have to have all of your years of data so you can wind the clock back on those claims. So, you took all this claim data and you wound the clock back to where the computer was only presented with the first 30 days of information, not the rest. Even though the rest of the data was already known and stored, it was not presented to the model. The model then took that data and was able to accurately identify 90% of them within 30 days, which is a 6-fold improvement over what the adjusters were able to identify? That is huge.
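For readers who want to see the mechanics, here is a minimal sketch of the kind of back-test Stan describes, in Python with scikit-learn. The file name, column names, threshold, and model choice are all hypothetical; Gradient AI's actual features and models are not public.

```python
# Back-test sketch: train on historical claims, then score closed claims the
# model has never seen, using only features from their first 30 days.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

HIGH_RISK_THRESHOLD = 100_000  # lifetime spend that defines a "high-risk" claim

# Hypothetical dataset: one row per closed claim, numeric features built only
# from day-30 information (incurred to date, encoded injury type, and so on).
claims = pd.read_csv("closed_claims_day30_features.csv")
X = claims.drop(columns=["claim_id", "lifetime_total_cost"])
y = (claims["lifetime_total_cost"] >= HIGH_RISK_THRESHOLD).astype(int)

# Hold out claims the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# "Wind the clock back": flag held-out claims from day-30 data alone,
# then compare the flags against what actually happened.
flagged = model.predict_proba(X_test)[:, 1] >= 0.5
recall = (flagged & (y_test == 1)).sum() / (y_test == 1).sum()
print(f"Eventual ${HIGH_RISK_THRESHOLD:,}+ claims flagged by day 30: {recall:.0%}")
```

Because the claims are closed, the true outcome is known, so the day-30 flags can be graded honestly without waiting years for the claims to develop.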
STAN: That is huge. That is what the CEO said. That is why they became a client of ours.
JAMES: Yeah. So, the sales cycle became shorter at that point.
STAN: That is right, that is right.
JAMES: But to identify that, it took them having the courage, in this case, to turn over their data sets to you.
STAN: 100%. And I think the biggest reason I did join Milliman was that I felt, and it turned out to be true, that being part of a well-known, well-respected, very credible, very secure big company made it easier for us to get that data in the early days. And I think that was a critical part of our growth. And if you think about it, James, we were sort of incubated inside Milliman, even though that was not the plan, and we ended up aggregating over 20 million Work Comp claims. Now, when I started and talked to some of the bigger carriers in Work Comp, they had 100,000 claims a year. I never thought I would approach that volume of claims in my career in this business. And now we would consider a client with 100,000 annual claims a modest-sized client.
JAMES: Exactly. Yeah.
STAN: It has changed the dynamic, and the extra data is valuable. The models find signal all over the place, and more data is always better than less data.
JAMES: So how does this apply to lines of insurance other than workers' comp? Have you looked at property? Have you looked at commercial auto? I mean, does the same approach and methodology apply to the other lines of insurance?
STAN: So, yes and no. If you think about artificial intelligence, it does its best job when you ask it very specific questions. Ask a model a general question, is this good or bad, and it is kind of hard for the model to assess what that means. What you want to be able to do is ask the model, and really ask the data, very specific questions, like: how much will this claim cost? How much will be medical? How much will be legal? Each one of those is a different question, and a different model that is built to do just that one task, even though they are bundled together behind the scenes; you want each model focused on just that. What we found, James, is that Work Comp is the longest-tail kind of line, and the one we know the most.
That made it the best line to bring this to, but now we are rolling out liability solutions, because liability is the second-longest tail. I call Work Comp a 90/10 problem: about 10% of your claims are going to be more than half of your total losses. Liability is about a 95/5 problem: about 5% of your claims are going to drive over 50% of your total losses. So, it is still hard to find those claims early, and that is exactly what our solutions are built for. And in both cases, we use that loss information to inform the underwriting model how to price policies better. So, it is a virtuous cycle. Losses help inform the underwriting; underwriting helps, hopefully, control losses. And we are working on a few other lines that we call adjacent lines as well right now.
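A toy sketch of the one-model-per-question design Stan outlines: separate cost models bundled behind a single interface. The target columns and model choice are invented illustrations, not Gradient AI's implementation.

```python
# One specialized model per narrow question, bundled behind a single interface.
from dataclasses import dataclass

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

@dataclass
class ClaimCostModels:
    total: GradientBoostingRegressor
    medical: GradientBoostingRegressor
    legal: GradientBoostingRegressor

    @classmethod
    def train(cls, X: pd.DataFrame, targets: pd.DataFrame) -> "ClaimCostModels":
        # Each model is trained to answer exactly one question about the claim.
        return cls(
            total=GradientBoostingRegressor().fit(X, targets["total_cost"]),
            medical=GradientBoostingRegressor().fit(X, targets["medical_cost"]),
            legal=GradientBoostingRegressor().fit(X, targets["legal_cost"]),
        )

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        # Bundled behind the scenes: one call, three specific answers.
        return pd.DataFrame({
            "total_cost": self.total.predict(X),
            "medical_cost": self.medical.predict(X),
            "legal_cost": self.legal.predict(X),
        })
```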
JAMES: Yeah. And what you have got is a solution that can certainly help out with claims adjusting. And if you go to your website, of course, it shows you have Work Comp, and then you have underwriting. It is the next obvious thing, because you are going to change the way you price based on this data, right?
STAN: Again, that same CEO asked me to move into underwriting. When I asked him, why is this going to help you, how is this going to help you, he said, well, today I have to wait 2 or 3 years to understand exactly how much a single claim for a single policyholder is going to cost before I can adjust the price of that policy, and you can tell me that in 30 days. That means when the policy comes up for renewal, I know if it is a good policy or not on a renewal basis, and then hopefully you can inform me on new business that is likely to have those kinds of claims better than I can do myself. It turns out AI can do both of those things very, very well.
JAMES: That is awesome. And this is not general AI. So, for the audience, general artificial intelligence is what you think of in science fiction. Of course, I am a Star Trek fanatic, so Commander Data, he was general AI, a sentient being that was artificially created. You have HAL from 2001: A Space Odyssey. You can go through many, many examples of general AI in science fiction movies. This does not exist yet. And my favorite futurist, Peter Diamandis, hypothesizes that we are about 30 years away from general AI being here. What this is, is specific AI. Alexa and the Hey Google, Google Home devices, those are a specific form of AI doing natural language processing, speech to text and text to speech. Some interesting things are going on in the speech area.
You are also leveraging NLP, because you are reading text, and this is what I love about your solution and machine learning in general: it is allowing us to tap into unstructured data as we have never been able to before. We work with some AI solutions that tap into photos and videos and do object recognition, image recognition, and pattern recognition, and look for the way that workers are lifting things, so you can identify if you have some loss control things you need to work on. And what I like about this is that you are tapping into the ability to read free text. So, you can take all those diary entries and start making something of them, right?
STAN: So, Wikipedia, which everybody is pretty familiar with, is about 4.2 billion words. We have single clients whose claim notes, just in the Work Comp claim files, are multiple Wikipedias' worth of text.
JAMES: Wow
STAN: And that is years and years and years of knowledge that has been put onto those pages electronically. And the models can read sort of 4 forms of information: keywords, key phrases, sense of meaning, and context. Which means that if someone tripped over a plant, or fell in the plant, they can understand from the context that the first person is probably a landscaper who tripped over a plant while working, and the second works in a factory and fell in the plant. And that is helpful. You do not have to teach them this kind of information, so they can assess whether the setting makes a difference in the severity of the claim, the duration of the claim, and the cost of the claim, and understand all that kind of context without us telling them what to do. They can understand smoker and non-smoker and heavy smoker, and those kinds of levels of things, because they can learn from past claims how much those key phrases matter and when they matter the most.
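A toy illustration of why phrase-level context matters: even word n-grams let a simple classifier separate "tripped over a plant" from "fell in the plant". Real systems use much richer language models; the notes and labels below are invented.

```python
# Toy example: word n-grams carry enough context to separate "tripped over
# a plant" (landscaping) from "fell in the plant" (factory).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "worker tripped over a plant while landscaping the yard",
    "employee fell in the plant near the assembly line",
    "tripped over a potted plant while carrying mulch",
    "slipped and fell in the plant loading dock",
]
setting = ["outdoor", "factory", "outdoor", "factory"]  # invented labels

# Unigrams alone see the word "plant" in every note; bigrams and trigrams
# ("over a plant" vs. "in the plant") supply the disambiguating context.
vectorizer = TfidfVectorizer(ngram_range=(1, 3))
X = vectorizer.fit_transform(notes)
clf = LogisticRegression().fit(X, setting)

test = vectorizer.transform(["claimant fell in the plant during second shift"])
print(clf.predict(test))  # expected: ['factory']
```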
JAMES: Yeah, it is just untapped potential. When you look at free text and note entries, that is unstructured data: it is not in a relational database, it is not tagged or hinted, and it is not organized in any way that a SQL query is going to be able to search easily.
STAN: That is exactly right. Exactly right. And some of the funny, funny stuff we have to do, James, is we have to understand how many times and how many ways people misspell certain words, like weight. Height and weight are important terms for the model. So, some of our feature engineering is making sure the model understands that weight spelled W-E-I-G-H-T, W-A-I-T, W-A-T-E, and W-T-period are all the same word. There is some really important stuff that you need to do to make these models as accurate as possible, but some of it is actually mundane engineering kind of work.
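A minimal sketch of that kind of normalization, using the variant spellings from Stan's example. The exact handling is an assumption; production feature engineering would be far more careful, since "wait" is also an ordinary English word.

```python
import re

# Map variant spellings of "weight" (per Stan's example: WEIGHT, WAIT, WATE,
# WT.) to one canonical token so the model treats them as the same word.
WEIGHT_VARIANTS = re.compile(r"\b(?:weight|wait|wate|wt)\b", re.IGNORECASE)

def normalize_weight_mentions(note: str) -> str:
    # Caveat: "wait" is also an ordinary verb; a production version would only
    # rewrite it in contexts like "wait 240 lbs" (e.g., followed by a number).
    return WEIGHT_VARIANTS.sub("weight", note)

print(normalize_weight_mentions("Pt WT. 240 lbs; wate stable since last visit"))
# -> "Pt weight. 240 lbs; weight stable since last visit"
```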
JAMES: Sure. You have to teach them before they can learn, and you have to hint the datasets. For a machine-learning algorithm to run billions of iterations and learn from them, it has to know what a good and a bad outcome is. So, you have to look backward before you can look forward. You have to look backward and say, here are the good outcomes, here are the bad outcomes, and then look for causality, not just correlation. I am sure you are bound to have seen a lot of items that were correlated but not causal. That happens all the time, right?
STAN: That is right.
JAMES: Just because someone wears a red shirt does not mean they are going to die on every away mission. Star Trek reference.
STAN: If they do not have a name, though. If they do not have a name, they are likely to die, so.
JAMES: Exactly. I mean, that is the core of it. It is correlation, not causality. The red shirt does not cause their death. It is just foreshadowing in Star Trek that they are going to die. But there is a big difference between the 2, and I am sure that is where you spend a lot of time in data science, narrowing that down. Stan, there is another phrase out there that gets overused: predictive analytics. And I have seen people that just have a reporting engine and a reporting tool calling it predictive analytics. They build maybe an expert system with a bunch of the if-then statements we talked about earlier, and they call it predictive analytics. They are using it as a marketing wrapper for what were originally reporting systems, then BI. Now it is BI with predictive analytics, and I have heard a lot of BI tools saying, oh, we have predictive analytics. I am like, what are you talking about? Let us talk about what it means. What does predictive analytics mean? I have all the definitions I could read to our listeners, but you tell me, as someone who is in the business, what does it mean?
STAN: It means you take historical inputs and outcomes and say, based on what has been seen in the past, and given all the different combinations and correlations and relationships, statistically this is the highest-probability outcome, or the range of likely outcomes, that you are going to see in the future. So, it is a probabilistic prediction of what is likely to occur.
JAMES: Yeah, so it assigns a probability to an outcome.
STAN: That is right.
JAMES: If we had to sum it all up, it is not predicting the future, because that is not possible. If it were, we would not be in this business anymore.
STAN: That is right.
JAMES: It would be over for everything. Every industry would be over. It is not predicting the future. It is assigning a probability to an outcome based on historical performance.
STAN: Exactly right.
JAMES: And it is using machine learning, which is a specific subset of AI, to get there. At the end of the day, what is the big hairy audacious goal here? It cannot just be to set an accurate reserve on a claim, though in the insurance business that is massive. To be able to use that in real time to change your underwriting process, also massive. What are the bigger social impacts? What are the bigger audacious goals wrapped around this that are way bigger than just reserve setting and underwriting?
STAN: What if you could truly take the underwriting side, which is exposure, the risk you are trying to manage, and the claims side, which is when those bad outcomes actually occur, and marry those 2 sides of the data sandwich into 1? If I have this information as close to universally available as possible on any kind of risk, could I start to identify those risks before they occur, and use loss control and loss mitigation strategies to reduce or eliminate those losses? I think that changes this from car insurance, where you cover the expense if you have an accident, to, what if you could prevent the accident? And I think there is an opportunity, if we do this right, to get into loss prevention as opposed to loss coverage. And I think that is something that changes a lot of people's lives.
JAMES: That is a world-changing goal. If you look at Google X, their labs initiative at Google, they will not work on anything that does not impact at least 100,000,000 people, Stan, and to me, this is a 100,000,000-person problem. What if you can use this data to prevent the loss in the first place through effective loss control? Then you have something that does not just change the profitability of insurance companies, or the profitability of companies that employ people and carry risk. You change lives. Millions and millions of lives.
STAN: Right.
JAMES: And the other interesting thing. So, let us jump back into the current reality, because that is future reality. The current reality is that you can dramatically speed up claim processing too. I mean, there is a whole subset of claims, probably medical-only claims and specific types of claims, where you can even approach auto-adjudication, where you can just approve the claim and move on without a person even touching it, right?
STAN: We are doing that today. We have got clients doing that today. There are so many of these claims that do not get treated well and do not get handled in a timely fashion, because carriers do not have the capacity, and AI gives them better customer service. People get what they need, get the information they need, get the money they need faster, and everybody's life is better off, because there is no friction in the process.
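One plausible shape for such an auto-adjudication gate, sketched under assumptions: the threshold, eligible claim types, and field names are all hypothetical, not Gradient AI's actual rules.

```python
# Sketch of an auto-adjudication gate: straight-through processing for claims
# the model scores as clearly low risk, human review for everything else.
from dataclasses import dataclass

AUTO_APPROVE_MAX_RISK = 0.05            # hypothetical probability cutoff
AUTO_ELIGIBLE_TYPES = {"medical_only"}  # hypothetical eligible claim types

@dataclass
class Claim:
    claim_id: str
    claim_type: str
    predicted_high_severity: float  # probability from the predictive model

def route(claim: Claim) -> str:
    if (claim.claim_type in AUTO_ELIGIBLE_TYPES
            and claim.predicted_high_severity <= AUTO_APPROVE_MAX_RISK):
        return "auto_approve"        # pay promptly, no adjuster touch needed
    return "assign_to_adjuster"      # complex or risky claims go to the experts

print(route(Claim("C-1001", "medical_only", 0.02)))  # auto_approve
print(route(Claim("C-1002", "indemnity", 0.40)))     # assign_to_adjuster
```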
JAMES: Yeah. Are we trying to reshape the loss triangle too? I mean, I am just thinking.
STAN: How many actuaries are on this podcast? Because if there are any, well, yes, it does. The first thing that happens, James, is that the amount of IBNR, incurred but not reported losses, changes. Why? Because the models are very, very accurate. As in the early example I gave you, of understanding which claims are going to be expensive, high-severity claims, and, the inverse of that, the vast majority of claims that are not going to be. What we see time and time again is that reserving patterns change: lower-risk claims get lower reserves, higher-risk claims get higher reserves much earlier in their life, and the amount of IBNR diminishes, which means there is a lot less uncertainty in that book of claims for that underwriting year. A lot more certainty about where the losses are and where they are going to occur.
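A stylized arithmetic illustration of the effect Stan describes, with invented numbers (paid losses ignored for simplicity): if accurate reserves land on individual claims earlier, the IBNR cushion on top of them shrinks.

```python
# Stylized illustration with invented numbers: IBNR is roughly the gap
# between estimated ultimate losses and the reserves already on the claims.
ultimate_losses = 10_000_000  # actuary's estimate of final cost for the year

# Before: few eventual high-severity claims are recognized by day 30, so case
# reserves are low and a large IBNR bulk reserve must cover the gap.
case_reserves_before = 4_000_000
ibnr_before = ultimate_losses - case_reserves_before   # 6,000,000

# After: the model pushes accurate reserves onto individual claims much
# earlier, so far less of the ultimate sits in the uncertain IBNR bucket.
case_reserves_after = 8_500_000
ibnr_after = ultimate_losses - case_reserves_after     # 1,500,000

print(f"IBNR before: ${ibnr_before:,}  ->  after: ${ibnr_after:,}")
```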
JAMES: Sure. And it does not mean, again, that there is a perfect model. There is the old saying: there is no perfect house, there is no perfect spouse. There is also no perfect model. You are not going to achieve 100% accuracy. You do not have a looking glass into the future. But here is the big question: why are machines so much better at assigning probabilities than people with 30 and 40 years of experience?
STAN: I think, James, it comes down to a couple of simple things. They are complex, but they are simple. One is that a human brain can handle between 4 and 7 variables at once and understand, sort of, how they interact. These algorithms can handle a virtually infinite number. They may have only seen a certain combination a few times, but every time they have seen that combination, it has led to some outcome. Whereas humans have a limited scope and a limited ability to observe and learn, these computers can learn from enormous amounts of data and never forget, never get tired, understand what is happening, understand recency versus long ago, and all sorts of other combinations of things. They are just good at picking out the complex relationships that indicate a strong probability of a certain outcome happening, where a human may not realize it yet or may think it is not going to happen at all.
JAMES: Yeah. It is just fascinating, the human computer. The brain of the human being has some built-in software limitations. Now, the hardware that we have, biological computers with their storage possibilities, is theoretically capable of much more than we do with it. We have the hardware to do much more with our brains; we do not have the software. The way that we are wired does not fully utilize the neurons, their capacity, the chemical storage, the DNA storage that we have. The hardware far outstrips the software that is sitting on top of this brain. And so, until we figure out a way to hack our biology more effectively than is being done right now, this is going to have to do, right?
STAN: That is right. That is right.
JAMES: Let us step back and look at insurance from a slightly bigger altitude, a slightly higher altitude. I am a pilot, so let us talk about altitude. Let us go out to 50,000 feet where we can see the whole world, or I will say 46,000 feet, which is more realistic for an airplane. And we are going to look at the whole world here. We are starting to see not just the rise of insurance technology vendors, but insurance tech companies that are deciding not to sell to the insurance companies but to compete against them.
STAN: That is right.
JAMES: And that is a big sea change, because insurance has been a very manual business. It was paper; now it is a heavily Excel-driven and data-table-driven business with people crunching on data. And now you are seeing these guys come in and do auto-adjudication, underwrite immediately, issue policies immediately. You are talking about minutes to bind and issue policies, minutes to process claims, and much smaller claims and policy teams. It is a fascinating dynamic that companies that used to be vendors are now turning into insurance companies themselves. What are your thoughts there?
STAN: So, I think it is natural. If the insurance companies do not do it themselves, it leaves a gap in the marketplace where people can combine the technology and the ability to write insurance into one entity. And we do see it a lot. We see a lot of startup insurance companies; you see it all the time. A lot of the venture money has been funding these apps for buying insurance quickly, and I think that is just the beginning. That is low-friction distribution, and the backend is still tied to a big insurance company. But the full soup-to-nuts insurance companies that are AI-based, automation-based, low-cost, high-speed, those are coming. We see it all the time, and the existing incumbents have a chance to be responsive and aggressive, because they still have a data advantage. They still have more information. They have distribution, agents and so forth, that want to maintain their businesses. But if they keep thinking it is not going to happen, well, we have seen that a lot of times in history, with industries that did not think anything was going to change. There are no more punch cards, there are no more rotary phones, there are no more lots of things.
JAMES: Yeah, exactly. There are entire RadioShack ads where everything on the page is now in an iPhone, right? I mean, we are in an era of much more rapid disintermediation and much more rapid change. So, what is next? Because what you are doing now is already very cutting edge. There are not a lot of companies truly utilizing predictive analytics, real machine learning, and tapping into big data sets. What is the next step for you guys at Gradient AI?
STAN: I think the thing that we see a lot of is all the automation that is going on. A lot of investment is going into infrastructure and RPA, trying to take this AI and drive faster, better decisions, especially on the mundane tasks that are low value-add from a human perspective. Get more of that automated, and put the hard decisions in underwriting and claims, the ones you need experienced people for, in the hands of the experts. Let them spend more time on those truly complex situations where you need the judgment that computers just do not have yet.
JAMES: Yeah.
STAN: And so, I think it is just optimizing the workflows and the throughput and the investments that have been made. I have seen that going on more and more rapidly over the last 12 months, and over the next 5 years we see that being number one, with most of these companies trying to optimize what they have got.
JAMES: Yeah. They are finally taking what we would call a lean perspective. My favorite book on Lean is "2 Second Lean" by Paul Akers. Lean was pioneered by Toyota and the Japanese, and it is all about eliminating waste and improving efficiency, and RPA, robotic process automation, is one of the best ways to do it. It is one of the easiest pieces of low-hanging fruit: you say, okay, what are all the manual tasks you do? Let us write a script to repeat and do those. I did an interview on my construction podcast with a brilliant guy named Bassem Hamdy from Briq, who has applied machine learning to RPA, so the RPA bots that they write dynamically learn as the UIs of the software change; they automatically adjust to those UIs. They do not require manual reprogramming. The RPA bots modify their own scripting to be able to do all the data entry and data extraction, without a human override, whenever there is a software version update.
And so, it has been very interesting to watch what they are doing. I think RPA has huge promise to liberate people. I am not a scarcity thinker. I am what Peter Diamandis would describe as an abundance thinker. I think that all of these things liberate humans to do value-added thinking tasks. I think if bots and robots and machines eliminated jobs, we would all be unemployed by now, because the industrial revolution started in the early 1800s, and what the Luddites said at the time was, all the jobs are going away. And they were wrong. Unemployment went down as a result of the industrial revolution, wages went up, work hours went down, vacation time went up. Life got materially better from automation. And so, I just believe it is going to keep getting better.
STAN: So, James, I see that firsthand. I see claims adjusters who, when we got there, had claims that would get away from them, and they did not know which ones were going to get away from them. They were like, how could I know? And their management has been right there with them. There has been a lot of collaboration, and nowadays the mundane claims are being taken care of a different way, and the adjusters are being assigned to the most complex claims and having a great impact, because they finally have the time to have a great impact, and they are feeling empowered. So, it is just the opposite of the fear. When the fear gets reduced or eliminated, when management understands this is going to be an empowering technology, a lot of great things take place.
JAMES: It does. So, margins go up, employee satisfaction goes up, productivity goes up, and accuracy goes up. And of course, ultimately, claim costs go down. Every variable that you want to move, moves.
STAN: That is true.
JAMES: It is a fascinating dynamic. Well, Stan, you and I have gotten to co-present a few times. I have enjoyed presenting with you, I love listening to you, and I know our listeners will enjoy this as well. If people want to find out more information on Gradient AI, I am assuming they should go to gradientai.com?
STAN: Yeah sure.
JAMES: Okay, so go check it out at G-R-A-D-I-E-N-T, gradientai.com. Check out their solutions. You can reach out to Stan through the contact form on the website. And again, this has been a great discussion. I appreciate you joining me today.
STAN: Thanks, James. I enjoyed it!
JAMES: Awesome.
So again, this is the InsurTech Geek Podcast, powered by JBKnowledge, JBKnowledge.com. It is all about technology that is transforming and disrupting the insurance world. I have been your host, James Benham, at JamesBenham.com. Thanks for joining us this week, and I look forward to talking with all of you soon.
We are taking you on a journey through insurance tech. So, enjoy the ride and geek out.