Episode 50: Full Transcript

[00:00:00.45] SPEAKER 1: I'm in love with AI, and I think that it's not that AI is the future. AI is here, and it only gets better and better and better.
[00:00:09.18] SPEAKER 2: This is the Insurance Technology Podcast, where we bring interesting people from across the insurance ecosystem to discuss and debate technology's impact on the industry. Join us each episode for insights and best practices from industry stewards and tomorrow's innovators. Now, here's your host, Reid Holzworth.
[00:00:31.39] REID HOLZWORTH: Welcome to the Insurance Technology Podcast. I'm your host, Reid Holzworth. In this episode, I'm interviewing Elad Tsur. Elad and I met recently when he reached out just to get to know each other. Elad has a very rich background in AI, and I thought, hey, let's have him on the show because the timing's right. Everybody's talking about AI. He's got a really interesting story, as everyone does on this podcast.
[00:00:55.69] But I'll tell you-- on this one, we go pretty deep, and it's a really good educational session-- it really was for me-- as he just breaks down what AI really means, so stay tuned. In this episode, we're going to get into how Elad fell into AI, and we're going to get into some of that education. It's a really awesome episode. The timing is great. You guys are going to love it. Enjoy.
[00:01:20.78] So Elad, welcome. We actually just got to know each other. We recently met. You just sent me an email, and you're like, hey, man. Like, I want to get to know you. So we jumped on a call, and that is literally our relationship thus far. I reached back out to you, and I'm like, hey, man. Would you love-- would you join me on my podcast? And you're like, yeah. Hell, yeah. So here we are. Right?
[00:01:46.88] ELAD TSUR: Yeah.
[00:01:47.84] REID HOLZWORTH: Elad's the co-founder and CEO of Planck, and you just have an interesting story, man, so I'm looking forward to getting into it with you, so welcome man. Welcome.
[00:02:00.08] ELAD TSUR: Thank you. Thank you. Thank you for inviting me, and I think the teams-- Ivans' team and Planck's team were speaking to each other, and my team suggested, man, you should contact Reid. You have common interest. You're leading technology in the insurance world. You need to speak to each other, so I'm definitely glad that opportunity came up, and now, we're here.
[00:02:28.50] REID HOLZWORTH: Yeah. Here we are, man. So let's get after it. So tell the listeners, where are you from? How did you get into this space? Tell us a little bit about yourself. And by the way, I love your shirt. I was just noticing it. When you glance at it, it looks like a Pink Floyd "Dark Side of the Moon" t-shirt, but it's not. It's AI nerd fashion. It says "The Dark Side of AI."
[00:02:50.76] [LAUGHTER]
[00:02:52.11] That's cool, man.
[00:02:52.86] ELAD TSUR: Exactly. We once read in some Facebook group that someone said that Planck has the best swag with shirts, and towels, and bags, et cetera. So we do put efforts, thought process as well, into the swag, not just into our technology. But being with-- doing AI for-- I guess this is the third decade already.
[00:03:21.18] REID HOLZWORTH: It's crazy.
[00:03:22.17] ELAD TSUR: AI has its dark sides, and you need to know them in order to master that.
[00:03:26.40] REID HOLZWORTH: Is this your-- is this like a company t-shirt then?
[00:03:29.49] ELAD TSUR: Yeah. I have at the back like, Planck.
[00:03:32.82] REID HOLZWORTH: Oh, that's cool, man. Oh, no kidding.
[00:03:35.43] ELAD TSUR: Yeah, we've designed this shirt.
[00:03:37.50] REID HOLZWORTH: --started some random chatting bot. That's so cool.
[00:03:39.21] ELAD TSUR: We designed it. This is Planck's logo, right? The triangle.
[00:03:43.05] REID HOLZWORTH: Oh, yeah. OK. It's you.
[00:03:45.27] ELAD TSUR: So right? Like, the triangle here. So we-- let me tell you about my background. As you can hear from my accent and from my name, which is not easy to pronounce, I was born and raised-- I've always lived in Israel, though we spent many, many years living in the US for business. I grew up in a small village in Israel, working probably since the age of six, seven years old all the way till today.
[00:04:22.20] I didn't stop, which is uncommon even in Israel. It's not that-- Israel's culture is very similar to the US culture. So when you think about life in Israel, it's very similar to life in most parts of the US. Mostly the East and West Coast. A bit less the Midwest.
[00:04:39.08] But I grew up in a village in Israel, and very early on, I knew that I like computers. It has something that excites me. So I went and studied it in the university and been doing it since 1998 till today professionally, and now, the co-founder and CEO of Planck trying to help to change the insurance market with AI.
[00:05:13.57] REID HOLZWORTH: With AI. So let's talk about it, man. So how did you get in to-- I mean, if you go-- let's not get into how you got into insurance or any of that, but tell me a little bit about your background. After school, what did you do? What did you get into? Tell me a little bit more about the journey.
[00:05:31.78] ELAD TSUR: Right. As the firstborn son to the firstborn son to the firstborn son-- I was the oldest amongst all of the grandkids and the great-grandkids, and so I always helped doing the work. My first work, at the age of six, was at the hatchling farm, believe it or not. You know how it works, right? Of turkeys. So turkeys lay their eggs. You put them in ovens.
[00:06:01.84] About body temperature-- a bit hotter than body temperature. They hatch, and then the hatchlings are put on a conveyor, like a treadmill-- a band that goes in one direction to the transportation into the farms. And you need to find-- my work was to find the sick ones. You need to look at their eyes-- whether their eyes are purple and shut, which is one sign that they're sick-- or whether the fur is a bit whitish and not yellowish, like the other hatchlings, and just take them out of there.
[00:06:41.76] And pattern recognition at the age of six-- that was my first work. A few years later, I got my first computer. It was an XT back in those days, and I started programming very, very young, around the fourth grade. Learned computers, and then moved to work at startups-- an edtech startup, education startups.
[00:07:08.70] I went to the university when I was 14 to do my bachelor's degree in computer science and graduated together with high school. Then I got recruited, like most Israelis, to the army, to a very special unit in the intelligence forces, where basically I've been doing AI way before AI was called AI.
[00:07:33.15] We were running deep neural networks on GPUs and CPUs way before NVIDIA released CUDA and the other SDKs that help to do that. And it was an amazing time. Very proud of that time. And then I left to open my first startup. It was at the end of 2008. That startup became Salesforce Einstein.
[00:08:03.61] REID HOLZWORTH: Hold on. Hold on. Talk a little bit more about when you're in the army building stuff and doing that. Did you actually-- you leveraged AI back then. Like, how? For what purpose? And those were early days. You're not a super young dude, you know.
[00:08:21.49] ELAD TSUR: AI had many-- it was a buzzword. It's used as a buzzword till today, and I guess till probably 2016, 2017, when everyone was saying AI, they were really meaning statistics. We were doing true AI, but AI wasn't invented at the beginning of this century. People were doing AI in the '70s and in the '60s.
[00:08:51.97] One of the best-known models for detecting faces-- for detecting objects in images-- is called Viola and Jones. Viola-Jones is a model from 2001. The first deep learning model that was used for day-to-day tasks and was publicly available was for face recognition, and that was-- probably the first one was DeepFace, which Facebook ran.
[00:09:25.09] I helped to finance that research by Tel Aviv University's Professor Lior Wolf, to use very, very, very deep neural networks to do the computation, trained on huge data sets. But we were doing a lot of stuff way before that-- running very, very deep neural networks and other machine learning models, but specifically deep learning, way before anyone called it deep learning.
[00:09:51.68] We just called it very deep neural networks. No one called it deep learning at that point. And we were-- you needed to save lives, and that was the goal. And I'm very proud of it. I saved many, many lives in my military intelligence forces days.
[00:10:16.20] REID HOLZWORTH: I am probably one of the dumbest people in this space when it comes to AI. Like, seriously and like-- so I know what AI stands for and that's about it. But you know, I mean, I know a little bit, but explain to listeners, what are you even talking about? What's a deep neural network? What is that? I don't even know what you mean. Explain that.
[00:10:42.57] ELAD TSUR: With pleasure. So AI, artificial intelligence-- artificial intelligence doesn't have any strict definition, any dictionary definition that people use. We mean something, and the definition is very, very fuzzy. Machine learning, for example, has a much better definition. With AI, we mean tasks conducted by a machine that resemble the capabilities of human beings-- tasks that, before the age of AI, you would think only a human being could do.
[00:11:19.02] But it's not really limited to what human beings can do. For example, I mentioned face recognition. Since 2006, face recognition has exceeded the human ability to detect faces by a mile-- by orders of magnitude, and that was 2006, even before deep learning. The first company that introduced deep learning for border control, for example-- NEC, the Japanese company. I think Japanese. They were already winning all of the benchmark test comparisons at the end of 2009.
[00:11:59.85] We were doing machine learning back then. Machine learning is something very simple. You give a machine a set of samples, and you let the machine learn how to detect those samples. It's a subfield of pattern classification. Let me give you an example. If I want to detect the beard that you have on your face, I might write my own algorithm to detect it. I might, for example, look for a very high contrast area underneath the eyes and use another algorithm to find the eyes, and I'm building the algorithms, and I'm telling the machine, this is how you detect beards.
[00:12:39.39] On the other hand, I can give the machine images of beards, telling it, this is your training data. These are beards. Now, try to understand by yourself, what does a beard look like? And that's machine learning, and you have several different types of machine learning, depending on your training data set. But in essence, you collect data. You tag it manually or with some other mechanism. You tell the machine, this is your training set. This is what you should try to accomplish-- try to learn by yourself how to accomplish this task. And it's a subfield of pattern classification. Was that too nerdish or--
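The contrast Elad draws-- hand-writing rules for what a beard looks like versus handing the machine tagged samples-- can be sketched as a toy nearest-centroid learner. Everything here (the two features, their values, the labels) is invented purely for illustration; it is not Planck's algorithm.

```python
# Toy supervised learner: instead of hand-coding rules for what a
# "beard" looks like, we give the machine tagged samples and let it
# learn the pattern itself (nearest-centroid classification).
# Feature names and values are made up for illustration.

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Classify by whichever learned centroid is closest (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Tagged training set: [contrast_below_eyes, dark_pixel_ratio] -> label
training = [
    ([0.9, 0.8], "beard"), ([0.8, 0.7], "beard"),
    ([0.2, 0.1], "no_beard"), ([0.1, 0.2], "no_beard"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # -> beard
```

The point of the sketch is that no rule about beards is ever written down; the decision boundary comes entirely from the tagged examples, which is the essence of the machine learning approach he describes.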
[00:13:24.21] REID HOLZWORTH: No, it's great. It was great. I love the definition. So how do you build that? So how do you take it through that-- of really recognizing, understanding in this example, what a beard is?
[00:13:40.98] ELAD TSUR: The first model that we built at Planck was a model for personal insurance, not for commercial insurance. Now, we're only doing commercial insurance, but with that model, we were still doing personal. It was 2016 when we built it. The model was to predict a person's body mass index-- BMI-- from a facial image. How do you approach such a task?
[00:14:10.59] First, it's a machine learning model. We need to collect training data for that. We collected about 6 million tagged faces. Tagged, meaning people whose height and weight we knew. But that's noisy data. Some of it we measured ourselves. Some we knew that someone else had measured. But even when you measure, it's very noisy data. So you need to pick the right algorithm to train with such noisy data. Think about it yourself.
[00:14:39.30] Just your bladder is about 1 pound on average-- it can even be 2 pounds-- depending on whether you measure before or after the toilet. Same with your height. You're a bit shorter at the end of the day-- about eight hours after you wake up, you're lower by a few centimeters. A typical astronaut going to space is taller by 2 inches just because the spine doesn't have the forces that gravity applies. So it changes when you wake up. After weekends, you weigh more-- you eat more over the weekends. So it's very noisy data, even when you measure it.
[00:15:26.46] So what we did was collect training data. We tagged part of it ourselves. We acquired sources that had that data tagged, and then you just give it to a machine and tell it, try to predict from this data. And we used deep learning for that. Back then, we used a deep learning framework that Google had released, and it was working very nicely, but it's not as simple as you might think, because your data is very noisy.
[00:15:59.12] For example, we had a minority group that wasn't represented properly in the training data, and the performance for that minority group-- that specific ethnicity group-- wasn't good. You need to understand it. You need to detect it. You need to unbias your training data and make sure that all different groups are represented in a proper way. At the end of the day, the algorithm was-- we extracted the face of a person from the image, then predicted the gender of the person using one deep learning model-- male or female.
[00:16:40.43] For each one of them, we divided the population by the way fat distributes across the face, into about eight different subgroups. And for each one of them, we trained another machine learning model to predict the body mass index, and that was extremely accurate across the population. And a nice story-- fun fact-- because we're not using it anymore, I don't mind sharing that story.
[00:17:06.32] We went to the regulators, and we met-- I won't mention which-- one insurance regulator from one state in the US, presented the model to them, and asked, can carriers use that model? And they said, yes. You're predicting BMI. Carriers are allowed to use body mass index as part of the onboarding, of the underwriting. Yes, they can use it. Right.
[00:17:30.29] What it does is detect the gender, and then for each gender, predict the body mass index. And it has better accuracy than a human declaration. Perfect, you can still use it. But that's not the full story. Actually, the middle step is to cluster the way the fat is distributed on the face. And if you look at those clusters, they detect ethnicity groups, because different ethnicity groups have different ways the fat distributes across the face.
[00:18:02.06] This is biology. We're all the same species. It's not-- I'm not-- it's not a racist act.
[00:18:09.87] REID HOLZWORTH: Totally.
[00:18:10.26] [INTERPOSING VOICES]
[00:18:11.12] ELAD TSUR: --automatically, the fat distributes differently for different ethnicity groups, and this is-- we look different. This is part of our biology, and we added that middle layer in order to unbias the final results. With that middle layer-- that's the only way-- every person got exactly the same treatment with exactly the same accuracy, and no subgroup was discriminated against. "You can't use it," they told me. "You have a race classifier."
[00:18:43.44] REID HOLZWORTH: Yeah. There it is. Right.
[00:18:45.23] ELAD TSUR: But no one gets access to that. It's like a black box. Only if you look at a certain inner layer of the neural network and analyze it-- it is a race classifier. But no one gets access to it, and you had to add it in order to unbias. In order to know whether you have bias, you need a bias detector to unbias it, and they weren't ready.
[00:19:11.33] These days, I think they are. When you speak with the regulators, they understand that in order to make fairness-- in order to make AI truly unbiased, truly fair-- you need to add techniques internally that make sure it's fair, and you can't just avoid looking at that just because somehow it's connected to some minority group.
[00:19:37.73] You need to look at it in order to make sure that minority groups get exactly the same features, price, et cetera, and are not discriminated against because you don't want to look at that minority group, because someone doesn't understand AI. I think these days, at least from my discussions with regulators, they do understand AI. They do understand what it means to develop fairness in AI.
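The "bias detector" idea Elad describes-- you can only verify fairness by measuring model error per subgroup-- can be sketched in a few lines. The group labels and numbers below are hypothetical; in Planck's case the grouping came from clustering facial fat distribution, not from any declared attribute.

```python
# Sketch of a bias detector: fairness is verified by measuring error
# per subgroup and flagging any group the model serves worse.
# Groups and values are hypothetical, for illustration only.

def per_group_error(records):
    """records: list of (group, true_value, predicted_value).
    Returns mean absolute error per group."""
    totals, counts = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0.0) + abs(truth - pred)
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def flag_bias(errors, tolerance=1.5):
    """Flag groups whose error exceeds `tolerance` x the best group's error."""
    best = min(errors.values())
    return [g for g, e in errors.items() if e > tolerance * best]

# Hypothetical BMI predictions: (subgroup, true BMI, predicted BMI)
records = [
    ("group_a", 24.0, 24.5), ("group_a", 30.0, 29.6),
    ("group_b", 22.0, 25.0), ("group_b", 28.0, 31.5),  # underrepresented
]
errors = per_group_error(records)
print(errors, flag_bias(errors))  # flags group_b
```

This is exactly the paradox he raises: to prove no subgroup is discriminated against, the system must be able to tell the subgroups apart internally, even if that signal is never exposed.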
[00:20:03.83] REID HOLZWORTH: That's fascinating, man. Yeah, wow. Thank you for the explanation. It's so wild-- and I don't want to go too deep into this because I want to get back to your history, but from where you were back in the day, wrenching on those types of projects, compared to what's available now and what's out there, it just has to be night and day, I would assume, through your lens. Through my lens, it seems it is, but actually, it's really a question. Is it? Right?
[00:20:38.83] ELAD TSUR: It's amazing, the capabilities that you have once you have almost unlimited computation-- compute power-- unlimited memory, and unlimited training data. We have unlimited training data. These days, everyone speaks about GPT, about OpenAI's ChatGPT. There's also Google Bard, and Amazon is soon to announce theirs. It's amazing, the level of language understanding that the LLMs, the large language models, get to.
[00:21:13.30] We've been using that for many years now internally as a black box. An example-- GPT, the large language model, is brilliant. We're using our own version of GPT ourselves. We trained it. Actually, we introduced it to the market-- opened it to the market at InsureTech Connect last year-- three months before OpenAI released their ChatGPT to the market. We were also using GPT-3.5-- the architecture. With AI, again, you have the architecture-- the way the neurons connect to each other and are trained.
[00:21:51.64] And you have the product, the end product, which is ChatGPT in OpenAI's case, or our GPT, which we call Ask Max. Max from Max Planck-- we named the company after the German physicist Max Planck. And we've been using large language models that we trained ourselves. They were great, but they are so much inferior to ChatGPT. It's unbelievable. ChatGPT's language ability is amazing. Immediately--
[00:22:26.71] REID HOLZWORTH: Why? Why is it so good?
[00:22:28.62] ELAD TSUR: They've trained it on so much training data that it got to the level that it understands language far better than any native language speaker. It knows language way better than you. Obviously, way better than me. I don't know English that well. It's not my native tongue, not my mother tongue. But ChatGPT is such a powerful language model. And you need to treat it as a language model, right?
[00:22:55.41] Elon Musk's claims that it hallucinates. It does hallucinate because you're using it for tasks it wasn't trained to do. It was trained to understand language, and it understands your prompt, your questions far better than any other model invented before that. And it can articulate itself in an awesome way to an answer, but when you give it tasks just a language model-- task which are above the language model way, it gets what we in the company call [INAUDIBLE].
[00:23:30.42] It just [INAUDIBLE] you something. It's not true, but it's so convincing, because it has an amazing ability to generate language. It's so superior to your way of generating language that it looks amazing. You trust it, even though it's not necessarily true. Most of the time today it's not. They will get better with that. We're using it as a black box for understanding language.
[00:23:56.76] For example, we tell GPT internally: we've crawled all of those websites, we've analyzed all of those images, and this is what appears in them-- because ChatGPT is not crawling images, and it's not analyzing images. We've looked at all of those images with other machine learning models, and this is what those images show. And we've done the entity resolution, and we build it a query-- queries of thousands of words to GPT, with hundreds of queries being run for every single question we have.
[00:24:35.10] And then we're analyzing the results using other machine learning models in order to predict the underwriting insight. And that usage of GPT as a language model dramatically improved most of our insights that depend on understanding language, because it is the best at understanding language. An example-- ask about fine dining, and GPT understands what fine dining is better than the other models that we used before. And it's not just synonyms of fine dining or something loosely related to fine dining.
[00:25:14.03] It understands fine dining-- that it has specific dishes, a specific style of seating. It understands the concept of fine dining from a language perspective far better than any other model we've used before, and that is brilliant if you know how to use it.
[00:25:32.07] And lucky for us, our entire infrastructure was built on those tools, and we have so many different such tools. We have about 12,000 models in production, most of them deep learning models, and GPT is just one of them. So GPT is brilliant-- just one model out of a whole set of models that we have.
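The workflow Elad describes-- other models crawl and pre-analyze the evidence, the LLM only interprets language over that evidence-- can be sketched roughly as below. All names are hypothetical, and `call_llm` is a stub standing in for a real API client, not Planck's actual system.

```python
# Sketch of using an LLM strictly as a language layer: evidence is
# gathered and pre-analyzed by upstream models, packed into the prompt,
# and the LLM only reasons over that text. Hypothetical names throughout;
# `call_llm` is a stub, not a real model call.

def build_prompt(business, question, evidence):
    """Pack pre-analyzed crawl results into one long, grounded query."""
    lines = [f"Business: {business}",
             "Evidence extracted by upstream models:"]
    lines += [f"- {source}: {finding}" for source, finding in evidence]
    lines.append(f"Question: {question}")
    lines.append("Answer only from the evidence above.")
    return "\n".join(lines)

def call_llm(prompt):
    # Stub: a real system would send `prompt` to a hosted model here
    # and post-process the answer with further models.
    return "yes" if "white tablecloth" in prompt else "unclear"

evidence = [
    ("website", "menu lists a seven-course tasting menu"),
    ("image model", "dining room photos show white tablecloth service"),
]
prompt = build_prompt("Chez Exemple", "Is this fine dining?", evidence)
print(call_llm(prompt))
```

Grounding the prompt in pre-extracted evidence, rather than asking the model open-ended questions, is one common way to limit the hallucination problem he raises earlier.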
[00:25:59.00] REID HOLZWORTH: That's amazing and so cool. Let's go back to your experience. So after the army, you did this, then what? What'd you get into?
[00:26:10.96] ELAD TSUR: So after the army-- the army was an amazing time, and I was a kid when I left the army. And I thought, yeah, I know enough about the world. Let's open a startup. Let's be business owners and make software. We didn't know what we were doing. We were four founders. Didn't know--
[00:26:30.25] REID HOLZWORTH: How old were you at this point?
[00:26:31.60] ELAD TSUR: 25.
[00:26:33.31] REID HOLZWORTH: Gotcha. Yeah.
[00:26:34.39] ELAD TSUR: 25. In the army, I ended up leading about 20 PhDs in mathematics, computer science, and physics, actually, who were all doing AI. And in 2008-- 2009, I left to open my first startup. We were doing lots of stuff, all with AI, to help predict sales and help you close deals. All of our customers were Salesforce customers. It gave amazing insights to them. They loved it.
[00:27:09.82] REID HOLZWORTH: So hold on. Explain that to me. So it's all Salesforce-- like CRM stuff? Like, which opportunities are the best? Conversion ratio-- within your conversion ratio, if you will. Is that right?
[00:27:19.91] ELAD TSUR: Exactly, but we were looking at the prospect that you're trying to sell to, and looking at the sales rep, and using AI to do some matchmaking between them. So for example, let's say that you're the sales rep, and you like to play golf, and the person that you're trying to sell to likes to play golf. And let's say that you're both based in California. No one cares about it. If you're both based in Israel-- no one plays golf here. There's just one golf course in all of Israel.
[00:27:50.41] That's interesting because it's an anomaly, and both of you have that. And whether you studied at the same place, whether you have the same hobbies-- and giving the sales rep also insights about the business you're trying to sell to. Do they have budget for this quarter, or what are the platforms they're using, which your product might be integrated into? So all of those insights, together with predictions about closing the deals-- whether the deal will indeed be closed by the date that you've put as the close date in Salesforce.
[00:28:23.26] So all of that, we did for our customers. And when we wanted to go and fundraise-- because we had thousands of users, we were growing very nicely, everything was looking super exciting-- we went to fundraise, and we wanted Salesforce to invest in us. A strategic investment, because all of our customers were Salesforce customers. It was very trivial to approach them.
[00:28:50.32] And here and there, some discussions. They said, this is so strategic. We want it embedded in Salesforce-- part of Salesforce. They acquired us. We opened the Salesforce Israeli R&D center-- the first acquisition to open one. I chose the office. I chose the--
[00:29:09.35] REID HOLZWORTH: Hold on. So you were out hunting for capital for Salesforce, and-- but it's funny. When you were talking about this, I remember when I was at Dreamforce a number of years ago.
[00:29:18.34] This was a while ago, and I remember cruising around. I don't know if it was you, but I swear-- this was before a lot of this stuff existed, and they put in my name, and it just crawled and brought back all this information about me, specifically. And then my behavior, how I like to respond to emails and things like that and whatnot. And they were demoing this for me at Dreamforce. That wasn't you guys, was it? And that's--
[00:29:46.12] ELAD TSUR: So the only capability that Salesforce had regarding this was our capability, but if you got a demo of it, you were very lucky, and probably that was the first year--
[00:29:55.33] [INTERPOSING VOICES]
[00:29:55.75] REID HOLZWORTH: --demo though. This was a vendor at Dreamforce that was showing this, anyways.
[00:30:00.67] ELAD TSUR: I don't know. Other vendors might have had it. I don't know all of the vendors that work with Salesforce. When we got acquired, I sat with the Salesforce management. Also met the board, but I sat with the Salesforce management, and we presented to each one of them the information the AI had predicted about each one of them.
[00:30:20.17] And I asked before that, guys, are you fine with me sharing that? Because if I open your record, it's going to show which party you voted for in the last elections. It's going to show your hobbies, and I'm not sure whether-- so yes. I won't name the party that they all voted for, but it was pretty much the same party. And it--
[00:30:40.19] [LAUGHTER]
[00:30:42.19] So it wasn't that groundbreaking, but they were surprised. Wow, all of that can be predicted? Because obviously, it's not out there. You need to predict it. You need to look at the behavior. Which fundraising events are they attending? That's probably very correlated with which party they're voting for. All of that we presented, and after that, they called me. I had already gone back to Israel. It was after a few days in the Valley.
[00:31:08.37] I went back to Israel, and they called me, saying, we want to move forward with the deal-- the terms, et cetera. But one thing-- you presented insights about companies and about people. Let's remove the features about people. It's way beyond the creepy factor. Let's define a creepy factor and make sure we don't cross that creepy factor. And actually, that's what we did. We only added the business-related insights.
[00:31:38.50] We had demoed it at Dreamforce, only to people that were very well-connected, just to show the capability, but we never used it as part of the product. It was only the business side of the product. And you know, after a few years, I said, right, we need to pick a name. And again, loving German physicists, I chose the Einstein name. We paid some royalties to the Hebrew University, which holds the Einstein--
[00:32:08.44] REID HOLZWORTH: That's still solid. That's crazy. Hold on. Hold on. So you-- OK. So not only did they acquire-- you're out raising capital, and Salesforce-- Salesforce Ventures, I assume-- was like, oh yeah, we'd get in on this, and they ended up like, wow, this is really cool. It's creepy, but cool. Basically, if you can get rid of the creepy factor, we got a deal here.
[00:32:29.33] And so you're like, OK, cool. And then so then they acquire you guys. You came in, and you literally pick the name Einstein.
[00:32:37.60] ELAD TSUR: Correct.
[00:32:37.96] [INTERPOSING VOICES]
[00:32:39.31] REID HOLZWORTH: --Einstein today.
[00:32:40.21] ELAD TSUR: Together with the person that replaced me as the head of data science, named Gilly, who came to Salesforce through another acquisition, called Implisit. Gilly is a very good friend of mine-- my best friend. We were together in the same unit, actually, managing for a while in the intelligence forces. And when they acquired Gilly, I said, I can't help you in the diligence-- because I was helping a bit in diligence here and there. It's like--
[00:33:11.80] [INTERPOSING VOICES]
[00:33:12.82] REID HOLZWORTH: This is what you were telling me about last time we met. Like, so one of your boys you grew up with, basically, in the world, they ended up acquiring his business. And is he still at Salesforce now?
[00:33:26.66] ELAD TSUR: No, he left already. He has another-- he opened another startup, like I opened Planck, but he also spent about three years at Salesforce, and it was an amazing time. By the way, Salesforce is a great company. I worked there after the acquisition for three and a bit years, and it was a very, very good company-- working fast, embracing innovation. I really enjoyed working at Salesforce, I've got to say.
[00:33:57.29] REID HOLZWORTH: I love Salesforce, man. I mean, I'll tell you, I owe a lot to Salesforce. Salesforce did a lot for me-- my last company, TechCanary. I mean, man, if it wasn't for Salesforce, I wouldn't be where I am. And so those guys-- I mean, I really have a lot of respect for those guys. I've been involved with them on so many levels, and I have a lot of people who work for me today who were longtime Salesforce people. And they're a machine. They're a monster. They really are.
[00:34:24.25] And that's awesome, dude-- like, for you to be part of that. And I remember when they-- because I was deep in the Salesforce world when that all came about and Einstein came about, and I remember just seeing the demos and whatnot. And we even had a guy locally here in Milwaukee-- I think his name is Brian or something. We ended up hiring him after a while, but he was doing all these insurance demos for Salesforce on Einstein, and he like-- he built a bunch of models and whatnot. It was super cool.
[00:34:55.63] But it was like it was funny because even then, it's like it was almost like too early still. I don't know. I feel like-- I mean, it definitely took off, and it's done big things, but like, people are like, wait, what? Right? Now, I mean, it makes so much sense in so many ways.
[00:35:14.31] ELAD TSUR: It makes sense now--
[00:35:15.26] [INTERPOSING VOICES]
[00:35:16.15] ELAD TSUR: --and I can't take any credit for what's going on now, because they've changed it a lot since I left. It was my baby, and now it doesn't look like my baby. They've added lots of stuff. They've changed lots of stuff. It is that way with tech, right? You develop something, and three years later it's obsolete, and you need to reinvent. But if you have the core working-- the core functionality there: the entity resolution, the operation of the models, the retraining of the models to make sure they're maintaining their accuracy and coverage and such-- if you have that mechanism, then you can reinvent but still keep a high level of support, accuracy, and relevancy in your offering.
[00:36:04.86] REID HOLZWORTH: That's awesome, man. That's so cool that you got to be part of that and part of that whole experience and especially then too. I mean, just rocket ship. I mean, just so much going on in the industry. So what do you think about it now? I mean, you think it's going to continue? I mean, things have changed so much. I mean, all the big platforms have their answer for this, right? So I mean, what are your thoughts on that? Like, what is the future?
[00:36:31.64] [INTERPOSING VOICES]
[00:36:31.68] ELAD TSUR: In the past month, I've been invited and went and spoke at so many board meetings, C-level meetings, conferences about AI. From the CEO of Microsoft Germany, who invited me to speak together with him about ChatGPT, to a board of one of our customers, to most of our customers' C-level gatherings, working with McKinsey about it. Specifically Doug, who's brilliant. If you want to work with McKinsey, look for Doug. He's brilliant-- about GPT and the usage of it and how to embrace it.
[00:37:15.30] I think it's going to change the market, not because of the capability per se. Because again, machine learning models were here before that, and yes, it is much superior in understanding the language, but the solutions-- we were offering pretty much the same level of solutions for years now, and it's been going fine. Business is going great, but the boom that we've had in the past three or four months is unprecedented. And people understand because they use it in their personal life.
[00:37:53.34] And they start to write letters with it. They start to write documents with it. They start to analyze stuff with it, and they see the capability with their own hands. And they say, well, we have to do something with it. And there is a lot to do with it. Not just what Planck's doing with it. You need to know that there are a lot of legal aspects to it.
[00:38:17.55] There are lots of best practices on how to make sure it's not hallucinating stuff. Tasks that it's awesome for. Tasks that it's not. We're doing a lot of those discussions these days, and in most of them, we are able to offer our customers, the carriers, very, very strong utilization of GPT wrapped in all of our platform.
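One best practice along those lines is a grounding check: verify that what the model returns is actually supported by the source document. Here is a deliberately naive sketch-- the token-overlap heuristic, function names, and the 0.7 threshold are assumptions for illustration, not Planck's method:

```python
import re

def support_ratio(answer: str, source: str) -> float:
    """Fraction of the answer's tokens that also appear in the source text --
    a crude proxy for whether the answer is grounded in the document."""
    ans = set(re.findall(r"[a-z0-9]+", answer.lower()))
    src = set(re.findall(r"[a-z0-9]+", source.lower()))
    return len(ans & src) / len(ans) if ans else 1.0

def looks_hallucinated(answer: str, source: str, threshold: float = 0.7) -> bool:
    """Flag answers whose content is mostly absent from the source."""
    return support_ratio(answer, source) < threshold

doc = "The building at 12 Main St has 3 floors and a sprinkler system."
print(looks_hallucinated("12 Main St has 3 floors", doc))                    # False
print(looks_hallucinated("The building has 9 floors and no elevator", doc))  # True
```

Real pipelines use stronger checks-- entailment models, citation to source spans-- but the idea is the same: never trust generated text without testing it against the document it claims to describe.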
[00:38:42.76] REID HOLZWORTH: That's awesome. That's so awesome. It's insane. It's just like, ChatGPT, OpenAI-- it's so big right now. This is what everybody's talking about. I'm not-- this is no bullshit. Like, I've had two meetings this week about it. I have a board meeting tomorrow, and because of-- Google is one of our owners. They're bringing in a Google person literally to talk about AI in our board meeting. And like--
[00:39:09.79] And so we're going, what do we do with this? What does this mean? What does this mean for our technologies? How can we embrace it? How can we productize it? How can we use it internally, right? I mean, it's just-- it's insane, and it's just completely exploding. And it's pretty neat, too. It's fascinating.
[00:39:31.92] I'd love to get your opinion on this because I was hanging out with this dude not that long ago. Super technical guy like yourself. Really frickin' smart, and he went off on this whole tangent, like, for two hours about how AI is Satan, and it's going to take over the world. And we're all going to like-- it's going to take all our jobs and like literally-- like, what are your thoughts on that? The negative side on it.
[00:39:58.89] ELAD TSUR: Well, so as with any technology, it can be used for the good. It can be used for doing very bad stuff. You can use AI to save lives. And again, I've actually been awarded one of the highest military awards in Israel for the amount of lives I've saved with AI. It can be used to save lives. It can be used to kill people. And you have robots with guns with AI making decisions by themselves, whether to shoot someone and where to shoot that someone.
[00:40:33.96] I think that you can't stop innovation. You need to define the barriers yourself. An example of that-- look at AI. Look at biometrics. Let's get back to facial recognition, and biometrics, and fingerprint recognition. The regulation around biometrics is not advanced at all and--
[00:41:02.24] [INTERPOSING VOICES]
[00:41:03.29] REID HOLZWORTH: There's not much regulation around it at all.
[00:41:05.25] ELAD TSUR: There is a bit. There is a bit. I'm part of the Israeli regulator about biometrics, and I'll tell you in a second what we've decided. Volunteering-- I'm volunteering in the prime minister's office in Israel for many years now as an expert in machine learning and specifically--
[00:41:18.67] REID HOLZWORTH: This frickin' guy. Like, what? OK.
[00:41:22.18] [INTERPOSING VOICES]
[00:41:23.66] ELAD TSUR: --in Hebrew, in governmental decisions, you see my name appears there, and there, and there. And like, I'm the one that recommended removing the fingerprint from the National Biometrics Repository for the ID documents, for the Israeli ID and the Israeli passport, because face recognition was enough. You didn't need face and fingerprints. So you need to make sure that you're not storing something for no reason. Don't give that power to the government if it's not needed.
[00:41:56.55] And the-- look at FIDO-- FIDO, F-I-D-O. It's an alliance. Look at its members-- Google, a founding member. Google, Apple, Microsoft-- they understood that they're manufacturing those mobile devices, and they have a fingerprint login mechanism or a face mechanism and no regulation. Do you send that fingerprint to the cloud or not? Do you make the comparison on the cloud or on the device? What do you send to the cloud? No regulation. What about it, right?
[00:42:30.83] And it's not simple. Think about that. Your gender is biometrics-- male or female. Biological gender-- I won't go into all the levels of gender definitions that people have today. Biological gender, you have two, with asterisks-- some XX, XY, and other combinations. It's maybe dividing the population into two parts. It is biometrics.
[00:42:56.41] Your palm print-- palm print, not fingerprint. About one in every 20 people has the same palm print that you have. Your palm print is about the same as some other person's within a group of 20 people-- you find someone with the same palm print, because it looks at the radius of the--
[00:43:16.57] REID HOLZWORTH: So one in 20 palm prints--
[00:43:19.51] ELAD TSUR: Are the same.
[00:43:20.12] REID HOLZWORTH: So one in 20 palm prints are the same?
[00:43:23.02] ELAD TSUR: Exactly. Again, on average, given some configuration of the palm print detection verification model. With fingerprints, given most configurations, it's one in 10,000 per fingerprint. Now, they got to about 1 in 100,000. Your face is way, way better than that, and it depends on how you define it. So the FIDO Alliance decided not to send your fingerprint to the cloud but to do the matching on the device.
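Those per-comparison rates translate directly into the odds being described here. Assuming independent comparisons (a simplification for illustration), the chance of at least one false match in a group is:

```python
def p_false_match(rate: float, group_size: int) -> float:
    """Probability that at least one of `group_size` independent comparisons
    falsely matches, given a per-comparison false-match rate."""
    return 1.0 - (1.0 - rate) ** group_size

# Palm print at ~1 in 20: in a group of 20 people, a false match is likely.
print(round(p_false_match(1 / 20, 20), 2))      # 0.64
# Fingerprint at ~1 in 10,000: still rare across the same group of 20.
print(round(p_false_match(1 / 10_000, 20), 4))  # 0.002
```

This is why a 1-in-20 modality is unusable on its own for identification, while 1-in-10,000 and better start to be practical.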
[00:43:53.45] They've defined regulation, and later on, we the regulators adopted the FIDO Alliance regulation. We said the industry regulated it amazingly well. They understood that there is a gap in regulation. It can be used for bad things. Let's make sure that we're taking care of it ourselves, and they've self-regulated.
[00:44:17.09] And I think that with AI, we need to go somewhere in that area. I'm not afraid of AI at all. I don't think that people would be able to use AI to do bad stuff at scale. I'm much more afraid of gene editing and its abilities. Once a technology drops in cost, it becomes available. Everyone can do that. I think that everyone can train AI. You can train AI on your machine. You can train on the cloud.
[00:44:51.98] It's so cheap these days. You can do very smart artificial intelligence models that make decisions that a human being made. I think that gene editing is very scary. The fact that you can, in your home lab, run a CRISPR experiment, changing the genes of a mosquito, making sure that all offspring will be just male and not female, and wiping out the mosquito population-- something humanity was already able to do.
[00:45:24.14] Humanity was able to do it already in 2018, but governments decided not to, because what are the implications of not having mosquitoes at all, from a nature perspective? Think about that. Someone crazy enough might wipe out a whole species like that. What happens to the world with that? I'm more afraid about gene editing than about AI.
[00:45:47.87] I think AI is-- once you're controlling where you run stuff, what you allow others to run with it, it's within control. We do need to do self-regulation as an industry. We do need to listen to the regulators out there, but I'm not concerned with AI. I'm in love with AI, and I think that it's not that AI is the future. AI is here, and it only gets better, and better, and better.
[00:46:15.49] REID HOLZWORTH: It's just-- I don't know. To me it's a little bit-- I think about it from time to time. It's a little bit scary. Like, I'm listening to my AirPods, and I'm doing my thing. And it's like, oh, hey, Siri, and just like everything's always listening all the time. And it's so comfortable to have those AirPods in and to start to speak to text and speak to email, and hey, make this call for me, and do this, and do that. It becomes like a part of you in ways, right?
[00:46:45.97] And when you think about-- when I think about the future, like long, long future-- it's not that long, right? And people really adopting that in a major way. All the things that it can do for you, right? The other things that-- but there are also things that could be manipulated by others in the future as well. Like, everything, right?
[00:47:09.28] ELAD TSUR: We're living in a different world. I don't know. It's like, we have those mobile phones with us-- since the iPhone, let's say, with the smartphones from 2007. We have the entire internet in your palm-- all of the world's knowledge in your palm. You can access the entire world's knowledge, and we know that probably, what? 90% of what you read in the media is not correct.
[00:47:36.70] REID HOLZWORTH: Totally.
[00:47:37.24] ELAD TSUR: Much of it is misleading. Not just incorrect but misleading you on purpose. On social media, even more than 90%. People-- there's that-- right? Let's speak about anti-vaxxers, right? If someone is an anti-vaxxer by religion and speaking with me-- even if it's you, I don't care, right? Those people are stupid from my perspective, and I apologize for cursing them at the moment.
[00:48:08.39] Science's best invention of the previous century-- how can you be anti that? Why? Because someone read an article by a person? One article, by a person that went to jail after that because they faked their article, and they believe in that. And then you have people trying to show you that it's fake, right? It wasn't reproducible. It wasn't critically reviewed. And people still believe in autism connected to vaccines, and they say, there are all of those materials in vaccinations.
[00:48:39.20] Yeah, we have all of those materials in the fruits that you eat. And someone published a paper saying that rice is connected to cancer, and in the abstract of the paper, it said, this is not a true paper. I'm just making a title here to see what the influence on social media will be. This is literally in the abstract of the paper. You don't even need to go and read the paper itself.
[00:49:06.56] And people were referring to that, saying, you shouldn't eat rice anymore. They weren't even opening the PDF to read even the abstract of it. And so we're living in an age where it's so dangerous. Think about it. Your health being influenced by others that don't think. Look at it. And that's part of what we have already. I think that any new technology has its dangerous side.
[00:49:36.85] You need to learn how to live with it. The radio had it. The movie had it, right? The TV, the internet, AI-- it's an amazing technology. It can do bad stuff. It can do the good stuff. We need to adapt. We need to learn how to embrace it because it won't stop. Just keep on moving forward, and life is dangerous.
[00:50:00.50] REID HOLZWORTH: Yeah. No, you know, I agree. And many people may not know this, but I don't do any media, man. I'm on zero social anything. And like, I have LinkedIn just because I have to because-- And it just-- there's just so much BS, but you know, I don't know. It is a little scary as I see people that are so addicted even to all of those different platforms.
[00:50:31.25] And they believe that stuff, and they're persuaded so easily. And there's so much social manipulation with all of that. And blah, blah, blah-- I'm not a big conspiracy theorist. I just see it. I see how people react to those things. And so therefore, I choose to not-- so therefore, it's just not part of my life, and I don't worry about stuff, until it hits me in my face, frankly, those types of things.
[00:51:01.25] And that-- maybe some people will say, that's ignorant, this, that, and the other, whatever. But again, it's peaceful in that way, and I don't have anybody holding a gun to my head right now, so I'm not worried about it. The point is that there are so many people that really adopt these things and these technologies and all of this stuff, and they're easily persuaded. And as AI continues to just get so good, what comes from that? And it's so unknown, right?
[00:51:36.29] And it's like-- and I'm not talking about watching Terminator here, this kind of stuff. It's not like that, but I could-- I see it. I see all sides of it. What's really interesting to me, though, about all of it is what it's going to do to our industry and technology. Not even insurance. Like, just technology, generally speaking.
[00:51:59.30] And I think about it as a technologist and especially in insurance technology, and if I were to build-- like what I built previously, an agency management system, or whatever-- policy administration system, one of these things. And you built it on a modern stack today leveraging those-- what did you say? 12,000 models that are out there or whatever it was. You say 1,200 or 12,000?
[00:52:23.15] ELAD TSUR: 12,000.
[00:52:23.27] REID HOLZWORTH: I forgot what you said. 12,000? Like, yeah. Leveraging a bunch of that with super smart dudes, like yourself. That's a whole another product that's out there. I mean that-- and that changes things.
[00:52:39.17] ELAD TSUR: That changes--
[00:52:39.89] REID HOLZWORTH: --in so many ways.
[00:52:40.74] [INTERPOSING VOICES]
[00:52:41.03] ELAD TSUR: --industry in so many ways. Just think about-- look at that. I have like-- here, this is my son's product. Never mind. He's building-- fourth grade. Building a keyboard with a Raspberry Pi, in Python, and he's not a Python developer. He just started to learn how to code in Python. And he needed to build a keyboard for his friends. Like, he's trying to sell a keyboard. It's cool.
[00:53:05.45] And he went to ChatGPT and asked ChatGPT, please write Python code on my Raspberry Pi Pico-- that's the version of Raspberry Pi he was using, the microcontroller he was using-- for a keyboard. And it needs to behave that way, and Enter, and immediately he got the full code. And it worked out of the box. It had one bug, and he told it, you have a bug.
[00:53:29.60] He told ChatGPT, you have a bug. He tried to use it, and something didn't work. You have a bug. And ChatGPT said, you're correct, and fixed the bug, and the second version is the one that he is still using today. So think about developers-- ChatGPT develops the code for you. Microsoft's-- or GitHub Copilot develops the code as you type, so brilliantly, and it leaves you to be the architect, right? The person to put it all together, to assemble it, but someone else writing the code. Think about junior developers, for example.
[00:54:05.74] I don't know whether I would go to learn to code today if I didn't have an amazing passion, a love for this field-- if I were just looking to be a software developer because it's work. It's for a living. That's it. ChatGPT would replace you. You need to be senior, superior, super smart to remain in this field because the mundane tasks will be given to GPT-like solutions.
[00:54:38.41] Again, we're using GPT as the name for all AI, all generative AI, but actually, when I'm saying GPT, I mean large language models and other generative models that can be used. GPT is just the first one that the public is using, as a buzz, as a term.
[00:54:55.81] REID HOLZWORTH: And you told me in our last conversation that in your current business, you guys had even-- didn't you acquire a company or something?
[00:55:05.17] ELAD TSUR: We acquired Chisel AI about a year ago. Chisel AI-- brilliant folks out of the University of Toronto. They built a brilliant document ingestion platform with AI. They had several amazing assets. One of them, for example, is their tagged training data. They have hundreds of thousands of insurance documents tagged as training data.
[00:55:36.50] So even if their deep learning models-- and they had many dozens of deep learning models they were using; some of them Amazon's models, some of them their own models-- even if the models are becoming inferior to the advanced models that are out there in the market, the ability to train a new model on a new architecture that's being released is-- I don't want to say priceless. It's worth a lot.
[00:56:04.21] The models are great. They were, I think, the best from what we saw in the market. But the data that we can use to train so many other models with it makes it super interesting, and I think that part of your controlling the AIs is making sure that you can always train the machine to do the task for you, and for that, you need lots of data-- lots of tagged data.
[00:56:31.05] REID HOLZWORTH: Didn't you say that one of the things from Chisel AI-- you're like, hey, you know what? ChatGPT, we have this. It's amazing, but this is better. Right?
[00:56:41.70] ELAD TSUR: So a lot of the stuff that ChatGPT showed in their demo-- because they haven't released that part yet through an API, the document ingestion-- surprised me, and it looks like they have done a bit superior work to what Chisel was doing. But what I told you is that I assume that once it's released, it will be so polished that that specific model will be even better.
[00:57:09.39] But better at what? Better at taking an image and processing it into text. Taking that text into the insurance domain, projecting it into the insurance domain, and making sure that you get the submission emails transformed into the input to that model is another glue that you need to put in, in order to make sure that pattern works. But on this specific thing, I believe ChatGPT will prove to be so much better than what it is today that I think all carriers might implement such a thing themselves.
[00:57:49.47] We don't think document ingestion is the future-- not because it is the easiest thing to replace with ChatGPT once they release their API for it. I don't think so. I think it's going to be difficult to replace with ChatGPT. I just come at it from another way. You know that. Distribution would be digital. You won't have--
[00:58:09.00] REID HOLZWORTH: Exactly. Document ingestion is just a Band-Aid on a broken arm, right? Yeah.
[00:58:13.08] ELAD TSUR: It's like, five years from now-- insurance moves slow, so five years from now, you might still have those ACORD forms scanned, received via emails, so you will always have someone to take that input and try to convert it into workable data, but most of these submissions will be ingested digitally. They will be inserted and requested using a digital channel, like a website or an API request or anything like that.
[00:58:43.46] REID HOLZWORTH: There's been a lot-- there's been a bunch of companies over the last number of years who've really focused on that, right? And document ingestion doing stuff with it within insurance. And it's like I always said-- like, why would you invest in that? I mean, invest in the now. It may solve a problem now, but long, long-term-- like you said, it's going to be--
[00:59:08.66] [INTERPOSING VOICES]
[00:59:09.08] ELAD TSUR: Long, long-term, it's probably not-- it's going to be just one out of many products that you will have if you want to survive in the long, long run. But on the other hand, look at insurance, where we have customers with core insurance platforms from the '70s written in COBOL. We have customers whose repository of policies is a folder with subfolders per customer, with PDFs in those subfolders. That's the repository. You can't do a simple select query. There's no database to run those SQL queries on.
[00:59:47.75] So insurance is a market that moves slow. Lots of dinosaurs, but on the other hand, once they move, they usually adopt the latest and greatest. So I agree with you, it's-- a decade from now, I don't think that any document ingestion would be needed. Five years from now, probably less than what it is now.
[01:00:10.27] REID HOLZWORTH: Elad Tsur, guys. What a great episode. Really educational. He is really awesome. I love that when he was a child, basically at 14 years old, he went to college. Like, Doogie Howser, right? The guy is amazing.
[01:00:26.14] Working in the military in AI and artificial intelligence at a very young age-- it's pretty wild stuff. Really, really cool. In the next episode, we're going to get deep into it about his new business, Planck. Stay tuned. It's going to be great.