Alex Dalyac: Revolutionizing Accident Recovery with Computer Vision at Tractable
By: Terah Lyons
It's such common advice that it's practically a cliché to tell founders to start from customer problems worth solving – and never to approach building a product as a technological hammer in search of a nail. Luckily for fintech unicorn Tractable, co-founders Alex, Adrien, and Raz ignored that advice when starting their company. They had an instinct about the utility of computer vision applications deployed in the real world, and successfully pursued it, building a billion-dollar business in the process.
In 2012, there were several significant breakthroughs in deep learning. In the ImageNet Large Scale Visual Recognition Challenge, a convolutional neural network halved the existing error rate on ImageNet visual recognition and proved out several new techniques, including the use of GPUs to train a model. This and several other related developments around that period led to meaningful performance improvements in computer vision and really shifted the possibilities for its application.
Investors often talk about the importance of market timing, but Tractable had the ideal technology timing — they caught the wave of the rise of this new technology right when it was just beginning to be commercially feasible (with enough focus and expertise) and before it was commonplace. Alex Dalyac, Tractable’s CEO, and the co-founding team were maniacal about identifying and pursuing a specific vertical use case for computer vision technology that delivered tremendous customer value: damage appraisal for accident recovery, starting first with cars and homes. They understood the compounding effects of large-scale datasets in improving the accuracy of their models, and prioritized pursuing a specific segment of the market with access to that data in mind.
Alex walks us through how he and his co-founders Razvan Ranca and Adrien Cohen set out to start a company based on specific machine learning techniques; how they moved from applications in radiology to insurance appraisal in their initial research process; and how they developed a product into a scalable business that has grown Tractable into the breakout applied AI company that it is today.
Terah Lyons: Alex, welcome. We’re going to start where we start with every founder: Can you describe what Tractable does and how you are using machine learning?
Alex Dalyac: Tractable makes computer vision accessible to people from their smartphone. It allows people to take a scan of their car or their home for purposes of repairing, protecting, and selling those assets. So imagine you get into a car accident. It normally takes 21 days to go from start to finish on an auto claim, and that's very slow, because your insurance company has to send an adjuster out to go check out your car. That's no longer needed. You don't need anyone to go check out your car. Your phone is checking out your car for you.
You just walk around your car, take a few photos, or you take a video, and voila, we have an immediate assessment of the vehicle. We know how much it's going to cost to repair it. And this applies to whether you want to repair your car, or insure your car or sell your car. Or if you've had damage to your home, same thing. Walk around your home, scan it, we'll get an aerial image of your home as well, and from that create an assessment of the cost to repair that home.
TL: Tell us more of the backstory for how you came up with the business idea, Alex. What was the original insight there? How did you go about validating it? Was the premise you just described the one you started out with originally – or has it transformed?
AD: It was very technologically driven. There was a breakthrough in artificial intelligence in 2012, which was the first time that convolutional neural network algorithms were implemented on GPUs. So basically in 2012, somebody figured out how to 100X the amount of compute power available. And this happened a couple years after Stanford figured out how to 10X the amount of data available through 15 million labeled images. So by combining very high compute power and lots of labeled data, much stronger algorithms emerged. I was doing research in this field and all of these factors together convinced me that this was no longer just a research project. These algorithms can actually be taught to perform tasks, visual tasks, that usually only humans can perform. So we went out looking for [such tasks], and realized that cars and homes are pretty valuable items that are huge parts of our day to day lives. And appraising them visually drives most industries.
TL: That’s really interesting. Because AI in most cases is a tool, and not necessarily a product, we don’t generally advise founders to think of it as a hammer and to go looking for a nail. But in this case you did exactly that, and settled upon an application that really makes sense for the technology and the use case. Do you think it was always obvious that computer vision was going to work well enough for the commercial application that you selected at the time that you founded the company? What was the gap in research and development that needed to be closed in order to realize the product and the market?
AD: We knew it wasn't going to be easy. But the fact that human accuracy had been attained on ImageNet object recognition told us that if you have an image classification task and enough labeled training data, then yes, with enough applied research effort, it will be possible to solve this. No one else was doing it at the time.
TL: Okay. So it's 2014, and you're coming out of a research environment. You're thinking about how to commercialize this technology. As a founder or potential founder at that point, what were your biggest questions at the time and who was the first person you talked to about the business idea? And what finally made you decide to go all in on the company when you did?
AD: It's one thing to find a case where images are involved. It's another thing to say, "Hey, can AI really learn this task fast enough? Is there lots of data available?" And it's yet another to figure out, "Okay, what's the value-add if AI were to be doing this?" I think that's often a fallacy when you're solution-chasing for problems to solve: You think you may have found a use case, but it might be that even if the AI were there and it worked perfectly, it wouldn't make such a big impact. We looked at a lot of different applications. We looked at cars, we looked at natural resource exploration. We looked at dermatology, medical imaging, utilities, preventative maintenance – the list goes on.
”I think that's often a fallacy when you're solution-chasing for problems to solve: You think you may have found a use case, but it might be that even if the AI were there and it worked perfectly, it wouldn't make such a big impact.”
The framework we ultimately developed is that generally AI value-add breaks down into scale value-add, speed value-add, and accuracy value-add. And you have to just really make sure you're being very rigorous at trying to quantify the economic value per AI transaction that will be generated through any of those three: speed, scale or accuracy. The question to ask is: How much value creation can we get? Then, assuming that you keep 10-20% of the value generated, do we have a billion-dollar opportunity here? Yes or no? And you have to make sure that you go all the way to quantifying that so that you don't accidentally rush into something where there's not much AI value-add.
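That back-of-envelope quantification can be sketched in a few lines. This is an illustrative sketch, not Tractable's actual model; the function name and all figures below are hypothetical placeholders chosen to mirror the 10-20% capture rate Alex describes.

```python
# Illustrative sketch of sizing an AI opportunity: value created per
# transaction, times annual transaction volume, times the share of
# that value the vendor can capture. All numbers are hypothetical.

def opportunity_size(value_per_transaction: float,
                     transactions_per_year: float,
                     capture_rate: float = 0.15) -> float:
    """Annual revenue opportunity = value created x share captured."""
    return value_per_transaction * transactions_per_year * capture_rate

# e.g. $150 of value per claim, 50M claims/year, keeping ~15% of value
annual_revenue = opportunity_size(150.0, 50_000_000, 0.15)
print(f"${annual_revenue:,.0f}")  # is this a billion-dollar opportunity?
```

The point of writing it down, as Alex stresses, is to force the quantification all the way to a yes/no answer before committing to a use case.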
I'll give you an example here. Let's take medical imaging, and let's assume there's going to be no accuracy value-add. It's a very challenging task. The AI can kind of be as good as a radiologist, but not quite that much beyond. People often think of savings made, but you still need to go to the radiology center. You still have this enormous, very expensive machine that's required to produce a CT scan or an MRI. And no matter how much you automate the interpretation of that data, that laborious step and that expensive data collection step is still going to be there. So we ended up prioritizing cases where data collection is also at the touch of your fingertips. In other words, cases where the imagery and the video can be obtained from people's smartphones.
TL: How long did it take you to prove out the technology – and to what extent is that a continuous process? What has the biggest challenge been?
AD: It's definitely slow. It takes a lot of time. Year after year, we hear Elon Musk in the news getting the date wrong of when autonomous driving will be “solved”. And that's because it is still a very challenging science, just because there are so many different corner cases where the pixel values are completely different, and the algorithm would think, "I've never seen this before. What do I do with this?" And you have to make sure that they don't get that wrong. Human users also have a low tolerance threshold for mistakes because our visual cortexes are trained to work in such an enormous variety of visual conditions. If the algorithm gets it wrong, we conclude this algorithm is dumb and not as smart as us.
So there's a very high bar on what good performance needs to be. It takes years. Today, 85% of our revenue comes from deployments, which are 100% automated, no manual intervention whatsoever. But it definitely took a few years before the algorithm was making any kind of fully autonomous decisions. For a few years, I would say even when we were in the millions in revenue, we still had humans manually quality-assuring, very intensely, the outputs of our AI algorithms.
“For a few years, I would say even when we were in the millions in revenue, we still had humans manually quality-assuring, very intensely, the outputs of our AI algorithms.”
TL: So for the roughly 15% that isn’t fully automated: Are you trying to eliminate that? Or is that difference really immaterial to the business at this point, given that the majority of cases can be solved without human intervention? To what extent is closing that last mile of humans-in-the-loop important to you right now?
AD: Yeah. I would say that when you go into a new domain, you should go in with a mindset that you're going to go live with 100% of cases being QA-ed by a human. Then I would say within twelve to twenty-four months, provided you can get your hands on enough historical training data, you can start to see a lot of scale value coming from your algorithms. Then maybe once you're in years two to three, you can start to really completely remove any kind of human involvement, but it's always a multi-year process at this stage. I think another thing to bear in mind is that this space is continuing to advance in leaps and bounds. There's a lot of exciting research around few-shot learning or zero-shot learning. I think if we're to have this conversation again in four or five years, things will be completely disrupted and you'll be able to adapt to new domains much, much faster. But for now, based on how we've had to build Tractable up until now, that's been the pace.
TL: Alex, how did you think about your customers from the outset? What did customer discovery look like for you, and the subsequent acquisition?
AD: You first think, Hey wait, this is a case where imagery is needed. You start like that. I went about my life, trying to look for visual tasks where humans are involved. We initially thought: dermatology, natural resource exploration, assessing damage to vehicles and homes. Monitoring wildlife, utilities. In all of these cases, the customer is immediately obvious, because it is whoever is paying to get that task performed today. So you go to the radiology center, you go to the dermatology center, you go to the insurance company, you go to the energy company, you go to the utility company. You speak with them, they're your potential customer. Then you try and build a business case, you try to quantify the AI value-add. You try and see if you can find a champion. Are you going to find somebody who's very enthusiastic about what you're talking about and really, really wants to jump on this adventure with you? And I guess that gives you a sense of whether you've got some sales traction. That was basically exactly how it went.
TL: What kind of feedback have you received from customers that has really changed your mind about the product, either from a technology/product perspective or from a marketing and packaging perspective? Were there any of those bigger pivot points that you encountered along the way, based on customer feedback?
AD: Here’s one. We actually went to a radiology center in France and spoke to a physician. And I explained to him this framework for accuracy, speed and scale value-add that we talked about previously. I said, "Look, the easiest is scale value-add: the case where you've got so much data out there that your humans don't have enough time to pore through all of it. If only they could pore through all of that data, even if the analysis weren't perfectly accurate, you feel like you'd get so much more value." That's usually the best, because that's where your AI algorithms don't even need to be at the performance of the human. They can be only 70% of the way there. The mere fact that they can ingest 10X more data can generate economic value, so your bar for generating value relative to AI maturity is lower.
When I explained this, his eyes lit up and he said, "Yes, we have patients come in for CT scans. And if maybe they're unhealthy and I need to check them for potential heart risk, my radiologists only have time to look at the arteries in their body and to look for fatty calcium deposits, which could indicate risk of heart attack. But these CT scans are taking a 3D scan of everything from the neck down to the knees. And if I had all the time in the world, I would ask my radiologists to go through the entire respiratory system. Every vein and every tube. And right now, they don't, but I've got that data. And I feel like if I did go through all of that, I'd be able to actually identify many more people at risk of heart attacks. And that way we could avoid people dying from those conditions."
So you can imagine at this point, we're extremely excited. I ask him, "Okay, how much historical data do you have?" And he looks at me proudly and he says, "Oh, I think I could get you 7,000 scans." At the same moment, we were speaking to an insurance company that had 25 million historical images of damaged vehicles. There were so many orders of magnitude difference in historical data available.
That told us that we were going to have a lot more AI maturity if we went down the insurance route. So that’s what we did, and here we are today.
“That told us that we were going to have a lot more AI maturity if we went down the insurance route. So that’s what we did, and here we are today.”
TL: That says a lot about the commercial viability of applied AI – it is in so many cases very deeply dependent on the robustness of the data sources available. And you really backed into the business model here by identifying what success criteria needed to be met for a healthy, scaled company.
I want to dig into that a little more. How did you transform the idea of a product into a company? And a successfully scaled one, at that? A lot of entrepreneurs do lose sight of the idea of a scalable business model. How did you think about pricing, for example, to support that scale?
AD: When we looked at this first case of assessing a vehicle after an accident, we knew this was a visual use case. We tried to look at the accuracy, scale and speed value-add that you get from this. So instead of waiting for a human to appraise the car, we asked: What happens if the driver takes photos, and then the repair estimate is created immediately? We quantified it and we realized that we would be able to save seven days [in that process]. And we'll be able to make sure that the right repair estimate is created that's accurate, without any unnecessary inflationary repair decisions.
Pricing that, the thinking was: "Well, it looks like we'd be able to generate about $150 of value every time. So surely then all we need to do is charge $15 and that's a huge amount of value that's created for our customer." The pricing model came out of that. We built a business case around that sample of a hundred historical claims, and proved that it's $150 worth of value. That allowed us to back into this $15-per-claim price, and then start discussing tiers. For example: If a customer is willing to commit 200,000 claims every year, maybe we can bring down the $15 price to $13 or $12. And if they can commit for not just one year, but three years, then we can shave off a little more. I guess that's how we came to the price. First, quantifying the value per transaction, and then turning that into some kind of long term subscription commitment.
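The tiering logic described above can be sketched as a small pricing function. This is a hypothetical illustration under stated assumptions – the exact discount amounts and thresholds below are placeholders, not Tractable's actual rate card.

```python
# Hypothetical sketch of tiered per-claim pricing: start from a base
# price (~10% of the ~$150 of value created per claim) and discount
# for volume and multi-year commitments. Figures are illustrative.

def price_per_claim(committed_claims_per_year: int,
                    committed_years: int,
                    base_price: float = 15.0) -> float:
    price = base_price
    if committed_claims_per_year >= 200_000:
        price -= 2.0   # volume discount: $15 -> $13
    if committed_years >= 3:
        price -= 1.0   # multi-year commitment: shave off a little more
    return price

print(price_per_claim(50_000, 1))    # 15.0
print(price_per_claim(200_000, 1))   # 13.0
print(price_per_claim(200_000, 3))   # 12.0
```

The design choice mirrors Alex's sequencing: quantify value per transaction first, then convert that into a long-term subscription commitment with tiered discounts.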
TL: Let's talk more in depth about the data since we're in the business of ML here. I'm just curious to hear more about what the relationship is between data and the business for the company. What data do you rely on? Where do you source it? And what are the hardest parts about sourcing the data necessary to support the business as you've grown? And how did that bear on your selection of use cases and customers when you approached going to market?
AD: We source data from our customers. Again, we've prioritized cases where this visual task has already been carried out for a long time by humans. So there's a digital trail of it – digital records of the task having been carried out. So, for example, a hurricane hit Florida, and 50,000 homes were affected. The adjusters went in, they took images of all these damaged homes. We flew airplanes over Florida, so we have aerial imagery as well. And we know what all of the damage was, how much it cost to get all these homes back to normal. That's brilliant. You've got the visual raw data, and then you've got the outputs that the AI needs to predict, which is the repair operations.
“…we've prioritized cases where this visual task has already been carried out for a long time by humans. So there's a digital trail of it – digital records of the task having been carried out.”
Those types of scenarios are great cases. That way, once you work with an insurer, you can tap into their historical data, provided you can reach the right commercial arrangement – that way you don't have so much of a cold start. What we found really interesting is that if you go into, for example, a dermatologist, dermatologists don't necessarily keep digital records. They just look at what you have on your skin and they write that down in their notes, but why should they take an image, when they don't necessarily need to? So we found it interesting that in some cases you have these records, sometimes you don't.
One thing we found quite fascinating is that insurers actually are a very good place to go to for accessing historical data because insurers are often paying for another entity to do the work. Because of that, they need a very rich audit trail of what the situation was, what needed to happen to remediate, and then how much that cost. They need to be sent that documentation in order to make sure they're paying the right amount. Whereas if that's integrated and the person paying is also the person doing the work, there's a bit less need for these digital trails.
I'll give you another example relevant to our world: assessing a car. When assessing a car for repair, insurers receive estimates and photos of damaged cars from body shops, and this has been happening for many years. Conversely, if you're interested in assessing cars to produce inspection reports for selling or trading in a car, well, you're not going to get that, because you've taken your car to a dealer. That used car salesman is the person assessing your car and buying your car. So they don't need to keep images or a video of your car. You won't have training data built against those transactions. So one thing that we found is if you're looking for training data, it's great to work with insurers.
TL: What about cold start data problems? Do those differ by use case or customer for you?
AD: We're all facing cold start, and I guess the biggest question is just how long you stay cold. The ideal scenario is one in which you're working with a company which has lots of historical data accumulated over time, and you can create some kind of partnership where you negotiate good terms so that you can make use of that data. That's one way to accelerate it. The other way to accelerate it, as we talked about, is to go live with humans in the loop with plenty of human quality-assurance. In that case, you don't need to wait until your AI works perfectly, and you can immediately go to production. I would say these are the two approaches that we've taken. And it really does depend on the customer.
Another thing that's worth noting is that sometimes the incumbent software providers to that customer base will be sitting on most of the training data, because they have lots of customers and they're software companies, so they're holding onto the digital records. It could be their job to collect some of that data, and they've aggregated it across the customer base. For example, the company that allows American insurers to write down what the cost is to repair a home after a hurricane will have the data not just from State Farm, but from Progressive and from Allstate. So if you can strike a partnership with them, now you're tapping into an even bigger pool.
TL: And does that present any complexities?
AD: Yes. There are some delicate dynamics to figure out there, because that software company might be wondering, "Wait a second, you are a software company, I'm a software company. I've got the data. Should I partner with you, or should I try and do this myself?" And that's where dealmaking comes in. It might also slow you down because now you need to wait 12 to 18 months to strike a partnership with them before you can even go to market with the target customers. So there are pros and cons to each approach.
TL: Have you seen any of those third party software providers actually strike out to try to do this by themselves, or have you so far been able to stave them off?
AD: Yes, absolutely we have. What we find is that it works best to say, "Look, every option is available to you, partner. Set up a team that tries to develop it themselves and partner with us as well. Then let your customers choose whichever is best.”
TL: Right. And you just have high confidence in your levels of accuracy and the quality of the product that you're offering, and hopefully that will win the day.
AD: That's right, exactly. You then need your execution, your laser focus, your obsessiveness and your talent as an AI-focused startup to win.
TL: Alex, this was a real pleasure. You’ve really proved that you can start with a technology and end up building a business around it. One of the real strengths of your approach seems to be that you grounded every early investigation in value delivered to the customer. That focus has clearly borne out – it has been amazing to see Tractable’s growth. Thanks for taking the time to talk more about that journey with us.
AD: Of course! You are very welcome.