08 December 2017
AI adoption is limited by incurred risk, not potential benefit
It’s tempting to think that adoption of AI is limited by the technology itself. Headlines declaring the rise of robot doctors and the approaching technological singularity, contrasted with humorous memes of robots falling over, make us alternately fear and doubt AI’s capabilities. In practice, however, decades-old AI technologies could unlock significant value, yet many companies have not adopted them. This is because adoption of AI is determined by both trust and risk. Thinking about AI adoption in this way enables us to more accurately anticipate opportunities for AI startups.
Gradual exposure to successful applications of AI in daily life, or as part of trivial workflows, builds trust. For example, machine learning algorithms encourage us to revisit abandoned online shopping carts every day, so adopting AI-based software to make our jobs in enterprise sales and marketing easier seems natural. A nuclear power plant manager, however, has a wider mental gulf to bridge in imagining how the technology behind her Nest thermostat could safely automate dangerous maintenance procedures in a power plant without extremely close supervision.
AI in the consumer space
For over a decade, we’ve experienced AI applications for consumer personalization, where the benefits of applying AI are high but the consequences of an incorrect prediction are low. Some well-known consumer personalization applications include Google’s PageRank and suggested searches, Amazon’s product recommendations, and Netflix’s content recommendations. In these cases, the right recommendation at the right time leads to increased revenue for the company, while the wrong recommendation does not lead to anything more serious than an unintended laugh for the consumer.
Similarly, in the enterprise space, AI has been primarily and successfully applied in low-risk, high-reward areas. Products have been architected so that AI is applied as a layer on top of a workflow application that would function just fine without the AI. We call these applications “AI Augmentation” because the AI functions to augment an existing workflow.
Constructor, a Zetta partner, is an AI Augmentation company. Constructor applies machine learning to dynamically rank site search auto-complete suggestions and search results. The algorithm observes which search results visitors to the website are most likely to click on and ranks the search results accordingly. This dynamic ranking has increased conversions by 2 to 20 percent for Constructor’s ecommerce customers. If the machine learning layer fails completely, however, the website would still have a functioning search bar.
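The core idea behind this kind of dynamic ranking can be sketched simply. The code below is a minimal illustration, not Constructor’s actual system: it reranks results by observed click-through rate, with smoothing so that items with little data fall back toward the baseline order.

```python
from collections import defaultdict

class ClickRanker:
    """Rerank search results by observed click-through rate (CTR).

    A stable sort plus a smoothed CTR means that when no click data
    exists, results keep their original (non-ML) ordering -- the
    search bar still works even if the learning layer has seen nothing.
    """

    def __init__(self):
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)

    def record(self, item, clicked):
        """Log one impression of `item`, and whether it was clicked."""
        self.impressions[item] += 1
        if clicked:
            self.clicks[item] += 1

    def ctr(self, item):
        # Laplace smoothing: unseen items get a neutral prior of 0.5
        # rather than an extreme score of 0 or 1.
        return (self.clicks[item] + 1) / (self.impressions[item] + 2)

    def rank(self, results):
        # Python's sort is stable, so ties preserve the baseline order.
        return sorted(results, key=self.ctr, reverse=True)
```

In production a system like this would also have to handle position bias (top results get clicked more simply because they are on top) and per-query statistics, but the fallback property is the point here: remove the learned scores and the ranking degrades gracefully to the original order.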
The AI ecosystem is currently shifting to applications completely built around AI. We call this category “AI Automation.”
Zetta partner company Tractable is an example of AI Automation. Tractable uses deep learning-based computer vision to visually inspect damage to a car after a crash. Like a human inspector, the product evaluates the damage and determines whether the damaged section should be repaired or replaced. The computer vision element is so central to this application that if the AI fails, the product would not provide its intended value to its customers: the workflow has been fully automated by the AI. That said, no one is physically harmed if the product makes an incorrect assessment, because a mechanic performing the repair can override the recommendation.
Other AI Automation companies include x.ai (automated appointment scheduling over email), Falkonry (anticipating maintenance and repair of industrial equipment), and Focal Systems (retail inventory tracking and restocking).
We’re now at the beginning of the next stage of the risk curve, with applications that are only possible because of recent advances in AI. We call these “AI Creation” because the use of AI is what creates a new category: these products and services could not exist at all without AI.
Invenia, for example, is only possible because of AI. The company builds models that predict the demand and supply of electricity. Invenia collected troves of proprietary data — on grid operations, energy usage, weather, etc. — and modeled the physical flows of power to build predictive models for energy usage. The company gets paid for its predictions because it helps the independent system operators (ISOs) of the electricity grid avoid both blackouts and the overproduction of energy. Energy systems are so complex that machine learning is necessary to model them accurately.
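To make the forecasting idea concrete: at its simplest, load forecasting fits historical demand against drivers like temperature. The toy sketch below (all numbers invented, and vastly simpler than Invenia’s actual models) fits a least-squares line of grid load against temperature and uses it to predict load on a new day.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical history: hotter days drive air-conditioning load (MW).
temps_f = [60, 70, 80, 90, 100]
load_mw = [900, 1000, 1100, 1200, 1300]

intercept, slope = fit_line(temps_f, load_mw)
forecast = intercept + slope * 85  # predicted load for an 85°F day
```

Real grid forecasting involves thousands of correlated inputs (weather across regions, time of day, generator schedules, transmission constraints), which is exactly why a one-variable fit like this fails in practice and machine learning over the full feature space becomes necessary.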
We have a general moral imperative to maintain the quality of life for all people around the world. This is difficult to satisfy as populations increase and resources decrease. Machine learning technologies, however, are particularly good at solving complex optimization problems. There is a lot of risk in using probabilistic methods to address societal problems such as climate change, health care, or food production. Still, advances in machine learning technology are making this increasingly possible, provided the AI earns our trust.