In 2011, the IBM Watson artificial intelligence (AI) computer made headlines when it beat several human competitors on the game show “Jeopardy!” Observers at the time speculated that some version of artificial intelligence was poised to take over insurance jobs, particularly in distribution.
Cognitive computers work—and to some degree mimic human decision making—by taking in data from a myriad of sources, from tweets to libraries. Unlike humans, AI doesn’t suffer from information overload; it can absorb and sift through millions of unstructured documents in seconds, then interpret the data to discern patterns, connections and insights.
Also unlike humans, these computers aren’t hampered by the miscalculations of our flawed senses. In a recent lecture, Michael Abrash, chief scientist at Oculus VR, quoted from the film “The Matrix”: “‘real’ is simply electrical signals interpreted by your brain.”
Much of the time, our interpretation of reality is skewed by our physical limitations. Vision, arguably our most-used sense, is riddled with flaws: we can’t see infrared or ultraviolet light, we have peripheral blind spots, and we perceive only a fraction of the 360 degrees of the world around us. Yet this mere sliver of our external world is how we experience reality. The social media controversy sparked last year by “The Dress”—is it black and blue, or white and gold?—illustrates how easily our senses can fool us.
Will AI systems replace humans in the workforce? According to “Artificial Intelligence Is Almost Ready for Business,” a Harvard Business Review report, it’s already happening, especially in health care and financial services—including insurance. For many, it will be difficult to argue that humans drive better than autonomous cars, which can “see” in ultraviolet and infrared and sense any moving object within a large radius. We humans think we are good drivers, but statistically speaking, that simply isn’t the case.
Robo-advisors and robo-advice are becoming part of our vernacular. USAA already offers a Siri-like function called the Enhanced Virtual Assistant, or “Eva,” which lets users of its mobile app conduct 200 transactions, including money transfers and bill payments, simply by talking.
For the time being, most AI still needs humans to help it interpret all that data. In the best-case scenario, especially in the area of risk management, AI will work alongside humans, where its use of natural language and high level of abstraction can help provide an unprecedented level of service to insurance customers—the perfect example of insurers using technology for a very human-centric outcome.
However, the advance of AI in the insurance workforce presents another issue: liability. As we’re seeing with autonomous vehicles, shifting risk management decisions from humans to machines can eliminate the need for a traditional insurance policy—and create a gray area of liability when those machine-made decisions are wrong. This transition period could be a boon for insurers, allowing them to reclaim their societal role of making innovation a safe reality.
The insurance industry must look ahead to develop new business models that address the liabilities surrounding AI. That could mean designing professional liability (E&O) policies covering robo-advice, or exploring other products and services that respond to these evolving needs and shifting risk pools.