Insurers can no longer view AI as something to be programmed. Rather, AI must be “raised” to act responsibly, ethically and collaboratively.

An AI-based solution that makes decisions about insurance payouts. A mobile app that ages your selfie based on your lifestyle habits and makes recommendations for health improvements. A claims process that uses machine learning, text analytics and optical character recognition to handle disability and illness claims in less than five seconds, down from the 100 days it used to take.

Science fiction? Think again. These are all real-life examples of how insurers have leveraged AI to do things differently—respectively, Ant Financial Insurance in China, Dai-ichi Life Insurance Company in Japan, and an Accenture client in life and health insurance.

When AI-based help first launched, it wasn't always clear whether you were talking to a person or a robot. Businesses may not have meant to be deceptive, but today they are more transparent about using AI to help customers. When consumers interact with a chatbot on Facebook, they know they're talking to a robot. When a digital avatar asks for their policy number, they understand that there's an AI behind the scenes.

But when we look at AI within an insurance organization, understanding how it makes a decision is only the start. Bringing AI into an enterprise means rethinking fundamental business processes. How will it change people's jobs? How will they interact with AI? What will AI do to the other systems of thinking we've developed: our assumptions about operating models, revenue streams and customer experience?

Citizen AI

AI is going to have such far-reaching effects on the enterprise—and arguably, on society—that it’s not enough to see it as lines of code. Just like a person, AI must “act” responsibly, explain its decisions and work well with others. It must understand right and wrong, and act and impart knowledge without bias. It must be self-reliant, collaborative and communicative. All of this is our responsibility. In other words, we must “raise” AI, just as we would a child.

For AI to collaborate effectively with people, insurers need to build and train their AIs to be explainable: able to provide clear rationales for their actions, in ways that people can understand.
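To make that idea concrete, here is a minimal sketch of one way an insurer might surface a model's rationale. The features, data and decision labels are illustrative assumptions, not part of any system described above; a linear claims-triage model is used because its log-odds decompose exactly into per-feature contributions, so the explanation is faithful by design.

```python
# A minimal sketch of an explainable claims-decision model.
# All features and data below are illustrative assumptions, not part of
# any insurer system mentioned in this article.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a disability-claim decision.
FEATURES = ["claim_amount", "years_as_customer", "prior_claims", "doc_completeness"]

# Toy training data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 3] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> list[tuple[str, float]]:
    """Return each feature's contribution to the decision score.

    For a linear model the log-odds decompose exactly into per-feature
    terms (coefficient * value), ranked here by absolute impact.
    """
    contributions = model.coef_[0] * x
    return sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))

claim = X[0]
print("Approve" if model.predict([claim])[0] else "Refer to adjuster")
for name, contribution in explain(claim):
    print(f"  {name}: {contribution:+.3f}")
```

The point is not the particular model but the contract: every automated decision ships with a human-readable account of which factors drove it, and by how much.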

In addition to being explainable, AI must be responsible. This is especially important as the combination of big data and AI enables insurers to calculate risk on an individual, rather than pooled, basis. If an insurer is using AI, is it pointing in the right direction? Will it drift, as it learns over time, away from its intended purpose? The dark side of AI is that an insurer could price insurance beyond the reach of consumers who need it most—a prospect that is neither palatable to society, nor permitted by regulators.
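One practical guardrail against that kind of drift is to monitor how a model's score distribution shifts after deployment. The sketch below computes the population stability index (PSI), a common drift metric; the score samples and alert thresholds are illustrative assumptions, not drawn from the report.

```python
# A minimal sketch of drift monitoring via the population stability index.
# The scores and thresholds are illustrative assumptions only.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two model-score samples."""
    # Bin edges from the baseline's quantiles, so each bin starts at ~10%.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    # Clip so out-of-range production scores land in the outer bins.
    curr_pct = np.histogram(np.clip(current, edges[0], edges[-1]), edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(1)
scores_at_launch = rng.beta(2, 5, size=10_000)  # illustrative score samples
scores_today = rng.beta(2, 3, size=10_000)      # the distribution has shifted

# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {psi(scores_at_launch, scores_today):.3f}")
```

A check like this won't tell an insurer why a model has drifted, but it flags when the AI is no longer serving the population it was trained for, so that people can intervene before pricing or payout decisions go wrong at scale.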

Learn more about Citizen AI, the first trend of the Accenture Technology Vision for Insurance 2018 report. To discuss how Accenture can help your organization raise AI to be explainable and responsible, get in touch.
