
Artificial intelligence (AI) was supposed to be objective. Instead, it reflects implicit human bias. Lex Sokolin, futurist and fintech entrepreneur, on what AI bias means for insurers—and why there are no easy fixes.

Highlights

  • Some use cases for artificial intelligence (AI) can be fairly objective—for example, using AI to document damage to a vehicle to expedite claims processing.
  • When AI is applied to data about human beings, bias can become an issue. For example, the data set the AI is trained on may not be diverse enough, or insurers may use proxy variables, such as zip codes, that inadvertently discriminate against certain groups.
  • Digitization is happening across financial services, and leaders must change their beliefs about what is possible. Incumbents that understand what the future looks like will be better equipped to re-engineer themselves to compete in that future. Key takeaway: standing still is not an option.

The ethics of AI and what happens when human bias intersects machine algorithms, with Lex Sokolin

Welcome back to the Accenture Insurance Influencers podcast, where we examine what the future of the insurance industry could look like. In season one, we explore topics like self-driving cars, fraud-detection technology and customer-centricity.

This is the last in a series of interviews with Lex Sokolin, futurist and fintech entrepreneur. So far, Lex has talked about disruption in financial services and the imperative for insurers to learn lessons from how other verticals have handled it. We’ve also talked about automation and AI, and how AI might affect insurance.

In this episode, we look at the ethics of AI, what the future of insurance might look like—and how insurers can prepare for it.

The following transcript has been edited for length and clarity. When we interviewed Lex, he was the global research director at Autonomous Research; he has since left the company.

You’d mentioned that AI still has a lot of room to grow, and one of the more interesting topics is this notion of discrimination and bias—especially since, as you said [in a previous episode], with AI you don’t necessarily know what the outcome is going to be.

Especially with something like insurance or financial services where the outcome can have material consequences on somebody’s life, how does discrimination and bias come into the conversation? What is the responsibility of someone using AI to predict that or to correct that?

I think there is now a robust discussion in the public sphere. Even within politics today, given all the stuff about propaganda bots, election interference and the ability to fake videos using deep learning, the problems around this technology are coming to light and being articulated by senators and members of the House of Representatives. And that’s an absolute positive: it’s no longer 2015, when this was kind of an unknown. But the way you think about it has to be very, very case-specific.

Let’s say you have a company like Tractable, where the AI is pointed at damage to car windshields or other parts of the vehicle. You take a photo, and the data from that photo can, in real time or close to it, be associated with a dollar amount for the repair. In easy cases that might be sufficient for the insurance company to just let the claim go through.

Or you could look at something like Aerobotics where you have drone footage of crop land, and instead of sending out human beings to go and assess the different parts of the farmland to see what’s been damaged, you take photos of it and you’re able to say, “OK, there’s water in this part of the environment and it’s 3 percent of the overall stock and therefore this is what the estimated impact would be.”

In those cases, you’re not really in a place where there’s an ethical issue. You might have something to say about the quality of the photo or having to pay for the data. But it’s really fairly objective.

If you switch now instead to looking at human beings, and trying to analyze human beings, and the data about human beings… There are lots of examples where you can do that, whether it’s alternative data that you put into your underwriting process, or trying to validate somebody’s payment history or credit history. Even something like scanning a passport photo can behave differently depending on the ethnicity of the subject. As soon as you touch people as a data point, you start thinking about these ethical issues—whether you’re accidentally treating people as instruments and not really thinking about them as individuals.

And why is that important?

One of the things about the core capability of Google Image Search, and the classification its neural networks perform on images, is that it’s really, really good at telling apart dogs and cats. It’s silly, but a lot of people on the Internet post pictures of dogs and cats. There’s lots and lots of data about that, and in fact the machine is better at telling apart different breeds of dog than any human is. You can think of this machine, trained on cats and dogs with lots and lots of specificity, seeing lots of variety and devoting lots of mental power to how one breed differs from another.

And then in the same algorithm, there’s a much smaller space for telling apart, let’s say, various clothing, or different historical landmarks, or even the differences between human beings. There’s just less material for the thing to crawl. Where it might be really accurate in one place, it’s not very accurate in another.

A recent study looked into this and found that AI was really, really good at telling apart individuals who were white and male, with an error rate of something like 2 or 3 percent, which is below the 4 or 5 percent error rate that humans make. The machine is better than the human in that case.

When you look at African-Americans, the machine made errors 30 percent of the time, because it just didn’t have enough data to tell people apart. The problem is that the algorithm’s developers didn’t think about expanding the data set so that facial recognition could have more fidelity and accuracy.
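To make the mechanism concrete, here is a minimal Python sketch of the failure mode Lex describes, using synthetic data rather than faces: a classifier trained on data where one group outnumbers another nine to one, and where the signal that separates the underrepresented group’s classes is one the model barely sees. The group sizes, features and resulting error rates are illustrative assumptions, not figures from the study he cites.

```python
# Minimal, hypothetical sketch: a classifier trained on group-imbalanced
# synthetic data. Group A has 9x more training examples than group B, and
# B's class signal lives in a feature the model barely learns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, signal_dim):
    """Synthetic samples whose class signal lives in a group-specific feature."""
    X = rng.normal(size=(n, 20))
    y = rng.integers(0, 2, size=n)
    X[:, signal_dim] += (2 * y - 1) * 1.5  # shift one feature by class label
    return X, y

Xa, ya = make_group(9000, signal_dim=0)   # well-represented group A
Xb, yb = make_group(1000, signal_dim=1)   # underrepresented group B
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Aggregate accuracy can look acceptable; per-group error tells the real story.
for g in ("A", "B"):
    mask = g_te == g
    err = (pred[mask] != y_te[mask]).mean()
    print(f"group {g}: error rate {err:.1%} ({mask.sum()} test samples)")
```

The aggregate error rate can look respectable while the per-group breakdown reveals the disparity, which is why auditing accuracy per demographic group, as that study did, is the necessary first step.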

Imagine somebody trying to open an account using their phone. If you look one way, your picture gets the account opened in five minutes. If you look another way, you can’t get access to the app because somebody else, who sort of looks like you, is on the platform.

When you take that one step further into things like credit underwriting and digital lending, it gets much worse, because you might be making decisions off a postcode that is correlated with categories protected under American law. You’re inadvertently allowing the algorithm to make decisions that have human bias built into them.
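The postcode problem can also be shown in a few lines. Below is a minimal, hypothetical sketch of proxy discrimination: the protected attribute is deliberately excluded from the model (so-called “fairness through unawareness”), but a postcode feature correlated with it lets the model reproduce the historical bias anyway. All of the data, correlations and thresholds here are invented for illustration.

```python
# Minimal, hypothetical sketch of proxy discrimination: the protected
# attribute never enters the model, but a correlated postcode stands in
# for it. All numbers below are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

protected = rng.integers(0, 2, size=n)  # 1 = member of a protected class
# A binary stand-in for postcode: residential segregation means it agrees
# with the protected attribute 80% of the time.
postcode = np.where(rng.random(n) < 0.8, protected, 1 - protected)
income = rng.normal(60 - 10 * protected, 10, size=n)

# Historical decisions were biased against the protected group.
approved = ((income + rng.normal(0, 5, size=n) > 52)
            & (rng.random(n) > 0.25 * protected)).astype(int)

# Train WITHOUT the protected attribute; postcode acts as its proxy.
X = np.column_stack([postcode, income])
clf = LogisticRegression(max_iter=1000).fit(X, approved)
pred = clf.predict(X)

for g in (0, 1):
    rate = pred[protected == g].mean()
    print(f"protected={g}: predicted approval rate {rate:.1%}")
# The approval gap persists even though the attribute was never a feature.
```

A common diagnostic follows directly from this: if the protected attribute can be predicted from the remaining features, those features are acting as proxies, and the attribute has not really been removed from the decision.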

And what does that mean for developers and users of AI?

There is no easy answer other than to examine the data for all of the ethical issues we might encounter through the law and in human society. And the only way to do that is to fix the teams building the software: you can’t have a monolithic team, in terms of ethnicity or economic background, addressing these issues. It rolls back, of course, to human society and the people building the stuff. And that, I think, is both a generational shift and an awareness shift.

This is a fascinating discussion that I wish we had more time for. We’ve talked about a lot of big ideas. How can incumbent insurers translate these big ideas into concrete action?

One of the things about all of these trends is they still relate to human beings. Even if we’re talking about the future, and it sounds like the Terminator or Blade Runner or your favorite science fiction movie, all the stuff that we’ve talked about is here today.

When you think about it from an insurance perspective, you might have the intuition to say, “Oh, the biggest issue is that in China insurance companies are also media companies, and they also do chat and so they’re much better at grabbing consumers.” Or you might say, “We’re worried about crypto and the automation of smart contracts and the fact that all the paper the insurers shuffle around is going to be now code.”

But I think that’s focusing on the hammer. It’s not focusing on the person holding the hammer. If I can stress one thing, it’s that the most important thing for insurers to do is not to feel like they’ve swatted away an inconvenient challenge from the insurtech industry. It’s not that there is this one-time moment where you can co-opt a bunch of early-stage start-ups, because that’s just a symptom.

We’re in a moment where digitization is happening to the whole industry, and the only real thing you can do is change your beliefs about what’s possible. I think what we have to do, at the senior management levels of these companies, is be open-minded about what people are trying to accomplish, why they’re trying to accomplish it, and what the underlying trend is that’s producing these outcomes.

Once you go through that process, it’s just impossible to believe anything other than this: within 10 or 20 years, everything is fully digital, delivered to your phone, AI-first, powered by various blockchains (whether public or private), and consumer-centric, with data owned by the consumer. I mean, this is a trivial observation because it’s the only thing that can happen.

The question is: if you’re running a large insurer, how do you get to that point without destroying shareholder value, while also being a good player in the ecosystem and allowing people to create value without co-opting it?

I would encourage incumbents to really think about addressing their legacy models quickly. If you have pools of revenue or other parts of the business that feel really well-protected, that’s actually the thing you should probably throw on the pyre first. Find a way to make that a digital-first business. One thing that comes to mind is the asset management fees that insurers are able to pay themselves because they’re managing all of these premiums. Those fees are three times greater than what you’d pay in the open market on a robo-adviser, if not more.

Incumbents that really start from a place of understanding what the future looks like, and then re-engineer themselves to be digital-first, are going to have a shot at competing with the Asian tech companies, as well as with the fintech-plus-Silicon-Valley combination that is getting stronger and stronger every year.

I don’t think you can overstate the point, because standing still is massively destructive and creates fragility throughout the industry. So hopefully that came through, and I hope some of your listeners are pushed to undertake that existential exploration for themselves.

Thank you very much for taking the time to speak with us today, Lex. This has been such an interesting conversation, and I think there’s a lot to learn, whether you’re a start-up or an incumbent in the insurance field.

My pleasure. Thanks so much for having me.

Summary

In this episode of the Accenture Insurance Influencers podcast, we talked about:

  • Applications of AI that don’t typically incorporate bias—for example, using AI to document damage to a vehicle to expedite claims processing.
  • Applications of AI where bias must be considered and mitigated. For example, AI trained on a data set in which minorities aren’t well-represented could result in those minorities not being able to use an app designed to streamline account opening—as well as more material consequences, such as being declined for a loan application.
  • Standing still is not an option. As digitization continues, leaders must change their beliefs about what the future could look like, and re-engineer themselves to compete effectively.


That wraps up our interviews with Lex Sokolin. If you enjoyed this series, check out our series with Ryan Stein. Ryan’s the executive director of auto insurance policy and innovation at Insurance Bureau of Canada (IBC), and he spoke to us about self-driving cars and their implications for insurance.

And stay tuned, because we’ll be releasing fresh content in a couple of weeks. Matthew Smith from The Coalition Against Insurance Fraud will be talking about all things fraud: who commits it, what it costs and how it’s changed with technology. In the meantime, you can hear his answers to the quickfire questions here. Subscribe to the podcast to get new episodes as they launch.

What to do next:

Contact us if you’d like to be a guest on the Insurance Influencers podcast.
