Artificial intelligence and machine learning (AI/ML) are starting to take hold in underwriting. Carriers are exploring their use in assessing complex data, improving data accuracy, and making decisions that would otherwise require complex rule sets. As carriers become more familiar with the power and limitations of this technology, the potential grows even broader, especially when it comes to supporting complex underwriting decisions. With this impressive potential, though, comes a hidden threat: bias.

One example of this bias comes from the world of small-business insurance. We know underwriters shouldn't make insurance eligibility decisions based on age, gender, race, religion, or sexual orientation. Unless we are careful, however, an AI solution could end up making eligibility decisions on exactly those grounds.

The power of AI/ML is that it is self-learning, but we must not forget that it learns only what it is taught. If carriers are not careful in training these new intelligent solutions, ethical and legal requirements around protected groups could be violated. Google and Facebook, for example, found while training computer-vision systems that AI can develop sexist or racist inaccuracies because of biases in the training sets. As we build AI to support key underwriting and pricing decisions, we must keep the risk of bias in mind and guard against it.

For example, if we trained an AI system to do small commercial underwriting and included owner demographic information in its training set, the AI might use race, gender, or other protected characteristics in deciding insurance eligibility. That would be both illegal and unethical. To prevent this, carriers should build a three-part test into their AI programs to ensure the solutions they create are usable, legal, and ethical.

1. Assess the Potential for Bias

Evaluate the AI program to determine whether there is any potential for bias in its application. For example, if we want to identify property elements from an aerial photograph, there is little worry of bias. On the other hand, if we are using AI to decide which policies are eligible for automatic renewal, we need to be far more careful.

2. Assess Your Data

What data will be used to train the AI solution? Excluding demographic information is a simple first step to limit bias, but you must also look for hidden biases: as research has shown, zip code information and even curated photo sets can act as proxies for protected attributes. Carefully checking the data is crucial, and any concerns about the data sets should be documented. One way to screen for proxies is sketched below.
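
To make this concrete, here is a minimal sketch in Python of one way to screen candidate training features for hidden proxies of protected attributes, using Cramér's V as the association measure. The file names, column names, and the 0.3 threshold are all hypothetical, for illustration only; the protected attributes are assumed to live in a separate audit-only file, excluded from training but kept aside to detect proxies.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Strength of association between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return (chi2 / (n * min_dim)) ** 0.5

# Hypothetical files: training features, and protected attributes held out
# purely for this audit. Rows are assumed aligned policy-for-policy.
train_df = pd.read_csv("policies_train.csv")
audit_df = pd.read_csv("policies_protected.csv")

# Hypothetical feature and protected-attribute columns to cross-check.
for feature in ["zip_code", "business_type", "years_in_business"]:
    for protected in ["race", "gender"]:
        v = cramers_v(train_df[feature], audit_df[protected])
        if v > 0.3:  # illustrative cutoff; set per your own review standards
            print(f"WARNING: {feature} may act as a proxy for {protected} (V={v:.2f})")
```

A flagged feature isn't automatically disqualified, but it should trigger the kind of documented review this step calls for.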

3. Test Results

In AI, testing typically focuses on how accurate, reliable, and predictable a solution is. For underwriting, you should plan to do more. If the AI solution affects eligibility, coverage, or price, you must also test to confirm that it hasn't introduced a hidden bias against a protected group; one common screening heuristic is sketched below.
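
As an illustration, here is a minimal Python sketch of a disparate-impact check on model decisions, borrowing the EEOC's "four-fifths rule" from employment law as a rough yardstick. The data frame, column names, and sample values are made up; in practice you would join the model's decisions with protected attributes that were withheld from the model itself.

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame, protected_col: str, approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's approval rate."""
    rates = decisions.groupby(protected_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical audit frame: one row per application, with the model's
# decision and a protected attribute the model never saw.
results = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1,    0,   1,   1,   1,   1],
})

ratios = disparate_impact(results, "gender", "approved")
print(ratios)

# The four-fifths rule flags a ratio below 0.8 as potential adverse impact.
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Review needed for groups:", list(flagged.index))
```

The four-fifths rule is only a screening heuristic, not a legal safe harbor for insurance; a flagged result means the solution needs deeper actuarial and legal review before deployment.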

AI/ML shows significant promise in the field of underwriting. The challenge is that if we are going to earn regulators' and the public's trust, we must ensure these new systems meet the highest standards.
