Inaccurate, manipulated and biased data can weaken the performance of intelligent underwriting systems.

The smart application of artificial intelligence (AI) offers underwriters a great opportunity to capitalize on the fast-growing array of new data sources.

By employing AI technologies such as virtual agents, robotic process automation and machine learning, underwriting professionals can quickly and accurately extract critical information from vast volumes of data. The demand for such capabilities is likely to soar as insurers gain access to an increasing number of real-time data streams. New data sources such as wearables, connected cars and intelligent buildings, as well as third-party providers such as government departments and social media sites, will substantially improve insurers' ability to assess and price risk.

What’s more, underwriters will be able to use AI systems to monetize many of their new data streams. We estimate that insurers could generate as much as US$28 billion in the next five years by monetizing data, algorithms and platforms. Additional revenue opportunities include underwriting new risks, real-time risk assessment and closer collaboration with customers and business partners.

Critical to the successful application of AI in underwriting, however, is the accuracy of the data that insurers source. Inaccurate, manipulated and biased data can easily undermine the performance of AI systems. Case selection, risk analysis and business insights can all be corrupted by poor quality data.

The risks posed by bad data are substantial. Around 80 percent of the insurance executives we canvassed reported that their organizations were using data to drive critical and automated decision-making. However, an estimated 97 percent of business decisions are made using data that the company's own managers consider unreliable.

Among the insurers we surveyed, 26 percent said they validate data sources to some extent but acknowledged they should do more to ensure the quality of the information they receive. Around 19 percent said they try to validate their main data sources but are unsure of the quality of the data those sources provide. Acknowledging the potential damage corrupt data could inflict on their businesses, 80 percent of insurance executives agreed that automated systems create new risks, including fake data, data manipulation and inherent bias.

Insurers rolling out AI systems to enhance their underwriting also need to put solutions in place that protect the veracity of their data. Strong cyber-security and data science capabilities are needed to assess and mitigate risks across a broad portfolio of data sources.
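One way to put such capabilities to work is to screen incoming feeds for records that look manipulated or corrupt before they reach underwriting models. The sketch below is illustrative only: it assumes a Python environment with scikit-learn available, and the telematics-style feed, feature values and contamination threshold are invented for the example rather than drawn from any insurer's practice.

```python
# Illustrative sketch: flag anomalous records in an incoming data feed
# before they reach underwriting models. Feed contents are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical telematics-style feed: (trip_distance_km, avg_speed_kmh).
normal = rng.normal(loc=[30.0, 60.0], scale=[10.0, 15.0], size=(500, 2))
# A few implausible records standing in for manipulated or corrupt data.
suspect = np.array([[1200.0, 400.0], [0.1, 350.0], [900.0, 5.0]])
feed = np.vstack([normal, suspect])

# Isolation Forest scores how easily each record can be isolated;
# records that are easy to isolate are likely outliers.
detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(feed)  # 1 = looks normal, -1 = flagged

flagged = feed[labels == -1]
print(f"Flagged {len(flagged)} of {len(feed)} records for manual review")
```

In practice, flagged records would be routed to a review queue rather than silently dropped, so that data scientists can distinguish genuine manipulation from legitimate edge cases.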

To improve the veracity of information received from their data sources, insurers should prioritize three key activities.

  • Develop a strong awareness of the importance of data veracity among cyber-security, data science and AI staff. Consider rotating workers between roles to increase their exposure to disciplines that promote data veracity. Encourage a “digital hygiene” culture through training and scenario planning.
  • Track data flowing in and out of the organization. Require work teams to grade their confidence in the accuracy of the data for which they are responsible, and develop a rubric that ensures consistent grading (a minimal sketch of such a rubric follows this list).
  • Identify information asymmetries throughout the organization’s data supply chain. Minimize these asymmetries to reduce the risk of data manipulation and the “gaming” of data systems.
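To make the confidence-grading recommendation concrete, here is a minimal sketch of a scoring rubric. The criteria, weights and grade thresholds are assumptions chosen for illustration; an insurer would calibrate them to its own data supply chain.

```python
# Illustrative rubric for grading confidence in a data feed.
# Criteria, weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class FeedAssessment:
    source: str
    completeness: float  # share of records with all required fields (0-1)
    freshness: float     # share of records arriving within the agreed SLA (0-1)
    provenance: float    # team's confidence in the source's lineage (0-1)

    def score(self) -> float:
        # Equal weights keep the rubric simple to apply consistently;
        # external feeds might justify weighting provenance more heavily.
        return (self.completeness + self.freshness + self.provenance) / 3

    def grade(self) -> str:
        s = self.score()
        if s >= 0.9:
            return "A: fit for automated decision-making"
        if s >= 0.7:
            return "B: usable with monitoring"
        return "C: manual review required"

feeds = [
    FeedAssessment("wearables-partner", 0.97, 0.94, 0.90),
    FeedAssessment("social-media-aggregator", 0.72, 0.88, 0.55),
]
for f in feeds:
    print(f"{f.source}: score={f.score():.2f} -> {f.grade()}")
```

A shared, versioned rubric like this gives every team the same yardstick, which is what makes confidence grades comparable across the organization.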

For further information about applying AI to improve underwriting, take a look at these links. I’m sure you’ll find them useful.
