It’s time for insurers to rethink what data can do for the business

Data is the lifeblood of insurance. But the enormous potential of this key resource is outstripping insurers’ current capabilities. It’s no longer just about prioritising data for management information (MI) and business intelligence (BI) applications. Of course, these remain vital functions. But it’s data’s role in machine learning that will increasingly define winners and losers in this industry – powering the prescriptive analytics that allows firms to pinpoint ‘next best actions’.

At insurers today, almost all data is organised and managed, first and foremost, for MI and BI; human-led analysis tests hypotheses on conformed, pre-prepared datasets. But machines learn from data in a different way. And to harness the advantage they can provide, data management needs to adapt to that difference. This has three practical implications:

First, recognise that machine learning thrives on volume. The more data that's thrown at it, the better. That's a challenge. Insurers have had some success combining internal data from across the business. But they're still missing big opportunities to gather more transactional and unstructured data from areas like claims and digital channels. Right now, insurers lag other industries in their use of external data. It's time to close the gap.

Second, keep data as raw as possible. Data can be aggregated and summarised in many different ways, and one analyst's choices may not suit a colleague's needs. If data is kept in its raw format, new features can be derived from it over time (increasingly easily, thanks to machine learning techniques, like deep learning, that create new features automatically).
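To make that concrete, here's a minimal sketch, with hypothetical column names and invented figures, of how a raw claims log keeps options open that a pre-aggregated summary closes off:

```python
# A raw claims log versus a pre-aggregated summary. All names and
# numbers here are illustrative, not real insurer data.
import pandas as pd

claims = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "claim_time": pd.to_datetime(["2023-01-05", "2023-03-12", "2023-02-01",
                                  "2023-02-03", "2023-07-20"]),
    "amount": [1200.0, 400.0, 250.0, 3100.0, 90.0],
})

# Yesterday's summary table might only have kept total claim amount...
summary = claims.groupby("customer_id")["amount"].sum()

# ...but from the raw log we can still derive new features today, such as
# claim frequency or the average gap between successive claims.
features = claims.sort_values("claim_time").groupby("customer_id").agg(
    claim_count=("claim_time", "count"),
    total_amount=("amount", "sum"),
    mean_gap_days=("claim_time", lambda t: t.diff().dt.days.mean()),
)
print(features)
```

The summary table froze one analyst's choice; the raw log still supports features nobody thought to keep at the time.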

Third, experimental data is best. Insurers' traditional use of observational data can only show correlation. But data produced from controlled experiments (an A/B test of a quote page, for instance) can prove causality, and knowing that one action causes another is what lets businesses make better decisions. Machine learning algorithms can now identify causal effects, and that opens up a wealth of insight through prescriptive analytics.
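As an illustration, here's a minimal sketch of analysing a hypothetical A/B test of a quote page with a standard two-proportion z-test; the conversion counts are invented for the example:

```python
# Hypothetical A/B test: variant A is the current quote page, variant B
# a redesign. Counts are illustrative, not real data.
from scipy.stats import norm

conversions = {"A": 230, "B": 278}   # completed quotes per variant
visitors = {"A": 5000, "B": 5000}    # randomised visitors per variant

p_a = conversions["A"] / visitors["A"]
p_b = conversions["B"] / visitors["B"]

# Pooled proportion under the null hypothesis that both variants convert equally
pooled = (conversions["A"] + conversions["B"]) / (visitors["A"] + visitors["B"])
se = (pooled * (1 - pooled) * (1 / visitors["A"] + 1 / visitors["B"])) ** 0.5

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test

print(f"Lift: {p_b - p_a:.2%}, z = {z:.2f}, p = {p_value:.4f}")
```

Because visitors are randomly assigned to variants, a significant lift can be read causally, not merely as a correlation.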

As well as reengineering how they gather and manage data, insurers should keep some other priorities in mind. As things stand, most corporate insights reside in desktop computers and laptops dispersed across the business. This means fragmented insights across functions, repeated creation of analytical datasets and lost knowledge when employees leave. The alternative? Rather than trying to centralise these datasets into one monolithic analytical record, virtualise them. That will allow data scientists to recreate datasets across any time period, for any customer segment.
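The virtualisation idea can be sketched in a few lines. This is a simplified illustration with hypothetical table and column names: instead of storing one fixed analytical table, the dataset is rebuilt on demand from raw, timestamped records:

```python
# Rebuilding an analytical record on demand rather than storing one
# fixed copy. All table and column names are hypothetical.
import pandas as pd

def build_dataset(events: pd.DataFrame, as_of: pd.Timestamp, segment: str) -> pd.DataFrame:
    """Recreate a per-customer analytical record as it stood at `as_of`,
    restricted to one customer segment."""
    snapshot = events[(events["event_time"] <= as_of) &
                      (events["segment"] == segment)]
    # Derive features from raw events at query time instead of freezing
    # one analyst's aggregation choices into a stored copy.
    return snapshot.groupby("customer_id").agg(
        policies=("policy_id", "nunique"),
        claims=("claim_flag", "sum"),
        last_contact=("event_time", "max"),
    )

# The same function serves any period or segment a data scientist needs:
# build_dataset(events, pd.Timestamp("2023-06-30"), segment="motor")
```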

One final data lesson: reduce the friction between insight and action. Most insurers are still not set up to deliver analytics to the points where business decisions are made. When they need to do so, data scientists' desktop models have to be translated into enterprise-ready Extract, Transform and Load (ETL) code for scoring in a decision system. It's a laborious, error-prone and unwieldy process that can be avoided by building the initial predictive model on a centralised virtual analytical record. The model can then be used, in near-real time, by decision systems anywhere in the business through an API call.
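What that API call might look like is easy to sketch. The snippet below is an illustrative assumption, not a prescribed stack: it uses FastAPI and a pre-trained scikit-learn-style model, with hypothetical feature and file names, to expose a single scoring endpoint:

```python
# A minimal model-serving sketch. FastAPI, the model file and the
# feature names are all illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("churn_model.joblib")  # hypothetical pre-trained classifier

class QuoteFeatures(BaseModel):
    customer_tenure_years: float
    open_claims: int
    channel: str

@app.post("/score")
def score(features: QuoteFeatures) -> dict:
    # One consistent scoring path replaces hand-translated ETL re-implementations
    x = [[features.customer_tenure_years, features.open_claims,
          1.0 if features.channel == "digital" else 0.0]]
    return {"score": float(model.predict_proba(x)[0][1])}
```

Any decision system, from an underwriting workbench to a web quote journey, can then score against the same model over HTTP instead of re-implementing it locally.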

The overriding message? Insurers that build data for analytics will benefit from new insights that put them ahead of the competition. Next time, I’ll look at the key role of technology and people in making this happen.
