The rules for collecting big data are about to change. With the emergence of the Internet of Things (IoT), more devices will be capturing more information about more people than ever before. The burning question for most people is: what do businesses plan to do with all that information?

In a panel discussion on big data hosted last year by the White House and MIT, experts examined the tradeoffs between accumulating useful data and protecting privacy. A Forbes article described one discussion:

Threats to our autonomy don’t end with government snooping. Industries want to know our buying habits and insurers want to know our hazards. MIT professor Sam Madden said that data from the sensors on cell phones can reveal when automobile drivers make dangerous maneuvers. He also said that the riskiest group of drivers (young males) reduce risky maneuvers up to 78% if they know they’re being monitored. How do you feel about this? Are you viscerally repelled by such move-by-move snooping? What if your own insurance costs went down and there were fewer fatalities on the highways?

The observation goes to the heart of this discussion: establishing “purpose” for data collection. Just because it has become possible to store massive amounts of historical data does not mean we should do it.

As it stands now, the entire big data universe could crumble with the stroke of a pen if the public outcry about privacy prevails and legislators respond by passing laws that restrict or eliminate the collection and use of that data.

Businesses – including insurers – that collect data from devices must be able both to protect that data and to state a specific purpose for which it is being used. This has a dual benefit: it puts consumers at ease, and it gives the business a built-in way to show regulators, certification bureaus, and other watchdogs that it is doing the right thing. It also implies that the data will eventually be destroyed once its purpose has been achieved.

The process could work like this: if I intend to collect and process data of any kind, the burden is on me to prove that the data will be collected accurately and for a specific purpose. Those purposes could be tied to marketing, controlling epidemics, providing individual health and wellness advice, and more. And of course the data subjects need to be aware of, and agree to, my collecting the data for the stated purpose.
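To make the idea more concrete, here is a minimal sketch, in Python, of how a purpose declaration, a consent record, and a pre-collection check might fit together. Every name in it (PurposeDeclaration, ConsentRecord, may_collect, the purpose codes) is hypothetical and purely illustrative; none of this refers to an existing standard or product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: a purpose must be declared before any data is collected,
# and the data subject must consent to that specific purpose.
@dataclass
class PurposeDeclaration:
    purpose_id: str      # e.g. "HW.NUT" for a hypothetical health-and-wellness: nutrition purpose
    description: str     # plain-language statement of why the data is collected
    declared_by: str     # the collector/processor making the declaration

@dataclass
class ConsentRecord:
    subject_id: str
    purpose_id: str
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_collect(consents: list[ConsentRecord], subject_id: str, purpose_id: str) -> bool:
    """Collection is allowed only if this subject has consented to this stated purpose."""
    return any(c.subject_id == subject_id and c.purpose_id == purpose_id for c in consents)

# Example: a driver consents to telematics data for an insurance-pricing purpose,
# but not for marketing.
consents = [ConsentRecord(subject_id="driver-42", purpose_id="INS.TEL")]
print(may_collect(consents, "driver-42", "INS.TEL"))  # True
print(may_collect(consents, "driver-42", "MKT"))      # False
```

The point of the check is simply that collection is blocked unless an explicit, purpose-specific consent exists.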

Data collectors and processors with the right certifications or authority could share information that fits into one of the common “purpose” categories. The categories could resemble SIC codes, but focused exclusively on purposes, and could be managed by a regulatory group.

Over time, individuals and corporations could “opt in” to different purposes, and services would emerge to access and process the resulting data sets – for example, a purpose focused on health and wellness could be “sliced and diced” into subcategories such as nutrition and weight loss.
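Purely for illustration, such a purpose taxonomy might look like the sketch below; the codes and labels are invented for this post and do not correspond to any real SIC-style registry.

```python
# Hypothetical purpose codes, loosely analogous to SIC codes but keyed to
# purposes rather than industries; a regulatory group would maintain the list.
PURPOSE_CODES = {
    "MKT":    "Marketing",
    "EPI":    "Epidemic control",
    "HW":     "Health and wellness",
    "HW.NUT": "Health and wellness: nutrition",
    "HW.WGT": "Health and wellness: weight loss",
}

def subpurposes(prefix: str) -> dict[str, str]:
    """Return the narrower purposes nested under a broad category,
    e.g. subpurposes("HW") yields the nutrition and weight-loss entries."""
    return {code: label for code, label in PURPOSE_CODES.items()
            if code.startswith(prefix + ".")}

print(subpurposes("HW"))
```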

Once the stated purpose of the data collection has been achieved, it would be the collector/processor’s responsibility to destroy the data. That would alleviate some of the long-term-retention concerns that civil rights and privacy advocates rightly raise.
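A rough sketch of that retention rule, again with invented names and assuming each stored data set is tagged with the purpose it was collected for, might look like this:

```python
from dataclasses import dataclass

@dataclass
class StoredDataSet:
    dataset_id: str
    purpose_id: str            # the declared purpose this data was collected for
    purpose_fulfilled: bool = False

def purge_fulfilled(datasets: list[StoredDataSet]) -> list[StoredDataSet]:
    """Destroy (here, simply drop) every data set whose stated purpose has been
    achieved; retain only what still serves a declared, unfulfilled purpose."""
    retained = []
    for d in datasets:
        if d.purpose_fulfilled:
            print(f"destroying {d.dataset_id} (purpose {d.purpose_id} achieved)")
        else:
            retained.append(d)
    return retained
```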

Does this concept seem too restrictive? I don’t think so. Placing the burden on the companies collecting the data is a concept worth considering – especially since they’re currently making money on that data without any restrictions on what’s being collected. And if firms begin contemplating this now, they may avoid being caught flat-footed when the knock at the door is followed by the familiar salutation: “Hello, we’re from the government and we’re here to help you.”

I would love to hear your opinion on this. Please feel free to contact me here and share your ideas!

Read more on this topic by downloading the Beyond insurance: Embracing innovation to monetize disruption report.
