In my last two posts, we talked about the legislative updates from Washington on driverless vehicles and the curious ecosystem emerging among auto manufacturers, tech companies, and third parties.

There have been many exciting developments in self-driving cars this year, and the majority of the headlines are upbeat and hopeful. But as you dive deeper into the research, as I have, you will eventually discover a quieter debate that doesn't get mentioned as often. This emerging technology has also created a bit of a moral dilemma for experts, and some are devoting precious research hours to exploring it and seeking public feedback.

Take, for example, the Massachusetts Institute of Technology (MIT) project titled "The Moral Machine." Visitors to the site are greeted with an interesting welcome message: "The Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We generate moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, people judge which outcome they think is more acceptable." It is a fascinating exercise, and I highly recommend taking a few minutes to participate. Be sure to read the description of each scenario before "judging." Upon completion, the site provides each participant with feedback on how their moral judgments ranked.

Another academic institution, the University of Osnabrück in Germany, is conducting a similar study on whether self-driving vehicles can be programmed to make the big moral decisions, after creating a 'value of life' model for every object that could be involved in an accident.

According to IoT News, "Research participants were placed in the driver's seat of a virtual car heading down a suburban road before obstacles were presented in the two lanes ahead, and participants were asked which of the two they would save. To qualify for the test itself, participants had to complete a trial run in which they avoided pylons, in order to immerse themselves in the VR setting. Users were given a slow and a fast speed setting to make their decisions."
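To make the study's approach more concrete, here is a minimal sketch of what a 'value of life' decision rule might look like in code. Everything below, from the weights in VALUE_OF_LIFE to the choose_lane function, is a hypothetical illustration of the general idea, not the Osnabrück researchers' actual model.

```python
# Hypothetical sketch of a "value of life" decision model, loosely
# inspired by the Osnabrueck study's setup. The weights and names
# here are illustrative assumptions, not the researchers' real model.

# Assumed relative "value of life" scores for objects a car might hit.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,   # assumption: children weighted slightly higher
    "dog": 0.3,
    "pylon": 0.0,   # inanimate obstacles carry no life value
}

def lane_cost(obstacles):
    """Total 'life value' lost if the car hits everything in a lane."""
    return sum(VALUE_OF_LIFE.get(obj, 0.0) for obj in obstacles)

def choose_lane(left_lane, right_lane):
    """Pick the lane whose collision costs less: the lesser of two evils."""
    return "left" if lane_cost(left_lane) <= lane_cost(right_lane) else "right"

# Example: a child in the left lane vs. a dog and a pylon in the right.
print(choose_lane(["child"], ["dog", "pylon"]))  # -> "right"
```

The hard part, of course, is not the arithmetic but whether society can agree on the weights in the first place, which is exactly what these studies are probing.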

Joe Schneider, the managing director of KPMG Corporate Finance, questioned the relevance of these academic studies earlier this summer at the Super Regional P/C Insurer Conference in Lake Geneva, Wis.: “The drive toward autonomous driving is unlikely to be slowed by such moral dilemmas. That’s because while scenarios involving split second decisions and catastrophic consequences are worth considering, the odds of some of them happening are very low whereas the odds of automated driving saving many lives are very high.” 

This Fortune analysis, however, argued that insurers shouldn't be so quick to dismiss the importance of such a debate: "While significant technical challenges remain unsolved, AV technology is improving rapidly. Soon technological capability won't be the greatest impediment to adoption; societal friction will be. This friction will delay full autonomy for at least a decade, or however long it takes for the tech community to collaborate with policymakers, regulators, insurance providers, and consumer advocates to address the significant social, regulatory, and legal challenges AVs will create."

Self-driving cars are sure to stay in the headlines in the months and years to come. It will serve all of us, whether manufacturers, innovators, academics, insurers, or consumers, to be informed on all aspects of the debate and to participate in it, so that we can design a better future for all.
