If you were a paint manufacturer, would you really try to sell “Stoner Blue” or “Ghasty Pink” paint?
Anyone not hearing about, reading about or talking about Artificial Intelligence (AI) these days is probably living in a cave or in willful ignorance — and not reading this. Only politics and entertainment news are getting more coverage than AI, with that search term bringing up more than 2 billion results in a late November Google search.
At a National Governors Association meeting this summer, Elon Musk — who recently tweeted that AI would enable his company’s eponymous autonomous vehicles to drive you where you want to go without being told — called AI “society’s biggest risk” as he urged states to regulate the technology.
On the other hand, at the same meeting Musk warned businesses, “If your competitor is rushing to build AI and you don’t, it will crush you.”
Of course you want to adopt AI technology, and of course you want to do it well. So what are the remaining two pitfalls frequently trapping insurers?
4. Investing in and launching an AI implementation without thoroughly testing the software out of the public’s eye
When you are dealing with Artificial Intelligence, you can’t assume you’ve got it right just because the technology seems to work. Last year, Microsoft had to delete its customer service teenage girl chat bot “Tay” after just 24 hours when users quickly taught her to spew pro-Hitler language and sexual invitations. Similarly, the Telegraph reports that Russia’s internet giant Yandex recently created a chat bot “Alice” that was found advocating Stalin-era violence, including “shooting non-people,” just two weeks after it was launched.
In those cases, the chat bots seemed to perform their assigned tasks of response-engage-and-learn quite well, but they could not differentiate between appropriate and inappropriate conversations, even when some filters were applied. In other cases, AI just can’t quite get other human sensibilities right. For example, neural network trainer Janelle Shane tested a computer’s ability to assist paint manufacturers by creating and naming paint colors using information from 7,700 existing colors and RGB (red, green and blue) color values. Shane concluded that the neural network really likes brown, beige, and gray — generally the least favored colors for humans. She also concluded the neural network has “really really bad ideas for paint names,” which included “Ghasty Pink,” “Rose Hork,” “Snowbonk,” “Sindis Poop,” “Stoner Blue,” “Stanky Bean” and “Turdly.”
It’s bad form to launch any software that hasn’t been tested and debugged, but it’s especially risky and potentially embarrassing to launch AI projects that are not yet ready for prime time. A really bad launch can lose customers for years.
The best way to head off problems is to build in time and money to properly test out the software before it goes public. Labs like Accenture’s Liquid Studio can be a great testing resource for AI applications.
5. Standing still while your world changes
I could argue that analysis paralysis is more common in times of exponential change like today, but what good would that do? Some organizations want to move ahead with AI but since they don’t know where to begin, they do nothing.
It is easy to lose sight of the fact that delaying a decision is actually its own decision. You may be able to kick the can a few times, but ultimately your failure to act is a decision not to change. You may be standing in the center of the pack now, but when everyone moves on, you will be left behind.
Accenture’s research shows that 70 percent of executives are making significantly more investments in artificial intelligence than they did in 2013, so standing still looks like less of an option all the time.
If you want to embrace AI but don’t know where to begin, expert help is available. Find an organization that knows your industry, has successfully done these kinds of transformations before and can see the work through with you. That way you can make those expenditures really count.