When it comes to artificial intelligence, a lack of transparency in process and poor underlying data are two of the issues most hampering adoption in the boardroom. In AI: Black boxes and the boardroom, colleagues Tim Wright and Antony Bott examine how the resulting lack of trust can make companies wary of AI technology despite its many potential benefits, and outline some basic steps one can take to alleviate those concerns.
Famously dramatized by the disembodied voice of HAL in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, artificial intelligence has been the subject of humanity’s existential angst for decades. Although Elon Musk warns that those fears may be justified, one of the biggest pushes for advancing artificial intelligence to date has been to market it for purposes of day-to-day corporate efficiency. Nearly every IT vendor is seeking to make a name for its proprietary AI tool by offering AI as a Service to the majority of businesses that are not developing AI in-house but want to leverage the benefits of AI’s automated decision-making, data analytics and cost savings. In the business context, AI has yet to pose the threat of refusing to “open the pod bay doors,” but customers face the challenge of exposing the vendor’s AI to data amassed by their entire enterprise, thereby allowing the algorithms to learn from, and evolve based on, information that may be private, proprietary and heavily regulated. The most common solution to this conundrum is for customers to contract for ownership of all machine learning models, bots and other outputs that result from the AI’s presence in the customer’s environment and processing of customer data. But what are the implications of this “all for one” approach to ownership of the fruits of machine learning? What innovations in AI, if any, are lost when lessons learned are retained by a single entity?