The conversation around the pitfalls of artificial intelligence (AI) has shifted away from potential widespread job displacement and Elon Musk’s apocalyptic fearmongering to where it belongs: the challenge of ensuring ethics, privacy and security in the deployment of consumer-facing AI. What’s needed is a robust framework to guide organisations in tackling these issues, and that’s just what Kathryn Hume, an authority on enterprise adoption of AI, outlined at last year’s CLSA Investors’ Forum in Hong Kong.
Hume has spent the last six years working with Fortune 500 companies on the integration of AI and observed that “obstacles occur when these new tools hit the baggage and legacy of culture and process.” Chief among her insights is that “because the techniques that go into machine learning are based on empirical science, there’s an implicit assumption that the future will look like the past.” That assumption could “impinge against human values,” said Hume, since “many normative values want the future to be different from the past.”
The devil is in the data
So how do these biases arise? Automated customer-engagement functions have until now been governed by clear rules programmed by humans; machine-learning systems, by contrast, use data to work out on their own the best way to achieve outcomes defined by their builders. In that scenario, it is crucial both that the outcomes are carefully defined to align with our values and that the training data has been cleansed of inherent biases.
Even the likes of Amazon have suffered setbacks in this regard, with the company recently having to scrap an experimental machine-learning recruiting tool that showed a bias against hiring women. The problem stemmed from tainted data, with the system trained to vet applicants by observing patterns in resumes submitted to Amazon over a 10-year period. Owing to male dominance across the tech industry, most of the applicants were men, and thus, if left unchecked, the tool would have perpetuated that imbalance.
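The mechanism behind such failures can be made concrete. The sketch below uses entirely hypothetical numbers to show how a naive model that simply learns patterns from skewed historical hiring data will reproduce that skew when scoring new, equally qualified candidates:

```python
# A minimal sketch (hypothetical data) of how historical bias leaks into a
# learned model: a naive "model" that scores candidates by the hire rate
# observed for similar past applicants simply reproduces the imbalance.
from collections import defaultdict

# Hypothetical applicant history: (group, was_hired).
# Most past applicants are men, and past hires reflect that skew.
history = ([("M", True)] * 60 + [("M", False)] * 25 +
           [("F", True)] * 5 + [("F", False)] * 10)

def fit_hire_rates(records):
    """Learn the empirical hire rate per group -- the 'pattern'
    an unchecked system extracts from the past."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

rates = fit_hire_rates(history)
# Two equally qualified new candidates receive different scores
# purely because the past looked this way.
print(round(rates["M"], 2))  # 0.71
print(round(rates["F"], 2))  # 0.33
```

This is of course a caricature of a real recruiting model, but the dynamic is the same: optimising for "match the historical outcome" bakes the history's imbalance into every future prediction unless the data or the objective is corrected.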
The bright side of AI
When done right, however, AI can go so far as to help organisations move in a positive direction. A case in point is a job-applicant screening tool developed by Scotiabank with Hume’s input, helping recruiters overcome unconscious barriers and hire the best talent.
While the risks are significant, AI can clearly be adopted responsibly. What’s needed, stressed Hume, is to “break down these big-picture issues into a set of local risk-reward judgement calls so that enterprises can make progress and innovate with the technology and not get stuck under the miasma of large, ambiguous, hard-to-figure-out questions around what the technology can do and what the risks are.”
For more insights into critical issues, follow us on LinkedIn and subscribe to CLSA’s monthly newsletter.