Will my loan application be rejected? Will my job application fail screening checks? Will my insurance premiums go up? Around the world, life-changing questions such as these are increasingly being answered by artificial intelligence (AI).
The assumption is that machines will come to fairer conclusions than humans by crunching data and spitting out dispassionate – and therefore unbiased – responses. But because they are trained on data provided, selected, annotated, entered and updated by humans, the reality is that AI systems are prone to prejudice. Baked-in biases that go unnoticed and unchallenged can be perpetuated and amplified, leaving even the experts at a loss to explain how an algorithm reached its conclusions.