AI isn’t biased. We are!

There is a persistent myth that bias is something we can “remove” from AI. It is promoted, often by organizations and publications that should know better, in articles like these:

“Controlling machine-learning algorithms and their biases” (McKinsey & Company, November 2017)

“Can We Keep Our Biases from Creeping into AI?” (Harvard Business Review, February 2018)

“AI bias will explode. But only the unbiased AI will survive” (IBM Research, March 2018)

These articles are wrong. Bias isn’t something that we can, or even should, eliminate from AI. The idea that there could be a bias-free AI at all is itself the product of a psychological tendency (another bias, in fact) called “naive realism”.

Naive realism is our common-sense social tendency to think that we see the world objectively and, worse, that the people who disagree with us are the ones who are biased. The myth of a bias-free AI comes directly from this bias.

I’m not minimizing the appalling biases found in some AI systems (see, for example: the AI beauty contest fail, the MIT/Stanford study of gender and skin-type bias in AI, and the machine learning system that amplified gender biases in images). Machine learning is mostly based on training data provided by people; it would be a surprise if it didn’t learn our biases.
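To make that concrete, here is a minimal sketch, using entirely synthetic data and hypothetical feature names (“skill”, “group”), of how a standard classifier trained on human-labelled examples absorbs the labellers’ bias:

```python
# A minimal sketch (synthetic data, hypothetical feature names) of a model
# trained on human-labelled examples reproducing the labellers' bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# "skill" is the genuinely job-relevant signal; "group" is a protected
# attribute (0 or 1) that should be irrelevant to the outcome.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Simulated human labels: hiring decisions driven by skill, but with a
# penalty applied to group 1 -- the labellers' bias, baked into the data.
logits = 2.0 * skill - 1.5 * group
labels = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, labels)

# The model learns a large negative weight on the protected attribute:
# it has absorbed the bias from the labels, not invented it.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```

The negative weight on the protected attribute is nothing the algorithm invented; it was recovered, faithfully, from the labels people supplied.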

The point was well made to the UK’s House of Lords AI Committee, in April 2017, by Konstantinos Karachalios, Managing Director of the IEEE Standards Association:

“You can never be neutral; it is us. This is projected in what we do. It is projected in our engineering systems and algorithms and the data that we are producing. The question is how these preferences can become explicit, because if it can become explicit it is accountable and you can deal with it. If it is presented as a fact, it is dangerous; it is a bias and it is hidden under the table and you do not see it. It is the difficulty of making implicit things explicit.”

The strength of AI, and it is a strength, is that it can reveal these biases by making them visible to everyone, on all sides. We need to reframe bias: it is not something to avoid but something to expose, as part of our accountability.
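Continuing the synthetic sketch above, exposing the bias can be as simple as comparing the model’s selection rates across groups (a basic demographic-parity audit). Running the same audit on the original human labels shows the gap was there all along; the model just made it measurable:

```python
# Continuing the sketch above (model, X, group, labels defined there).
# Because the model's behaviour is inspectable, the hidden preference can be
# made explicit by comparing selection rates per group.
preds = model.predict(X)

rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
print(f"selection rate, group 0: {rate_0:.2f}")
print(f"selection rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap:  {rate_0 - rate_1:.2f}")

# The same audit on the human labels shows the bias predates the model.
human_gap = labels[group == 0].mean() - labels[group == 1].mean()
print(f"gap in the original human labels: {human_gap:.2f}")
```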


Tell me what you think: @turalt, stuart.watt@turalt.com

Dr. Stuart Watt has a PhD in the psychology of social intelligence and develops AI technologies that use psychological insights into organizational processes to improve email practice.

He is CTO of Turalt, a Toronto-based AI company using cognitive and psycholinguistic models to tackle the negative impact of miscommunication in business.

Image copyright: aleutie / 123RF Stock Photo