Opinion

Risky bias in artificial intelligence

Machine learning is intrinsically biased. Mary-Anne Williams explains what can be done about it to reduce risks.

Why does the digital assistant Alexa giggle and speak of its own volition at random times throughout the day and night? Alexa is clueless as to why it does this, and Amazon cannot explain the bizarre – some say creepy – behaviour either.

Welcome to your AI-enabled future.

Artificial intelligence that can enhance and scale human expertise is profoundly changing our social and working lives, controlling how we perceive and interact with the physical and digital world.

We live in the Age of AI. It’s a time of unprecedented and unstoppable disruption and opportunity, where individuals, businesses, governments and the global economy progressively rely on the perceptions, decisions and actions of AI.

Machine learning, the dominant approach to AI today, has several scientific challenges holding it back from widespread adoption and truly transforming life as we know it. One of them is the “Opacity Problem”.

Machine learning cannot explain itself. It lacks awareness of its own processes, and therefore cannot explain its decisions and actions. Not being able to ask “why” is a serious and escalating problem as machine learning algorithms increase the scale and scope of their impact on our lives and future opportunities.

We must develop robust solutions to the Opacity Problem because machine learning algorithms have been found to be biased – indeed outright racist and sexist in some cases.

It turns out machine learning is intrinsically biased. The ability to discriminate sensory information is critical for intelligence, but at the same time bias can lead to unethical or illegal outcomes.

You can probably tell the difference, but AI might struggle to tell dogs and towels apart.

Machine learning systems learn to be biased: they learn to discriminate between inputs, for example distinguishing images with melanoma from images without it, often outperforming humans in accuracy and scale.

Machine learning models simply encapsulate the data they are presented with. Without a well-designed bias that leads to accurate prediction, machine learning makes critical mistakes: “False Positives”, such as predicting melanoma where there is none, and “False Negatives”, such as failing to predict melanoma when it is present.
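To make the distinction concrete, here is a minimal Python sketch – with made-up labels rather than real diagnostic data – that counts both kinds of error for a hypothetical melanoma classifier:

```python
# Hypothetical labels and predictions for a melanoma classifier (illustration only).
actual    = [1, 0, 1, 0, 0, 1, 0, 1]   # 1 = melanoma present, 0 = melanoma absent
predicted = [1, 1, 0, 0, 0, 1, 0, 1]   # the model's output for each image

# A false positive predicts melanoma where there is none;
# a false negative misses melanoma that is actually present.
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(f"False positives: {false_positives}")   # 1
print(f"False negatives: {false_negatives}")   # 1
```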

There are three primary sources of bias in machine learning: the data, the training process and the algorithm itself. The data used to train a model is often biased – this can happen as a result of human biases embedded in the assumptions behind, or the historical selection and preparation of, the data sets.

Bias also arises when a data set is simply too small, too narrow in scope or too unrepresentative to build a robust model. Machine learning can then amplify the bias inherent in the data by over-focusing on it.
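A toy example shows why unrepresentative data is so dangerous. In the sketch below – synthetic labels, not a real study – a model that simply predicts the majority class looks 95 per cent accurate on data resembling its skewed training set, yet is no better than a coin toss on a balanced test set:

```python
# Synthetic labels only: an illustration of how skewed data hides bias.
skewed_labels   = [0] * 95 + [1] * 5    # 95% of training examples belong to class 0
balanced_labels = [0] * 50 + [1] * 50   # a representative test set

# A "model" that has over-focused on the data: it always predicts the majority class.
majority_class = max(set(skewed_labels), key=skewed_labels.count)

def accuracy(labels, constant_prediction):
    """Fraction of labels matched when every prediction is the same class."""
    return sum(label == constant_prediction for label in labels) / len(labels)

print(f"Accuracy on data like the skewed training set: {accuracy(skewed_labels, majority_class):.0%}")    # 95%
print(f"Accuracy on a balanced test set:               {accuracy(balanced_labels, majority_class):.0%}")  # 50%
```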

Currently, machine learning’s predilection for bias can make it dangerous, because it may not be clear when machine learning algorithms will fail. Sometimes, failures may occur in weird and mysterious ways, like confusing dogs with muffins, towels or fried chicken.


Such failures have led to innovations like adaptive adversarial machine learning algorithms that learn by competing against each other. This technique was used to train the deep learning system AlphaGo Zero, which surpassed the world’s best Go players. From a computational complexity perspective, Go is much harder than chess.

AlphaGo Zero is notable because it was not trained with a database of human moves, but by playing against itself over a period of three days.

This technique can also be used for malicious purposes to “fool” machine learning algorithms. Cybersecurity risks can occur if a malicious adversarial algorithm learns to manipulate the data input to other algorithms by exploiting their vulnerabilities, compromising the security of an entire system.
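The sketch below gives a flavour of how such an attack can work, using a hypothetical linear classifier rather than any real deployed system: a small, targeted nudge to the input – in the spirit of the well-known “fast gradient sign” family of attacks – is enough to flip the model’s decision.

```python
import numpy as np

# A hypothetical, already-trained linear model (weights and bias are made up).
weights, bias = np.array([2.0, -1.5, 0.5]), 0.1
x = np.array([0.4, 0.2, 0.7])   # a legitimate input the model classifies as positive

def predict(features):
    """Probability of the positive class under logistic regression."""
    return 1.0 / (1.0 + np.exp(-(weights @ features + bias)))

# For a linear model, the gradient of the score with respect to the input is the
# weight vector itself, so stepping against sign(weights) pushes the score down.
epsilon = 0.3
x_adversarial = x - epsilon * np.sign(weights)

print(f"Original prediction:    {predict(x):.2f}")              # ~0.72 -> class 1
print(f"Adversarial prediction: {predict(x_adversarial):.2f}")  # ~0.44 -> flipped to class 0
```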

The risks associated with machine learning in terms of scope, scale, severity and likelihood are high, and they amplify the urgent need for Explainable AI (XAI). Having recognised the need for “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing”, the EU has introduced new laws that protect people’s right to an explanation.

Mary-Anne Williams

Director of the UTS Magic Lab

Distinguished Professor Mary-Anne Williams FTSE is Director of the UTS Magic Lab, a Fellow in the Centre for Legal Informatics, and Co-Founder of the AI Policy Hub at Stanford University.

She is a leading authority on AI, explainable AI and social robotics with transdisciplinary strengths in law, strategic management, disruptive innovation and entrepreneurship.

She is a non-executive director of the US-based Scientific Foundation KR Inc, was Conference Chair of the International Conference on Social Robotics in 2014, and serves on the Editorial Board for AAAI/MIT Press, Information Systems Journal, Artificial Intelligence Journal, International Journal of Social Robotics and the ACM Award Committee for Humanitarian Contributions within Computer Science and Informatics.