STEM and society

Why we need to think about diversity and ethics in AI

12 January 2022

Dr Catherine Foley AO PSM FTSE FAA

Australia's Chief Scientist

The world is in the midst of a digital revolution. Artificial intelligence, machine learning and quantum technologies are accelerating quickly.

These technologies will transform every aspect of our lives, and I am enormously excited about their potential. But we should take great care in the way we move forward.

It’s crucial that issues of ethics and diversity top the list of considerations, not only for government but for everyone working in the field of AI and algorithm development.

This is an important area for research and for professional development to ensure that the opportunities opened up by artificial intelligence don’t set off unwanted consequences that get away from us. We’ve all seen where that can lead in the case of social media.

Digital discrimination

Diversity is a crucial part of the equation as we adopt these technologies, for reasons more far-reaching than we often consider. We need to make sure machine-learning systems work with data that reflects the full human experience and treats everyone justly.

If the data used to train the algorithms is only partially accurate, reflects only some parts of our society, or indeed reflects inbuilt structural inequalities, the output will be wrong. This can entrench bias and disadvantage in insidious and unintended ways. It also offends science, where the aim is accuracy.

Skin colour is a case in point. We already know that deep-learning algorithms are poor at identifying the sex of people with darker skin, women especially. In experiments at MIT, major facial recognition technologies misidentified darker-skinned women as much as one-third of the time, while correctly identifying lighter-skinned men more than 99 per cent of the time.
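
This kind of gap is straightforward to surface if accuracy is reported for each demographic group rather than as a single headline figure. Below is a minimal sketch in Python of such a disaggregated check; the data, column names and groupings are purely illustrative and not drawn from any particular benchmark or product.

```python
# Illustrative sketch: measure a classifier's accuracy separately for each
# demographic subgroup, rather than quoting one overall number that can hide
# large disparities. All data and column names here are hypothetical.
import pandas as pd

# Assumed evaluation set: one row per image, with the model's prediction,
# the ground-truth label and the attributes used for the audit.
results = pd.DataFrame({
    "predicted_gender": ["F", "M", "M", "F", "M", "F"],
    "true_gender":      ["F", "M", "F", "F", "M", "M"],
    "skin_tone":        ["darker", "lighter", "darker", "lighter", "lighter", "darker"],
})

results["correct"] = results["predicted_gender"] == results["true_gender"]

# Disaggregated accuracy: the figure that matters is the worst-performing
# group, not the average across everyone.
print(results.groupby(["skin_tone", "true_gender"])["correct"].mean())
print("Overall accuracy:", results["correct"].mean())
```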

Readers might remember the odd results when an AI algorithm tried to reconstruct Barack Obama’s face from a pixelated image: he turned out white.


At one level this inaccuracy is obvious. At another it is too often ignored. The approximations that sit underneath any science are well known to those of us who work in scientific fields. But they’re not always considered carefully enough when we spin off into practical applications.

Defective data

One of the problems is that algorithms are learning using flawed datasets – datasets which contain inherent biases because of inequities in our society – in employment outcomes, incomes, crime statistics and so on.

Or they are built on one sector of society, for example where health information gathered from a particular group isn’t easily extrapolated to others. An algorithm might be excellent at predicting heart attack survival in men in Sydney’s eastern suburbs, but relatively poor at predicting risk in their female partners, or in men from Singapore or São Paulo.

AI can entrench these patterns. It is not only about flawed data; there is also a question around the parameters for learning. What criteria is the algorithm using?

In hiring decisions, for example, we don’t jump to the conclusion that men make the best CEOs simply because the data shows that they make up 95 per cent of the CEOs of the top 200 companies listed on the Australian Securities Exchange.

But while we apply our human filter to the information, the machine learns that CEOs are men.
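
To make that concrete, here is a toy sketch using entirely synthetic data; it is not drawn from any real hiring system. A model fitted to historical appointment outcomes in which men were overwhelmingly favoured ends up assigning weight to gender itself, which is exactly the pattern a human reviewer would discount.

```python
# Toy illustration with synthetic data: a model trained on biased historical
# outcomes learns gender as a predictor, even though candidates' underlying
# competence is identically distributed across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
is_male = (rng.random(n) < 0.5).astype(float)   # balanced candidate pool
competence = rng.normal(size=n)                 # same distribution for everyone

# Historical "appointed as CEO" labels reflect past bias, not competence alone.
appointed = (competence + 3.0 * is_male + rng.normal(size=n)) > 3.0

print("Share of historical appointees who are men:", is_male[appointed].mean())

model = LogisticRegression().fit(np.column_stack([competence, is_male]), appointed)
print("Learned weights [competence, is_male]:", model.coef_[0])
# The large positive weight on is_male is the bias the machine has "learned".
```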

Machine learning and human rights

This is why it’s important for engineers and researchers to be involved in this conversation at the individual level – engineers are pivotal in designing our processes and systems that turn science into reality.

I know ATSE is active in this space. There is considerable work at the national and international level to strengthen transparency and governance of AI-based systems.


The European Union’s proposal sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses and lightly regulates less risky AI systems.

Earlier this year, the Australian Human Rights Commission released its report on Human Rights and Technology, which stressed the importance of putting human rights at the centre of how new technologies are designed and used.

The Department of Industry, Science, Energy and Resources has released an AI Action Plan, supported by an AI Ethics Framework. It will guide businesses and governments to responsibly design, develop and implement AI.

It’s also good to see the focus of our research community on ethical AI. I was pleased to speak at the University of Melbourne’s Centre for AI and Digital Ethics, which has launched a new cross-disciplinary program aimed at building the legal profession’s capacity to respond to the challenges of emerging technologies.

I hope that all these initiatives will come together for a robust, sophisticated approach.

Breaking the mould

As AI becomes part of our lives, it’s imperative that algorithmic approximations don’t start to control or define the way we live.

To take the example of employment again, the use of AI to assess job applicants, and even to interview them, is likely to create new inequities. The well-off and the well-connected are less likely to be hired by algorithm, because personal contacts are the currency of the rich; it is everyone else who will be assessed by the machine.


Job interviews via AI platforms are a surprising and concerning practice. Bias is a long-term and significant barrier for women in the workforce, and I firmly believe that our hiring practices should be broader and encompass a much wider human experience than they have to date.

We need people from different cultural and socioeconomic backgrounds working in science and technology careers. We need people from different academic backgrounds, from the social sciences, to the arts, to design, philosophy and law. Bluntly, we need people who don’t fit the mould.

The Human Rights Commission has done a deep dive into how algorithmic bias can arise in the commercial world – where AI systems use incomplete and historical datasets to model the creditworthiness of certain groups of customers. Unsurprisingly, women, Indigenous people and young people are most likely to bear the brunt of the built-in biases.

“Internet-scale biases”

Chatbots are a high-profile example of flawed learning, and one that has attracted much attention. Trained on internet datasets, they develop what has been referred to as “internet-scale biases”.

The use of female voices for chatbots has also been the subject of considerable consternation, and understandably so, given the unfailingly polite, sometimes sexually playful, and always subservient way chatbots and AI assistants are trained to respond, even to inquiries that are abusive or sexist.

UNESCO has addressed this issue in an important report, I’d Blush if I Could, named for the response Siri gave to a sexist insult before the system was updated to the more neutral but still worryingly inadequate: “I don’t know how to respond to that.”

The issue with chatbots is more than entrenching bias and sexism. They are increasingly used in our interfaces with businesses, banks and institutions, even entering the arena of health, but the deep flaws in the way they are programmed to learn mean, as one researcher memorably describes it, that they “hallucinate”. They omit information and make things up.


Certainly, humans do this as well, but when machines do it the implications are more serious.

A quantum leap

The solution is not to turn back the clock. On the contrary, I believe that Australia must embrace digital technologies and scale up quickly – understanding there are more on the horizon, including quantum.

These new technologies have significant potential to improve productivity, solve complex problems, improve service delivery and so on. There are immense potential applications in medicine and other spheres. We have top research and some highly innovative thinking, and we need to seize the opportunity now, before we get left behind.

I also, however, want to see an equally urgent and simultaneous emphasis on diversity, accountability, transparency, reliability and safety. Initiatives to address these issues are underway in a number of countries, including Australia. We need transparency in the data and methodologies that underpin AI, and in the situations in which it is deployed.

AI systems should reliably operate in accordance with their purpose. We need to consider options for accountability, including human oversight and the ability to identify and hold accountable those responsible for the different phases of an AI system, and we need to think creatively about the auditing of algorithms.
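
As one example of what auditing an algorithm can involve, the sketch below implements a simple disparate-impact check: compare how often an automated system gives each group a favourable outcome, and flag large gaps for human review. The group names, data and the 0.8 threshold (a convention borrowed from employment-selection guidelines) are illustrative assumptions, not a prescribed standard for every setting.

```python
# Hedged sketch of a basic algorithmic audit: a disparate-impact ratio that
# compares each group's rate of favourable outcomes against the best-treated
# group. Groups and outcomes here are hypothetical.
def disparate_impact_ratios(outcomes_by_group: dict[str, list[bool]]) -> dict[str, float]:
    """Ratio of each group's favourable-outcome rate to the highest group's rate."""
    rates = {group: sum(outcomes) / len(outcomes)
             for group, outcomes in outcomes_by_group.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

audit = disparate_impact_ratios({
    "group_a": [True, True, True, False, True],    # 80% favourable outcomes
    "group_b": [True, False, False, False, True],  # 40% favourable outcomes
})
print(audit)
# Ratios well below ~0.8 are a conventional red flag worth investigating,
# not proof of wrongdoing on their own.
```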

People working in the sector should operate by clear professional standards, and the digital workforce must include social scientists, ethicists and others with insight on these issues.

Diversity across culture, sex, gender, age and life experiences will ensure that we are properly reflected, as a global community, in the emerging technologies.

Dr Catherine Foley AO PSM FTSE FAA

Australia's Chief Scientist

Dr Foley is Australia’s Chief Scientist and was formerly Chief Scientist of the CSIRO. She holds a PhD in physics and contributed to the development of white light-emitting diodes for low-energy household lighting. Dr Foley has won a multitude of honours, including a Clunies Ross Award, and is committed to advancing gender equality and diversity in the science sector.