Is artificial intelligence prone to prejudice? Why and how machines display human bias

IMI Public Lecture, University of Bath
Time: 13 April, 3.15pm – 4.05pm

On Friday 13 April, Associate Professor Joanna Bryson of the Department of Computer Science at the University of Bath will explore how artificial intelligence may be prone to human prejudice.

Abstract: Machine learning exploits patterns in existing data to create artificial intelligence (AI).

Research conducted by Dr Joanna Bryson in collaboration with Aylin Caliskan and Arvind Narayanan from Princeton University has shown that applying machine learning to text taken from the Internet results in AI with human-like semantic biases.

These biases can be morally neutral, as toward insects or flowers; problematic, as toward race or gender; or simply veridical, reflecting the status quo distribution of gender with respect to careers or first names.

Their finding indicates that ordinary written and spoken language contains recoverable and accurate imprints of our historical biases. It also holds promise for identifying and addressing sources of bias in our culture and technology.
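The published study measured these associations with the Word Embedding Association Test (WEAT), which compares how strongly two sets of target words (for example, flower names versus insect names) associate with two sets of attribute words (pleasant versus unpleasant terms) in a word-embedding model. The sketch below is a minimal, illustrative version of that kind of test, not the authors' code: the word lists are abbreviated and the vectors are random stand-ins rather than embeddings trained on real web text.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B, vec):
    """s(w, A, B): mean similarity of word w to attribute set A minus to set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Effect size comparing target sets X and Y on their differential
    association with attribute sets A and B (Cohen's-d-style measure)."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Illustrative only: random vectors stand in for real embeddings
# (e.g. GloVe or word2vec vectors trained on web text).
rng = np.random.default_rng(0)
words = ["rose", "daisy", "ant", "moth", "pleasant", "lovely", "nasty", "awful"]
vec = {w: rng.standard_normal(50) for w in words}

d = weat_effect_size(
    X=["rose", "daisy"],          # target set: flowers
    Y=["ant", "moth"],            # target set: insects
    A=["pleasant", "lovely"],     # attribute set: pleasant words
    B=["nasty", "awful"],         # attribute set: unpleasant words
    vec=vec,
)
print(f"WEAT effect size: {d:.2f}")
```

With real embeddings, a large positive effect size would indicate that flower words sit closer to pleasant words than insect words do, which is the morally neutral example of bias mentioned above.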

In her talk, Dr Bryson will present her research on machine bias and discuss what it reveals about the origins of human biases, stereotypes, and prejudices. Joanna will also explore how implicit and explicit human biases account for bias in AI, and discuss when and how we may be able to use AI to address prejudice.

Speaker: Joanna Bryson

Dr Joanna Bryson is an Associate Professor in the Department of Computer Science at the University of Bath and an affiliate of the Center for Information Technology Policy at Princeton.

Joanna holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT.

Her research covers a broad range of topics, including artificial intelligence, autonomy, robot ethics, and human cooperation. She has worked in AI ethics since 1996 and helped author the UK Research Councils’ ‘Principles of Robotics’ in 2010.

She is an AXA Research Fund awardee, having been granted funding to conduct a series of unique experiments on how people interact with humanoid robots (http://www.bath.ac.uk/research/news/2017/10/12/humanoid-robot-tests-to-explore-ai-ethics). In the past year alone she has consulted for the Red Cross on autonomous weapons and for Chatham House on the impact of AI on the nuclear threat, and she is currently advising the British Parliament, the European Parliament, and the OECD on the regulation of AI.

To register for this talk, and for more information, please visit https://www.eventbrite.co.uk/e/is-artificial-intelligence-prone-to-prejudice-tickets-44402585328