Turing Lecture: Be prepared to show your working!

Increasingly, algorithms are shaping the way we see the world. They are being deployed to make decisions about sensitive parts of our lives, from our eligibility for a loan to the length of our sentence if we commit a serious crime. But how does algorithmic decision-making work, and how can we know how decisions are made and whether they are fair?

The demand for transparency, validation and explainability of automated advice systems is not new. Back in the 1980s, extensive discussions were held between proponents of rule-based systems and those favouring statistical approaches, partly over which were more transparent and how they should be evaluated. More recently, Onora O’Neill’s emphasis on demonstrating trustworthiness, and her idea of ‘intelligent transparency’, has focused attention on the ability of algorithms to show their workings when required.

In this talk, Professor Spiegelhalter will argue that we should ideally be able to check (a) the basis for the algorithm, (b) its past performance, (c) the reasoning behind its current claim, (d) the uncertainty around its current claim, and (e) whether these explanations are accessible to different levels of expertise. These ideas will be illustrated by the Predict system for women choosing follow-up treatment after surgery for breast cancer, which provides four levels of explanation of its conclusions.
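
To make the checklist above concrete, here is a minimal Python sketch of how a prediction might be bundled with items (a) to (e). All class names, field names and figures are hypothetical illustrations for this announcement, not the actual design, data or API of the Predict system.

```python
from dataclasses import dataclass, field


@dataclass
class ExplainedPrediction:
    """A prediction carrying items (a)-(e) of the checklist.

    Every name and number here is illustrative only; it is not a
    description of how Predict is implemented.
    """
    estimate: float                    # the claim itself, e.g. a survival probability
    interval: tuple                    # (d) uncertainty around the current claim
    basis: str                         # (a) what the algorithm was built on
    past_performance: str              # (b) how it has performed when checked
    reasoning: str                     # (c) why this claim follows for this case
    # (e) the same explanation pitched at different levels of expertise,
    # from a one-line summary to a technical account.
    layered_explanations: dict = field(default_factory=dict)

    def explain(self, level: str = "summary") -> str:
        """Return the explanation at the requested level of expertise."""
        return self.layered_explanations.get(
            level, self.layered_explanations.get("summary", "")
        )


# Hypothetical usage; the figures are invented for illustration.
prediction = ExplainedPrediction(
    estimate=0.82,
    interval=(0.78, 0.86),
    basis="Statistical model fitted to a large historical patient cohort.",
    past_performance="Calibration and discrimination checked on independent data.",
    reasoning="Driven mainly by the recorded characteristics of this case.",
    layered_explanations={
        "summary": "About 82 in 100 people like this are expected to do well.",
        "detail": "The estimate of 82% (78-86%) reflects the main risk factors.",
        "technical": "Model coefficients and validation results available on request.",
    },
)
print(prediction.explain("summary"))
```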

For further information please visit: https://www.turing.ac.uk/events/turing-lecture-be-prepared-show-your-working