
Turing Talk 2019

Graham Patterson, Principal Consultant at Gemserv, reflects on this year's Turing Talk, held on 20 February 2019.

A highlight at the start of each year is the arrival of one of the main BCS lectures, the Turing Talk. Jointly hosted by BCS and the IET in the impressive IET headquarters and lecture hall, it is always an interesting evening. This year's subject was Artificial Intelligence (AI) and Machine Learning (ML), which was of particular interest to me given that Gemserv provides services to help clients ensure they are using AI ethically.

The 2019 Turing Talk was titled ‘Engineering a fair future: Why we need to train unbiased AI’. The lecture was given by Dr Krishna Gummadi, Head of the Networked Systems Research Group at the Max Planck Institute for Software Systems.

Dr Gummadi’s talk concentrated on the challenges of using current AI and specifically excluded problems associated with future developments such as self-driving cars or autonomous robots. He attempted to tackle the following foundational questions about man-machine decision-making:

  •  How do machines learn to make biased or unfair decisions?
  •  How can we quantify and mitigate bias or unfairness in machine decision-making?
  •  Can machine decisions be engineered to help humans mitigate bias or unfairness in their own decisions?

As you can tell, there was plenty to think about. Increasingly, ML is being used in areas such as social benefits and policing policy, drawing on large quantities of existing data to produce the algorithms that decide, for example, where to concentrate policing for the best outcomes, or how to sentence offenders to reduce recidivism. This has moved AI/ML away from defined rules and target outcomes into the realm of opinion.

There has been notable success in using AI/ML to uncover hidden links but, referencing criminal justice and policing examples, Dr Gummadi argued that as well as making sure the algorithms are developed in an unbiased way, we need to examine closely the data being used to train systems. If there is bias in the base data, which is commonly the case where human decision-making has been involved, we will end up training the AI/ML to be biased.

One of the most powerful examples given was the planning of drug-enforcement policing in Oakland, USA, where neighbourhoods are divided along strong racial lines. The data used to train the planning algorithms was based on offences recorded in previous years, during which the police had taken the view that predominantly black areas had the bigger drug problem. More patrols meant more recorded drug crime in the predominantly black areas, even while non-police surveys were showing an even spread of drug use between the predominantly white and predominantly black areas. Using the historical conviction data for analysis meant the previous bias was simply reinforced.
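To make that feedback loop concrete, here is a small Python sketch of my own (an illustration, not anything presented in the talk). Two areas have identical true drug-use rates, but patrols are allocated in proportion to historically recorded offences, and recorded offences in turn track patrol presence:

    # Toy feedback-loop simulation (illustrative assumption, not from the talk).
    TRUE_RATE = {"A": 0.05, "B": 0.05}   # identical underlying drug-use rates
    recorded = {"A": 120.0, "B": 80.0}   # historical bias: area A over-recorded

    for year in range(1, 6):
        total = sum(recorded.values())
        # Patrols follow last year's records, not the (equal) true rates.
        patrols = {area: 100 * count / total for area, count in recorded.items()}
        # Recorded crime scales with patrol presence in each area.
        recorded = {area: patrols[area] * TRUE_RATE[area] * 100 for area in recorded}
        share_a = recorded["A"] / sum(recorded.values())
        print(f"Year {year}: area A's share of recorded offences = {share_a:.0%}")

Run it and area A's 60% share of recorded offences persists year after year despite the equal true rates: an algorithm trained on these records would simply learn the historical patrol pattern.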

Thus, there is a risk that bias from previous human decisions is carried forward into the AI algorithms, so that stereotypes become ingrained in the systems we build going forward. We need to ensure that the algorithms that are developed are unbiased. More of the big tech companies now have ethics groups looking at this, but we also need to make sure the data used for teaching the system is unbiased, which can be more difficult.

One idea Dr Gummadi proposed was that we could potentially train AI systems with synthetic data, even data deliberately biased towards the outcomes we desire, rather than continuing to reinforce the outcomes already in place. This was interesting, but it does assume we can agree on the desired outcomes, which is very difficult to achieve in many areas.
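One established technique in this spirit is "reweighing" (my example here, not something Dr Gummadi presented): give each training record a weight so that a protected attribute and the outcome label look statistically independent, effectively synthesising a less biased training distribution from the data you already hold. A minimal sketch, with made-up records:

    # Reweighing sketch (illustrative; records and groups are hypothetical).
    from collections import Counter

    # (protected_group, label) pairs; label 1 = positive outcome.
    data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

    n = len(data)
    group_counts = Counter(g for g, _ in data)
    label_counts = Counter(y for _, y in data)
    pair_counts = Counter(data)

    # Weight = P(group) * P(label) / P(group, label): over-represented
    # (group, label) pairs are down-weighted, under-represented ones boosted.
    weights = {pair: (group_counts[pair[0]] / n) * (label_counts[pair[1]] / n)
                     / (pair_counts[pair] / n)
               for pair in pair_counts}

    for (group, label), w in sorted(weights.items()):
        print(f"group={group}, label={label}: weight={w:.2f}")

After weighting, both groups show the same weighted rate of positive outcomes, so a model trained on the weighted data cannot simply reproduce the original correlation between group and label.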

The Turing Talk was an extremely interesting event, and Dr Gummadi made some very good points about the need to stop bias being inadvertently introduced into AI/ML systems through biased training data. This is just as important as scrutinising the algorithms themselves.

Ethics in AI is becoming one of the major considerations for companies that rely on being able to mine information from the data they hold. High-tech giants such as Facebook are now starting to set up groups, frequently in cooperation with academic institutions, to look at the area (https://newsroom.fb.com/news/2019/01/tum-institute-for-ethics-in-ai/), while both Google and Microsoft have launched their own AI principles (https://www.blog.google/technology/ai/ai-principles/ and https://www.microsoft.com/en-us/ai/our-approach-to-ai). Concern about how AI systems are trained will only grow louder, and the need for expert guidance with it.

The lecture was recorded and is available to view here. The Turing Talk was preceded by Nadia Abouayoub, from the financial sector, talking about "AI and us", which is also available on the recording.

If you would like to read more on AI bias, our Head of Privacy and Data Protection, Ivana Bartoletti, has written about discrimination in algorithms in a piece featured in Forbes here.

Article Author.

Graham Patterson

Principal Consultant
Graham joined Gemserv in September 2018 as part of the Digital Transformation team. Graham has over 35 years' experience working...
