Welcome to the AI in Education podcast

With Dan Bowen from Microsoft Australia and Ray Fleming from InnovateGPT

It's a fortnightly chat about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used. It's not too serious, or too technical, and is intended to be a good conversation of background information.

Of course, as well as getting it here on the website, you can also just subscribe to your normal podcast service:

“This podcast is produced by a Microsoft Australia & New Zealand employee, alongside an employee from InnovateGPT. The views and opinions expressed on this podcast are our own.”

Nov 6, 2019

This week Dan and Ray go in the opposite direction from the last two episodes. After talking about AI for Good and AI for Accessibility, this week they go deeper into the ways that AI can be used to disadvantage people and skew decisions. Often the line between 'good' and 'evil' can be very fine, and the same artificial intelligence technology can be used for either, depending on unwitting (or witting) decisions!

During the chat, Ray discovered that Dan is more of a 'Dr Evil' than he'd previously thought, and together they discover that people perceive 'good' and 'evil' differently when it comes to AI's use in education. This episode is much less focused on the technology itself, and instead concentrates on the outcomes of using it.

Ray mentions the "MIT Trolley Problem", which is actually two things! The Trolley Problem, the work of English philosopher Philippa Foot, is a thought experiment in ethics about whether to divert a runaway tram. The MIT Moral Machine, which builds on this work, applies the same judgements to driverless cars. The MIT Moral Machine website asks you to make the moral decisions and then confronts you with the consequences. It's a great activity for colleagues and for students, because it leads to a lot of discussion.

Two other links mentioned in the podcast are the CSIRO Data61 discussion paper, part of the consultation on AI ethics in Australia (downloadable here: https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/), and the Microsoft AI Principles (available here: https://www.microsoft.com/en-us/AI/our-approach-to-ai).