Welcome to the AI in Education podcast

With Dan Bowen from Microsoft Australia and Ray Fleming from InnovateGPT

It's a fortnightly chat about Artificial Intelligence in Education - what it is, how it works, and the different ways it's being used. It's not too serious, or too technical, and it's intended to be an easy conversation that gives you good background information.

Of course, as well as getting it here on the website, you can also just subscribe on your usual podcast service.

    

“This podcast is produced by a Microsoft Australia & New Zealand employee, alongside an employee from InnovateGPT. The views and opinions expressed on this podcast are our own.”

Mar 27, 2024

This is the season-ending episode for Series 7 - the fifteenth in a series that started on 1st November last year with the "Regeneration: Human Centred Educational AI" episode. And it's an unbelievable 87th episode for the podcast (which started in September 2019).

When we come back with Series 8 after a short break for Easter, we're going to take a deeper dive into two specific use cases for AI in Education. The first we'll discuss is Assessment, where AI creates both a threat and an opportunity. The second is AI Tutors, where the focus is more on how we can take advantage of the technology to improve learning support for students.

This episode looks at one key news announcement - the EU AI Act - and a dozen new research papers on AI in education.

News

EU AI Act
https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
The European Parliament approved the AI Act on 13 March, and much of it would make good practice guidance wherever you are. And if you're developing AI solutions for education, and there's a chance that one of your customers or users might be in the EU, then you're going to need to follow these laws (just as GDPR is an EU law but effectively applies globally if you're actively offering a service to EU residents).
The Act bans some uses of AI that threaten citizens' rights - such as social scoring and mass biometric identification (things like untargeted scraping of facial images from CCTV footage or the internet, emotion recognition in the workplace or in schools, and AI built to manipulate human behaviour) - and regulates everything else according to risk categories.

High Risk AI systems have to be assessed before being deployed and throughout their lifecycle.
The High Risk category includes critical infrastructure (like transport and energy), product safety, law enforcement, justice and democratic processes, employment decision making - and Education. So any AI used for decision making in education needs a full risk assessment, usage logging, transparency and accuracy - and guaranteed human oversight. Examples of decision making that would be covered include exam scoring, student recruitment screening, and behaviour management.
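To make those obligations concrete, here's a minimal Python sketch of a hypothetical exam-scoring service (every name, including the scoring function, is invented for illustration - the Act doesn't prescribe any particular implementation). The idea is that each AI suggestion is written to a usage log, and no mark is final until a human reviewer signs it off:

```python
# Hypothetical sketch: usage logging plus human-in-the-loop review for an
# AI-assisted exam scorer. Names and structure are illustrative assumptions.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(filename="ai_usage.log", level=logging.INFO)

@dataclass
class AISuggestion:
    student_id: str
    exam_id: str
    suggested_score: float   # advisory only, never the final mark
    model_version: str
    timestamp: str

def hypothetical_model_score(answer: str) -> float:
    return 0.0  # placeholder: a real system would call its scoring model here

def score_with_oversight(student_id: str, exam_id: str, answer: str) -> AISuggestion:
    suggestion = AISuggestion(
        student_id,
        exam_id,
        hypothetical_model_score(answer),
        "v1.0",
        datetime.now(timezone.utc).isoformat(),
    )
    # Append-only usage log, one sketch of the Act's record-keeping duty
    logging.info(json.dumps(asdict(suggestion)))
    return suggestion  # routed to a human marker, not published directly

def finalise_mark(suggestion: AISuggestion, human_mark: float, reviewer: str) -> float:
    # The human decision is authoritative; the AI output is only an input to it
    logging.info(json.dumps({"final_mark": human_mark, "reviewer": reviewer,
                             **asdict(suggestion)}))
    return human_mark
```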

General-purpose generative AI - like ChatGPT or Copilot - won't be classified as high risk, but it will still have obligations under the Act: clear labelling of AI-generated image, audio and video content; safeguards so it can't generate illegal content; and disclosure of the copyrighted data used for training.
But although general-purpose AI may not itself be classified as high risk, if you use it to build a high-risk system - like an automated exam marker for end-of-school exams - then that system is covered by the high-risk category.
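Going back to the labelling obligation for generative AI: here's a small sketch of one way a tool might attach a machine-readable "this was AI-generated" label to its output. The sidecar format and field names below are my own invention, not anything the Act specifies:

```python
# Hypothetical sketch: write a provenance sidecar file next to an
# AI-generated asset, so downstream consumers can see it was AI-made.
import json
from datetime import datetime, timezone
from pathlib import Path

def label_generated_asset(asset_path: str, model_name: str) -> Path:
    sidecar = Path(asset_path + ".provenance.json")
    sidecar.write_text(json.dumps({
        "ai_generated": True,                       # the disclosure itself
        "generator": model_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }, indent=2))
    return sidecar

# e.g. label_generated_asset("lesson_hero_image.png", "example-image-model")
```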

All of this is likely to become law by the middle of the year; prohibited AI systems will be banned by the end of 2024, and the rules for other AI systems will start applying from mid-2025.

Research
Another huge month. I spent the weekend reviewing a list of 350 new papers on Large Language Models, ChatGPT and related topics, published in the first two weeks of March, to find the ones that are really interesting for the podcast.
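(If you fancy semi-automating that kind of triage yourself, here's a rough sketch using the third-party `arxiv` Python package - the query and keyword list are illustrative guesses, and the shortlist below was curated by hand, not by this script.)

```python
# Rough sketch: pull recent LLM papers from arXiv and keep the
# education-flavoured ones. Requires `pip install arxiv`.
import arxiv

EDU_TERMS = ("education", "tutor", "student", "classroom", "exam")

client = arxiv.Client()
search = arxiv.Search(
    query='all:"large language model" AND all:education',
    max_results=100,
    sort_by=arxiv.SortCriterion.SubmittedDate,
)
for paper in client.results(search):
    text = (paper.title + " " + paper.summary).lower()
    if any(term in text for term in EDU_TERMS):
        print(paper.published.date(), "|", paper.title, "|", paper.entry_id)
```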

Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges

https://arxiv.org/abs/2401.08664

 

A Study on Large Language Models' Limitations in Multiple-Choice Question Answering

https://arxiv.org/abs/2401.07955

 

Dissecting Bias of ChatGPT in College Major Recommendations

https://arxiv.org/abs/2401.11699

 

Evaluating Large Language Models in Analysing Classroom Dialogue

https://arxiv.org/abs/2402.02380

 

The Future of AI in Education: 13 Things We Can Do to Minimize the Damage

https://osf.io/preprints/edarxiv/372vr

 

Scaling the Authoring of AutoTutors with Large Language Models

https://arxiv.org/abs/2402.09216

 

Role-Playing Simulation Games using ChatGPT

https://arxiv.org/abs/2402.09161

 

Economic and Financial Learning with Artificial Intelligence: A Mixed-Methods Study on ChatGPT

https://arxiv.org/abs/2402.15278

 

A Study on the Vulnerability of Test Questions against ChatGPT-based Cheating

https://arxiv.org/abs/2402.14881

 

Incorporating Artificial Intelligence Into Athletic Training Education: Developing Case-Based Scenarios Using ChatGPT

https://meridian.allenpress.com/atej/article/19/1/42/498456

 

RECIPE4U: Student-ChatGPT Interaction Dataset in EFL Writing Education

https://arxiv.org/abs/2403.08272

 

Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank

https://journals.lww.com/md-journal/fulltext/2024/03010/comparison_of_the_problem_solving_performance_of.48.aspx?context=latestarticles

 

Comparing the quality of human and ChatGPT feedback of students’ writing

https://www.sciencedirect.com/science/article/pii/S0959475224000215