Mar 27, 2024
This is the season-ending episode for Series 7 - the fifteenth in a series that started on 1st November last year with the "Regeneration: Human Centred Educational AI" episode. It's also, unbelievably, the 87th episode of the podcast (which started in September 2019).
When we come back with Series 8 after a short break for Easter, we're going to take a deeper dive into two specific use cases for AI in Education. The first is Assessment, where AI creates both a threat and an opportunity. The second is AI Tutors, where the focus is more on how we can take advantage of the technology to better support student learning.
This episode looks at one key news announcement - the EU AI Act - and a dozen new research papers on AI in education.
News
EU AI Act
https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
The European Parliament approved the AI Act on 13 March, and there's plenty in it that would make good practice guidance. If you're developing AI solutions for education and there's a chance that one of your customers or users might be in the EU, then you'll need to comply with it (just like GDPR, which is an EU law but effectively applies globally if you're actively offering a service to EU residents).
The Act bans some uses of AI that threaten citizens' rights - such as social scoring and mass biometric identification (things like untargeted facial scanning of CCTV or internet content, emotion recognition in the workplace or schools, and AI built to manipulate human behaviour) - and for everything else it relies on regulation by risk category.
High Risk AI systems have to be assessed before
being deployed and throughout their lifecycle.
The High Risk category includes critical infrastructure (like transport and energy), product safety, law enforcement, justice and democratic processes, employment decision making - and Education. So anyone using AI for decision making in education needs to carry out full risk assessments, maintain usage logs, be transparent and accurate, and ensure human oversight. Examples of decision making that would be covered include exam scoring, student recruitment screening, and behaviour management.
General generative AI - like ChatGPT or copilots - won't be classified as high risk, but it will still have obligations under the Act: clearly labelling AI-generated image, audio and video content; making sure it can't generate illegal content; and disclosing what copyrighted data was used for training.
But although general AI may not be classified as high risk in itself, if you then use it to build a high risk system - like an automated exam marker for end-of-school exams - then that system will be covered under the high risk category.
All of this is likely to become law by the middle of the year; prohibited AI systems will be banned by the end of 2024, and the rules for other AI systems will start applying by mid-2025.
Research
Another huge month. I spent the weekend reviewing a list of 350 new papers on Large Language Models, ChatGPT and related topics, published in the first two weeks of March, to find the ones that are really interesting for the podcast.
https://osf.io/preprints/edarxiv/372vr
https://arxiv.org/abs/2402.09216
https://arxiv.org/abs/2402.09161
https://arxiv.org/abs/2402.15278
https://arxiv.org/abs/2402.14881
https://meridian.allenpress.com/atej/article/19/1/42/498456
https://arxiv.org/abs/2403.08272
https://www.sciencedirect.com/science/article/pii/S0959475224000215