Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Dec 11, 2019

Dan and Ray have been keeping an eye on how the press writes about AI in education, and this week they talk about how the media cover AI-for-education projects, discussing their personal perspectives on the stories themselves as well as the way the media cover them. From "fears teachers could be replaced by robots" and "could robots do your job" to "exams could be replaced by AI in the future", there are plenty of headlines designed to get you to click and read, and in the podcast we go deeper than the headline.

 

The articles discussed are:

 

TRANSCRIPT FOR The AI in Education Podcast
Series: 1
Episode: 12

This transcript and summary are auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

This podcast discussion between Dan Bowen and Ray Fleming explores how the media is currently portraying Artificial Intelligence in education, often focusing on sensationalised headlines rather than balanced reporting. They critically analyse several articles, highlighting how the media's incentive to gain clicks frequently leads to misunderstandings and oversimplification of complex AI systems, such as the debate around digital surveillance in US schools. The conversation moves to the ethical implications of AI, covering issues like bias, transparency, and the potential for over-automation of tasks in the classroom, underscoring the need for educators to understand AI's true capabilities and limitations. Ultimately, the hosts conclude that while AI is a powerful tool with potential to support education, its value should be assessed beyond the hype, ensuring its application enhances the human element of teaching and learning.


Hi everybody, and welcome to the AI in Education podcast. I'm Dan Bowen
and I'm Ray Fleming. And guess what, Dan, I've just unwrapped a parcel. It's getting closer to Christmas, and our new podcast equipment has arrived. The good news is
from next week we'll have excellent sound. The bad news is we couldn't work out how to be in the same place today. So, we're doing this over a telephone call
and it's going to be fantastic. Right. So, today we're going to look at how the media is treating AI, look at some of our favorite headlines from the past couple of weeks and months, and have a look at what we think about those. We'll try to unpick them, because last week, if you remember Ray, we talked about some of the technologies underpinning AI and we went back to basics. But now, let's have a look at how the media has been portraying AI in education. What do you think? I think that's really important, because what we often see is education organizations doing projects that are sometimes misunderstood, and part of the misunderstanding comes from the way the media report. I don't know if you know, Dan, but I used to be a columnist for the Times Educational Supplement in the UK, and job number one was to get people to read your column. Now, that was in print, but imagine these days, where as a journalist your job is to get people to read just the headline and then click on the article. The purpose of media is sometimes just to get people to read. It may not be about telling the whole story.
That's a very good point. So, have you looked at any recently? Do you want to kick us off?
Oh, yeah. Okay. So, I was looking at an article from the Guardian from October, and it was talking about a project that was happening in the US. The headline is "Under digital surveillance: how American schools spy on millions of kids", and it was basically talking about some of the surveillance tools that schools use, for example to scan emails or social media posts of students. I think a lot of schools do things like that, maybe at a lower level, but it's become a new art form in the US because of the danger of school shootings. The reason it came to my attention was that US politicians have decided that every school should have a system in place to be able to digitally surveil their students, in an attempt to anticipate and intervene around violence and school shootings. The story goes on and talks about the kinds of things they are doing, and the way students might trigger an alert by writing a phrase. It talked about an example of a South Carolina middle school student who was writing about suicide during an in-class English assignment. Of course, the phrasing she used in the document set off an alert in the school system, which the school then responded to.
And so it's really interesting how the technology is being created and developed and applied in scenarios that are good, but, as in many scenarios, it kind of flips over into something that isn't quite as obvious. So you end up having too many alerts. It was talking about one school principal who was getting thousands of alerts overnight on things being said between students, and so how do they then draw a safe line for it, and how do they make it incredibly effective by being focused?
Do you think this might also be, and I'm just reflecting back on the podcast we did last week, when I fell down the pit of despair into BI rather than AI, if you remember. Listening to you articulating that back, do you think that's also possibly where AI under the hood could be used even more effectively? If there's a thousand messages coming through, surely AI should be able to pick those up. Possibly in some of these cases AI is being used as the trigger, but maybe not also being used to triage some of those signals.
I think you're right, because they talk about artificial intelligence, but then when you read some of the examples, you think there's not much intelligence in this. So the example was that the principal of a school district got sent 3,000 alerts from one system they were using, and it included things like students who used the word gay in a document and either sent it in an email or stored it in their folder, and it immediately created alerts. So I guess the intelligence bit in that case came from the principal, who got bored with being woken up at 3:00 a.m. by alerts, and so called in the boys that were using the inappropriate language and just said: find another way. Go and do it in a text message away from school, or use different language.
Yeah. And it does ring bells for me from when I was teaching, when we used to put filters in for the internet, and there was this big debate. I remember going to a school in Finland, and they totally unfiltered the internet, whereas in the UK we had quite strong filtering, and each week or day the IT manager would print out a list of the top things that students had been looking for, things like that. You know, it was a reporting system. There was no AI in it. It was just reporting thousands and thousands of websites that people had gone on to, and you had to manually trawl through to see if they were inappropriate or not.
And I guess that's where this story attracted my attention, because of that idea that if you're generating thousands of reports, the noise overwhelms the real thing that's behind it. So how do you get to the real problem? Because you're absolutely right, just blocking words doesn't work, and reporting words doesn't work, because you've got to understand context and all of those things. If we look at the use of AI in grammar, for example, the way it's helping people to write in a more grammatically correct way, that isn't just about "I see that word and I pull it out". It's actually about all of the context. So maybe V2, V3, V5 of these things is going to be what then provides valuable support. But I would say, based on my knowledge of teenagers, they're going to get round all of these things.
And you know, just from an educational point of view, students in, say, Year 12 might be analyzing the media, and they might be analyzing the effect movies have on suicide, or the way that movies project pornography, things like that. Some of those things are legitimate academic areas of study, and you'll get lots of flags out of context, which is quite interesting.
Yeah. And I guess it's part of that "just because you can doesn't mean you should" thinking with AI. Perhaps we've gone a little bit too early in the US by saying every school must have a system like this in place, if the system isn't really mature enough to deal with it. So what would be great is if, in the future, we continued to develop these things and made them more and more focused, so that they genuinely deliver benefit to everybody, and that's the time at which you might widen it out for everybody to use.
The thing I find fascinating about that article, though, is that school shootings in the US are something we all worry about, and I hadn't thought about it before, but that surely is a prime example of where you've got that balance of ethical use. If I had kids in a school in the US, I'm sure I'd be more than happy for the security systems which are on anyway in most schools, the CCTV that tracks people wandering on site for security purposes, to be going through an internal AI system to look for weapons or guns. A bit like we use in the trades, where you can pick up people using technologies, drills and things, that they're not actually trained for, and where they haven't got the right certifications to utilize a specific piece of equipment. It's quite interesting.
Yeah. And that kind of comes along to things like facial recognition, where there are good scenarios and there are poor scenarios to use it in. For example, the object recognition piece of AI, the ability to look at an image or a video and look for a particular object. The example you quoted there was: does the person holding this piece of equipment have the training to be able to use it? An example in universities and schools would be: if somebody is walking into the science lab, have they got their hair tied up? Are they wearing safety goggles? Are they wearing a coat? If they're not doing those three things, you know, that's a problem. So you could use it in that way. You just don't want to go all the way into some of the scenarios we're starting to see, where you're almost anticipating things. That drives me to the article you picked out, Dan,
around China and what they're doing with AI there.
And I know we're talking about China, and sometimes we use it as a bit of a bad example, where maybe they might be using and pushing the boundaries of society a little bit. There was an article that I picked out, from the Wall Street Journal in September this year, and the title was "Under AI's watchful eye, China wants to raise smarter students". Now, it was a video, and it's worth watching. We've put all the links in the show notes. But it was talking about a growing number of classrooms in China that are equipped with AI, through cameras and also through brainwave trackers
worn on your head, to kind of work out what your thoughts are, you know, whether you've actually got a cognitive load working, whether you're daydreaming or whatever it may be. But even down to the levels we've been using in the UK and Australia, with classroom temperatures and things like that. So I think, again like you said, when you look at the headlines, "brainwave trackers" is exactly what they want you to react to: you know, this is crazy. But actually, we've talked in our past podcasts about sensors and IoT and real-time data, and I think that's what jumped out to me. In this particular article, or video, they were talking about how, while many parents and teachers see these things as tools to improve, they've also become a little bit of a nightmare from the children's point of view. And I think, going to your point there about children tying their hair up to go into particular rooms and lessons and things like that, do you not think that in education we actually need to learn and make mistakes? You know, there's a lot of things around FAIL, your First Attempt In Learning, as an acronym. Do you not think the school is there to actually allow you to make mistakes and educate?
I do, because if I look back on my career, all the most important lessons I've learned through failing, and I suspect if I hadn't learned them through failing, I would still be making those same mistakes now. So that's definitely a point. I guess the question is, do we sometimes think about it from a technology perspective of, well, let's get rid of all the variables and then the result will be consistent every time? Perhaps that's where the thinking is in these projects around smarter classrooms: well, if we can remove the variables, it's going to improve things. But actually every student is a variable, and you can't homogenize them all. But if you can understand them better... so, brainwave trackers, gosh, I'm not sure what I think about that idea. But then the reality is, whenever I'm putting an agenda together for a conference or an event, I always know that I need to put the most interesting, interactive session on after lunch, because we all get the post-lunch dip. And so if I had brainwave trackers on everybody, I would know that after lunch things start to mentally slide a little bit. And you could talk to any teacher in the UK, and they will tell you the classroom they hate to be in on a Friday afternoon when it's windy and the rain is coming from the west, and how disrupted that class is. So I guess we have those things already built into our psyche. We know those things have to be done differently. Maybe by applying brainwave trackers to it, what we're able to do is say, well, we're no longer having to rely on the innate, built-in knowledge that you've gained from failing. Actually, you can look at the times when your students need pepping up, you can look at the times when you've got them in an optimal learning moment, and just deliver a more optimal learning experience throughout the course of the day.
Yeah, I think you're absolutely right with that, because there's the balance with personalization, which is almost like the panacea that everybody believes we need for education. Going back to the Ken Robinson industrial model of education, where we teach in batches, and actually trying to teach to every student, almost like the Netflix of education, where our education is personalized to ourselves. But then there's the opportunity to fail and to learn, and what our learning process is about is much more complicated than just steering people away from some of the issues that might happen, so that they don't learn. Because I think you're right. Myself, I believe that if you do fail... you know, if we were to helicopter-parent our own children, wrap them in cotton wool and not give them those experiences, then what do they lose, and what is gained by actually going through those processes and realizing life is hard? Classes and life and activities are often difficult to do. One of the things I like doing in keynotes recently, when I'm speaking to students, is talking about the karaoke culture we live in. Take learning a musical instrument: we watch TV and we see things that look very easy to do, but actually learning a musical instrument is a good example where you could use AI, I suppose, to listen to you and give you pointers on how to be better. But, you know, the learning process is hard. It's messy. And it's great fun.
The one place you don't want to fail, though, Dan, is in your high-stakes assessments, in your exams. So, I was reading a story in the Telegraph in the UK about exams. The headline was "Exams could be replaced by artificial intelligence in the future, private school chief predicts". This was from the beginning of October, and it was a story about GCSEs. So GCSE is the General Certificate of Secondary Education. It's the Year 10 exam equivalent, I guess, for Australia, and we don't do those anymore; we only do the Year 12 exams. But the GCSE is that knowledge check at the end of Year 10. It used to be really important. It was certainly important for me. The story was about the executive director of a group of leading private schools in the UK, who was talking about how those exams can be very dry. In fact, they said a very dry diet for some students, and that the number at the top end in private schools doing those exams is disappearing. And so they talked about what they described as the elite private schools trying new technologies to see what works, and using AI systems to replace the examination systems. What I thought was really interesting was that they're moving forward on what an assessment should look like, because perhaps the traditional assessment has passed its day, but they were experimenting in elite private schools. Wouldn't it be great if that trial was more representative of wider groups of students? Because I think you might have a certain cohort of people that go into those elite private schools that doesn't represent everybody else. So if you're going to experiment with AI and exams, it needs to be representative of everywhere and everybody, rather than just representative of somewhere like, I'm sure, Eton, which would be classified in that, and they've had what, eight or nine prime ministers of the UK come out of that single, quite small school.
So love the idea of taking assessment in new directions and using data and artificial intelligence in order to reduce the reliance on that final assessment and better able to assess all of the student skills. But my goodness, let's not just do it in a little group of elite schools. Let's make it much more inclusive.
Of course. I really do believe, though, and I know we don't want to branch into the formalized testing debate in these podcasts, that if you are using technology to understand and really learn what students are taking on board, whether that's their prior learning or things they've learned in your lessons, then surely it's good for that student, to drive deeper levels of thinking. At a romantic level, if students in your classroom are really driving, for example, scientific thinking, and we don't push them and push their boundaries, and excite them with biology, say, or really try to drive their thinking around chemical compounds in chemistry or whatever it may be, then they might not find the cure for cancer, because they might just get turned off by the subject. So the more we can personalize, the better. But when we're looking at exams, like the Cloud Collective said in one of our previous podcasts, the technology is there even to the level of mathematical notation. So it is something that I believe we all need to think about. Okay. And so that conflates two different things together, which I realize we've talked about but never really separated out. There's a need for students going through education today to understand AI, its implications and its ethics, in order that they are prepared for the workplace of the future. And then there's a separate topic, which is the use of AI in education in order to improve or support the education process. And actually, we've talked about both of those things, and we need to continue
to talk about both of those things going forward. But just in that area of AI to support the education process, it's not a black and white issue, is it? Because I know that you picked out an article around the ethical questions about AI in schools.
Yeah, I love this article, actually. It was from the Australian Association for Research in Education, by somebody called Erica Southgate. It was a really thoughtful article that was kind of taking stock of where we are, almost doing what we're doing in this podcast episode: looking across the landscape, seeing things that are happening, and predicting generally that AI is appearing in classrooms under the radar a little bit, in the sense that it's often overlooked. Some of the ethical aspects are often overlooked. The title of the article was "An ethical storm is brewing: it's about AI", in schools obviously, and there was a really nice quote at the beginning. It said: artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling a bit left behind, waking up in a world full of technology that feels more and more like magic. I love that little quote. But when we start to think about some of the initiatives that are happening, the article highlighted something from the Australian Department of Education: they released their first research report into AI and emerging technology in schools. Essentially, when we're looking at putting AI in, and I know we've talked about AI for good in our previous episodes, they were really highlighting the bias that might be in AI, and some of the pressing issues that were appearing around the black-box nature of AI, digital human rights issues, deep fakes, and the potential for a lack of independent advice for education leaders who are making decisions about the use of AI and machine learning in the classroom, which is, I suppose, the intended audience for our podcast here. So it's really interesting the way it all comes together.
Yeah. So that's really interesting, Dan, talking about that bias, because actually even the starting point of the research could introduce bias. I realized as you were talking about that, and I was just looking at it, that I actually contributed to the survey. I responded to the survey that was put out by the team, and one of the things that really surprised me was that they were thinking of AI in specific contexts: about manipulating students, about using data, about helping support particular groups of students. But one of the things they completely missed was something we talked about in a whole episode, which was accessibility: the way that AI is being used to make learning more accessible. And so it's really interesting, because everyone starts off with their own perspective on what AI might be. And actually, as I think we've learned over the last 12 episodes, AI is all kinds of different things supporting different scenarios. So, you know, I think this is great work, because it calls out that need for more expertise and for people to generally become more aware, but it also demonstrates just how wide the topic is.
Oh, totally. And you know, they came up with a framework based on the report, which is really good, and brought together things around the ethical principles and how you can design and implement this for public use, around that transparency and accountability. I think one of the nice quotes she mentioned in her article was that she wanted to avoid teachers and students using AI systems that feel more and more like magic, where educators are unable to explain why a machine made the decision that it did in relation to student learning. So it's very much around the systems being able to make fair calls and transparently explain any actions that have happened, and the people actually being accountable for the decisions at the end of the day. So it's about educating the community and the, here we go again, business decision makers, easy for me to say, in schools, to really try to drive innovation, but at a pace and with a transparency and accountability that is actually quite clear.
Yeah. And I think we've talked about this before: the ethical rules of AI, and fairness and accountability, and all of those things that sit within our framework for how you apply AI. There are different languages being used in different people's summaries of it. But it keeps coming back to that same point: how do you avoid bias? How do you make things more transparent? How do people have confidence in the decisions that are being made? And then, as organizations and individuals, how do we make sure that we are using it appropriately? Because if you're going to make a life-changing decision on somebody's behalf, then you're going to want to know that it's being made well, and you're going to want to be able to explain it to others. In the same way, if you're making a decision about whether a student gets into a particular university, that transparency makes it much easier for people to understand what's going on.
Absolutely. So, what about your next article? Right.
Well, I love this story, and it probably goes back to the clickbait nature of headlines that we talked about earlier. This story was from the Sydney Morning Herald on the 15th of September this year, and the headline was "Software never has a day off sick: fears teachers could be replaced by robots".
Robots. Yeah.
So, you know, here you go.
Oh my word, robots coming to destroy our jobs. And, let me be honest, in virtually every presentation I've given for the last year, I've started off with something about robots coming to destroy our jobs, in order that I can destroy the argument about robots coming to destroy our jobs. Because in many industries, technology is helping people to do things. In many industries, that technology and the march of AI is creating new jobs, in the same way that as we moved from the horse-drawn carriage to engine-driven vehicles, we actually created more jobs than were destroyed, even though it seemed like hundreds of thousands of jobs would disappear. But then the other thing is, I'm not very keen on the dialogue I often hear out of Silicon Valley, for example, which is: oh look, the teachers are variable; if you can replace the teacher with some AI, everything will be great. That simply misunderstands the role of the teacher, because a teacher isn't just somebody who is there as a childminder, or there to drop content into the head of a student; it's around the motivation of students. You know, I still remember my best teachers from when I was at school, and they weren't the ones that were best at delivering content. They were the ones that were best at motivating me to be a better student.
I do understand your sentiment with this, right, and it's nice to have opposing views sometimes, and I do agree with you fundamentally. But, you know, as an ex-teacher, you do a lot of menial tasks. When I was looking at this article after you shared it with me, there was one that jumped out, which was around the roll call in the morning. Now, you're a teacher, you're being paid to teach students subject knowledge and other things, obviously, around life lessons and whatnot. But in that roll call in the morning, where you get in at 8:30, there are really good positives: when the students were in the class you would make good relationships with them, and you could really talk about things, and sometimes kids would share some really personal things that were going on in their lives that morning. So I'm not saying it isn't an important time, but there were activities that I did as a teacher, marking being one of them, the roll call element another, lots of elements that can be done by machines so that you can spend time, and get better quality time, with students, you know.
And I think that argues for why AI, and the development of it, has got to involve more than just technology people. Because somebody outside of education, looking into a classroom, looking at a process, would say, ah, we can save you 25 minutes in the morning by getting rid of the roll call. But the reality is that that roll call is the point at which, maybe for the first time in that day, one child in that classroom, or five children in that classroom, talk to another adult, and it might be the only time they hear their name. And so you have to look at it and say, okay, so how do you improve the process whilst improving the outcomes? And the outcome isn't a ticked-off register. The outcome is that you've got the day started well with a group of students, and they feel engaged, and they feel
that somebody's caring for them. So that's the challenge, isn't it? As we start to say, oh, we could do this automatically, it's like, yeah, but there's a deeper thing there. And that's why, you know, I felt passionate, as you did, about getting this podcast started: because we can't just leave it to the technology people. We need to raise our awareness generally about what's happening here, so that we can all be part of the conversation going forward, rather than somebody jumping up and going, I've got an AI system that solves your problem. And I think the same if we go back to that conversation about exams. Exactly the same: marking of exams is a tedious process that maybe adds very little value for the student. Feedback on exams or assessments, telling you how you did and why things weren't great, or why things were awesome: that's the most important thing. We all remember when a teacher wrote "good work", or "here's where you went wrong", because you learned from that process. And so let's automate the things that should be automated, but let's keep the innate human abilities built into the system as well, so that we understand where that value is being added. Yes, totally. And how did that article summarize things in the end? Because, like you're saying, it is a clickbait article, it is very much around teachers being replaced by robots, and I think when I looked at it, it was bringing in unions and all kinds of professors and things like that. Where did it end up?
Well, I think the end of the article was my favorite quote. So it's the president of the New South Wales Teachers Federation. So that's the union that's representing teachers. And the quote was, "Technology is a tool. It's no more a replacement for teaching than it is for parenting." And that's absolutely right.
Yeah.
You know, giving a young child an iPad and getting them to watch a whole bunch of videos isn't parenting.
And so I love the point that technology is a tool to help us achieve more. How do we use it? It isn't just a replacement.
And that goes back to the core of what,
you know, we turn up to work for every day.
But there is an interesting point that they make in the article as well, and I'm going to bring it up anyway, even though it's a bit of a bombshell. After that quote, they talk about a lot of this technology being driven by technology companies who want to sell software and hardware, and who don't see schools as anything more than a market and students as anything more than a client, and that's not education. I think that's quite an interesting point, because there are a lot of tools and technologies out there where folks are focusing on student impact and the way technologies are used in the classroom, whether that's Surface devices or whatever devices it might be. And whenever the PISA rankings are released, people are always trying to point fingers at certain elements, whether it's the English language curriculum or whatnot. We don't want to get lost in that in this podcast, but I think it is important. The reason why we're doing this podcast, and the reason why I was very keen to join you in it, Ray, was to actually explore and educate people around these technologies, because I think they do have an impact, and I don't think it's about software companies just selling software for the sake of selling. A lot of the conversations that we've had around this podcast come down to business decisions in the end. What are your thoughts on that political question?
Yeah, it is... I guess I kind of feel there are technology organizations where it's just about, well, how do I sell this product, and then there are others where it's, how do I help my customers succeed, because their success is my success. And I don't think that's just about education technology; that's true in any industry. But I think there is a growing understanding of how we can use technology to help improve the process of education, and so that means we're on a journey; we're not at the destination. The other thing I wonder, though, is do we need a better definition of what good education means? Because the measure of education outcomes, if you read all the papers this week, was about how students did in a PISA test. And to me, over the last 20 years I've seen students' capability increase massively in all kinds of areas. Their ability to communicate, their ability to collaborate, has gone through the roof. But the only metric of success that we're measuring is, oh, how did you do in this literacy and numeracy test? And all that often happens when those tests are published is people go, "Well, we need to go back to basics." Well, my goodness, I think students today are so much more skillful and powerful than they were 20 or 30 years ago. When I interview graduates, when I see people presenting at our global competitions, I feel threatened by their skills as they come into the workplace with things that it took me 20 years to learn.
Sorry to interrupt you there, but when you mentioned that, I think there's a very good connection when we look back at, say, surgeons, where they are actually moving on very quickly. You're talking about the rapid rate of change there and the way this is a journey. I know this comparison has been done a lot of times: if you sent a teacher back 100 years, they could carry on and teach Pythagoras in 1919 just as they teach it in 2019, whereas if you sent a surgeon back to 1919, things would be very, very different. The rate at which surgeons and the medical industry have adopted technology has been much more fast-paced, and I think there is an element of that in this. People do, again, look for that clickbait headline of software companies or hardware companies trying to ride that wave, but it is about business outcomes, and it is something that will improve education.
Okay. So let me ask then so talking about clickbait headlines and that topic you chose Dan.
Yeah. Go on.
An article about robots taking over jobs.
Yeah. I love this article, because there were a couple of things behind it. The BBC did something similar a couple of years ago, and I always do this when I'm speaking to students: I'll show them one of these websites where you type in a job and it tells you whether that job will be taken by AI, whether robots will take it. And it's very interesting because, as I mentioned in that last question I raised, this isn't just an education industry trend pushing transformation. This is across multiple sectors, and there is genuinely a worry about jobs that are going to go, and that's "could a robot do your job". So the article was about new data from a research house providing the answer: basically, you search your job on a website and then you see whether that job will still exist or not. So for example, accountancy, and some elements of processing within that job, were put at a 70% chance of not being around in the next 10 years. What are your thoughts on jobs generally in industry around new technologies, and specifically AI and robots taking jobs? Right.
Oh, well, while you were talking, Dan, you might have heard my keyboard clicking because I went and looked up in that article on the ABC.
Yeah.
Teachers. So, could robots take the jobs of teachers? And it said that fortunately only 22% of tasks performed by teachers are susceptible to automation. So that's a good sign, I would guess. But yeah, there are some industries which will be fundamentally changed. And I guess the question is, are we giving students the skills to thrive in the industries that aren't going to be changed, where there's still an innate human skill involved? So creativity is one of those areas, research is one of those areas, and effective collaboration is another. Those are areas where it's difficult to see a robot replacing your job. But are we preparing students for those worlds? You know, I think law is a really interesting one. There are a lot of really entry-level law jobs that are disappearing because they are being replaced by technology.
But it is a process, right? So the way I think about it is that, yes, some of those jobs will go. The one that always beggars belief for me is having to do my tax return in Australia every year, when the government knows how much I get paid, because they know everything about me. They've got all my details about the things that I pay, my outgoings and things like that, so why do I need to submit a lot of that information? A lot of it now is automatically populated, and a lot of it can be done via AI, with some human intervention towards the end. But it is a very stressful time of the year when everybody's running to accountants to see what they can do, on the good side and the bad side, in terms of getting money back from tax, but also being able to manipulate bills and things like that, which is a bit of a dodgy one, when you're talking about the end of the year. So I think there are elements there, and the entry-level point you mentioned was the thing that jumps out at me: the entry-level jobs, the entry-level accountancy, the entry-level insurance assessment, for example, the things that can be done quite quickly before you need somebody who's got expertise in that particular industry. And we're doing a lot with the AI for Good challenge, which we should highlight in one of the next podcasts, right? Where we're trying to get students to think about the use of AI and how they can utilize it for better purposes in the world, and hopefully upskill them.
So, summarizing that conversation, and we've talked about eight or nine articles now, I guess the thing that I'm feeling after this conversation is that the headline is designed to make you click on it, which is why the headline is sometimes more dramatic than the story.
The second thing is that often they are focusing on the slightly more controversial uses of AI rather than the everyday stuff. An example would be accessibility: AI is used by every user with accessibility in mind probably every single day. I use it multiple times every day. So sometimes those things just happen in the background and don't reach people's attention, and I wonder if the same is happening in other areas within the classroom. And then probably the third thing is, my goodness, the "robots destroying your jobs" headline is the one that people will still click on.
And yet, if you look at the news report on where AI is in higher education, a story on Study International, it pointed towards the world's first AI university in the Middle East, where they've got three and a half thousand students studying AI specifically, and to Estonia, where there's a plan for 1% of the population to get educated in AI. And 1% may not seem like a big number, but it's a big number in absolute terms. So there is a lot happening. The headlines are often more dramatic than the reality, and then there's a creeping layer underneath where actually it is just helping us to do a job. Every time you press a button to get Siri to send a text message for you, that's AI helping you. And I expect that kind of thing to continue to happen in classrooms. So probably what we should focus on for the next few weeks, Dan, is let's find those stories and those people that are making AI happen in the classroom, and let's talk to them and understand a bit more about what's actually happening aside from the dramatic headlines.
That sounds fantastic, Ray. Thanks for today. I really enjoyed those headlines; it was really good to get underneath some of the actual clickbait. Thank you.
Okay. And Dan, next time let's try and spend time in a room together with the new microphones, and see if both of our voices have turned into silky radio voices.
Fantastic. Thanks.
Okay. Thanks, Dan.