Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Nov 6, 2019

This week Dan and Ray go in the opposite direction from the last two episodes. After talking about AI for Good and AI for Accessibility, this week they dig into the ways AI can be used to disadvantage people and decisions. Often the borderline between 'good' and 'evil' can be very fine, and the same artificial intelligence technology can be used for good or evil depending on the witting (or unwitting) decisions of the people deploying it!

During the chat, Ray discovers that Dan is more of a 'Dr Evil' than he'd previously thought, and together they find that there are differences in how people perceive 'good' and 'evil' when it comes to AI's use in education. This episode is a lot less focused on the technology, and instead spends all the time on the outcomes of using it.

Ray mentions the "MIT Trolley Problem", which is actually two things! The Trolley Problem, the work of English philosopher Philippa Foot, is a thought experiment in ethics about deciding whether to divert a runaway tram. The MIT Moral Machine, built upon this work, is about making the same kind of judgements for driverless cars. The MIT Moral Machine website asks you to make the moral decisions and decide upon the consequences. It's a great activity for colleagues and for students, because it leads to a lot of discussion.

Two other links mentioned in the podcast are the CSIRO Data61 discussion paper, part of the consultation about AI ethics in Australia (downloadable here: https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/), and the Microsoft AI Principles (available here: https://www.microsoft.com/en-us/AI/our-approach-to-ai).

 

TRANSCRIPT FOR The AI in Education Podcast
Series: 1
Episode: 7 

This transcript and summary are auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

 

This podcast discussion, entitled “AI for Evil,” explores the complex ethical dilemmas and potential negative consequences that arise from the implementation of artificial intelligence across various sectors, particularly education. The hosts examine how AI, even when created with good intentions, can quickly turn into a detrimental force, often citing examples like China’s social credit system and the manipulative use of deepfakes and political marketing to influence outcomes. A major theme is the blurring of the line between beneficial AI applications (like using facial recognition for class attendance) and “AI for evil” (such as using performance algorithms to fire teachers without human intervention), highlighting that the use of the technology, not the technology itself, determines its moral alignment. The conversation ultimately stresses the need for responsible AI frameworks focused on fairness, transparency, and accountability, emphasising the critical role of human judgment in high-stakes automated decision-making.


Hi everybody. Welcome to the AI in Education podcast.
Thanks Dan.
Hi Ray. I'm Dan.
And we're going to talk about AI for education again.
Again.
Yeah. But Dan, this is the one I've been begging for.
You know, I said we did AI for good. We did AI for accessibility.
Yes, we did. I kept bugging you and saying, "Dan, Dan, I want to do AI for evil."
This is the one.
Fantastic. AI for evil. So Ray, are you thinking about AI for evil then? Any movies that jump out at you that illustrate AI for evil?
Bit spooky, you saying "are you thinking about AI for evil, Ray", like I do it all the time! But do you know, I think for me it was the Minority Report stuff. Do you remember, there was that context of thought crime: we've looked into your mind and we know you're going to be a criminal, so let's just lock you up in advance. I mean, that was ultimate AI and ultimate evil.
Yeah. Yeah, that's true. It was great. I think the Terminator movies had a bit of an influence on me when I think about AI and robotics and autonomous machines, because the interesting thing about the first Terminator movie, if you're old enough to remember it, is that you had Arnold Schwarzenegger coming along and you thought he was the baddie for a lot of the movie, but actually he was trying to save the people that invented certain chips and things like that. So it kind of played with your mind: what was good, what was bad? And also the ethics behind the creation of robots, and whether the future impact of the things that were created would have a profound effect on the earth, with people coming back and trying to alter that.
So does that imply that you start off with a good idea and it turns bad along the way? I mean is that what happened?
I think it did there. But again, I think we'll explore a lot of that today, because it's an interesting point: it's about the use of the technology rather than the technology itself, in a lot of cases. So when we look at AI for evil and start to highlight some of those things, there are certain elements people think about. Sometimes people call it 'Black Mirror' or 'grey mirror' territory, where technology is used for deceitful purposes, or purposes that border on social injustice and things like that.
Well, you were talking a couple of weeks ago about some of the examples from China of what's happening there. So, just remind me what that context was.
So, China is doing lots of social engineering with its population. They've got facial recognition. I was speaking to somebody the other day, and whether this is true or not, one of the elements is the whole population in China being captured with facial recognition, which is happening at the minute. But then if you jaywalk, you get an SMS message and a fine, nothing else. Just an SMS saying, "Oh, caught you crossing the road in an illegal place, Dan. There's your $50 fine," or whatever it may be. And then you kind of move on. So there's an engineering element going on.
But I've heard that goes a lot deeper, because I've certainly read the stories around your access to transportation, even where you live, being linked to your social score.
Absolutely. Yeah.
That's very similar to kind of credit scoring.
Yeah.
But applied at a government level, across kind of everything...
And then you start to bring all those social networking technologies into it as well, where they say: well, Ray, if you're a bit of a dodgy member of society, should I be friends with you, given the state knows that I'm friends with you? My social scoring and my social status would be pushed to the limit, I suppose, with the government. So what would I do? Does it control a population? Does it do more than that? How deep do you think it goes?
So it didn't start as AI for evil. I'm sure it started as, "Well, here's a useful thing," a bit like credit scoring. Here's a useful thing.
You're saying it didn't start as evil, but you could also say that in Chinese society they don't think it's evil now, right? It's a different perspective.
Yeah.
So then we start to extrapolate some of those things and think about the other technologies that are around at the minute, like the deepfakes, where you've got lots of images, and even video now, which can actually make President Obama say things that he wouldn't be saying. You can do it with voice as well.
So is that what a deepfake is? Basically creating a video or an image that looks real but is completely computer-generated?
Yeah, absolutely, 100%. So you've got those elements where AI is getting very clever and able to manipulate right down to the video level. And that obviously connects in with, you know, we don't really want to go into the politics of it all, but all of the grey areas around marketing in, say, presidential campaigns and Brexit. The way AI was used there: algorithms created genuinely by social media companies to focus on marketing and give you personalized services, but which then bent the rules slightly, or maybe not even bent the rules, and actually had an impact on the outcomes of certain elections by manipulating the people reading a particular article. So it's kind of interesting.
It's interesting when you put it into the context of marketing, because as a CMO for 15 years, my focus was always: which customers are the ones that are most likely to buy?
So for me, I wouldn't see that as an evil thing; that was my job. But then from a politics point of view it's, well, how do I influence somebody to think slightly differently? Who's closest to the line that I can nudge over it? Which is the story with...
And influencers! There was a thing the other day, wasn't there, with a camera app which would make you look older, and lots of influencers were using it. Not that we need it, right?
I'm there already, Dan.
But yeah, the influencers and the way things actually permeate in society are quite interesting. A couple of things I want to quickly touch on before we get into some of the deeper areas around this and explore it a bit further. Obviously, when we're looking at AI, some people also think about autonomous war: autonomous killing machines, drones being able to be much more effective at taking people out using facial recognition than conventional warfare, for example. So there's an element of that. And there's also that element of unstructured data. There's going to be a trend over the next couple of years with our unstructured data, with the cameras we've got in our homes. We can't monitor all those cameras all the time, but we'll have cameras around in certain places, and AI will be looking at that video footage and highlighting things: oh, your son is upstairs and he's just got his neck caught on a blind, or somebody's just grabbed a knife from the kitchen to cut a melon, or somebody's in the pool, and it sends you an alert by SMS. So you're going to get AI looking at those things and starting to permeate into the house. And as AI becomes more accessible and used across the consumer landscape, I think we're going to see lots more of those gray areas appear.
So, let me take you on a journey here because, you know, we're going to talk about AI for evil,
but it's a gray line that you're crossing. So let me ask you a question. Facial recognition: AI for facial recognition, that ability to know who somebody is and do some work with the information that you've got. Let's just test this on you and your thinking. Using facial recognition to take a register at the beginning of a class: is that good or evil, or somewhere in the middle?
That's good.
Okay. Yep. Using facial recognition as part of an assessment process, to check that the right child is sitting in front of you
in an examination. That's good. Yes. Okay.
Well, using facial recognition in order to... oh, China. So in China they're using facial recognition to distribute toilet paper in public toilets. So, using facial recognition to give you toilet paper.
Um...
Good or evil, Dan?
Good.
Good? You see, I'd have that as evil, because I know that in 20% of cases facial recognition can't recognize people. So what happens if you're one of those people it can't recognize? It's going to be evil.
And I suppose what it comes down to as well, in all of those examples you used, is context, and where we're going to keep that data. Take that unstructured data in the home, and I know we explored it previously. Some people would say, well, if I've got AI and the data stays within the boundaries of my own house, and I've got my own sphere of Azure or whatever I might have to do the AI, then I can process and analyze it locally, and therefore I don't feel as if somebody's watching me. Whereas if that was going up to the cloud and somebody else's marketing engine was able to look at my images, then I'd be a little bit more worried.
Okay, so let's just keep testing this then.
I feel like it's an interview, not a podcast.
I know that there are smart mirrors, smart mirrors with cameras built in,
and the idea is that they're using facial recognition and other things in order to be able to give you an indication of your health.
Yeah.
So a mirror that told you you were looking a bit tired.
Good or bad?
Yeah. Yeah, it's good, isn't it? It's a good thing because...
a mirror that helped you to avoid having to go for those yearly skin checkups to check for skin cancer where instead it just looked at you every day and then
That's an excellent use,
right? Yeah. Because then it would know if I've got a mole appearing, or how sad or happy I looked. Emotion APIs. Yeah.
If I'm looking quite depressed in a way.
Oh, I'm going to come back to another example on that in a minute. But let's just keep going with the smart mirror for a second. That same smart mirror, connected to your insurance company, deciding what your insurance premiums are going to be based on what it knows about you and what it sees, for example a cancerous growth or whatever.
Yeah, that's where it starts.
Is that good or evil, Dan?
I don't want to say anything's evil, but yeah, it's the use of that technology and the use of the data that has been collected. So again, it starts out as a technology that's for a useful purpose. And society rewards and penalizes for things like that already, doesn't it? If you're a really safe driver and you haven't got any points on your driver's license in Australia, you get a cheaper license renewal. So rewards are there all the time, aren't they, for people who are doing things the right way, or who put speed limiters on their car and things like that.
I think the difference, though, is when we get to the world of AI, we're using incredibly complex algorithms that are black boxes. We don't know what's going on in there. We just know the answer at the end: it looks at all those things and goes, "Good risk, bad risk."
And that's a very different world. Just on the facial recognition thing, one more to test with you. Using facial recognition to do emotion recognition,
and pointing a camera at a class to see whether the students are engaged or disengaged when a teacher is teaching,
good or evil.
See, I've used that example a lot, and I've done it with adults in sessions that I run. They're always bored! Is it good or evil? That's in a gray area for me, because if that data can be used to help you target your lesson better, there's that element of personalization. If it picks up that Dan in the back corner really isn't coping very well today, that's great. And what about the general sentiment of the class? It might be that there's not enough oxygen in there.
Okay, let's go one step further then; I'm hoping to push you over your gray line. Using that information to decide which teachers you promote, and which teachers you performance manage or sack. So you go...
this is horrible.
Yeah. Is that good or evil, Dan?
See, to be honest, as an ex-school inspector as well, I know it's not as simple as "this teacher's bad". They could be having a bad day, or whatever it might be, and we collect data on people all the time. Maybe AI would be better than a school inspector at looking at lessons, because it could look all the time and have a consistent approach. But yeah, being able to sack teachers on performance based on AI? You'd need to be very clear about what that AI was doing. So I'd probably say evil.
Thank goodness. It appears we've got a different line between good and evil, which is an interesting observation, because I think everybody does have a slightly different point. You know, I always think in privacy terms people have a different creepy line, but clearly we've got a different creepy line when it comes to good and evil, because I couldn't imagine anything more horrifying than using a computer algorithm, or AI, or whatever you want to call it, to make a decision about a human without human intervention within it.
I think you need human intervention though. A bit like the examples we did in the first episode with the judge: as a teacher, it would give you a bit of a litmus test on the environment that you're teaching in. It's not necessarily going to do interventions itself, but it could do, couldn't it, I suppose.
Well, let me give you an example of what I think is definitely the wrong side of the line, from an evil perspective, and it's exactly that. In the school district of Houston,
they used a whole load of algorithms to work out which were good-performing teachers and bad-performing teachers based on examination results, and then, without human intervention, they made the decision about which teachers were going to be let go as a result.
And you know, to me that's AI for evil, because we all know that performance between different classes varies for a whole host of reasons, and the teacher is only one of those things.
Yeah.
Bigger factors are, you know, how many books you've got at home, what's the educational level of your parents, where did you start your learning journey when you entered that classroom versus where did you finish?
However, to put my Ofsted hat on from the UK a couple of years ago: teachers are the most valuable asset in a classroom. I'm not talking in dollar terms; a good teacher makes a massive difference in a classroom, as we all know. So the quicker you can weed out teachers who are struggling a little bit, the better. As a parent, and I think we've all been in this position, especially in primary schools, if my kids have got a substitute teacher for a long period of time, it does have a knock-on effect on their learning. So if you can actually use data and AI to identify which teachers are underperforming, then there's an element...
I think maybe one of the big things I focus on is that you've got to have humanity as part of the decision-making. You can't just make a whole load of decisions that are going to affect individuals in a serious way without having some clarity and transparency as part of the process, and humans involved in significant decisions.
So, let me keep going. Sorry, Dan, you're going to feel like this is an inquisition, but I'm thinking about self-driving cars these days.
No. Okay, go on then.
Now, I know you know about the trolley problem, but let me just explain it for our listeners. The trolley problem is this: you've got a train, or a light rail tram as in Sydney, going down a track, and tied to the track ahead of you is a person. The question is, do you flip the switch so the tram goes onto an alternate track to avoid running them over? That's a really easy decision. But then it gets much more difficult: well, on this track you've got two young people, on that track you've got two old people. What's the consequence? And it was always really theoretical, until we got to self-driving cars,
because self-driving cars, at points in the future, are going to have to make decisions about what they do.
Decisions like: somebody's walking on the road in front of it, it comes to a hard stop, and it knows the car behind is going to run into it. Should it be running off to the side of the road? Should it be running into a lamppost in order to save the person in front of it?
And I suppose the really interesting thing, and this is an AI for evil kind of thing as well as an AI for good thing, is: if you're the car manufacturer that builds the car that guarantees it will always make the decision to save you in the car, rather than the people outside the car,
are you going to buy the car? And is that AI for evil?
Oh, wow. That that's almost an impossible question, isn't it?
You know, if brand A says we'll make the right decision and brand B says, no, we won't. We'll make the decision to save you every time, which brand do you buy?
Wow. Jeez.
It's probably an unanswerable question. And I suspect if you did answer it, we'd probably feel badly about you. So, let's carry on.
Ah, that's a brilliant point though, isn't it?
And it actually makes the point that when you think about artificial intelligence, and all of the automated decision-making and all the things that are likely to happen automatically going forward, it's not black and white. I've said, "Oh, let's talk about AI for evil because we talked about AI for good."
But really, it's the same technology, but with a different decision-making framework.
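To make that "same technology, different decision framework" point concrete, here's a minimal sketch, assuming a toy self-driving scenario; the Scenario fields and both policy functions are invented for illustration, not any manufacturer's actual logic:

```python
# Hypothetical sketch: one perception output, two decision frameworks.
# The Scenario fields and both policies are invented for illustration.
from dataclasses import dataclass

@dataclass
class Scenario:
    occupants: int                # people inside the car
    pedestrians: int              # people in the car's current path
    swerve_kills_occupants: bool  # does swerving endanger the occupants?

def minimize_total_harm(s: Scenario) -> str:
    # Utilitarian policy: pick the action that risks fewer lives overall.
    if not s.swerve_kills_occupants:
        return "swerve"  # swerving risks no one, so avoid the pedestrians
    return "swerve" if s.occupants < s.pedestrians else "stay"

def protect_occupants(s: Scenario) -> str:
    # Buyer-first policy: never take an action that risks the occupants.
    return "stay" if s.swerve_kills_occupants else "swerve"

s = Scenario(occupants=1, pedestrians=3, swerve_kills_occupants=True)
print("Minimize harm policy:", minimize_total_harm(s))    # -> swerve
print("Protect occupants policy:", protect_occupants(s))  # -> stay
```

On the same scenario, the utilitarian policy swerves while the occupant-first policy stays on course; nothing in the perception stack changed, only the decision framework.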
And, you know, we've done a lot of work in Microsoft on unconscious bias generally, not around AI, but just generally when we're working with people and when we're working with customers. How does unconscious bias fit into AI then, do you think?
There are some really interesting examples. So, one of the large organizations did some work around using AI to help them do better recruitment. If you think about recruitment, we would advertise a job and sometimes get 1,000 or 2,000 people applying for it, and you've got to somehow whittle that down. Yeah.
And that involves a lot of people and a lot of time. So, if you can automate that process, things might get better. So, one company looked at all the recruitment decisions they'd made and built an algorithm to replicate those decisions.
Yeah.
The problem was that their existing recruitment process had what turned out to be unconscious biases: scenarios where women were disadvantaged in the recruitment process because they use different language in their resumes. And that wasn't spotted until they started to automate the process and realized that women's resumes were being thrown out compared to men's, because of the different language in use within them. And so you've got that whole challenge then: you don't know the bias exists, you code an AI system to replicate today's model, and you find out that what you're doing is replicating the bias and making it faster and faster. And I think there have been a lot of those scenarios as people have started to implement AI: they started out from a good place and ended up somewhere bad.
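As a concrete illustration of that recruitment story, here's a minimal, hypothetical sketch; the synthetic data, the "wording" feature, and the group labels are all invented for illustration, not the real company's system. A model trained to replicate historical decisions reproduces the bias hidden in them, and a simple per-group audit surfaces it:

```python
# Hypothetical illustration: an AI screening model trained to replicate
# historical hiring decisions also replicates the bias hidden in them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                 # genuine qualification signal
group = rng.integers(0, 2, size=n)         # 0 = men, 1 = women (synthetic)
# Resume wording differs by group; past reviewers (unconsciously) penalized it.
wording = group + rng.normal(scale=0.5, size=n)
past_hire = (skill - 0.8 * wording + rng.normal(scale=0.5, size=n)) > 0

# Train a model to "automate" the old process from its historical decisions.
X = np.column_stack([skill, wording])
model = LogisticRegression().fit(X, past_hire)
shortlisted = model.predict(X)

# Audit: selection rate per group reveals the replicated bias.
for g, label in [(0, "men"), (1, "women")]:
    rate = shortlisted[group == g].mean()
    print(f"Shortlist rate for {label}: {rate:.1%}")
```

On this synthetic data the model shortlists one group at a markedly higher rate, purely because it learned the old reviewers' penalty on wording; nothing in the code mentions gender directly.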
Yeah. And with this technology, the other thing around that unconscious bias is that those models will get trained, and then eventually people start hacking into those models and poisoning those algorithms as well. So there are lots of interesting elements around what you said earlier about what's inside the box
and being transparent and open. I think that's really important
And I think also it's about having an open mind as you're approaching this, and asking, well, what could go wrong? I saw some press two weeks ago reporting on an interview tool. I don't know if you know this, but a lot of graduates and interns globally, because there are big numbers of graduates coming out and companies have to interview lots of them, often have that first interview done in an app on your phone. They ask you a question, and you get two minutes to answer it, so you've got no chance to think about the answer; it's just like being in a real interview. But one of the really interesting things is that's been expanded out now to lots of other scenarios for job interviews. And I was looking at it thinking, gosh, that gives people that have bought the latest iPhone an advantage over people like me that have got an old iPhone. Because one of the things that makes you more attractive in a face-to-face interview is being able to have a great conversation while holding eye contact. And most people, when they're talking on FaceTime
or Skype, what they do is they stare at the screen. They stare at the person they're talking to, which means on the other end, what you're actually looking at is somebody who isn't looking at you. The new iPhone is able to adjust the eye gaze. You look at the screen, but it makes it look like you're looking at the camera. That's going to mean if you do your job interview on an iPhone 11,
you're more likely to get a job than if you do it on an iPhone 8.
Just disenfranchised by the technology. Yeah. Incredible unconscious bias that's going to be surfaced by that.
Yeah.
And the impact, the knock-on effects onto other people. It's really fascinating that we're starting to see scenarios where there's really no ill intention, but things get used in the wrong way.
Yeah, it's really fascinating, this subject, isn't it? So we've looked at some examples of AI for evil and those gray areas. When we're thinking about AI in education and the areas that push the boundaries, have you got any examples of those?
Oh, I think the simple one, from the university world because it's easy to grasp, is thinking about student retention and student dropout. One in five students drops out in the first year. So building an algorithm using predictive analytics to tell you which students are likely to drop out is incredibly helpful. It's a $3 billion problem for Australian universities. If you can retain more of the students, financially it's a good thing; it also means you've got more funding for research; but it also means you've got a whole group of kids who have a better life chance because they don't drop out of university. So it's good all round. There's absolutely no way that could be an AI for evil thing...
Until you start to think about it. If you can build that algorithm and you're using it to intervene with a student to help them, that's a really good use.
If you use it in your admissions process to say the people we want to make an offer to are the students who are going to stay for the full three years, and 20% of these students are predicted to drop out in year one, therefore we're going to offer them a place last, or not offer them a place at all...
That's AI for evil, because you've got a societal impact then: the kinds of students who drop out are first in family, low income, low socio-economic status, diverse backgrounds.
Yeah. So if you use it in that way, you've got a profound AI for evil scenario.
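To illustrate the dual use Ray describes, here's a minimal sketch; the model stub, names, and thresholds are invented stand-ins, not any university's actual system. The same risk score supports two very different uses:

```python
# Hypothetical sketch: the same dropout-risk score used two ways.
# All names, values, and thresholds are invented for illustration.

def dropout_risk(student: dict) -> float:
    """Stand-in for a trained predictive model's risk score (0..1)."""
    return student["risk"]

def support_use(students):
    # Intervention: flag high-risk students for extra tutoring and check-ins.
    return [s["name"] for s in students if dropout_risk(s) > 0.6]

def admissions_use(applicants):
    # Gatekeeping: quietly rank the same score to decide who gets an offer.
    # Identical model, very different ethical outcome.
    return sorted(applicants, key=dropout_risk)[: len(applicants) // 2]

cohort = [
    {"name": "Alex", "risk": 0.7},
    {"name": "Sam", "risk": 0.2},
    {"name": "Jo", "risk": 0.9},
    {"name": "Kim", "risk": 0.4},
]
print("Offer extra support to:", support_use(cohort))
print("Offers made to:", [s["name"] for s in admissions_use(cohort)])
```

Same score, two uses: one directs help to the students who need it most, the other quietly screens those same students out.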
Yeah. But I suppose this has happened outside AI as well. If you think about interventions in K-12, we care about certain results in different countries. In Australia, we care about certain grades in NAPLAN; in the UK, certain grades at GCSE level. If you've got a student on a C/D borderline, they could pass, they could fail, and there's a big push to get people across the line in standardized testing.
So I'm thinking, just in the UK context, that C/D borderline is almost like prizes for points. So if a student gets a C, you get a tick mark
and you get credit for it. If a student gets a D, then you don't get any credit. So from a school performance league table point of view,
you don't look great with a D, you look okay with a C.
Exactly. So even without AI, you'd focus all your interventions, in lots of cases, on those students, and you'd miss out some of the students who are already flying on A's and B's. You're focusing on getting people around a standardized assessment boundary, from one grade to another, rather than the students who might go from, I don't know, I'm making this up, an F grade to an E grade, or a B grade to an A grade,
because it doesn't give you any benefit. And that was before AI; I was going to say, that's been around for years. I'm sure it happened in my day.
Yeah. But it's the same thing, isn't it? So you can start using AI to intervene, and there are lots of K-12 school systems doing a lot of work around predictive analytics, trying to work out what will happen in the future so they can do much more personalized learning. Everybody's been talking for years about personalization, whichever country you're in. At the end of the day, we want to make sure the learning is personalized to the individual and they learn at their own pace. But it's an interesting one, because you start to push those boundaries, and sometimes we blame AI for some of those things, but they've been around
for a long time.
And if a computer did that, then you'd say, "Oh, it's the AI's fault." But we make bad decisions as humans
Yeah.
regularly
We're probably able to make them faster, on a much bigger scale,
using artificial intelligence than we were before. Yes.
Which means interventions can be better. Our marketing can be better, and personalized support for students can be better. But it also means that the unintended consequences... Yes.
can also happen faster as well.
Yeah, and have significant impact. Can I share one of my examples? I do tell teachers this quite a bit, because it was one where I came unstuck. Every year I was in charge of sixth form in a school in the UK, which is the students going on to university, and it was very flexible, so they could go off site. I've been working with K-12 schools in Australia at the minute who've got split campuses and have this problem as well. When students go off site, if there's a fire or there's an issue, you don't know who's on site, because sometimes the campuses are so big; in a university I suppose it's the same. And there was also a monetary cost to it: every year I had to take a photo of each kid and print out a plastic ID card, and for 200 students a year it would cost something like £4,500 at the time. Then I saw something online, a Windows-based biometric thumbprint reader, for half the cost. So I bought that, went into the school at the weekend, drilled a hole in the wall, stuck the thing on, plugged it in, and ran some legacy Windows software
so students could register attendance, and get library books, just by scanning their thumbprint.
Yeah, actually it was more rudimentary than that; it was just knowing if they were on site. It wasn't connected into any other systems at that point. It was just: are you on site or not? That's all it was. So we did that, and it was much cheaper. But then suddenly there was a big outcry from parents, because it was like, "The school is collecting biometric fingerprints on students. What are you doing?" This is before you had biometrics in your phones and things. So I didn't do that for evil, but the backlash was that the school was storing fingerprints for the police, or whatever it might be. And there were all kinds of horrendous headlines and things. In hindsight, what I should have done is thought about how to communicate the use of the technology to the parents better. Because I did it autonomously, obviously to save money and to do something that was safe for the kids. But again, it was about the policies I should have put in place, making sure I articulated to the parent community why and what technologies we were using and how we were going to keep that data safe.
That's the first time I've heard that story, and it's interesting because I've got a few things out of this conversation. The first is, I've been pushing for the last couple of episodes to say, "Hey Dan, we need to do an AI for evil episode." I didn't realize that you were going to be the Dr. Evil in the room. But the second thing is, there's a really interesting point there: there was no ill intent in what you were trying to do, but there were consequences that other people saw. Because it wasn't communicated well, and because they were looking at it from a different perspective, they thought differently about it.
And I guess the other thing that's come out through our conversation is that everybody's got a different point where they think good versus bad, positive versus negative. It's the same with privacy; it's the same with all these other areas: each individual has a different line. And so you probably need to think about the conversation with your constituents, with your stakeholders within the organization, as you start to do AI projects, because you need to think about doing it responsibly.
Yeah. And so, to tie it all together then: are there frameworks that people can look at? We hear lots of things about open AI initiatives; Elon Musk does things, Microsoft does things. If you're a business decision maker, what should you be doing to balance this?
Yeah, and it's an interesting conversation that we've had with a number of education institutions, and something we have internally as well. We're very much fixated in Microsoft on responsible AI. There have been times when we've turned down government requests to use our artificial intelligence for certain things. Facial recognition is contentious because it isn't as accurate with some groups in society as it is with others, so we will make a decision about the use of that. So we've got a framework around AI that has some principles in it. It's got a principle of fairness: are you going to be fair to all the different people as part of the decision-making process? Facial recognition is a really interesting one: if it's less accurate with women, if it's less accurate with people of color, then what is the consequence of the decision you make using it? Reliability and safety is an important one. Privacy and security: I always think about that with facial recognition. If I went on a climate strike protest, are they going to be filming me? Will that picture turn into facial recognition? And will I one day turn up on holiday in a country where they scan my face at the border and go, "Well, you're not coming in, because you're a climate change protester"? So that privacy and security issue is there. Inclusivity: last time we talked about AI for accessibility, and that's a way AI is helping, but sometimes AI could also be excluding people, like my iPhone 11 example for the job interview.
Totally. Totally.
And transparency: are we able to explain to people the decisions that are being made? We've got that going on right now within Australia, around the way algorithms are being used to ask people to pay back debts from the last seven or eight years. Is it transparent, the way it does it? And are you able, as a citizen, to get a review of a decision? That's a global question, not just an Australian one. And then the last one is accountability: making sure that the process is accountable to the right stakeholders. Now, we apply that at a global level. We apply that in our AI projects. We apply that in work we do with government and, yes, banks and commercial organizations. But I think it's just as valid for education to think about those things as you start to roll it out. And there are other frameworks being developed. There's some discussion in Australia about a framework: there's a great paper from CSIRO and Data61 about an AI ethics framework, and it's got some really good examples in there. But that then applies when you think about AI in education. Sure, you want something at system level. But if you're doing something within a school or a university or a TAFE, how do you make sure that you've got some principles that mean it's not just an IT thing? It's not just a "who owns the student data" thing. It's an organizational thing. And your fingerprint example is a really great one. As an IT manager, you could just go do that tomorrow.
Yeah.
But the knock-on impact on everyone else means you probably needed a bigger framework.
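Tying back to the fairness principle Ray described, here's a minimal, hypothetical sketch of the kind of per-group audit that principle implies; the groups, evaluation records, and the 5% threshold are all invented for illustration:

```python
# Hypothetical fairness audit: per-group accuracy check before deployment.
# Group names, records, and the 5% threshold are invented for illustration.
from collections import defaultdict

# (group, predicted_id, true_id) records from an evaluation set.
results = [
    ("group_a", "alice", "alice"), ("group_a", "bob", "bob"),
    ("group_a", "carol", "carol"), ("group_a", "dan", "dan"),
    ("group_b", "erin", "erin"), ("group_b", "unknown", "frank"),
    ("group_b", "grace", "grace"), ("group_b", "unknown", "heidi"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, true in results:
    total[group] += 1
    correct[group] += predicted == true

accuracy = {g: correct[g] / total[g] for g in total}
print("Per-group accuracy:", accuracy)

# Gate the rollout on the accuracy gap between best- and worst-served groups.
MAX_GAP = 0.05
gap = max(accuracy.values()) - min(accuracy.values())
if gap > MAX_GAP:
    print(f"Accuracy gap {gap:.1%} exceeds {MAX_GAP:.0%}: do not deploy.")
```

The point isn't the specific threshold; it's that fairness becomes a measurable gate in the rollout process rather than an afterthought.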
Yeah, that's exactly right. And there was nothing like that available at that point, and I didn't think about it, to be perfectly honest. I was thinking just about the technology. So it's really good to know that there are those principles that we work to at Microsoft. And, you know, Brad Smith published his book last week.
I started listening to it this morning.
Tools and Weapons. Is it good?
The first 20 minutes were great. I'll let you know when I've finished the full thing,
because that's a great way to frame the picture. When we look at all the new technology, they're fantastic tools. The tools that are going to be curing cancer and really driving society forward in a much more productive and better way are also going to be the tools that could be weaponized and used for evil. So I think it's really important that we put those frameworks in place, localize frameworks, follow frameworks, trust certain elements of the technology from the companies that we're working with, and read through the documentation to make sure we're using it in the best way. But also, to pick up on your point, the thing that I took from this podcast today was that human element. You mentioned the fact that there are going to be areas where AI will make decisions for you, but having humans involved when the gravity of the decisions is really high, even though we could argue that humans are less accurate than AI, having that human element to be accountable, and having that technology be transparent, is really important. So, what a great podcast, right?
Great. So, we talked about AI for good.
Yes,
we talked about AI for accessibility. We've now gone really black hat and talked about AI for evil. I think over the next couple of weeks, let's talk about how AI helps to personalize learning and makes life better for students and then we can feel really good about things in a week's time.
Fantastic. Thanks.
Okay. Thanks, Dan.