Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Aug 26, 2020

In this episode, Dan and Lee discuss the responsible and ethical use of AI. They look at fairness, security, inclusiveness and transparency, and at how those who create and use AI systems have to be accountable.

 

Links:

https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

 

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 3
Episode: 8

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

 

 


Hi, welcome to the AI in Education Podcast. Good morning, Lee. How are you?
I'm well, Dan. I'm getting a little tired of sitting at the same desk, in the same office, in the same place day in, day out, but we've got to do what we've got to do, huh?
I know. Podcast fatigue, work fatigue, or just general fatigue, I think, isn't it?
No, never fatigued on the podcast, mate. It's always a pleasure to hear your voice.
Fantastic. And likewise, so today's episode's going to be on responsible AI. This is quite an interesting one for Microsoft and lots of other companies at the minute, I suppose, isn't it?
Look, it's front and center, and we'll get to the details. For me, it's a really big part of what I do day to day, so I have a lot of viewpoints on it. But I think it's really important in the world we live in today because of the way AI is impacting all of our lives.
Yeah, definitely. And I suppose to set the scene then: we've done lots of different podcasts over the series and we've looked at different ethical issues around AI. We've already discussed quite a lot about the way AI is already here, the way it's in our lives, and the fact that we see lots of positive ways that AI helps us — accessibility, healthcare, education, government, work — and then also the less positive ways. I know we did the AI for Evil episode; that was quite a funny one. And there's things like autonomous weapons, the social scoring that we talked about in China, for example, and other countries. We've had things in the news recently over here around facial recognition, and globally in the US around policing. And I suppose there are many ways that we don't immediately see but experience every day: the map systems we use, the phones we use, the pervasive — pervasive, it's easy for me to say — AI inside tools like Microsoft Office and PowerPoint, and even in our email feeds, on our mobile phones and in our digital assistants. So there's a deep penetration of AI, and it's not just into our lives; it's in the world around us, and it operates pervasively across all the equipment we're using. And I suppose what that's driving through society is the need for a humanistic approach to technology, and a more responsible and considered approach to using AI. And I think when we were talking about this before, you came up with Dr. Ian Malcolm, the famous doctor from Jurassic Park, who said —
Yes, I do love this quote.
Yeah, I remember you saying: "Your scientists were so preoccupied with figuring out if you could do it, you never stopped to wonder if you should." So I think that's a good way to set the scene around this. So, responsible AI — let's go back to basics then. Can you give us a little bit of history around responsible AI and ethics?
Yeah, for sure. I do love that quote, because it's just become part of the zeitgeist now. Everybody knows it, but people probably don't even remember where it's from. That movie, by the way — just a side note — I think it's something like 27 years old now; 1993. That's aged everybody for the day. So, thinking about age and going back: we say responsible AI, and I think we're going to almost decompose that a bit, because we're saying responsible — and what does that mean — and then of course AI, which is what we think deeply about in this podcast series. But if we think about responsibility, it sort of harks back to this idea of morality, or of ethics, and ethics is kind of the thing that really was the starting point for a lot of these technology conversations around AI: you have to be ethical, it needs to be ethical. And just from a little bit of a history perspective, ethics, like a lot of things, derives from ancient Greek. It's a Greek philosophical point of view, which means it's about your character, your person; it's about your moral nature. The word ethics is kind of a pointer towards: who are you? What is the footprint that you as a person leave behind by nature of your morals and the actions that you live by?
Yeah.
So, the challenge with that is that ethics is very personal, Dan. I mean, your ethics and my ethics are very different, and they're born out of our lived experiences, the chemical makeup of our brains and all the things that we do. So it's really hard to pin down, and this is, I think, why we've moved from ethical AI to responsible AI, at least from a Microsoft perspective: because ethics is a personal point of view; responsible is a shared ownership, a set of values.
Yeah, correct. You know, if you think about this COVID situation, we all take some responsibility: it's on us all to consider other people, wear masks where appropriate and behave in certain ways. But we see individual cases of people's ethics and their own morality coming into play, where they say, "Well, that rule doesn't apply to me, because I don't believe it applies to me." So you can see this dichotomy between how I view the world from my ethical point of view and how we should all collectively behave as a responsible, shared value. Here's another good movie quote for you — I just thought of it. This is a Star Trek thing, isn't it? This is Spock's view of the world: the good of the many outweighs the good of the few, or the one.
It is a great quote, and that's really this idea of responsibility. It's the good of the many. It's: how do we make sure that nobody is harmed and everybody sees benefit? So, getting back to the history — ethics, this ancient Greek idea of ethics and morals — we take that as a viewpoint and think about it as a collective responsibility. And then again, I think we've used this quote before — maybe we're showing our nerdiness — but it goes back to an ethical point of view on the laws of robotics, as Asimov set out many years back in the 40s. Those three laws of robotics: a robot may not injure a human through action or inaction, a robot must obey orders, and a robot must protect its own existence. These are kind of the formation of this idea that if we build technology that looks like us, sounds like us, behaves like us and is essentially a mirror of us, then it needs to be in some way imbued with the principles that we hold dear as humans. You know, murder is considered a mortal sin; we don't want robots going around killing people, because we don't want people to...
And I suppose the speed at which technology has really pushed the boundaries — I know we've said it a couple of times in the past. One of my epiphany moments about the speed of technology innovation in the last couple of years was just one day, a couple of years ago, scrolling through a Twitter feed or LinkedIn or whatever it might have been, and seeing the SpaceX rocket landing on a boat in the middle of the ocean. It came out of nowhere, almost. And self-driving cars as well — suddenly, because of computing power, all of these things are possible. Sometimes those advances have been small in terms of what's public facing, but then you see home automation devices, the things Tesla are doing, somebody firing a car into space, and you're like: how did we end up here so quickly? And I suppose that's where all of those areas around ethics are becoming really, really poignant right now, right?
Look, absolutely. The pace of change is hard to fathom even for you and I, who live in the world of tech; the pace is fast and it's hard to follow. So imagine when you don't live in that world, and you see it happening to you as opposed to being a party to it. And again, that's the responsibility we need to take as a tech company and as an industry. We often forget that people's lives are impacted by that technology in such significant ways that it's important we stop to consider it along the journey. And that's a bit of what we'll get into as we talk, I think.
But, you know, the thing that's always been front and center of the AI conversation has been this: AI is taking people's jobs; AI will create the robots of the future that do all our jobs and we'll all be worthless. And when I was researching our thoughts for this conversation, I came across a piece of work by a guy I'd never heard of before, Joseph Weizenbaum, from the 70s — I guess the early days of tech, when computing was starting to become a real thing in many ways — and there were discussions on where and what AI should and shouldn't be used to do to replace people. So, a long time ago, a very raw conversation: they were thinking about it in that context of replacing, as opposed to augmenting, humans, which is where our thinking is now.
Yeah.
But it was interesting, you know, the kinds of roles that they were saying should not be replaced by AI. The first one threw me: a customer services representative, the person you speak to on the phone — and you think about how often that is now the most common one. But then the others: a therapist, a nurse — people who care for people, who use human emotions to drive their activities — which kind of makes sense, I think, and we'll get into a bit of that humanity piece and where it fits into AI. And then a soldier, a judge, and a police officer. Now, if you look at those, isn't that front and center of where the argument is today — you know, policing, facial recognition, army and warfare, using weapons and autonomous weapons. So it's interesting, you know, it is.
And when you were reading those out and mentioning some of those titles, you think about the pros and cons of each of them. When you were talking about the customer service one, I was thinking about how, five years ago, it was a nightmare when you rang up a company, wasn't it? You'd try to avoid the automated system — you'd press zero a load of times, or press hash, to try to get to speak to somebody and get away from it. Whereas now it's a lot more pervasive, and you actually get through to some of these government services a lot quicker, dare I say it, in some cases. Then there's the therapist element — that's one I hadn't thought about before. That's really interesting, where I suppose there's lots of bots now being created to support people who are dealing with tension and depression through things like COVID, and where you can triage a lot of that, you know, in healthcare.
You've got all these pros and cons, haven't you? It's the very thesis of a moral decision: just because you can do it, should you do it, and have you considered all of the potentials? I mean, the therapist one — absolutely, anyone would largely have argued over time that you want to speak to a person and have a therapist help you through your problems. But we're now breeding a generation of younger people who are more comfortable communicating with a device, with technology. So for them, a therapist that is a chatbot talking them through a problem is not as big a turn-off as it perhaps is for someone of your or my generation. And that's where you get these interesting generational shifts as well, around expectations.
Absolutely. Yeah, one of the other ones that jumped out at me was the judge element you put in. I remember, in one of the early podcasts, there was something about a judge in the US using data, and there was a big article about whether they should or shouldn't. It was all about using AI and some of that data analytics about reoffending rates to support the judgment, rather than to be the judgment. So you're not saying that just because Dan lives in a particular postcode, his chances of reoffending are quite high — but actually bringing that data in so you can make a better call, rather than it making the call itself.
Yes. Look, I know the example you mean. I think it was a system that a couple of the US states or jurisdictions were using, called COMPAS, which was a tool to, as you say, assess a person's reoffending risk based on a whole bunch of factors, including race, ethnicity, age, the area they grew up in, schooling, and all those things. And I think what it shone a light on was that all of that data tells part of the picture of a person, but the judge sitting in front of a human being who is talking and explaining their case, using the human conversational piece of it — that's where a judge really comes in. That's what a judge is paid to do: to use their judgment, which is that process you do in your head where you think through the scenarios and make a judicial call on an outcome, using the law as a backdrop to those decisions. So yeah, look, it's great, and you're right, there are so many of those examples.
So when we look at that, and the fantastic work in the past and the historical elements to this — why is responsible AI so important now, and what makes AI different in this area?
Yeah, look — the reason it's become so important today is not that AI is new. As we've talked about, AI has been around for a long time. But, to the point you made earlier, Dan, it's that pace of acceleration of AI: just in the last five years, the growth, and the proximity to the human level of "intelligence" — I'll use inverted air quotes — reaching parity with humans on some of those very human things. So, think about what we are at the core of the human experience, you and I and everybody else: we go out, we see the world, we hear the world, we perceive the world, and then we interpret that in our brains and make decisions based on it. And AI has always been about how we make that happen in a computer system — how we emulate those things. What it has always lacked — I mean, historically AI lacked the computational capability and the data, but that's where we're accelerating so quickly now — is that humanity, the empathy, and in some ways, and this is the ironic thing, the cognitive biases that drive you and I to hear all of the logic in an argument and then go ahead and do something else, because we love to do it or we see it as the morally right thing to do. You know, think about when you chastise your kids for doing something wrong. My daughter broke my TV last week. She dropped a toy and smashed the TV on the wall. Big TV, a lot of money. And arguably, if you look at that from a data perspective, you'd say: well, she broke it, she has to pay for it.
But of course, cognitively, I'm not going to go to my daughter and say, right, I want, you know, a few thousand dollars. So it's that humanity in how we make those decisions — that's what's obviously lacking.
It's a good example.
Yeah, it is. And so, historically, traditional intelligent systems in computing: you gave them an input, they would give you the same output. However you inputted the data, you'd always get the same output. AI systems are different: as we know, they learn, they adapt and they change based on that input. And this is where responsible AI comes in, because suddenly AI systems are capable of making decisions that we can't always predict or expect, and they are wrapped up in technological capabilities — complex machine learning algorithms and models — that are largely a black box to everybody else, and often to many of the people involved. And that inability to see inside the box, and that risk of not knowing what the system is going to do, requires us to think about it, because what we do know for sure is that the system is never going to make an empathetic, human, moral-based judgment. It's going to make a data-based judgment.
It's going to go through all the data and say: I've seen a billion pictures of cats, and I'm telling you categorically that this horse is a cat, because it's got four legs, two ears, and a tail.
And of course, you and I know that's, you know, not right.
Yeah.
So that's the difference. Today, AI is so close to what we are as humans, it's so capable of doing so much, and yet there is so much potential risk if we don't truly take time to understand how the various streams of outcomes could occur, based on the data and the algorithms we're using. That's why it's so important today.
Yeah. And I suppose, thinking about the advancements of AI and the pace of innovation I mentioned earlier — when you look through the timeline, doing research for this podcast episode, I think it was 2016 when vision AI started to come much closer to parity with humans in the real world. Then we went from vision to speech, then interestingly to reading and translation, where there's been a lot of work over the last couple of years, especially in education and around that accessibility area — I know we did a podcast episode on that specifically. For me, I'd thought there would be a bigger leap between vision and speech recognition, but they almost came together, and then reading and translation, and obviously speech synthesis — they all seemed to come together at about the same time, around 2018. And then we've started to push it even further with bots and things around language understanding, and real parity when it comes to answering questions and things like that.
Yeah. Look, it's interesting. As you say, it's one thing to get a computer to know how to look at something or read something or say something, but the really key things are interpretation and how it understands what it's just read or seen. And that's where translation and language understanding have got to be done right. I mean, there's a great example going on in the press right now around the issues of translating the COVID procedures and rules for people in the country whose first language isn't English, and we're seeing all sorts of furore because they're just translated badly. Now, I'm not saying they're translated by bots or computers, but as an example: translation is about enabling everybody to be included, and this is a really good one when we think about why responsible AI is so important. If computers give us nothing other than the ability to translate so many languages so quickly and interpret them so well, then you've got this capability to make sure that everybody is able to know what's going on, for example, and follow the right pathway.
Yeah, totally. It's fascinating, really, when you start thinking about it. And going back to our human parity element — I know we've mentioned that a couple of times — but when you start thinking about human parity, we forget sometimes that human decisions are actually pretty poor in a lot of cases as well. We always try to get parity with humans, when actually a lot of these decisions could be better. And I know that becomes an ethical thing as well, because if you can get
a vision recognition system that has a better percentage of getting an image right than a human, then it can become better than a human at those things. But we still don't trust it; we still think we are better at making that judgment call.
Right. Well, yeah — and using a couple of examples we've already mentioned: you've got the judge sitting in front of a person on trial with all the data, but then making a judgment, a human judgment, and that's a better outcome. But then a doctor, who has to deal with so many patients and look at so much data to make assessments — if the data can be fed to the doctor in an AI-driven way, so that they can focus in on the core problems as opposed to having to look at the big spectrum of information and try to read it from left to right,
that makes the doctor better at doing the doctoring, if that's a word — but you know what I mean. So yeah, you've got those two narratives: sometimes AI is better than us, and sometimes AI is probably not — it's a good tool, but it's not the answer.
That's right. So there are risks in this. There are plenty of risks where AI has pretty big consequences. What are your thoughts on some of those consequences?
Yeah, maybe we can bounce around a few of these. Look, the number one problem child, if you like, of the AI space in terms of ethical and moral challenges is facial recognition. We've come on leaps and bounds in our ability to identify people facially. Think about our phones today: they do this as a matter of course every day when you log in, recognizing you based on 25 or so unique attributes around the distances between your eyes and all that kind of great stuff. Amazing technology, and it can change the way we all access systems and create new accessibility options for those who are unable to use typing or other mechanisms. So it's a great idea. But of course, if it's poorly designed or doesn't get fed enough data, we see these examples of facial recognition that isn't able to distinguish between skin colors, or recognizes men and women differently if they're transgender. Across all of these different scenarios, it can create really bad outcomes for individuals. And of course, facial recognition to help people get in and out of a secure area — say, a prison system — great thing. Facial recognition on the street, used by police to identify individuals who might commit a crime — not such a good thing. So it's the same technology, but the implementation carries a high risk.
Yes, and it's really interesting, isn't it? Because as you're talking, things are getting a lot better. But there are two elements that jumped out at me. Obviously the design element is really important — making sure, and I know we make a big effort here, that we get as many different representations from different backgrounds when designing these things and designing the algorithms, and thinking about the different data we use to train some of these algorithms as well. That's really important, and I know we talked about that in the machine learning episode. And the camera technology is getting better, so some of those things will hopefully get more and more precise as we go on. But then one of the other things you mentioned earlier was the black box of AI. People kind of assume that this black box and the algorithm inside it are set, and never need to be referred to again or changed. So, for example, when AI starts determining that men are better candidates for a job, or get better credit at a bank than women — how do we see that unfairness and then account for it inside that algorithm, and how does it keep getting feedback to make itself better? But then, when that algorithm inside that black box is adapting, how do we check on it — like a, what's it called, the oil dip thing in your car? Jeez, a technology podcast and I can't even get that right. But yeah — being able to have a test of that algorithm and check what's going on. We don't know what's in that algorithm, so what do we do to share that fairness with people, I suppose?
Yeah. I think we'll come back to that, because I'm going to talk a bit about some of the things we're building, and that's one area in particular: how to do that check. Because we're always going to make mistakes — I mean, to err is to be human. Look, we're just banging out the quotes here, aren't we?
But that's exactly right: we are, by our very nature, going to make mistakes, and we need to be better at understanding and capturing those mistakes. So yeah, it's a big one, that black-box challenge. The other one that we see a lot in the news these days is this issue of deep fakes, and as we progress towards the US elections in November, I'm sure this will be an interesting area. Deep fakes — most people probably recognize them as funny videos on the internet, where you see somebody of importance, or a celebrity or star, saying something completely out of context, and it's been faked. We're using AI to move the mouth and create that example of somebody saying something.
But of course, there have been striking examples. The one I saw, which is sort of ironically horrendous in thought, is Martin Luther King's famous speech, but with Donald Trump giving it — they've taken the speech and adapted Donald Trump's voice to say it.
And how do you know what's real and what's not? The reality, and the challenge with deep fakes, is that it's so good it's very hard to tell. So it starts to bring about this question of disclosure and transparency, and the uncanny valley: how much do we want computer systems to look and behave like us, or do we actually want some kind of disclosure — a pop-up at the bottom, "warning, this is fake" kind of thing — so you know what you're looking at? This is another big area, I think.
Yeah, no, absolutely. And I suppose we can't talk about unintended consequences without mentioning the Tay bot that Microsoft did — we have to be proud of our failures.
Yeah, absolutely. I think the way that was dealt with was fantastic — the responsible way, when things can go wrong. When you are pushing boundaries and developing different technologies, and you create those algorithms, and those algorithms take on a life of their own and develop themselves, then things can go wrong and you end up with unintended consequences. What were your thoughts on that actual Tay example?
Well, it's an interesting one, and I've looked at it a few times, because I find myself talking about it when I talk about unintended consequences. The idea of Tay was to be a social experiment, to see if you could build a chatbot that would capture the zeitgeist of a culture and a group of people and reflect it back. It was built on an example of something we already had in market called Xiaoice, a Chinese-language bot that has grown in usage. It's super popular, it's still out there today, and it's very much part of Chinese culture — people know and talk to Xiaoice on WeChat and other places. The idea with Tay was to build something for a different audience. And of course, the intention was there, but people fed it data — most of it pretty horrendous data, hate speech and all those things. And the bot's job was simply to interpret what people said,
recognize that as the culture, play it back, and see what that led to. Of course, what it led to was a hugely racist and unpleasant experience. We're not proud of the outcomes, but we certainly learned a lot through the process of doing it. What it leads to, I think, is the other big one in this set of risks, which is the disproportionate impact challenge. We built these things like Tay — now, Tay was a real live one, so she was learning from language at the time — but a lot of chatbots or engagement AI systems are built on historical data. So we feed them all this information: these are our employment records for past years, these are the social services records on who's claimed benefits for the last 20 years, all those kinds of things. And then the bot, or the system, or the AI, simply looks at all that data and goes: you know what, I'm looking at this information and I see almost zero indigenous people in the socio-economic data for the last 50 years; therefore indigenous people are not relevant to me going forward. That's the calculation. And I'm certainly not saying that — I'm saying that's how the bot system would see that situation. It just reflects a point of view that was wrong, that we shouldn't have had, but it reflects it back, because it's only as good as the data you give it. So you get that very disproportionate result.
I used to see that quite a lot when I was teaching. When I first started teaching — it was 15 years ago or so — I was a high school teacher, and in year seven the kids would do a test around spelling, a generic test on spelling and maths, and some reading comprehension. Then you'd be given an A4 document for every kid in the class, and it would tell you, like a box-and-whisker diagram, the likelihood of what GCSE results — the HSC results kind of thing — they were expected to get. Now, nobody ever knew what that algorithm was; it was a black-box algorithm. It was AI, technically, and it would give you an idea, so when the kids were in your class you'd supposedly know what their academic ability was. And it was wildly off. I was teaching in London and we had lots of people who came in from overseas, and the algorithm at that point was based predominantly on the UK population, because that's where the data was.
So a UK male who scored this on the English literacy test was going to be at a B grade, but it was wildly incorrect for some of the people coming in from overseas — from the Asia region, for example, or from Europe. It was wildly inaccurate. It was a good indicator to give you a bit of an idea, but you'd look at it and it was almost laughable, really. It would give you some indication, but it was very, very disproportionate in its results.
And you look back on it now, and obviously it's clear to see. But this is part of the problem as systems are being built: when you're deep in the middle of it, the logic is — and obviously, when you're building systems, the logic is — I'll use the data I've got and I'll create the best outcome, because that's the scenario. But it's that thinking about the broader accountability. So maybe that's a good time — and being time conscious as well — to move into a little bit about how we've built our principles and our standards, and what's going on at Microsoft, because I think it's probably good to share some of that. What do you think?
Yeah, definitely. Okay.
So, Lee, tell us about that journey over the last couple of years.
Yeah, look — we've touched on a few of these things through the call, and it's probably good to shine a light on some of the things we've actually done. Because, as I said up front, we started with this idea of ethical principles, that has led us on a journey of responsibility and a model, and now we're building tools. It's been a long journey. And I should preface this by saying we're not claiming we have the best approach; what we're suggesting is that we've built an approach that works for our business, as a software developer, as an organization that builds tools, and as a company that runs and operates AI services. There are learnings to be had in there, and we're continually learning. It's been a journey over the past few years, starting with some thought leadership pieces around what the future of computing looks like as we move more and more towards AI, machine learning and data-driven systems — that was the book we published, The Future Computed, which we should share in the links as well, because it's free to download for anyone who wants to read it. And then we had to think about: well, how do we do it? So we created some internal mechanisms. The first thing we did was create our Aether working group — our AI and ethics in engineering and research team — and we defined six principles, the things that are most important to think about when building responsibility into your AI. Fairness, as we've touched on: it doesn't unfairly impact any person in society. Inclusiveness, as you said earlier: make sure when you're building these AI systems that you're thinking about everybody, from every segment of life and ability. Transparency: be sure that you can shine a light on your system and explain what it does. Accountability: own it. When you build something, it's yours, and you need to own it and take responsibility. As we said, we are going to make mistakes, but own those mistakes and fix them; that's how you build trust. And then the last two kind of go hand in hand: reliability and safety, and privacy and security. Build good systems. Don't do this in a way that doesn't think about the data footprint you'll leave behind, or the reliability of access to that data — you're potentially collecting images and data about people.
And a question on that front then — and I know this isn't a specifically Microsoft-focused podcast — but we've got a large partner ecosystem, and essentially we are developing the bricks. So Microsoft, or any company, would develop a cognitive system and have its own principles about where it would be used, but in a lot of cases it's our partners that build some of this stuff. So what are the processes then? Just being devil's advocate here: you say, well, great, Microsoft has put all these principles in place, but what's stopping somebody going out and just creating a drone with a camera on it to do facial recognition, for example?
Yep. Look, it's a good point, and it's that thin line. Someone like Microsoft — and I speak on behalf of Microsoft, because that's who I work for —
we take a responsibility to say: look, we build a platform and we build technology, but we're not the police, and we can't be seen to be policing how citizens, private companies and governments choose to use technology. What we can do is invest our people, our time and our resources into understanding the risks, understanding the challenges, understanding the things that need to be considered, and then sharing that with them. And that was part of the journey:
we built the ethics principles, which were kind of a stake in the ground to say: look, these are the things we think are important. But of course everyone takes a different view of them — my view of fairness is different to your view of fairness, and so on. So we started to build some practicable examples of these things that could actually be used by our partners, by our employees, by the ecosystem: not just saying, okay, we think you should not do X, but more about saying, okay, if you're going to build a chatbot or a facial recognition system, think about the resolution of the images you're capturing. Are they good enough?
Yes.
Are they good enough for the accuracy of the service you're trying to provide, and are they commensurate with the risks of what you're doing? So, if you're just using facial recognition to count people, fine. You obviously don't need to keep the data; you don't really need to know much more than perhaps identifying whether it was a man or a woman.
Yeah.
So it's pretty simple. But if you're doing it to, say, gain access to a government office so you can claim social services, you don't want to mess that up. You want to make sure it's inclusive and everybody has access to it, so you need to put a level of governance around it. So we built those kinds of guidelines, if you like, and built those processes. But then we also started building, internally, the methodologies we're going to use as a company to ensure we get it right, because there are two sides to this. There's Microsoft building products that people will use, like Azure LUIS (Language Understanding) or Azure Cognitive Services, and then, as you say, there are people who would just use the raw Azure tools and go and build their own AI service using machine learning. We need to do it ourselves, as well as giving it to customers and partners.
Yeah, absolutely.
So after the Aether committee was formed, in terms of that journey, what happened next?
So once we had an Aether committee, we started building some of these internal processes — the things I was just talking about. Something we call RAIL, which is our Responsible AI Lifecycle: how do we ensure that responsible AI is fed into the software development process? So think about that more from an engineering perspective.
Yes.
But then we also established something that I'm proud to be a part of, which is the Office of Responsible AI. These offices of responsible AI are in-country — so here in Australia — loosely defined organizational groups that take on a localized view of it. Because this is the other thing: Australian tolerances, expectations and processes around how AI gets used are going to be unique to this country, unique to our government and our governance, and to our government's approach to regulation and policy. So that's a piece of work we do locally now. It's not just something that happens in the ivory tower of corporate Microsoft in Seattle; it happens in the countries where it matters. And that's important — for us, of course, but also in Europe, in the Middle East and in other countries. These are critically important. Yeah.
But with that came this idea of a standard as well. That was the other big thing.
We put the offices in-country, but we also took all those principles and thoughts and practices and created something called the Responsible AI Standard. That's more about ensuring, as an organization, that you and I, Dan, and everyone else in our business understand what we are responsible for and what is expected of us all in applying a level of governance to AI. So it stops your fairness and my fairness bleeding in and taking different directions,
but it provides the freedom for you and I to
express our views of it
in a responsible group, but have the responsible group then define what is the right thing to do, the good of the many.
Great way to think about it. Yeah, absolutely. So where's the future now then? Where are we at the present day on our journey?
So, look, it's a three-part journey. It was: let's define what's important; let's build tools and practices to make sure we do it right repeatedly; and now we're at the point where we're actually building the tools and systems to enable people to catch those mistakes. I think we mentioned this near the top of the show — the idea that you need tools to be able to look inside the black box, for example. So we've put a bunch of tools out there. We've got InterpretML, which is a tool that will actually try to interpret your machine learning models so you understand how they're going to operate. Differential privacy policies — a whole bunch of great stuff. The one that really strikes me, the one I love, is a thing called Fairlearn. You can go and get it on GitHub — github.com/fairlearn, I think, is actually the direct URL. Fairlearn is a tool that you can use with your Azure Machine Learning systems today; it's sort of machine learning for machine learning. It watches your machine learning algorithm process data and sees where it makes decisions based on, say, gender, ethnicity, postcode — all those kinds of things —
and what it looks for is disproportionate allocation of answers based on any of those things we know to be hotspot areas of fairness, you know, the things we talked about.
So it's like machine learning looking at machine learning?
Essentially exactly that. It's looking at it and saying: okay, I see your data is giving you all this information, and you're starting to favor men over women in terms of who you're going to give the yes answer to and the no answer to. And then it shines a light on that for you. It doesn't tell you exactly what the cause is; it just shows you that a problem is starting to emerge. So it's that data drift: as data changes, the model drifts, delivers different answers, and may start to create a disproportionate preference towards a single group —
and Fairlearn will shine a light on that. And that's where we're going.
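[To make that idea concrete, here is a minimal sketch of the kind of check being described: a model's "yes/no" answers broken down by a sensitive feature so a skew between groups becomes visible. The dataset, column names and model below are hypothetical illustrations, not from the episode; MetricFrame and selection_rate are real Fairlearn APIs, but this is just one possible way to use them, not the exact workflow Lee refers to.]

```python
# Minimal sketch (hypothetical data and column names) of checking a model's
# answers for a skew across a sensitive feature using Fairlearn.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical application data: two features plus a sensitive attribute.
X = pd.DataFrame({
    "years_experience": [1, 5, 3, 8, 2, 7, 4, 6],
    "test_score":       [60, 85, 70, 90, 55, 88, 65, 80],
})
gender = pd.Series(["F", "M", "F", "M", "F", "M", "F", "M"], name="gender")
y_true = pd.Series([0, 1, 0, 1, 0, 1, 1, 1])   # 1 = "given the yes answer"

model = LogisticRegression().fit(X, y_true)
y_pred = model.predict(X)

# MetricFrame breaks each metric down by the sensitive feature, so you can see
# whether the "yes" answers are drifting towards one group.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.by_group)      # accuracy and selection rate per group
print(frame.difference())  # gap between groups: the "shine a light on it" signal
```

In practice a check like this would typically be re-run on held-out data each time the model retrains, so that drift towards one group is spotted before it reaches production.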
Because the thing that jumps out at me there — and I know it's not the same level, but it's similar technology — is Presenter Coach. I was using this with some schools a couple of weeks ago.
Oh yeah.
In PowerPoint, when you're talking — you know, I've got this habit of addressing audiences and saying "hey guys", and it's very male-orientated. It's just something I used to do when I was teaching, and it's a horrible habit I'm trying to get out of. But it basically tells you about the gender bias in your speech while you're talking. Really, really interesting, and I suppose it's a similar kind of thing: using AI and ML to look at your ML algorithms and say, "Hey, this may be slightly biased here towards a particular gender or role."
Yeah, it is. It's really interesting, and what it shines a light on — and we probably should start to wrap up some of these thoughts — but
I think what it really shines a light on is that it's one thing to embrace AI from a technology perspective — and hopefully we as a company make it super easy for everybody to get access to AI technology and to data, because we make it cheap, effective, affordable and scalable in the cloud — but as an organization, as any customer or individual trying to do this, you've got to approach it with a very different mindset,
because you've got to look at it and go: I'm not going to build something, create it and then just deliver it. I'm going to own something; I'm going to nurture a child, if you like, and I have to keep nurturing it. And so when you build tools like Fairlearn, it's one thing to build mechanisms so that you can see the problem, but you need a mechanism in your business to then catch that problem, fix it and build it back in. It's that life cycle — that idea that you need to be able to find change, fix change, adapt to change, and then keep looking for more changes.
Yeah. Absolutely.
That's really...
So, to summarize some of the points for today — because we've covered a lot and I was making notes as we went — I suppose some of the key points are: Microsoft and other tech companies have a responsibility to build good, ethical systems and to respond really quickly when issues arise, as you've done and as you've seen in the news recently. Then there's this responsibility for AI adoption and usage for everyone in the business — whether that's in a tech company, a government agency implementing these, or a school implementing AI — and that responsibility needs to be clearly articulated, led and continually emphasized. So it's not a one-off; like you said, it's a nurturing element.
It's not a checklist.
And then the one you just mentioned, which is really interesting: it's more than just governance processes. It's a culture of including everybody, of questioning and empowering people, of doing that in a way that feels safe, and of being able to question and push back and discuss these things, with those tools to help.
Yeah, it's funny, on that one I've seen a couple of examples where people have said, "we've built a government responsible AI checklist", and they've got these tick boxes:
have we been inclusive? Have we checked that the data is not sensitive? And you kind of go: yeah, those are important things to think about, but they're not a checklist that lets you just move forward. They are a constant evaluation.
Yeah, that's what really jumped out at me today about that entire process. And I suppose we also talked about the role of governments: it's the role of governments to regulate and define how AI operates, and depending on the culture and the country, the acceptable limits in China or the US or Australia will be different from other countries. And then I suppose what we've done today, as we've meandered through, is really shine a light on what it means to be human — what's so unique about our ability to cognitively reason and think for ourselves, and how we develop these tools and that social construct of AI around us.
Yeah, and we probably didn't say enough about that point. You're right, we talked a lot about it, but the core of it is: AI is a tool that helps humans live and deliver richer outcomes for other humans. That's the purpose of AI — to create better experiences for humans in this world. And if we keep that in mind, then we can do amazing things, I think.
Yeah. No, absolutely. Well, let's end on that one. Let's do amazing things and then uh I'll catch you at the next podcast. It was fascinating today.
Thanks, Dan. See you soon.
Thanks a lot.