Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Feb 5, 2020

This week we're joined by Lee Hickin, Microsoft Australia's National Technology Officer, who first of all tells us what he does and his background, and then talks about interesting artificial intelligence projects within the public sector in Australia. He talks about the fish counting project in Darwin Harbour and the work being done in Kakadu National Park. What's clear is that Lee sees these successful projects as being a blend of technology merged with good professional judgement (something that makes sense in education too). We also talk about the responsible use of artificial intelligence, and what we're learning about good AI use - and why you can't just sit back and do nothing until the dust has settled. In fact, nearly half the time is spent discussing responsible AI, and frameworks for ensuring that we're using artificial intelligence well, in the service of users.

 

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 2
Episode: 5

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.


Hi, welcome to the AI in Education podcast. I'm Dan. If you remember, in the last episode we talked to Kate. And continuing that theme, I've sent Ray on the road to find some more fantastic people who are doing great work with AI. Let's listen to his interview with Microsoft's NTO in Australia, Lee Hickin, and look at how his thoughts can be applied to education later in the podcast.
Thanks for joining us, Lee. Tell me a little bit about you and what you do.
Thank you for inviting me, Ray. Thanks for having me on the podcast. It's great to be here. Let me tell you a little bit about what I do. My role here is the National Technology Officer for Microsoft Australia, which is not a title that you hear a lot in many other companies. It is somewhat of a unique title to Microsoft. I can tell you a little bit of the history of it if you like. It's a 20 or 21 year old role at Microsoft. It was initiated by Craig Mundie, our chief strategy officer, many years ago. But what I do here is fundamentally a government liaison type of role, focused on helping Australian governments - national, state and federal - to understand, get a sense of, and be able to take advantage of technology as it evolves. So, you know, here at Microsoft, we're building technology today. We're building technology for the future.
My role is to ensure that our governments understand the long-term economic, social and political opportunity of the technology we're thinking of, and what technology we think is important to our national interests.
So, you're the "robots are coming" man.
I'm the "robots are coming, but it's okay, they're our friends and we want to work with them" kind of person. Yes.
Great. Okay. And what's your background? Are you a technologist? Are you a policy maker? Have you worked in government? Where do you come from?
Well, don't tell anybody, but no, I'm not in government and I'm new to government. So, for me it's a great learning experience. And actually, it's quite important for me as an individual to be close to the Australian government to learn about it, because as a citizen, you know, I think there's a lot we can learn from understanding how our government really thinks. But where did I come from? Okay. Well, I'm older than I look. I've been in IT and technology for nearly 30 years now. I've done a number of roles, both here at Microsoft and at other companies. But I've essentially followed that almost standard career path. I was a technical person. I was writing code 20-odd years ago. I was writing Pascal code. Actually, I started out writing COBOL code. That puts an age on me. And I developed through a technical role. I went into sales roles, then technical sales roles. For a period of time I was working in product marketing here at Microsoft in the IoT business, which was a fantastic experience, being in a business that was growing.
I don't know how you do it, because I can't imagine there are any boundaries to the role; it's just endless in the conversations about the technology, but also about the policy implications and all the different use cases in the public sector.
It's funny you say that. I was actually talking to somebody this morning who's just joined the company in a similar kind of role, and I said, look, the one skill you need - you can learn everything about Microsoft technology and products, but you'll never learn it all - is the skill of being agile, because the role is absolutely as you say. I'm having everything from a conversation around policy and how to implement policy at a government regulatory level, then I'm having a conversation about a particular stream of our technology, whether it be security or identity or databases, and then I'm having a conversation with a partner about how to build a partnership with Microsoft based on the technology they have. So it is a very broad role, and that's kind of what draws me to it: I'm being pushed every single day. And I'm not afraid to say that every day I sort of wake up and think, what have I got to learn today? Because I will be challenged every day. I have to know a little bit about everything. So I'm having conversations about quantum computing, around AI obviously, in different sectors and different parts of the industry. So it's very broad, very difficult, but I was told when I joined that it's the best job in Microsoft, and I'm sticking to that, at least for the moment.
Well, as long as you stay a lifelong learner, then I guess you've got a chance of keeping up with things.
Well, absolutely, and this is one of those roles where lifelong learning is actually a job requirement. So, the good news is this is the AI in Education podcast. So, we're right there with you on lifelong learning.
I'm in the right place.
But I don't want your brain for what you know about education. Sadly, what I want is to learn from what's happening outside of education, to see how that's relevant to education. So, you must see some interesting stuff in AI outside of education.
I do. And look, as a sort of preface to that, I'll talk a bit about some of the scenarios that I'm seeing - the kinds of customers and the market dynamics that we're seeing around AI.
So, a big chunk of my role as the National Technology Officer is actually to hold the seat of the Australian subsidiary's responsible AI champ, which puts me in the position of being essentially the owner and the driver of the Australian engagement with our customers and our partners and our government on the right way to do AI. I'm sure we'll talk more about this responsible AI approach. So, look, it does give me the opportunity to talk to a very broad range of customers and partners dealing with AI, and I'm seeing a huge amount of interest. I mean, there's interest from every segment, every sector. If I look across government, commercial areas, financial, retail, healthcare - agriculture in particular is a very strong one in this market - all the way through, and then into, of course, the startup and the innovation space, where I deal with a lot of the startup hubs here in Australia, and startups that are just looking for a way to use AI as a mechanism to either disrupt a market or solve a real problem in a particular space. So there's a huge range of potential ways in which AI is being used that I think would probably have a lot of similarities to the education space.
So tell me about some of those stories.
Yeah, sure. Okay. We're seeing a lot of interest, and I'd loosely bucket it into two sorts of segments of the market that I see as being the most progressive in that space of using AI. There is the - I won't say philanthropic, but there's the AI for social impact, if you like. So a lot of the work is in environmental sciences, biodiversity, and generally looking for the ways in which AI can be used to better understand the planet on which we live. We have a program here at Microsoft called AI for Good, which is a mechanism by which we try to find those engagements and amplify them. And there are two that have stood out for me over the last year that I've been working on, just because of the unique nature of them. The first one was the work we did with the Northern Territory government and their fisheries division, back about six months ago now, and the work there was this really unique challenge. I mean, this is the great thing about AI: where does the problem come from to arrive at this idea where AI solves it? The fisheries need to understand the levels of fish stocks in Darwin Harbour and the surrounding waters. And to do that today - you kind of laugh when you hear these stories - they will either put divers in the water to look at fish, and I kid you not, they would do that, and of course the challenge there is that the harbour is full of large crocodiles that are quite dangerous. So it actually became a matter of human safety. So they looked for different ways to do it. They would put cameras in the water, but then literally have these highly skilled scientists watching six hours of video footage, counting one, two, three, four fish on the screen. So then you see the problem and you go: well, obviously AI - one of the key fundamentals of AI, if we think about AI in the sense of the creation of human-like senses in an artificial way.
So, the ability to see, listen, speak, and learn in the same way that we do. This was the same thing. Well, why can't a computer look at that picture and figure out what's going on in it better? So, we worked with NT Fisheries to essentially do facial recognition for fish: to look at these images of fish, identify what is a particular type and class of fish, and then count them for us. Things that computers are very good at doing - binary, basic mathematical kinds of calculations. Is it what I think it is? Yes, it is. Tick, increment counter. Of course, the challenge is fish don't sit still and stare at the camera. They swim in and out of the camera all the time. They're constantly moving, and they don't have faces like we do. If you understand how facial recognition works from an AI perspective, there are about 24 data points that are measured across the shape of your face: the gap between your eyes, the width of your nose, the size of your mouth, all these things. Well, you know, fish have similar visual elements, but they're not the same. So the challenge of something as seemingly rudimentary, from an AI and a mathematical perspective, as counting fish and recognising what they look like actually becomes quite hard, and AI transforms that. So that for me was one project where, I think, facial recognition really took on a whole new dimension. And fundamentally, why did we do it? Because we want to better understand fish stocks for the long-term sustainability of fishing licences, and we wanted to keep humans out of a dangerous loop. So that's how AI can be really impactful. The other one I want to mention as well is, more recently, the work we've done with Kakadu National Park. And again, with Kakadu National Park, it's similar kinds of concepts and outcomes. What we want to try and understand is better land management.
How do we sustain the land we have and make better use of it? But what's most interesting about this one is, again, AI being visually used. So we worked with drones - sending drones up and down great tracts of land across the Kakadu wetlands to take images. We take thousands of images, we stitch them all together, and what we end up with is a picture of the land over a period of time. We do this over a sustained period, and you see the changes in the land. But what I think was most interesting about this, and this is really the edge of something pretty transformative, is that we didn't just look at this as a scientific research process - how do we understand the science of this? We engaged with the local Indigenous elders and the local Indigenous park rangers, because we can capture the data and we can use AI to understand that data, but do we really understand what the data tells us? And this is a fundamental kind of challenge with AI: you can do all the smarts you want, but if you don't understand what you're looking at, you don't really make good decisions. So by introducing the Indigenous land knowledge, we understand, first of all, that there are six seasons, not the four that we typically think of, and those six seasons are driven by changes in the land, and those changes in the land really define what is considered to be a healthy state of land: the number of magpie geese there, the scope and growth of para grass, which are a couple of the metrics we looked at. So I think about that project: we're using AI to capture and measure the data, but we're using Indigenous knowledge to understand that data.
You know, that's kind of the edge where we're now bridging between purely scientific research for the sake of science, or technology for the sake of technology, and technology combined with essentially ancient knowledge of how things are done, to create something that's good for everybody and a better outcome.
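The fish-counting idea Lee describes - a vision model classifies what's in each frame, and the computer does the "is it what I think it is? Tick, increment counter" part - can be sketched in a few lines of Python. This is an illustrative toy, not the NT Fisheries system: the species names, the confidence scores, and the flat confidence threshold are all invented for the example.

```python
# Toy sketch: given per-frame detections from some hypothetical image
# classifier, tally fish by species, keeping only confident detections.
from collections import Counter

def count_fish(frame_detections, min_confidence=0.8):
    """frame_detections: list of frames; each frame is a list of
    (species, confidence) pairs emitted by a hypothetical vision model."""
    counts = Counter()
    for frame in frame_detections:
        for species, confidence in frame:
            if confidence >= min_confidence:  # ignore uncertain detections
                counts[species] += 1
    return counts

frames = [
    [("barramundi", 0.93), ("snapper", 0.55)],  # low-confidence snapper dropped
    [("barramundi", 0.88)],
]
print(count_fish(frames))  # Counter({'barramundi': 2})
```

The hard part in the real project - fish moving constantly and lacking stable face-like landmarks - lives inside the model producing those detections; the counting itself really is the easy, mechanical bit.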
That's a really interesting scenario as well, because when I think about education, it's that blend between the things that the data tells us and the things that are acquired wisdom.
It's funny you say that, because it reminds me of another really good example. Often there's a thought process that, yes, people instinctively have this great capability to hold, retain and learn knowledge, and we talk a lot about the learned knowledge or the learned mind of an organisation, for example. But there's often this fear that AI is going to come in: well, we can just program a machine to do what you did, and we no longer need you. But we did a piece of work with Downer EDI, who operate and run the train systems for most of metro Sydney and some of country New South Wales. And again, same thing. We had this tool that was capturing all this data off the trains to help better understand when trains will fail. Now, you've got engineers who've worked on trains for 30 years, and when a train comes in off the track, they can look at a part of the battery or a part of the rolling stock and go: that one's going to fail in about six months. I just know, because I know these things. And you capture that knowledge and you think, well, that's amazing insight. How do you predict from that, and then start reducing the failures of trains and getting better, optimised outcomes commercially? But you might think, well, the impact on that person is that suddenly their value is challenged, because the AI is doing what they did. But the interesting thing was, with the data we captured for that particular customer, we created this tool through Power BI that let them really just play with the data. So suddenly you have a train engineer who is deeply passionate about trains and understands trains intrinsically, but has never had the capability to see the data in this way.
We found that they were actually then going into the tool of their own volition, playing with the sliders, looking at the data, and actually looking for the things that they knew were there but didn't quite know how to draw the line - the connection - to. So what it actually started creating was this sort of newfound passion and excitement for: well, what else could I do? What else could I learn, now that this data has given me this sort of trigger point to learn that there's so much more information out there, if I can look at the bigger picture?
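The engineer's rule of thumb - "that battery is going to fail in about six months" - is, at its simplest, a trend extrapolation over sensor readings. Here's a deliberately tiny Python sketch of that idea. The voltage figures and threshold are made up, and real predictive maintenance models are far more sophisticated than a straight-line fit; this just shows the shape of the calculation.

```python
# Toy sketch: extrapolate a degrading battery-voltage trend to estimate
# how many days until it crosses a failure threshold.
def days_until_failure(readings, threshold):
    """readings: one voltage sample per day, oldest first."""
    if len(readings) < 2:
        return None  # not enough data to see a trend
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)  # volts/day
    if slope >= 0:
        return None  # not degrading, no predicted failure
    return (threshold - readings[-1]) / slope

# Voltage dropping ~0.1 V/day; hypothetical failure threshold of 11.0 V.
print(days_until_failure([12.0, 11.9, 11.8], threshold=11.0))  # ~8 days
```

The interesting point in Lee's story is exactly the part this sketch leaves out: deciding which readings matter and what the threshold should be is where the 30 years of engineer knowledge comes in.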
So that's interesting, because what that implies is that what was data science - the realm of the propeller heads in the past - is becoming something that's more accessible to everybody.
Yes. But I think we need to caveat that with a little bit of thought, a conversation more about: well, what does that really mean, and how do we do that? And not so much is that sustainable, but how do we make sure that we are getting the very best out of all the people that are deeply skilled in some of these areas? Because yes, in principle, what we as a company want to do, and I think what we fundamentally believe, is to take the capabilities of something as rich and as complex as AI - and let's not hide the fact that we talk about AI over breakfast with our kids as if it's a thing that we do, but the reality is it's still intrinsically, not an unknown subject, but a complex area. It's not fully understood, and it's made up of moving parts. We often say "AI", but what we've lumped together is the construct of data and big data capture, the idea of machine learning and modelling, the data science work of actually understanding that data and feeding the right data in to get the right outcome, and then, obviously, at the tail end of that, really making use of that data - like I mentioned with the Indigenous example, understanding what it means. So all of those pieces are bundled together into this construct of AI. And as a company, we talk about this concept of democratising AI. We fundamentally believe that there is huge power and potential in AI if we can make sure that everybody on the planet has the ability to use AI to solve the problems that are in front of them. You know, we all individually deal with many, many problems around the world in varying circumstances. AI has the opportunity to help with that when you have the right data and the right tools in front of you. But the art of data science is still a skillful art.
But where we can take the need to capture huge amounts of data, and the cost and challenges of doing that, wrangling that data, the access in real terms to a scientific model, and make it accessible in a way that somebody who fundamentally understands an industry or domain area, but is not a data scientist, can extract some value from those two things - that's what we're trying to achieve. So it does democratise that. But I think we need to recognise the value of the skill that being a data scientist really is, and that ability to understand how to feed AI. And, to sort of tie back to that conversation around responsible AI, one of the key elements of responsible AI is that what you put into it will largely dictate what you get out of it. There's a probably well-known phrase that everyone's familiar with: what you put in is what you get out. And that's a data science skill - actually understanding the right way to feed an AI system to get the outcome you're expecting.
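The "what you put in is what you get out" point can be made concrete with a toy in Python. The "model" below is deliberately trivial - it just predicts whatever label dominated its training data - but that's enough to show two identically built systems giving different answers purely because of the data they were fed. The labels are invented for illustration.

```python
# Toy illustration of garbage-in-garbage-out: the same trivial "model"
# (predict the majority training label) behaves differently depending
# solely on the data it was trained on.
from collections import Counter

def train_majority_model(labels):
    majority = Counter(labels).most_common(1)[0][0]
    return lambda _example: majority  # predicts the majority label for anything

balanced = train_majority_model(["approve", "reject", "approve", "reject", "approve"])
skewed = train_majority_model(["reject"] * 9 + ["approve"])

print(balanced("loan application"))  # approve
print(skewed("loan application"))   # reject
```

Real models are vastly more capable, but the dependence on training data is the same in kind: curating and validating what goes in is the data science skill Lee is pointing at.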
So, just thinking about that equation a little bit: the other side of it, though, is I think we've all seen the stories of technologists doing something because they can, and then only much later does somebody ask the question of, well, should we have done that?
So what's your take on that? I mean, you must come across projects where technically what somebody wants to do is possible, but you're asking yourself a question about whether it should be done.
It is - I won't say it's the number one conversation that I have, but I would say almost every conversation I have with any customer or government around AI almost always, eventually, gets to this conversation of: are we doing the right thing? Should we do this? And look, that's a very different situation to even two years ago, when I think we were in a world where technical acceleration was driving a lack of consideration, if you like. So today, yes, absolutely, there is this construct of whether we should - are we doing the right thing? The challenge we have - and it's the right approach, you know, it's sort of the Spider-Man thing: with great power comes great responsibility. We recognise that as a company, and we urge our Silicon Valley and North Seattle brethren to think in these terms. What we offer as these large cloud-scale vendors is tremendous capability, and we recognise that. So we have to take some responsibility for it, and that's about creating that democratisation of the technology, but also creating the mechanisms and the culture, if you like, to actually think about those problems in the context of the bigger picture. Yes, we're solving this problem here today, but what are the consequences of this technology if it got deployed into scenario X or scenario Y? And that doesn't mean you shouldn't do these things.
It just means you need to consider more, and this is the fundamental difference between AI and largely any kind of very complex technology we've had before. You look at data analytics and big data capture work, and anything we've done where data is driving a decision outcome: it's largely, up until now, been driven by this idea that we feed it some data and a human makes a decision, because they look at what the data says. We make good decisions, we make bad decisions, but we make decisions that are attributable to an individual. When we hit the AI world, we've got computers making decisions based on data we've given it, that we may or may not trust, built by models from technicians, as you said, who have just built a model because, look, that seems like the most efficient way - a programmer mentality is: what is the most efficient way to solve a problem? Efficiency doesn't always equate to equality for all individuals or all needs or all outcomes. So you've got these sorts of unknown outcomes driven by a chain of events along the way, by individuals, technicians and others. So it's created this mindset where we have to not rely on the technicians to just build the models, but have the business, and those around the business, and those who lead and own businesses, take some responsibility for the impact of their investments in technology. It's a long answer to a short question. Sorry.
Okay. So, it's made me think of another thing, which is that part of the reason why Dan and I started this podcast was we felt that people would benefit from knowing more about what's behind AI, as well as what's on the surface. So, I guess my question now is, hearing what you're talking about, there's a whole load of mousetraps along the journey that could lead you to say, "Well, I'll wait until somebody else has found out where all those hidden things are on the journey." Why not do nothing?
Why not do nothing? Well, I hope we don't all do nothing. I mean, look, it would be easier to do nothing in some ways, because there's no risk.
But I think, you know, let's not demonise AI in the sense that it is this potential for great chaos and destruction, and all the negativity we see around it. Obviously there is a huge potential opportunity for AI, and we've seen that today in those narrow pockets where AI is being used in its most innocuous form. We have applications on our phones that help us better understand the world around us. For me the most obvious one, because I used to travel overseas quite a lot, is translating text. If any of you listening try to think back to what you might have done 10 or 15 years ago to try and travel in another country and translate text - getting a taxi in Korea, for example, was an almost impossible experience. Uber and translation tools and all these things have just made that so much simpler. So I can see how AI has that potential. But look, yes, obviously there's a lot of demonisation around it, and that could lead to the idea: well, don't do anything, because there's too much risk involved. What we're trying to do - and I think it's fair to say Microsoft is trying to take, and works towards taking, a leading position in the market here - is to make sure that AI is broadly available to as many people as possible, through that democratisation, through that simplification, by putting it into our tools and apps and services, so that we, as a human experience, get more comfortable with AI by using it. Because there's a fundamental thing here - and I'm an 80s movie fan, so I've lived through all of those movies that told me that the Terminator is going to come and destroy the world as soon as we flick the switch - which is to dispel that idea that AI is not to be trusted because it ultimately leads to an intelligence that will see how stupid we are and get rid of us. AI isn't that.
AI is just a mechanism today by which we can accelerate certain outcomes. You know, medical diagnosis: we can use AI to speed up that process and do more, see more people, help more people. We can use AI in helping people get better connected to government services. We're seeing that today here in Australia. We've done work with some of the government agencies to use AI in that sort of chatbot-style scenario, to simplify the process and help more people access services - and not just simplify it so more people can, but simplify it and make it more accessible. And this is another area where I think AI has a huge part to play: suddenly you have a computer that can be far more aware of the intricacies of the different human condition, and can speak to and listen to and engage with people with differing needs, and create a common experience that we can all have. And that's one of the fundamental tenets of that approach of responsible AI: inclusivity. So yeah, look, it's easy to say there's too much risk, let's not do it. I think if we provide the tools to make it available, provide the guidance on where the risks are, and then allow the humans and the individuals to build that trust, we start building better and more outcomes. Granted, there are, we know, scenarios around the world where AI is being used in ways that we would all largely consider to be things we don't wish to see continue - you know, lethal autonomous drones, and the stories we hear from China in terms of social score indexing - but it doesn't have to be like that. That isn't really the true image of AI.
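The chatbot-style government service scenario boils down to intent routing: mapping a citizen's free-text question to the right service. A real system would use a trained language model; the keyword-overlap sketch below, with invented intents and keywords, just illustrates the shape of the problem in Python.

```python
# Toy intent router (not any real government chatbot): pick the service
# whose keyword set overlaps most with the words in the question.
INTENTS = {
    "renew_licence": {"licence", "license", "renew", "driving"},
    "report_outage": {"power", "outage", "electricity"},
}

def route(question):
    # Crude normalisation: lowercase and strip basic punctuation.
    words = set(question.lower().replace("?", " ").replace(".", " ").split())
    best, overlap = None, 0
    for intent, keywords in INTENTS.items():
        hits = len(words & keywords)
        if hits > overlap:
            best, overlap = intent, hits
    return best  # None when nothing matched

print(route("How do I renew my driving licence?"))  # renew_licence
```

The accessibility point Lee makes is what separates a toy like this from the real thing: production systems handle varied phrasings, languages and needs, which is exactly where the machine learning earns its keep.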
It strikes me that the most optimistic headlines and the most pessimistic headlines are probably both equally wrong. There's some nice, happy ground in the middle where it's not contentious; it's adding value to people's lives and almost becoming invisible.
Well, look, and that's, you know - I think there's the great quote, which I'm not going to be able to remember at this point, because you never can when you have a camera and a mic in front of you, that the greatest technologies just weave themselves into everyday life. It's Arthur C. Clarke, I think.
Yeah. Any sufficiently advanced technology is indistinguishable from magic.
Exactly. And that's absolutely true. And I think that is the true magic of it, when it becomes indistinguishable. You know, my children - a 9-year-old and a 12-year-old who thinks he's 18, of course, because most 12-year-olds do - to them, technology and AI are just naturally occurring phenomena; that is just how the world works. That is unique. And then, when we think back on the story I talked about with Kakadu, where technology like AI is actually enabling a new conversation between our science and research organisations and our Indigenous population - those are the kinds of barriers that break down, and AI is an enabler of that. So those positives far outweigh a lot of the negatives, in my view.
I think that's great, because that almost reflects the same kinds of scenarios in education: how do we take the best of human capability and knowledge, and then pair it with better insights into data and better support for decision-making at scale? Whether it's a thousand students in a class group or a million students in a school system, it's that combination of those two things, brought together in the same way we've done with other technologies.
Look, I think, if I think about it from the education context, and having two kids in school going through those years, I'm very acutely aware of the importance of developing the mind. And, maybe controversially, I think we are a long, long way from being at a point where AI truly replicates the nuances of the human mind, and the capability of a human to make a good decision, and to make an inferred decision based on knowledge and data. AI can solve some really large-scale problems for us, and solve some problems where we just have physical limitations - our eyesight and our minds are just not capable of processing data at the speeds computers are. But our ability to understand what's right and wrong, what's good and bad, what decision is to be made in any industry, wherever your specialisation is - that's still a truly, uniquely human attribute.
Brilliant. Okay. Well, hey Lee, thanks very much for all of your time and all of your insights into what's going on. I don't think we've talked about the Kakadu example before, so that's really interesting to hear that scenario. Thanks very much.
No worries. What you need is an AI bot to go through all those words now and figure out what we learned.
Somebody will invent one one day. Thank you.
Thank you.
Well, Dan, what did you think of that?
We're the bots that are now going to decipher that entire interview, Ray, I believe.
Oh my word. And that's why we go and talk to people that are smarter than us, because I think we're just about to show a lack of intelligence, like typical bots. I mean, I thought that was a really fascinating conversation there,
with the different ideas, and how much time we spent talking about the responsible use of AI. It wasn't just about the whizz-bang technology.
No. And I thought the Kakadu example really shone through, especially with the professional judgement that added that extra value to it. What did you think of his comments around that?
Yeah, I thought that whole conversation was about how you can see so many things from using technology, and the technology can make so many smart inferences, but you then add on to it that human capability - what he was talking about in terms of the knowledge of the Indigenous rangers, some of which would be "yeah, yeah, I know that", and some of it would be "well, this data has revealed a new story to me".
yes
And that's where I thought there was an incredible parallel to education - that Silicon Valley view that it's okay, all you do is sack all the teachers and replace them with AI; that Silicon Valley mindset which says the teacher is the variable, and therefore if we get rid of the variable, everything's good. What comes through there is that it's that combination: sure, you can see some things with technology, you can use AI to predict lots of amazing things, but putting the professional judgement in parallel with it - that's about
helping people to achieve the best outcome, yes,
as opposed to replacing teachers, replacing rangers, replacing people with technology. Yeah, I totally agree, and I think that really shone through, and the parallels for education, like you're saying, with teachers and that ranger: the experience and the subtleties of being a teacher, understanding a thousand signals from each individual student, the emotional intelligence that you need, the perception that you need to put in place. All of that comes together, and I think that really shone through in that example, because when we look at even our marketing videos around AI, they all illustrate the technology, and sometimes we need to really shine a light on how that's used, and who actually brings that to life and acts on that information. So I think it's absolutely fantastic.
The other thing that came out strongly to me was when I asked Lee that question of, if it's so complicated and you've got so many things to think about, surely one option is to do nothing. So why not just do nothing? And his answer came back to: you don't want to wait, because you're losing an opportunity to keep up, but also to learn as you go. Because I feel like, doing this podcast, we could have waited until we knew enough before we started the podcast, but instead we said, well, let's dive in and use it as a learning process as well.
Absolutely.
And Lee's answer around AI is exactly the same thing is, well, don't wait, learn as you go. And
don't wait, innovate.
Sorry, that's a bad tagline, mate.
Dan, I'm going to send you straight back to the marketing cupboard. Um, but it's that way of keeping up with innovation. It's that way of learning things by doing them, rather than waiting until everything is settled and you can just follow along.
Yeah. And that comes across not only on the IT side of it in a school or university setting, but also in terms of the students as well. We look at the curriculum, and we look at how far behind the curriculum is with the latest technologies like bots and AI and cloud, and it's about trying to empower those students to understand the technologies of the future, as well as the IT staff who might be doing things in terms of moving to the cloud. So it covers a range of environments, including teachers and lecturers as well.
Yeah. And I suppose it answers that question of, well, why do we need to teach our students about this when it will change by the time they get to the workforce? And that's true: software changes, interfaces change, some things change,
but on the other hand we know that digital is going to be a bigger part of the workplace and a bigger part of our personal lives. And so having some of those conversations around what a digital transformation of a business process looks like, which might be a bot, or it might be some predictive analytics, there's going to be AI in there somewhere. It may not look the same when a year six child actually gets into the workplace, but the techniques are going to apply, and that digitization of processes is going to carry on.
And it was also interesting, I suppose, being a National Technology Officer, his view at the start of where he came from: the broad range of technology and the different companies he'd worked for, and his broad view that he likes to learn about technology. He has that intrinsic nature of going through a lot of the technologies that we have, that competitors have, that other people have, and what's out there, just being generally interested in the technology to then be able to look at the application for it. So I thought that was quite good as well, that broad range of topics that he has to cover in his role.
I think we're all the same, aren't we? Which is how do we constantly learn?
For me, it's how do I constantly learn to keep up with the young whippersnappers? But how do I keep my knowledge relevant and current and keep going? And that's everything from doing online courses, sort of building an equivalent of an MBA as I go along, but also things like this, where that interview was great because I learned a bunch of stuff from Lee. And so finding other people to talk to that are smarter than us is a process by which we are both sharing with people that are listening, but also learning ourselves about applications.
And you mentioned previously about the business school, so we should put that in the show notes again as well, because I think that's quite poignant in this episode if people are listening in and have missed a few previous episodes. The business school's fantastic.
Yeah, I think you're right, because that AI Business School,
which we'll put in the show notes, but it's aka.ms/ai,
Yeah, Dan strikes again. Yeah,
but that is about how you help business users see the value of technology, and how you help them to understand the implications of AI. So it's not about the technology, it's about the application of it. And it's a great way to self-educate, and if you're in IT, to educate the people around you that aren't in IT about the implications, because not everybody can afford to send everybody out on a long external course.
Yeah. Now, you look as if you've got your passport ready to go there. I think we should send you out to interview somebody else, Ray. What do you think?
Okay, I'll go and see if I can find somebody else smarter than us. It's not going to take me long, I think. So, see you next week, Dan.
See you next week, Ray. Thanks.