Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Sep 9, 2021

Today Lee and Dan talk about GANs, games and other goodies, from using AI to upscale games and 80s TV shows to GitHub Copilot.

Links:

OpenAI Reveals Details About Its Deep Learning Model 'Codex': The Backbone Of Github's 'Copilot' Tool | MarkTechPost

 

NASA is using AI to take better pictures of the sun as space telescopes can be damaged staring at it | Daily Mail Online

 

AI breakthrough could spark medical revolution - BBC News

 

This YouTube channel is using AI to 8K-ify classic game intros and cutscenes - The Verge

 

Think, fight, feel: how video game artificial intelligence is evolving | Games | The Guardian

 

Microsoft-powered autonomous beach-cleaning robot is here to clean our shores - Roadshow (cnet.com)

 

New toolkit aims to help teams create responsible human-AI experiences - AI for Business (microsoft.com)

 

GitHub - microsoft/ML-For-Beginners: 12 weeks, 25 lessons, 50 quizzes, classic Machine Learning for all

Collections - JenLooper-2911 | Microsoft Docs

 

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 4
Episode: 10

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

 

 

Hi Lee, welcome to the AI podcast. How are you?
I am well Dan. Lockdown in Sydney like your good self I'm sure as well. But uh despite the lockdowns doing fine and keeping myself busy at home.
Ah yeah, no, absolutely. And, you know, I've got kids on Teams meetings around the house this morning. And it seems to be different this time, though. I think previously for teachers it was more of a World War II spirit, kind of keep calm and carry on, whereas now it's proper learning coming through. So, you know, I've designed a jewelry box this week. I've uh
uh wow uh yeah I know there's been a heap of stuff there's been teams meetings on about Egyptian uh kind of pharaohs which is quite interesting. So yeah I'm running between meetings and one minute I'm talking about Azure signups and the next minute I'm doing a jewelry box. So so hopefully I'm I'm talking about the right content to the right audience.
All learning is good learning. I mean, it's kind of funny. I was talking to my kids about this, because my kids are obviously working from home as well. Yeah. Our school uses the Google platform, so we use Google Meet a lot, and we've set up all the classes. But they said, you know, we have to work from home like you, Dad. And I said, well, think about it, I've now been working from home since March 2020. So we're now at 16, 17 months of constant working from home. So when you said, you know, the second time around, I know for a lot of people it's in and out of these lockdowns, but it's kind of become the new normal.
Um, but my kids are loving it. I don't know how your kids are coping. My kids are actually finding the working from home, apart from missing their friends, they actually find the the day structure to be um pretty good. So, yeah, so far so good.
It is. But I've read a couple of articles, though, and I suppose it does permeate into the AI space, because we've talked a lot about, like in the last episode or so, the way AI can be used for mental health and things like that. And it was starting to be used in the first phase of the pandemic, and now
you know, it's coming to the fore quite a lot. I think rather than being a sort of fringe technology, I'm starting to hear more and more people talk about the way they put structure in the daily commute, the way they're exercising, the way they're doing mindfulness, the way AI is supporting them and looking at what they're doing. And there was an article, I'll have to dig it up, I haven't prepped it for this particular episode today, but there was an article talking about what kids are missing out on. Because if I think about my son last year, he missed out on almost his entire year six experience, because normally there's a lot of ramping up, there's a lot of things they do, there's the prom, all these things they do as year sixes to end the school year and graduate, almost. And my daughter has now gone through almost two years, last year in year two and now year three, where she spent maybe at least 60% of her time at home. So there's a lot of connection being lost. And for some of those year 12 students, it's not just about the content, it's about the experiences. My partner's daughters are at uni, one doing midwifery, one doing English, and they've gone through two years of courses being held on Teams and Zoom. So there's all of the internships around that that people miss out on. There's all those experiences and on-site things like community groups, university newsletters, university interest groups that people don't connect with. So people are getting through content, but they're not getting the social community.
Yeah, it's a really good point. I mean, my kids are a bit older than yours, mine are in year 10 and year five. So they're not quite at those big transition phases like year six, or of course those big important years like 11 and 12. All years are important, but those points where you're finishing up school, they're sort of in the middle. But you're right, it's not that kids aren't getting the education. My kids are still getting the learning done and, as I said, quite enjoying the pace, because they can learn at their own pace, I guess. But they're missing the experiences. So it'll be interesting to see what the long-term impact is on kids that spent those formative years of school not having the experiences, the camaraderie, the connection of being in that community. And we've talked a bit before about the digitization of youth today, and how our kids are all growing up in a world that's so different to what we did. They're using technology all the time, they live in devices and all that kind of stuff, and we think that's going to have this big impact because they just connect differently to individuals. All of my son's friends, who he's very close with, are online, friends he plays with through games and through Discord and conversations,
whereas we went out to the playing fields to play with our friends because we're that old
so, you know, just with that, and now with the intensity of COVID creating these kind of lockdown moments, it'll be an interesting long experiment to see what that impact is
yeah, and, you know, I don't want to riff on this for ages, but the use of AI in some of the parental control areas, because it's similar to the security things that I see technically in businesses, where they use multiple tools to manage their security, right? They've got their firewall software, they've got their Defender or endpoint software, and it creates gaps. And that's what I'm seeing as a parent now, because I'm thinking, well,
what is Megan doing on her iPad on Roblox? What telemetry am I getting back as a parent from that? Well, it's minimal. I've got some parental controls in the Apple ecosystem. Then when they're on Xbox, I've got the Microsoft consumer credentials on Xbox, so I'm managing that, because I know what games they're on and how long they're on for. And that's also connected to the consumer account in Windows, so I know what browsers they're on and what they're doing generally, and the system is feeding me back some of that data. It's not necessarily AI, but it's giving me good data.
Intelligent. It's intelligent data. Yeah. It's making me the AI that analyzes that data. But then you've got the other aspects, where people are starting to really try to police kids' mobile phones and data, for older kids possibly, like some of the Optus apps you can get now and the Vodafone apps that can manage data plans and turn kids' mobile data off so that you can focus them. So there are these gaps appearing where you've got to keep looking for different tools: who's managing the iPad, who's managing the Xbox, who's managing the
Yeah, it's an interesting one. And maybe to put a cap on that and get us moving forward, I think we should do a session, a podcast, on parental management and parental controls and technology, because I think we'd explore a lot of interesting ideas and thoughts around that. So let's put that one in the show notes for the future.
Yeah, definitely. So, what's been going on in your world then, Lee? Today's podcast, we're going to be looking at some of the things that are happening in the news at the minute, and there's been a lot of big announcements around our main conferences, Ignite, Inspire and things like that, over the last six months. So we thought about bringing some of those to the fore. What's been happening in the news from your side?
Uh yeah, look, that's really good. Yeah, and I think yeah, there's a lot of stuff out of Inspire obviously that was last week I think now is as of recording today on the 27th I think we are. Um
so look, yeah, obviously Inspire, again one of those virtual events, generated a lot of news. But as you know, Dan, we've talked about this before, my world is largely looking at where Microsoft's technology has a social or a national impact, and I look at the big markers around technology like AI and quantum and others. So AI has been top of mind right now, and I think we talked last time on the podcast about Copilot, the GitHub Copilot solution, which is built on that pre-trained GPT model that we've been building with OpenAI for some time.
Yeah.
But look let's rather than talk about some of the specifics and there's a lot of interesting stuff going on. Something that I've noticed more more often than not now in the AI world is we're seeing a lot more what I would call that democratized access. So basically very complex rich AI tools being productized and then made available to people or individuals or you know anyone really kind of enthusiasts and modders
to take those tools and do interesting things with them. And of course the challenge you have with that, Dan, is when you democratize a tool like AI, you know, AI is a very broad term, but let's say something like, the one that's really popping up a lot at the moment is GANs, generative adversarial networks, and we'll talk a bit about that.
The more those kinds of models, machine teaching models, become available, the more people can use them. And there's always use for good and use for bad. So I'll talk a bit more about it, but what do you see? Are you finding that AI tools are just more open and available, and almost in everything these days?
Yeah, absolutely. And what's piquing my interest is, I think when I was doing talks to students and to educators and to the general technology community, I'd use
possibly one example every three or four months. You know, it was snow leopards. One of my colleagues used to make a joke, oh, Dan's talking about snow leopards again. But now it's turtles, it's fish, it's birds. The use cases of people using these technologies are becoming huge. I'm seeing so many examples now and it's almost like, which example do I pick? The police force is another one. The medical area. I did a talk last week and it just kept going on, and I thought, which one do I actually pick here? Because there's so many good examples now,
which is good. I mean, it's great to see AI finding its way, in good ways, into so many parts of society. But there is that negative side. I was just thinking, we talk about generative adversarial networks and we kind of say that phrase, but it's probably best to explain how it works, because it's interesting. We talk about AI models, and most people get the idea that AI is about behaving like a human decision process. You know, we're trying to emulate some of those human cognitive processes.
When you think about how a GAN works, the way I went about learning about this and thinking about GANs is that GANs get used a lot in things like upscaling or image improvements, or rapidly iterating on something to create either an enhanced version or an artificial version of something. So the way a GAN works, in simple terms, if you didn't already know, Dan, is you start with a real piece of data, like a picture. The Mona Lisa is a good example. When you build a generative adversarial network, a GAN, you've got one side of it, and this is the adversarial piece, that generates things. It looks at the real data and creates lots and lots of fake versions of that data, just slightly modifying the pixel colors, the density, the layout, and you end up with all these fake versions of the Mona Lisa. So that's the generator side. The other side of it is the discriminator, which is this other tool that looks at the images the generator creates and goes, ah, that's fake. Oh, that looks kind of real. Oh no, that one looks like the real thing. And what's happening is, every time the discriminator looks at one and says, I can see that's fake, the generator goes, okay, ignore that one, it's not good enough to fool the system, I'll create another one. And it creates another one, and the discriminator goes, yeah, that looks more real. And what you're doing is this really rapid teaching process to get to a point where you've created something that looks like the real thing. So, you know,
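The generate-discriminate-retry loop described here can be sketched in a few lines of Python. To be clear, this is a deliberately simplified toy to show the feedback loop, not a real GAN: the "real data" is a single number standing in for the Mona Lisa, the discriminator is just a distance score rather than a trained network, and all the names are invented for illustration.

```python
import random

def toy_gan(real_value=0.5, tolerance=0.02, seed=42, max_attempts=10_000):
    """Toy sketch of the generator/discriminator loop.

    The 'real data' is a single number standing in for a real image."""
    rng = random.Random(seed)

    def discriminator(candidate):
        # Scores how "real" a fake looks: 0 is a perfect match,
        # more negative means more obviously fake.
        return -abs(candidate - real_value)

    candidate = rng.random()  # the generator's first fake
    for attempt in range(1, max_attempts + 1):
        if discriminator(candidate) > -tolerance:
            # Discriminator fooled: "yeah, that looks real".
            return candidate, attempt
        # The generator tries a variation, and keeps it only if the
        # discriminator scores it as more real than the previous fake.
        nudged = candidate + rng.uniform(-0.1, 0.1)
        if discriminator(nudged) > discriminator(candidate):
            candidate = nudged
    raise RuntimeError("generator never fooled the discriminator")

fake, attempts = toy_gan()
print(f"Fooled the discriminator after {attempts} attempts; fake value {fake:.3f}")
```

In a real GAN both sides are neural networks trained together, and the discriminator's feedback arrives as gradients rather than an accept/reject score, but the adversarial back-and-forth is the same shape as this loop.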
Great idea. When you think about some of the things I've seen in the press this last couple of weeks, people are using this technology. There's a company out there called Topaz. Topaz builds a product called Gigapixel AI, a GAN-based service that lets you upscale images. Now, I'm a bit of a gaming nerd, as I know you are too, and a science and space nerd. So NASA is using this technology to upscale images of the sun taken through space telescopes.
I know it's incredible.
And the same technology is then being used by a bunch of YouTubers to upscale gaming assets and gaming videos and gaming images from games from the '90s and such. So you
I saw those. Phenomenal. It's absolutely real. Yeah.
Well, I was actually this morning I was watching one just to kind of get myself on it. And you know which one I chose? The Masters of the Universe He-Man trailer from the 1980s.
It's so good.
It's so good.
It's perfect. It's pixel perfect. But it's the same technology that, of course, works in the deepfakes and starts to create versions of Tom Cruise doing things that he shouldn't be, or Donald Trump. So you can see where you get this challenge: something like adversarial networks can create these amazing experiences for humans to see old content or old things in new ways.
But it creates this challenge of, well, how do you make something like that available so it only gets used for good? And of course, you can't. You put a technology out there, and somebody somewhere is going to find a nefarious use for it.
Yes.
So now you have this problem of needing laws and regulation to build what we call the guardrails. And this is kind of, to get back to your original question of what I'm seeing going on in the AI world right now, the big topic at the moment: this challenge of
building the guard rails. So AI needs regulation. No question about it. AI can't just be allowed to run free because it is both a very powerful tool and an incredibly dangerous weapon.
So guardrails create that kind of safety barrier. Yeah.
But in order to build those guardrails, the people who build them need to understand the technology. They need to understand where those risks are. They need to understand how a deepfake is made, but also how medical imagery can be improved for better diagnosis, and understand the mechanics of that. So it's a simple process but deep context. And then, as a society,
we all need to define what those boundaries are. We can't just let lawmakers and governments say well that's now allowed and that's not allowed. Society needs to shape that
and in order to do that, we need to empower our lawmakers to make those laws, which again requires us as a society to understand that technology, to value it in our lives, and to trust that the system is going to work for our good. And these are the conversations that are going on right now in AI circles. We know we need guardrails, but who defines them? How does society have a voice in that? Do we trust it? Do we all feel empowered to be a part of that conversation? So it's a big topic, but that's kind of what's happening right now. I don't know, what are your thoughts on that?
That's huge. What are my thoughts on that? Well, we've interviewed and spoken to people, and we know how important the ethics and the guardrails and that governance are. And I'm in awe of people like yourself and the folks that are looking at this in Australia at the regional, national and international level, because it's absolutely where we should be. It's what differentiates good use of technology from bad use of technology. And if we disregard it all, and I do worry about that polarization of society at the minute, you know, we're being polarized over vaccines, we've been polarized over controversial things like Brexit and Donald Trump and so on. Things like AI are very easy to polarize and just ignore, and then we miss things. You know, I was looking this week at the protein-based scientific breakthrough, I think that was through Google, where an AI model was built to predict the structure of all proteins in the human body, which is massive.
Yeah, that's phenomenal. And that's really available to supercharge some of the discoveries of new drugs. It's understanding that jigsaw puzzle. It's almost like, well, it is like cracking the genome. They've just managed to understand how all these proteins connect together, and that's absolutely phenomenal.
And AI has to be used to do that. You know, we did the genome manually to a certain extent, and now things like proteins, which are almost impossible to do without AI, are being cracked. If we discard or disregard certain aspects of technology, then we miss big benefits. You've got to jump on that curve.
Look, and this is the thing, isn't it? We don't want to sacrifice the value we could get in the future from the things we're building now, the big scale models, the huge data sets, the increased complexity of machine learning and AI tools, and at the same time the democratization of it. We've got to make that happen. But at the same time, we've got to ensure that those that seek to do harm through those things are not free to do it, and you'll never stop bad things happening. I mean, it's kind of the age-old adage, you can never stop criminals existing, but you can build
societal norms that make criminality wrong. And in the same way, we can build societal norms around AI that recognize that some ways of using AI are bad and some ways are good. And you've got to look at this in the context of the human journey on this. I mean, we are just
so early on. If we generally accept that AI as a concept was formulated in the 50s, really built out through the 80s and then the 90s, and we went through those two winters of AI, then we're really only now coming to the point where we're actually building something that is delivering on the promise of what AI was always
capable of doing. Yeah.
So it's almost like a baby, a tiny baby. And it's the ideas, right? I think that's one of the things, and I know I'm biased here from a Microsoft point of view, but it's about creating the platforms for other people to develop on. Sometimes those things are mind-blowing, and sometimes you don't even know that they were developed on our platform or whatever other platform it might be. But then there's still those odd ones. The one that jumped out to me is the cigarette-butt beach-cleaning robot. There's always one thing where, as a member of the public, you sit down and think, good idea, but somebody could just go around and do that as well. So with some of these things, when the technology is deployed, say, with the turtles, or when it's looking at feral pigs and managing that, the technology sounds romantic. And then when it's picking up cigarette butts on a beach using a drone, you look and think, ah, similar technology, but I don't know if that second one is really worthwhile developing. And, you know, the approach is fantastic. I think the team is called TechTics, two guys who set that up, Martijn Lukaart and Edwin Bos, Dutch engineers who set that company up, and they've got this startup going, which is fantastic, and it's picking up these cigarette butts and things on the beach. It's a good idea, and then it's exploring, well, where can it be extended, is that a use case? There are these kind of quirky things that happen which are serious, but, you know, people in the public think, is it easier to just go around and pick them up?
Yeah, I mean, it kind of plays to that question, is this the best use of technology? But I think what it highlights to me, Dan, is that it gets back to the very core of it: we have this tool that we don't really know yet what it can do. AI or machine learning, and again, we've had conversations about keeping those two separate, but
the ability for machines to sift through data and create and identify and then predict patterns that we can't see, and then the application of that learned model, that inferred learned model, into an AI system that can look at the world and perceive the world like we can, and do things like picking up cigarette butts on a beach. It's a step on that journey towards teaching machines to identify things in our world that actually do need a solution. If you scale it out: cigarette butts on a beach, to plastic on a beach, to plastic in the oceans, to debris in space. Think if you can draw a bow between an automated vehicle on the beach doing that, to one day launching an automated system that can go up into space, identify space debris versus actual satellites, and clean that up. That's the promise of AI, but that journey is hard. Most people aren't going to see that journey and go, okay, that's where we start. But if we're not allowing the innovation to happen today, if we don't give those two Dutch engineers and the Microsoft people they work with the license to go crazy. What a great idea. Build it. It's not economically viable. It doesn't make a lot of sense in the short term.
But if we don't learn that bit, how do we get?
True.
I get your point. But it is it's
it's a learning process, and that's, I think, one of the challenges that AI has today in society: every person on the street might see some application of AI and go, well, that's a waste of taxpayers' money, what's the value of that? But you've got to look at it in terms of the steps of where it's taking us and how we're getting better at understanding how to apply AI. And keeping on that tone of learning how to do AI better, something else I'd seen, because I'm privileged in my job to get around the teams inside of Microsoft that work on society and ethics for AI, is something they've just released, I think it was last week or the week before, something I've had my hands on in the past. It's called the HAX Toolkit, the Human-AI eXperience Toolkit. It's a set of cards, and it plays a bit like Cards Against Humanity kind of cards. It's designed for engineers, for UX, user experience, for software developers to have these cards and to play through the things that they might not think about. So if you think about AI in the context of, you know, I live in my world, I do AI like this, or I do software development like this.
Yeah.
This is about giving them the opportunity to ask those questions and we'll put a put a link in the show notes, but
it asks you those questions around the application of AI. So you might look at it and go, okay, what do I need to think about before I build this? What do I need to understand about the interaction of humans in the system? And then, once I've built it, how will it evolve as humans interact with it, and what are the things I need to think about?
yeah
it's really smart thinking about not just the technology but the human aspect of that technology
You mentioned the GitHub Copilot. I saw that in action on a video, and it was a bit of an epiphany moment for me. Can you explain, because this is very bizarre, in a couple of sentences for the podcast listeners, what that Copilot authoring is? Because it's pretty amazing.
It is. It's pretty amazing, and I think we should talk a bit about the challenges of the way in which it does what it does. But look, Copilot is a trained model built on the constructs of the GPT model, which is a language prediction engine. So essentially, we show it thousands and thousands, millions, of examples of language, and in this case language that was derived from GitHub code submissions. So it's built on open source. So we're not stealing anyone's data.
GPT, just acronym busting: Generative Pre-trained Transformer.
Correct. Yes. It's a nice fancy word for a type of model. But the beauty of the GPT transformer is its size and the amount of data it can operate on. I think the latest version, GPT-3, has 175 billion parameters. Think about a parameter in a data set as a point where the model was able to make a decision about what it saw and then learn something. So it's 175 billion neural synapses in your brain going decision, decision, smarter, smarter. Anyway, the upshot of this is that Copilot in GitHub will watch what you type. As you'd expect, the tool is watching what you type, both your comments and your code. And what's really smart about it is you could write a bit of code that says in comments, this next bit of code is a recursive model to generate random numbers from a pool of data, pulled from a data set X, for example.
Yes.
And then you write the first line of code declaring your function, and Copilot will see, first of all, that you've declared this function. So that's the function we're going to operate within. And it reads your notes and says, okay, what you want to do is build an automatic random number generator, a very simple example, and then it will automatically fill in the code. Now, that filling in of the code is a combination of: have I seen examples of a random number generator before? Yes, probably millions of them inside of GitHub. Do I understand the constructs of the language you're writing in, i.e. if I'm writing in Python or something like that, and how to write good structured code to do a random number generator? And what you end up with is a combination of past proven code fragments and predictively built new code that didn't exist before. And it gives you multiple versions. Essentially, and there are great little videos of it you can go look at, you'll see somebody typing in the notes, writing the first line of code, and then Copilot just fills it in. And as the programmer, you can accept it, take one of the other examples, or completely remove it and write your own. Amazing idea. And it's the same thing we've seen GPT-3 doing with language prediction: you give it a 10-page document and it will give you a one-paragraph summary of that, by understanding not just words but context and association and semantics and the language structure. So it's very clever.
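The comment-plus-signature interaction described here can be illustrated with a small Python sketch. This is a hypothetical example of the kind of completion a Copilot-style tool might produce, not actual Copilot output; the function name and its behavior are invented for illustration.

```python
import random

# What the programmer types: a comment stating intent, then the signature.
# Pick `n` random numbers, without repeats, from a pool of data.
def sample_from_pool(pool, n, seed=None):
    # What a Copilot-style tool might fill in beneath the signature,
    # stitched together from patterns it has seen in open-source code.
    rng = random.Random(seed)
    items = list(pool)
    if n > len(items):
        raise ValueError("cannot sample more items than the pool holds")
    return rng.sample(items, n)

# Example call: three distinct numbers drawn from the pool 0..99.
print(sample_from_pool(range(100), 3, seed=7))
```

As described in the transcript, the programmer can then accept a suggestion like this, pick one of the alternatives offered, or discard it and write their own.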
Yeah.
But of course here's the challenge when you look at it. It is unreal.
It is. But then what you've got is an issue, and this gets back to that question of ethics and the right thing to do: if it gives you a fragment of code from somewhere else, are you stealing the code? Is the code under a license that allows you to use it? What are the terms of that license? Because the code is now presented to you outside of the constructs of its licensing model. And that licensing model could be any number of different open-source license models.
And if it builds new code,
then
who's responsible, and whose IP is it? But I think, again, it's one of those learning points, a bit like Tay, the Tay bot, if you remember, and multiple other examples, not just from Microsoft but other vendors, where we build something,
it doesn't always behave exactly as predicted. That's the beauty of AI. It doesn't behave as predicted.
And instead of going, oh my goodness, the sky is falling, it's terrible, shut it down, let's never do that again, it can't work like that. We have to look at it and go, okay, it's not perfect, but it's learning, and we can learn from it, and it can develop better code. And from a coder's point of view,
I can write faster code. I'm not actually having to write the code.
Exactly.
In theory, over time, my code should get better because it's going to learn from my
code as well. Yeah.
More efficient coding. And
that means, as a coder, I'm spending less time writing code and more time working out what that code needs to be, and working with the business, and we get a better connection between code development and operations, or in fact a better, more streamlined DevOps model. So, it's an interesting one.
Yeah, it is. Thanks for explaining that, because, you know, that is phenomenal. I can't describe it any better, and it's very difficult on a podcast, but when I saw somebody just putting a couple of comments at the start of their code, saying what it does and what language they're using, and then it's developing thousands of lines of code, it's so good. Wow, that does start to blow your mind a little bit. So, anything else from the Book of News from Inspire? Anything else to update the listeners on? There was lots on general technologies, things like Windows 365 coming out, and some of the general Microsoft technologies that don't really connect to AI or anything we talk about on this podcast. Viva was mentioned as well, I remember, and that was great, because we had a chat about that with one of the engineers a couple of episodes ago, which was fantastic to see.
We did, yes. And you're right, we did mention we were going to talk about Inspire, and we've probably been talking for a long time without even mentioning it. So look, you're right, there's not a lot of AI that came out of it. There was one thing in particular that stood out for me, among lots and lots of cool stuff, and we should obviously share some links,
and that's Windows 365. So, yes, obviously a big announcement there: the ability to deliver Windows 10 and 11 desktops, essentially streamed apps and services, directly from the cloud to your PC, giving our customers the opportunity to experience the best of the service on any device, on any hardware, in a very managed environment, to maintain the security controls and all those things that are really important to a lot of business and enterprise customers. At the heart of that, and we should probably go and do some more learning on this, there is actually quite a lot of AI in the back end of the system, about how we optimize that experience for the user. So if you think about this being streaming a desktop,
it's not like the old days, where streaming a desktop meant streaming the entire thing down onto the machine and then running it locally. We're talking about really streaming now: making sure that what we deliver to the desktop is what the user needs at that time, and then predicting what the user is going to do based on the current experience, what they're doing at the time. So it's a lot more of an
intelligent experience. That's probably the only thing that stood out to me: maybe people aren't thinking about it, but a lot of AI goes into building the capability to stream something as rich and complex as a full desktop.
Yeah. One thing that jumped out to me, and I suppose just to close this section off, would be the sustainability element. And I know I keep talking about that, but my conversations over the last three or four years about using cloud technologies generally, and AI and all the things that come with that, have usually been about cost, and people always get caught up on on-prem versus manageable cost, opex, capex, all those kinds of things. Whereas now I make sure I add into my conversations that it's about moving to a more sustainable approach with your carbon footprint. So the sustainability calculators and, again, not trying to be biased about it, the Microsoft approach to working towards the net zero emissions targets that Microsoft has put in place. For me, I feel really passionate about that element, and I like to know that when people are using the technology, we are investing in it so that it's as environmentally friendly as it can be, on the way to being net zero. So I'm glad that we're keeping that progress going, and there were lots of discussions about where we are against those targets, which is great to see.
Look, it is important, and again, a lot of AI goes into the background of understanding the modeling, the distribution and the measurement of some of those carbon challenges that technology presents. And, to be unbiased, yes, we're doing a lot of stuff, but I know there's a lot of great work going on at Google as well around their carbon commitments and carbon sustainability, and it's important that not just us but all of the large players in this industry are on top of this problem. It's great to see that it is
an industry-wide thought process. You know, we're all thinking about that, and I think for us we're thinking a lot about how we help our customers get on top of that sustainability. We talk a lot about what we're doing to ensure our footprint is reduced and that we're doing the right things, but we know that a lot of our customers are also saying, "Well, that's great, but how do we do that in our downstream business? How do we use your knowledge?" So,
and that's an area where, you know, where we want to get into and we want to teach more more people. Um, but no, super important thing.
There was another thing that cropped up this week. It wasn't an Inspire thing, but I wanted to share it before we close out the show for this week.
Okay?
Because I think it's a really important one. It's the kind of thing that I want to see more of.
Um, we'll share the link, but one of the challenges I've seen when I go out and talk to customers about these kinds of technologies, and I've faced it myself, is that as I've sat down and thought, you know what, I need to learn a lot more about machine learning, I need to get more hands-on with the technology, I need to get programming again, it feels like a really big hill to climb. It's not an easy domain to get into. There are lots of courses out there to learn Python and learn coding, but machine learning is not just about the technology; the constructs, the ideas behind it, the methodology behind machine learning are really important to teach. And so what we've got is a course that one of our wonderful engineers in the Microsoft machine learning business has built, called Machine Learning for Beginners. It was published just this week, in fact, on the 25th. We'll send the link out. And this is what caught my eye: it covers all the learning steps, how to create models, how to take your first steps in Python and build that using tools like Azure Machine Learning, understanding natural language processing. But there's one of the learning paths which I've got to get to when I get there. Yeah.
Predicting rocket launch delays with machine learning.
Really?
So the minute you talk about rocket launches, I'm like, yeah, I'm in. I want to get into that one.
That's fantastic. So we need to share it.
Great course. I think we should definitely share that with the listeners, because everyone should learn a little bit about coding and machine learning. I think it's an important skill for the future.
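As a taste of the kind of first model a beginners' course like that starts with, here is a tiny, self-contained sketch: fitting a straight line to data with ordinary least squares, in plain Python. The numbers are made up purely for illustration, and this is not taken from the course itself.

```python
# Fit a straight line y = slope * x + intercept to data by
# ordinary least squares, the classic "first model" in machine learning.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical made-up data, e.g. wind speed vs. launch delay in minutes.
xs = [5, 10, 15, 20, 25]
ys = [2, 4, 6, 8, 10]
slope, intercept = fit_line(xs, ys)
# This data is perfectly linear, so slope is 0.4 and intercept is 0.0.
```

Course exercises typically build up from this kind of closed-form fit to libraries such as scikit-learn and real datasets, but the underlying idea of minimising prediction error is the same.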
100%. Well, thanks for sharing your knowledge and your insights this week, Lee. It's been great to chat again. And I think in the next episode we're going to speak to a couple of interesting people. We've got two fantastic ladies lined up: one around Minecraft and music, and another around governance. She's going to be fantastic. So, absolutely look forward to speaking to you then.