May 6, 2020
In this introduction to Season 3 we meet our new co-host Lee Hickin. We also look at what inspired him to get into technology and also look at some of the topical news around pandemic apps, the sharing of data and even play some tic-tac-toe with a WOPR.
COVIDSafe app information - https://www.msn.com/en-au/news/techandscience/experts-explain-why-theyre-not-worried-about-covidsafe/ar-BB13oBVC
Paper on contact tracing - https://arxiv.org/pdf/2004.03544.pdf
AI for Health - https://blogs.microsoft.com/on-the-issues/2020/04/09/ai-for-health-covid-19/
________________________________________
TRANSCRIPT For this episode of The AI in Education Podcast
Series: 3
Episode: 1
This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.
Welcome to season 3 of the AI podcast. I promised last time
that we were going to mix things up this season, so let's start
by introducing my new co-host, Lee Hickin. Yay. Welcome, Lee.
Hey, thank you, Dan. I'm excited to be here.
Fantastic. I'll let you introduce yourself in a bit, but I suppose
people might remember your voice from back in uh episode 19. Uh so
your voice might be familiar there and you can tell us all a bit
about your kind of history and the things you're interested in and
we'll talk about season 3 and the really cool content that we've
got coming up. Ray's passed the mic on to you. So he'll be avidly
listening in every week I'm sure to make sure it keeps us on track
and kind of following us on Twitter and giving us some feedback. So
that'll be quite fun as well. So Ray, if you're listening, hi, hope
you're well. Tell us about yourself, Lee, from your own personal
point of view: about your family, where you come from, and what
your role is in technology at the minute.
Yeah, no worries, Dan. Thanks, mate. And thank you, Ray, for
giving me such big shoes to fill. It's a bit of a challenge, to be
honest, to think about having to fill your shoes, but I'm really
excited to be here and really excited to get involved in this. I
mean, a bit like yourself, Dan, I love to storytell and I love to
talk about AI. For me, it's just one of those things in our world
now: 10 years ago I probably didn't know that much about it, and
today it's front and center in my life. So the opportunity to be
here is super exciting. Anyway, a little bit about me. You're
right, I did episode 19; I think I was interviewed by Ray back
then about some of the work I do here at Microsoft. So I'm here at
Microsoft, much like yourself, Dan. The role I hold here is
called the National Technology Officer, which is an interesting
role. There aren't many other companies that have a title like
national technology officer. Um you know, a lot of companies would
have a CTO or chief technology officer which is a kind of similar
role. But the interesting thing about this role is that the
national technology officer was built into Microsoft's business
nearly 20-odd years ago now, when Ray Ozzie was thinking about
this idea that we should have parts of our business that think
about the future and help governments and nation states think
about technology and the impact it's going to have on social,
economic and political dynamics in the country. So that's where
the job came from.
I've been in this role now for a couple of years.
I came from Microsoft before, but Dan, as you might know, I also
took a stint out and worked at Amazon for a little while. Great
experience, and I've seen both sides of the world, but I'm back
here at Microsoft because I missed you, Dan.
I did miss you too, but not that much.
But no, I'm back at Microsoft
in this role, and it's a super exciting role. I'll talk more
about the role in a bit, but as you said, from a personal
perspective, this podcast was interesting to me because
obviously the foundations of it are in education and AI. Um, and
I've got family of my own. I've got two kids. I've got a boy who's
13 and a daughter who's turning 10. Um, both very technically
savvy, and both living a very technical life at home right now
through remote schooling. Technology is important to them, and it
became important for me to think about the technology that I work
with and promote and talk about: what does it mean for their
future? AI, of course, is front and center in thinking about what
our children's lives are going to look like. I'm sure we'll
explore that more over the season. So for me, having two kids has
brought a very personal perspective to what AI is. And Dan, you
and I were talking earlier about devices. I'm not ashamed to say
that I definitely have more Alexa devices in the house than I
have people. We use them extensively. I'm a big user of that kind
of technology, and I love using it and exploring it. So for me,
technology is home life and work
life. But in terms of my role, and sorry, I'm giving you lots of
content here. In terms of my role at Microsoft, that NTO role. So,
aside from all of that government work where I kind of talk about
things like AI and IoT and machine learning and help governments
understand how these things operate and what services they could
deliver to citizens with them. And that includes education as a
sector, as well as healthcare. The other role I hold here, Dan,
as you know, is that I'm the Responsible AI chair for Australia.
And that's an interesting role, again quite unique to Microsoft,
in that we have an entire established committee, from Brad Smith,
our chief legal counsel in the US, all the way through to people
like me, whose job it is to think about an ethical, principled,
standardized approach to how we deploy and use AI: to think about
the questions we should be asking about transparency,
accountability, inclusivity, and the risks and rewards associated
with AI. And that's a really fulfilling role. I get to
talk across the whole business. I get to listen and learn about
amazing things that people are building with AI, and bring a lens
to it. To quote an old favorite movie of mine, Jurassic Park:
just because you can do something, should you do something? And
that's a great conversation we have around AI. So, look, I think
that's probably more than
enough about me.
That's brilliant, though, because the idea of this podcast is to
bring our own lives and experiences into play here, and to think
about multiple industries and multiple technologies and how it
all interrelates. And you mentioned that Jurassic Park element.
Earlier on, in episode one, we talked about our favorite AI, and
I was talking about Terminator, and how really it wasn't the AI
so much as the robots we were looking at at that point. The
interesting thing for me was the way that the technology that had
overtaken society in the future was coming back to sort it all
out. That was the kind of interesting juxtaposition I was going
for there. But now you've mentioned Jurassic Park, I have to ask
you about your favorite AI, and what influenced you to learn more
about technology and AI.
Yeah, of course, there are a lot of influences from that time of
my life in the 80s. I'm a child of the 70s, but I grew up in the
80s, so all that kind of stuff is really important. You know,
you're the first person that's ever given me a positive spin on
the Terminator story. You love the idea that the AI came back
from the future to fix the things we'd all messed up. I like it.
That's a great way to think about it. I don't know if I can be
that positive. But look, yeah, I'm a deep 80s sci-fi fan. And
funnily enough, when I talk
a lot about AI to customers and to the market, we get into this
conversation of how 80s sci-fi, or at least the 80s
interpretation of it, has really shaped how we think about AI
today. You know, our perception of what AI is, generally
speaking, in
the population is driven by things like the Terminator and Skynet.
You know, how many times have you seen the Skynet meme used to
describe something about AI? It's very common.
It is. And so, your question about what my favorite one is.
Again, we're back into war and fighting. I'm a huge fan of the
movie WarGames; 1983 was the movie.
Yeah,
I love that movie. I was a little bit obsessed, I guess, with the
idea, because at the time I was 12 years old, a nerd on my
computer, tapping away all the time. That's what I did with my
time. And the idea that you could have that kind of impact with
computers was world-expanding for me. For those of you who want
to go do the digging on WarGames, there was a computer called the
WOPR, the War Operation Plan Response machine, which was the
worst on-screen example of a computer. If you remember, it was a
sort of green box of flashing lights.
Um, that's right.
But that was an example of an AI that would play out these
scenarios for World War III. And obviously, the positive element
of that, of course, is that it worked out that at some point
there is no right answer; there is no way to win the war at the
end. Yeah, that's right.
That's right. So, you got that idea that something as complex and
as rich as, what do they call it, Global Thermonuclear War is
comparable to tic-tac-toe, in that there's a futility to it. So,
I guess that was the one for me. And look, I was so obsessed with
that movie, I ended up actually naming my son after the child in
the movie. The doctor, the guy who invented the WOPR: his son was
called Joshua. My son's
I thought you were going to call him Whopper then. Yeah.
Joshua.
Yeah, like a Big Mac Whopper. But yeah, look, movies are a big
influence, and I think it's interesting. I don't know if you've
discussed this in previous podcasts; I haven't listened to all of
them yet myself. But there's a whole bunch of really interesting
influences that got me interested in it. You know, I'm a big
reader of Isaac Asimov's sci-fi and Arthur C. Clarke's stuff, the
content around the laws of robotics and all that kind of stuff.
And it's
fascinating stuff for me. You know, I grew up in that era of that
time of space exploration and technology just kind of burgeoning.
I'm also a big Douglas Adams fan. So if there are any Douglas
Adams fans out there, you would remember Marvin the Paranoid
Android, the depressed AI, but also the Sirius Cybernetics
Corporation's Genuine People Personalities: all the doors on all
the ships have an AI personality.
Yeah. Yeah. You remember? Yeah.
Absolutely.
It was just a funny take on it. I thought that was fascinating.
So look, there are so many things. But you know, the 80s
negativity, I think, is an interesting myth we have to dispel.
It's a good place to start, though, in terms of people
understanding what AI is, because it at least gives you a context
for it. You see a robot talking, you see a thing interacting, you
get a sense of what AI is, and we can explore more of that, I
guess, through this interview.
So, thanks for giving us that insight into yourself, Lee. I love
the way that, when we think about AI, everybody's got different
perceptions around it, but it does end up going back to 80s
films. It must be showing the age of everybody that's joining the
podcast. But also, and I know this is going to date this episode,
in the current climate one thing worth noting for the people
tuning in is that we're obviously going through this pandemic at
the minute. We've had quite a lot of conversations, internally
and in the media, around all the applications that people are
using now, the COVID safe apps and things like that. And
obviously we don't want to go down any rabbit hole today of
what's good and what's bad, but it is interesting how the
different technology companies and the different governments are
addressing this issue. From an NTO point of view, with your role
at the minute, are there things you can share with everybody
today around what your thoughts are on this? I don't mean about
any particular type of technology, but how can we deconstruct the
technology behind some of it? Because some of it's AI and some
may not be. I'm just interested in your take.
Yeah, no, absolutely. And you're right. Because of the role and
the way I get to work with governments and help agencies and
others, I see and hear a lot of good stuff. The thing that for
me, Dan, has been the most heartwarming, the thing that gives you
some faith in the way we think as a civilization, is just the
massive surge of everybody wanting to help and everybody wanting
to contribute collectively, generally speaking without any kind
of look for a commercial outcome. I mean, an early example, when
you think about the COVIDSafe app, is the work that Google and
Apple did. They came together and said, okay, how do we
collectively help build and deliver this construct around contact
tracing, which is the idea that we're trying to build with these
tools. So I think that's a critical thing. And Microsoft, like a
lot of other companies, just went to governments, to health
agencies, to those that are dealing with the challenges and said,
what can we do? What do we need to be able to do to help you
either sustain your business, like we're doing now through
technology like Teams, or better understand the problem you're
trying to solve in the health sector or wherever. So that for me
is the big piece. But you asked about the COVIDSafe app, and it's
interesting, because obviously that is something that's on
everyone's minds, I suspect, in terms of should I, shouldn't I,
what does it mean, and there's a lot of talk of risks and all
these kinds of things. As you said, I don't think we need to
unpack all of that; that's probably a rabbit hole that we don't
want to get into.
I think it's important to understand, when we look at it, why we
are being asked to do it. What's the need for the app itself? And
fundamentally it's this idea, widely accepted by health
communities around the world, that contact tracing is the most
effective way to establish and understand the patterns of the
virus itself. The contact tracing idea is that if we come into
close contact with each other, and one of us contracts the
disease, we can quickly and easily identify the spread pattern
and eliminate it.
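To make the contact-tracing idea concrete, here's a toy sketch of the core query. This is not the COVIDSafe implementation, and every name and number in it is invented; it just shows how, given a log of proximity "handshakes", you could list everyone who was near a confirmed case inside a lookback window.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Contact:
    """One logged proximity event between two app users."""
    a: str
    b: str
    when: datetime

def close_contacts(log, case, now, window_days=21):
    """Return everyone who was near `case` within the lookback window."""
    cutoff = now - timedelta(days=window_days)
    people = set()
    for c in log:
        if c.when >= cutoff and case in (c.a, c.b):
            people.add(c.b if c.a == case else c.a)
    people.discard(case)
    return people

log = [
    Contact("alice", "bob", datetime(2020, 5, 1)),
    Contact("bob", "carol", datetime(2020, 5, 3)),
    Contact("alice", "dave", datetime(2020, 1, 1)),  # too long ago to matter
]
# If bob tests positive, the trace surfaces alice and carol, but not dave.
print(close_contacts(log, "bob", now=datetime(2020, 5, 6)))
```

The real apps work on rotating anonymous identifiers rather than names, but the shape of the question, "who was near this case recently?", is the same.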
But what a lot of people seem to be concerned about is the idea
that the app is gathering data about me: that it's learning where
I go and what I do, that it's going to capture all this
information about the people I see and the places I go. Look,
that's patently not true. There's been plenty of deconstruction
of the app on Twitter and in other places that explains it, and
the app itself is built off an open source bit of code. You can
go look at the open source code. It doesn't do those things. So I
think the key thing here, given we're on an AI podcast, is to say
that the COVIDSafe app is not an AI app. It's not there to
artificially learn about what you are doing to build some profile
of you as a citizen. It's there to understand who you've been
near, at the times when you choose to allow it, to keep you safe.
But I think it's a really
good example of one of those scenarios, something we could never
have done before: technology and data coming together to create a
safe environment in a situation that is unprecedented, in this
pandemic, at least in our lifetimes. It's food for thought if you
think about 100 years ago, the time of the Spanish flu, and the
impact that had globally around the planet. We had no technology,
no ability to do contact tracing, no ability to do any of these
things, and still millions of people died, and it's a terrible
tragedy. But I wonder what we could have done with the technology
and the ability to quickly and easily identify people at risk. So
yeah, it's an
interesting thought.
It is. And I think the other interesting thing for me, and I know
there's a paranoia about privacy and things, is that it's also
quite a telling sign from the communities. People are now more
concerned about what data governments or third parties have on
them. Over the last couple of years we've had the things that
happened with elections in the US, and in the UK through Brexit,
so people were really concerned about that privacy element, to
the point of not even using it. People are thinking about the
lockdown laws, whether they'd be lifted, how governments are now
going to manage society, and really going down a kind of dark and
deep path. But it is, I suppose, encouraging, especially from my
side, working a lot with schools and educational establishments
who really care about privacy. It's good to see people having
that level of conversation, even if sometimes people can get
quite paranoid for right reasons or wrong reasons. We did the
same thing with the national health record in Australia, didn't
we, and people are still thinking about that. So I think it's a
good thing that people are pushing back and asking what is
happening with these applications, because it shows that society
generally is becoming more aware of how our data gets used.
Yeah, definitely. That awareness is absolutely growing, and in
some ways it's arguably good to have a healthy sense of
skepticism about anything in your life, just to validate and
check. And AI has delivered a lot of really good things; there's
no question about it. AI in healthcare, AI in humanitarian work,
AI in helping us get to the heart of some big humanitarian
problems around the world. Super valuable.
The way that people are working together is quite interesting as
well, isn't it? One of our podcasts just jumped into my mind
there: we were talking about the bushfire apps, and how the app
in New South Wales was different to the app in Victoria. I was in
the Snowy Mountains, right on the border. The Victorian app was
telling me to get out of dodge, and the New South Wales app said
you're fine. Which app do I use? How does this work? What are
your thoughts on the way different states in Australia, or even
countries, might be starting to share data?
Yeah, it's a really good point. Something like the bushfire app,
which is, as you say, state-based, is a really telling example.
The great thing about having a federated set of states, as we do
here, similar to the US, is that you get these boundaries of
autonomy and ownership. New South Wales looks after New South
Wales people, Queensland the same, and so on, and that's good for
distributing function and value and for building government
services that are applicable to individuals. But as you say, in
situations where we have a national, or in this case a global,
scenario going on, you can start to see the cracks: data not
being shared, systems not integrated. There's an old wives' tale
or urban myth that when the British were over in the US building
train tracks, they used two different gauges, two different sizes
of train track, and nobody ever thought about the fact that when
the lines met they might not actually work together. Even if it's
a made-up story, it's a good illustration of the idea that we can
have data in New South Wales about bushfires, or in this case
healthcare record data, and if the data is different in
Queensland, then even if we shared the data, it just wouldn't
work together. There's that challenge of integration, and
integration costs money, creates complexity, and often obfuscates
some of the data's value.
Yeah, and when we're looking at education, we're going to see
that between schools as well, because every single school and
every single university is different. They do have their unique
cultural values, but underlying that there are data schemas that
are common between most, and that data sometimes isn't surfaced
internally within the school, and certainly often isn't shared
among systems. So this is a really interesting take on it. So
when we're looking at COVID, are there any other trends that have
come up? Have you seen any AI being used, or any types of
technology that you think are jumping out?
Yeah, look, there are lots of things jumping out, in AI in
particular. We touched on this earlier: we should definitely do
an episode on what AI actually is and really break apart some of
these moving parts, because it's a phrase we use without
thinking, and as you and I both know, there's detail behind that
term. But we're seeing AI being used because it's good at
spotting patterns in data at large scale. That's a very basic
construct. AI can look at a huge set of data that you or I could
never get our heads around and see the most minute pattern
occurring over time, depending on the type of data, and then
provide that insight to help a human clinician, or someone else,
identify an at-risk patient, or identify a hotspot flaring up, or
identify a predetermined set of conditions that are likely to
cause the risk of a resurgence of the COVID-19 virus. So that's
an interesting dilemma. In this scenario, we're all talking about
the COVIDSafe app and privacy and the concerns and the assurances
of our own civil liberties, but at the same time, in the
healthcare sector, and largely in governments nationally and
internationally, we're actually seeing an opening up. They're
saying, look, we've got data about our patients, our citizens,
our cases of COVID; let's share that with other countries and
other places so they can be better prepared, or so we can better
understand our own situation. It's a really good example of how,
when you have data at scale and access to the AI tools to sift
through that data for the patterns, when you don't know what
you're looking for, you start to really enable this sort of
global data-sharing idea. And we're seeing that come together
now. We're seeing it in the US, where governments are sharing
data, and Microsoft has been working with the White House and
with the University of Washington and others to create a platform
for open data set sharing, as have many other vendors. And we're
sharing not just plain data but labeled data, so people can use
it very quickly to make decisions.
So I think that's one thing I have noticed: as we know, data
underpins the value of AI, and data sharing is happening more
broadly, because we now understand how quickly we can move on
something like the COVID-19 situation with the right data and the
right tools. And the issue of privacy doesn't go away. In fact,
we should maybe put it in the show notes: Microsoft's chief
scientist, a guy called Eric Horvitz, has contributed to a paper
on the protocols of privacy around contact tracing, thinking
about how we do contact tracing but also maintain the privacy of
individuals as we do it. So we're always still thinking about the
need to have the tools, but also the right to maintain that
privacy.
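The pattern-spotting idea Lee describes, AI surfacing a flare-up in data too large for a person to scan, can be reduced to a toy illustration for the show notes. This sketch is our own, with made-up numbers and not any health agency's method: it flags days whose case count sits well above the recent trailing average.

```python
from statistics import mean, stdev

def flag_hotspots(daily_counts, window=7, threshold=2.0):
    """Flag indexes whose count is more than `threshold` standard
    deviations above the trailing `window`-day average."""
    flagged = []
    for i in range(window, len(daily_counts)):
        recent = daily_counts[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A quiet fortnight, then a sudden spike on the last day.
counts = [5, 6, 5, 7, 6, 5, 6, 5, 6, 7, 5, 6, 5, 40]
print(flag_hotspots(counts))  # [13]
```

Real systems use far richer models over many signals at once, but the principle is the same: a machine can watch every series all the time and raise only the anomalies for a human to review.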
I've also seen quite a few universities and schools bringing up
chat bots. I've seen quite a lot of that myself, which is quite
interesting. I remember I did a LinkedIn post on how to create a
chatbot, because it's quite easy. Lots of universities and people
doing remote learning have been creating chat bots to help
students interact with their university or their lecturers. It's
quite an interesting use of chat bots, and through governments
too, right?
Chat bots are a funny thing. I think you've already done an
episode on that conversation. But when we think about chat bots
in the COVID world, people are using chat bots to just get
answers to questions quickly. People are largely worried, scared,
concerned, and don't have all the information, and we don't have
the capacity for a real person on every phone to answer every
single person's call, but a chatbot can deal with that level of
influx. But also, and this is the thing when I think about what
AI can do, chat bots can communicate with you in whatever
language you need to speak. They can deliver their answers at the
pace you want. They can deal with people who have limited
communication capabilities, or speak different languages, or just
aren't comfortable communicating with a person, perhaps around
something sensitive like their health. So there's a lot of good
in that, as well as the kind of robotic-voice mindset people
might have around chat bots.
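For a sense of why Dan says building a basic chatbot is quite easy: at its simplest, an FAQ-style bot is just fuzzy string matching over known questions. A toy sketch follows; the questions and answers are invented, and real university bots sit on proper language-understanding services rather than this.

```python
import difflib

# Toy knowledge base: known questions mapped to canned answers.
FAQ = {
    "when are lectures online": "Recorded lectures are posted to the portal each Friday.",
    "how do i contact my lecturer": "Use the messaging tab on your unit page.",
    "what are the library hours": "The library is open 9am to 9pm on weekdays.",
}

def answer(question, cutoff=0.6):
    """Return the answer for the closest known question, or a fallback."""
    cleaned = question.lower().strip("?! ")
    match = difflib.get_close_matches(cleaned, FAQ, n=1, cutoff=cutoff)
    return FAQ[match[0]] if match else "Sorry, I don't know that one; try the help desk."

print(answer("How do I contact my lecturer?"))
# Use the messaging tab on your unit page.
```

The fallback path matters as much as the matches: a real deployment hands unknown questions off to a person rather than guessing.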
Yeah, absolutely. And I read something about the research
accelerator program and the open data sets you mentioned earlier
on. Is that something specific? Is it a global thing, or US only?
Is it something everybody can use?
It's global, and yeah, we should definitely put some notes in on
this. Again, thinking about the things I love about my job, and
why I came back to Microsoft to take this role on: it's because
we constantly look for ways to contribute back into society, into
the world. So you'd be familiar, and you may have talked about it
before, with the AI for Good program, which is a broad program of
investment for finding ways that AI can help improve the lives of
citizens around the world, regardless of country and everything
else.
So one of the things we've done with the COVID research
accelerator, under the AI for Health program, is build a specific
stream and invest a ton of money into it, so that we can really
target COVID research: vaccine research, research into hotspots,
into identifying communities at risk, or into simply helping
governments better understand the scale of the problem. We've put
a bunch of money and process into that under the AI for Health
program to help organizations get access to Microsoft people,
resources and technology. And we've coupled it with an opening up
of the Azure Open Datasets program, which is a tool we have in
the cloud, as you'd know, for people to host open data: data sets
that are available for everybody. There's NASA data in there, New
York City taxi data, a whole range of stuff. We've opened that up
for COVID data to be shared globally, and we've made it free for
customers to upload up to 10 terabytes of data. We'll help you
clean the data, label it with metadata, and tag it, and then put
it up there so it can be shared globally.
That's fantastic, isn't it? I think the things that people are
putting in place at the minute... we were talking before this
podcast about our own kids and remote learning and whether that's
really going to change education. And I think there's another
question in there about how this is going to change the way we
address these global pandemics as well, isn't it? After SARS and
some of those other epidemics, there were a lot of issues around
how those were handled. Obviously, in terms of the diseases
themselves, this isn't an epidemiology podcast, but the diseases
were fundamentally different, so they were much harder to
transmit. But when you think about what technology companies,
governments and society generally have done to try to deal with
this, I think it's going to put us in a better situation for a
pandemic two as well. I was speaking to one school leader this
week, and they were relaxing the restrictions in the state he was
in, which is South Australia, and starting to go back a little
bit to normal. And he said, actually, I'm doubling down on my
efforts, because this is going to come back in a couple of
months, and he wants to make sure they've learned from the
mistakes they made trying to rush things through, and that
they're prepared for pandemic two, which might happen in three
months' time, five months' time, five years' time. So he's
waiting for it; just imagine him storing away some sandwiches and
some tins of tuna. But yeah, thanks again, Lee, for supporting us
with this podcast going forward.
It's great to hear from you. And when we look at the season
ahead, I suppose, just for a bit of flavor: in season one we
looked at some of the general implications of AI, very much the
basics of what these things are. AI for Evil was one of the fun
episodes we did at that point, where we really tried to explore
some of the ethical issues around AI. In season two we started to
uncover, now we know a little bit about AI, what some people are
doing in industries such as government, the police and
healthcare, very generically, and then also what some of our
partners, actual customers and data scientists, are doing in this
space to really drive that. We were sitting down yesterday to
think about what we're going to drive for season 3, and we were
really thinking about lifting the lid on the technology, looking
at it from some of the emerging areas as well, say quantum,
aiming to unpack some of the technologies under the hood, and
really trying to make this season as educational as possible, as
well as bringing in some experts from the field. What are your
thoughts on the season ahead, Lee?
Yeah, look, honestly, mate, that's why I signed up for it. I'm
just really excited by what you've done already. The stories you
told over the other episodes are fantastic, and it really is that
journey of crawl, walk, run in terms of your AI skills. And we're
getting to the point where we're going to start running, and
that's great, because we're going to get into the details. I'm so
looking forward to talking about quantum entanglement and stuff
like that. Super exciting stuff. But I think it's looking good,
and yeah, we want to get a little deeper, I guess, into some of
the technology, but still keep it to the point where it makes
sense and we can all relate to it. And I think we can maybe even
talk to some people that are really doing AI in the real world
today, practical AI that's being used in business, and bring some
of that to the journey as well. So,
fantastic.
Looking forward to it. Looking forward to it.
Thanks. Thanks again, Lee. And see you in the next episode.