Nov 1, 2023
After 72 episodes, and six series, we've some exciting news. The AI
in Education podcast is returning to its roots - with the original
co-hosts Dan Bowen and Ray Fleming. Dan and Ray started this
podcast over 4 years ago, and during that time Dan's always been
here, rotating through co-hosts Ray, Beth and Lee, and now we're
back to the original dynamic duo and a reset of the conversation.
Without doubt, 2023 has been the year that AI hit the mainstream,
so it's time to expand our thinking right out.
Also, New Series Alert! We're starting Series 7 - In this episode
of the AI podcast, Dan and Ray discuss the rapid advancements in AI
and the impact on various industries. They explore the concept of
generative AI and its implications. The conversation shifts to the
challenges and opportunities of implementing AI in business and
education settings. The hosts highlight the importance of a
human-centered approach to AI and the need for a mindset shift in
organizations. They also touch on topics such as bias in AI, the
role of AI in education, and the potential benefits and risks of AI
technology. Throughout the discussion, they emphasize the need for
continuous learning, collaboration, and understanding of AI across
different industries.
________________________________________
TRANSCRIPT For this episode of The AI in Education Podcast
Series: 7
Episode: 1
This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.
Welcome to the AI podcast. Uh, I'm Dan and look who's beside
me.
Hey Dan, it's Ray. You'll remember me from when we first set up the
podcast together in 2019.
I sure do. This is exciting time. This is podcast reboot,
right?
This is like getting the band back together, Dan.
It is. So, what we're thinking is, as we've I alluded to in a
couple of episodes previously because AI is moving so quickly and
the technology space is really driving a lot of change and
transformation specifically around generative AI now which is one
aspect of the entire debate around AI. We can really start to focus
in on looking at some of these new trends because it's moving so
quickly, right?
Oh gosh, it is moving so fast. I I I think about genative AI from
the moment I wake up to the moment I go to bed because the the
business I'm involved in is all about genative AI.
You go to bed sometimes. But the the really fascinating thing is
despite the fact I think about it 24 hours a day,
y
it's still moving faster than I can cope with. And I'm only trying
to stay ahead of that one piece of technology because things are
moving so rapidly. And it's not just the technology that's moving
rapidly. It's the ways that we can use it. Ethan Mollik described
it really well. He talked about it being like battlements of a
castle. And some areas are inside the battlements and some areas
are outside the battlements of what we can use it for. And we still
don't know where that jagged edge sits because every day there's a
new use case scenario that just genuinely makes me smile about what
it can do. And then also we find some things that it's really lousy
out that we thought it could do.
And so I think this whole new world of kind of human centered AI
rather than technology centered AI, which is how I think about
generative AI, about the human interface, the way that we think and
do things is fundamentally different from what we started talking
about four years ago, which was machine learning and the binary
ones and zeros version of AI.
Yeah. And there has been a blur in that area, hasn't it? Because
the the science of AI has been around for quite some time and we've
talked about the history of it a lot with Lee, with yourself. We've
explored where it came from and the kind of uh journey around AI
itself. But I think we are doing this podcast as well. this new
series that we're going to move forward with is to also take some
opposing views I reckon because the conversations you're having
around the business side of AI the outcomes conversations I am
having around the technology the implementation of that the
governance and the security element that they're often against each
other right and there's a friction in businesses and schools and
universities where the outcomes of the students the outcomes of the
teachers the real business processes that can be changed are kind
of log ahead with the speed of that change and the the way that
technology is implemented.
Yeah, it's interesting because it's AI and so it's a technological
discussion I think is the starting point.
Ray, yes,
during my campaign.
Well, no, Dan, I'm not because something I I wrote recently was
very much around the decisions about this are going to be made in
the boardroom, not in the server room. That it's actually about a
fundamental process change that's possible. If you go and read the
white papers from the researchers and the management consultants
and all the government organizations, they're talking about 40%
productivity improvements. And so the potential is to change the
way that we do things and the way that we run organizations, not
how do we make a technological change. And that's why I I find it
so relatable because it's about business processes. It's about the
things that change as a result of it, not about how do we make a
small change with data.
Yeah, I I I do feel passionately about that as well. But in the
conversations I I'm having, you have to also tread carefully with
this new technology as well because you don't expose information
that you may inadvertently have not got uh general exposure to with
the security and the governance elements may not be in place. And
and we've got this tension between the speed of getting something
out and actually the tension of waiting to get something out and
making sure it's 100% proof, right?
Yeah. I think it's about elevation. It's elevation of the role of,
for example, the IT team or the CIO in an organization
up to the boardroom because that's still not the case in every
organization.
Yeah.
But it's also about elevation of the conversation. So, one of the
things I've been doing recently, Dan, is I've been going through
the Australian Institute of Company directors course about
foundation for directors. And one of the things that keeps coming
back that we keep being hit around the head with by the the
facilitators of the course is thinking about the director's
mindset, not an executive's mindset. The director's mindset is
about strategy and direction. It isn't about implementation. And so
if we're thinking about elevating the conversation and the role of
the CIO, that is also about strategy and direction.
Yeah.
Not not just about the day-to-day And I think most CIOS would say,
"Yeah, but I do think about that long-term strategy and direction."
There's still a gap, I think, for many people between their
responsibilities in a technology world. Yeah.
And their responsibilities in a business enablement world.
Yeah. And and that's also coming round and quite evident when you
look at the way the digital divides open. And I'm seeing this more
and more. If you remember even where are we now? We're in November.
So even in January, this is when some of the school systems are
banning
Mhm.
technologies like chat and some more kind of embracing that and
some are being more thoughtful. So where do you see that sitting at
the minute between that ban it kind of mode and this digital
divide?
You know banning only works on the bits you can control. I was
talking to a major university in May 20,000 of the users on campus
had used chat GPT on campus. 20,000.
Yes.
So So imagine if you banned that. How how many would be using it at
home anyway or on their phone when they're on campus?
So I I think putting the lid on it is really difficult to do. If
you look back and go, do you remember when we banned Google search
in schools because people could just look up the answer and then we
banned Wikipedia and then we banned YouTube.
Yeah.
The three things that are probably the biggest learning platforms
in the world
were initially banned.
And it didn't stop people using it. It just just meant that people
were using it in different ways. So if you think about it, if you
stopped the use of chat GPT in the classroom or on the campus, it
just means students and teachers will go and use it at home with no
controls and no guard rails. And you then open up the possibility
that some student have access to to it when others don't.
We talking just before this episode was recorded. just chatting
about this and you mentioned about the the kind of autonomous
vehicle problem and I think that's kind of evident in this place as
well isn't it because or in this debate because of the fact that
when we thinking about a digital divide and people banning and
people not banning these things I think there's a danger and I
think in episode one almost we talked about the human parity of
technological systems being the the technology already surpassing
human parity so there's almost like a need for IT leaders to think
well we need it to be 100%. So do you want to explain that that
that autonomous vehicle problem because I think that's really
evident.
Yeah I think if we if we go back through the history of the way
that we've done things in technology
yeah
we've tended to use a gold standard which was is this perfect you
know the go and look at this data interpret it all is it perfect
and and the easiest way to understand that is most AI projects
historically have probably burnt 8% of their and 80% of their time
on cleaning up the data in order to be able to use it.
Yeah.
The self-driving car problem that I talk about is about that
difference between is perfect what we're striving for or is better
than humans what we are striving for. So in the self
difference there massive.
So the data says that self-driving cars are safer than human-driven
cars. The data says self-driving cars are better than human-driven
cars, but 85% of people in North America wouldn't trust a
self-driving car. Now, I think part of the problem is that most
drivers are above average, or at least they'll tell you that
they're above average.
But the reality is a self-driving car is safer.
But people hesitate around that because it's like, yeah, but It's
not 100% safe, but what a million people a year die on the roads.
So the current human standard isn't perfect either. And in
technology projects, we've often not measured against the current
human standard. We've measured against some
and that's that's evident when people roll out new versions of
software, isn't there? They'll wait sometimes Mac operating system
or Windows operating system, you know, six months after it comes
out, sometimes years after the first version comes out. So in
quotes, you can ing out any of the teething troubles with the with
the software. So that's something that IT pros are sort of used to,
I think.
Yeah. And I think that's where a mindset change is going to come in
because 100% right in two years time once we've cleaned all the
data is a good outcome. Is 95% right instantly a good outcome? You
think about for example feedback on essays. We know that Generative
AI or AI generally can mark essays more consistently than humans,
but we still probably don't trust it.
And we probably want to check everything that it's going to say to
a student to check that it's 100% accurate. Humans aren't accurate
either. I mean, I read some research recently. If you are
submitting a homework assignment or an exam assessment, you want to
get it marked first in the pile rather than 10th in the pile or
30th in the pile because the earlier in the pile it gets marked,
the more generous the humans are in the marks they give you.
Yeah.
So, you know, humans aren't perfect. So, can we get to that mindset
which says actually good enough is good enough and let's move
forward on the process. So, if you could give good enough feedback
to your students now the minute they finish the essay rather than
in a week's time when you've had time to go through and write and
review it all and give them some feedback. That's an interesting
question that I think is going to come back again and again.
And that personalization element We've always talked about that
with Napan results 6 months after the nap plan exams happen and
what is the validity of that and the longer you leave here the less
valid that feedback is the business models have changed as well
with this haven't they you're looking at companies who have when
you're talking feedback there companies that have been doing
plagiarism checking schools thinking about assessment I really
think there's going to be a breakthrough in that area at some stage
because that that can't keep moving so the actual processes
underlying in some really key aspects of universities and schools
are going to have to change because there's there's no two ways
about there. AI is already impacting those areas especially around
assessment.
I mean when you say plagiarism checkers, I still see people saying
that they're using AI detectors and that takes us back to the
accessibility thing. So AI detectors do not work. Full stop.
There's papers, there's lots of other things you can go and read,
but if you go and read the things coming from the people that are
experienced in this stuff. People like Ethan Mollik on Twitter or
LinkedIn, it's very clear the research is there. AI detectors don't
work. And if you think they do, what you're actually doing is
disadvantaging certain groups of students because what it will do
is pick up people who for whom English is a second language and say
that those things have been written by AI, but they haven't.
They've been written by people. But the writing that they use tends
to set off an AI detector
and there is an underlying sentiment around fairness and
reliability and trust which is a secondary conversation to it
because obviously there is an element in certain aspects of
utilizing AI where you might want to put in invisible watermarking
on images and things like that but I think that is getting the
reliability security and trust element on that argument which is
very important to tech companies are working on that at the minute
um is very separate to the assessment and plagiarism and AI
checking and it's been lumped in the same conversation
sometimes.
Yes. And and the the other thing that comes in is the bias piece.
Well, it it displays some bias and in fact I saw an example last
night where somebody had asked it to draw an image of a great
teacher and all four images were male. Now the interesting thing is
you can spot those biases and you can fix it in the system and I've
seen the chat GPT for example its bias has been changing all the
time in order to start to actively remove the biases. But if you
think about how do we remove human biases because there's a lot of
human biases. Like for example, if if you're an education system
with 100,000 teachers and I told you what I said earlier, which is
that papers marked first get a better mark than papers last March
10th or 20th. If you wanted 100,000 teachers to change their habits
to remove that bias.
Imagine how long that would be. I mean, first of all, you got to
convince them it's true. Then you got to convince them to change it
and then you've got to keep reinforcing it. Whereas if you've got
that kind of bias in a computer system, an AI system,
you build a rule and suddenly it fixes it. Um I think about I asked
Chat GPT to create for me a list of 10 doctor's names February
March this year and all 10 names were male. And if you go into it
now, you get a broad mix. Now, the reason it gave me all male names
was because the top 10 doctor's names out of the US surveys are all
male, but now it's been programmed to remove that bias. It's now
doing a better job of it. So, it's actually overriding human bias.
It might also be overriding human reality, which is that many
doctors are males and that's what shows up in the data. Yeah.
So,
yes, there are these problems, but I believe that they're probably
more manageable. And let's go right back to the beginning. This is
an emerging technology. It's amazingly how fast these things are
being dealt with and managed.
Yeah. And and the impact that it's having, I think, is is evident.
Even though you call it an emerging technology there, I'm still
staggered, and I don't want to keep going around in circles with my
narratives around this, but I'm staggered at the amount of
applications that I'm seeing teachers come out with. You know, this
this week alone I was looking working with a school dascese
actually who were looking at creating texts for students to read
which are one reading level above what their current reading level
was. This dascese is working on literacy really heavily and going
back to basics which is fantastic. So that is a perfect application
for generative AI and they can do that. So you are talking about
personalization happening really really quickly and if you can do
those and solve those business problems really effectively and like
you're saying with 95% accuracy, then let's do it because we
actually have an impact in the classroom today rather than in a
year's time.
Yeah, that's absolutely right. And sure, we need to be aware of all
of these other issues, but fundamentally we can improve some of the
processes that we're doing. We can improve the support we provide
for students. We can improve the way that we engage with students
or with parents. It's so many of the things that involve
interaction can be prove and we need to jump onto the use cases and
the benefits of those use cases and testing those things out rather
than probably the old world which is oh well we can do that once
we've fixed all these other things. We can do that thing about
predicting which students going to drop out once we've cleaned all
the data
in five years time. So there is a thing about is good enough now
better than perfect in six months time or 12 months time or help
us
and I always always argue with some of this kind of stuff. I know
when we're doing reading progress when I was a governor in a school
in the UK and when I was doing offstead school inspection work it
was very much the school budgets however controversial this might
be the school budget for was for that year. So when people were
saving up that school budget for a long-term minibus for example
there was always this tension in a governor's meeting of saying
well that $150,000 we storing for the minibus will come to fruition
in 3 years time when we've got enough money to buy this minibus.
However, that could be used as a reading recovery program for a
year three student now. So, there is a genuine need to get impact
now rather than thinking about these things too deeply I suppose.
And there's an interesting element which we were talking about
previously through China
which you mentioned about the fact that China's got a even though
they they've got their own interest in social norms around
technology. They've got a different take on the way they utilize in
this, right?
Yeah. There's some new regulations coming out in China. So, if you
think about the social norms and what is and isn't acceptable.
They're talking about consumerf facing chat bots and things like
that. The one of the things they have to do is test scenarios. So,
they I think they've mandated a minimum number of tests. You must
ask it 4,000 inappropriate questions. You must ask it 4,000
appropriate questions and then you have to manage the responses.
But what is interesting is they're not saying It shouldn't answer
any inappropriate questions. What they're saying is it should
refuse to answer 95% of inappropriate questions, but equally it
should answer 95% of appropriate questions.
So, what they're trying to say is we recognize it's not going to be
perfect,
but we don't want to make it so perfect that then it won't do the
job we want it to do. And and that's interesting because if you
think about that in the cont of an education. Let's say you you
build a chatpot, you put it on your your school's website, somebody
will go and get it to have a bonkers conversation that is
inappropriate. What they're saying is we recognize there is a risk
of that happening and we're going to mitigate against most of those
scenarios, but we're not expecting everything to be 100% accurate
because if we go for that, we're going to lose all the upside
benefit. And the upside benefit in that scenario of a chatbot on a
school website is perhaps you're making information more accessible
to parents or students or they can get help on their assignment in
the middle of the night
and quickly rather than waiting for somebody to you know be at the
other end to support them with their tutoring or whatever it might
be. Give them a
LLM example of their maths question they stuck on immediately which
could solve you know 70% of all of the queries that come through
maybe more.
And that's why I'm thinking about now we got the old bang back
together Dan is you and I it's it's that reset point because we're
going to have a ation going forward about how do we help this staff
to improve the processes going on in education? What can it do?
What can't it do? But it's going to be very different from I think
where we first started off where it was a lot of technology
conversation. Yes,
this is about a human- centered conversation about how technology
helps rather than it being a technology centric conversation. And
and I think that's the fundamental difference because I I spend
almost all my time now not talking to IT people and talking to
leaders of organizations about the way that the organization can be
transformed or the processes can be transformed not about the
technology piece because we can have a human centric conversation
because when you're talking about generative AI you can show things
that everybody can relate to you show a real conversation you show
it interpreting real information it's not a It's a not a bits and
bites and widgets conversation. It's about genuinely transforming a
process.
But but I think as well and and this is why this is going to go
really really in a in a different direction because generative is
moving things forward. But we do need to also have a goal in mind
with this podcast as we walk through to make sure that people
listening do understand where the different types of AI fit because
there is confusion. There is a divide happening at the minute and
we want to bring everybody along on this journey. to make sure it's
equitable for all. So, you know, there's AI, which is the
generative style of AI, you've got the data analytics AI, you've
got cognitive services, you've got all these AIs that can read
documents, and then it's the use cases that are the key to say,
well, where does that fit in?
Is that in the generative space? Is that actually that's actually
data analytics problem, which is where we kind of focused
in season one. I suppose it was that data and AI element, the
cognitive services, the machine learning But now it's really ramped
up and moved into a you know completely different service of its
own I suppose.
Well and the other thing we have to add into that is the blend of
consumer services and
enterprise
you know kind of enterprise services because actually many of the
scenarios now you can test with consumer services.
So imagine a scenario around the learning reading levels for
example that you were talking about. You can test that that
scenario works with chat GPT or with one of the other models and
know that your scenario is going to work, but then you go and build
it in enterprise services. You'll go and build it in Azure Open AI,
but you can test it with a consumer level service. And so that
opens up many more opportunities. It also opens up a whole load of
conversations about well what happens when students are accessing
consumer services or teachers are accessing consumer services. Is
that okay? And in which scenarios is it okay and not okay? And
where do you provide the guidance. And the thing I'm thinking about
that we're going to get to over the next however many episodes is
going to be about how do we get a blend of different voices. So I I
don't mean your your voice, but three different voices. So one
would be people that are the practitioners. They're not AI experts.
They're not technology experts, but they can see a process, an
opportunity for
yes,
help. The second will then be the the kind of generative AI experts
and by that I'm meaning the the the people that understand the
potential to transform something,
but they see it from the perspective of what this allows this
human- centered AI. And then the third voice will be the technology
voice, the CIOS and the IT teams who are going to have a
perspective built out of their legacy and history. I often used to
say the main role of a CIO is to keep the head teacher off the
front page of the newspaper.
Yeah. Yeah.
But there's a legacy that comes from that. It's a fasile example,
but there's a legacy that comes from that about what you do about
risk that what you do about accuracy and all that kind of stuff. We
need to blend those voices
and also Yeah, absolutely. And I I also I was reading a blog post
by one of our interviewees uh recently as well, Nick Woods and the
health team as well from from Microsoft. And health is another
example. And I think we always look back could help and say you can
take a teacher out of a school today and put them in the school 100
years ago and they can do the same thing and they know the board
and they at the front and the sage on the stage kind of mentality
and I know that's been facitious and teachers are much more
technologically advanced these days but you take a doctor in a
surgery even 10 years ago and they wouldn't understand the robots
and the use of that technology and I think the health sector is
always a good litmus test for me because that's really really
innovative and actually has a massive impact today. They don't
think about what's coming up in the next 10 years even well they do
but you can see some of this AI technology already impacting
patient care. So I think from my perspective as well it'll be good
to bring in people from other industries and see that speed to make
sure the schools and education innovate as quickly because the the
post from one of our I think it was it might have been Simon Cos or
Nick Woods one of the health executives but it was a really well
thoughtout post saying we need to grasp this opportunity now and
and move really quickly in the health sector with AI.
Yeah. And health is interesting because it's a highly regulated
industry, more regulated than education, but
somehow
it's able to I think innovate at system level
uh a little bit faster than other regulated industries. You know,
banking is highly regulated, but they're using it in banking. So
yes, I think you're right. There's both regulated industries where
we can get some examples from. There's also add in the commercial
world. I'm speaking at a in a few weeks time at an event with
somebody from Penfold Wine about how they're using generative
AI.
Oh, we have to get that in on the conversation then. Brad will be
happy.
But there's a lot of scenarios being used in other industries and
yeah, let's get those examples in as well. One of the the benefits
I've got now is I'm working alongside people working in those other
industries. Let's hear what's happening in retail. Let's hear how
it's going to disrupt global logistics. Let's hear how it's going
to disrupt disrupt the wine industry because out of those sto is I
think we will find interesting parallels that might excite some
ideas in education.
Brilliant. And and go back right to the start here for that human
centered approach which is now different different to any other
kind of era we've been in before. This is really driven by end
users, isn't it? So get those end users on the on the podcast and
get that conversation moving.
Yeah. What's exciting to me, Dan, is that I've always had
difficulty engaging my children in what I do because
yeah,
they didn't have that much interest in technology or or at least
they pretended not to. And now there's some really fascinating
conversations going on with my kids because of the potential of
changing things not in a technology way.
I think we need to get my daughter on as well because yeah, I was
on a conversation in the um car yesterday actually and she's asking
her Snapchat AI for quizzes on princesses all the time. Give me 10
questions of Disney. princesses cuz you went to Disneyland and it's
interesting conversations she's having with that and it's
interesting just listening to the way she interacts with AI. So I
think getting those different perspectives would be excellent.
So we should interview some students. The other thing we should do
down
yes
is we should interview AI. We should get
Oh wow, that's a great idea.
We should get We should do an interview with Chat GPT and make that
a whole episode. Do you remember when we had a bot join one of our
podcasts in in series one I think it was? And it was a very robotic
voice. Let's have a crack at having a podcast with as well.
Yeah, bring on this season. I can't wait. Thanks again for
rejoining the podcast. And and if Lee's out there listening as
well, you know, Lee's got a new gig out there supporting the legal
organization in Asia, supporting the AI conversations there. So, if
you're out there listening in, Lee, thanks for holding the for you
in another episode and see what's happening in the the world of AI,
literally.
Brilliant. I am very excited to get this uh going and uh find some
interesting people to talk to.
Let's do it.