Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

May 25, 2020

Keeping the theme of education and learning, this episode provides a look back, way back, into the human obsession with creating artificial replicas of themselves. When did it start, who started it, what is the AI winter, and how did it accelerate so quickly in the last 10 years?

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 3
Episode: 2

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

Welcome to the AI podcast. Uh, hi Lee. How are you doing today?
I'm good. I'm excited to be back again in the studio or at least in the virtual studios recording this.
Yeah, it's fantastic, isn't it? Under the current guise of working from home, I suppose, I sit back and think about all the different directions we can take this podcast, and today's one is really interesting, because we did say that we were going to look under the hood. We're going to bring together a really brief history of AI, which we've never really explored, going right back to the initial conception of AI, going right back to basics, and then bringing it up to the modern day. That way we can set up our story over the next couple of episodes to really talk about machine learning and artificial intelligence, how we actually do that, what it actually means, and really get under the hood of AI. So if we go back to basics here, Lee, I suppose this is really reigniting our interest in AI. Where did it start with you, and where did AI generally begin?
Wow, big question, Dan.
Yeah, sorry. But look, I think it's a really important topic, because we do tend to assume that AI is this new thing that's, you know, sort of infected and impacted our lives in so many ways, but there's a really, really long history to it, and we'll try and keep this condensed to the time we have.
Yeah.
But look, you know, for me, I'm always fascinated by these kinds of things; the intersection of history and technology is always a fascinating area to get into. And it really goes back hundreds, even thousands of years, in the sense that it goes back to constructs around Greek mythology. In fact, the earliest kind of view, from my perspective, where we start to hear about this idea of what we would now call artificial intelligence, is the Greek mythology of Talos. So Talos was a sort of, you know, Greek god, a guardian if you like. But it was this idea of a giant automaton, you know, a machine
that worked alongside the human experience, and that's kind of the principle of it. And you could argue, when you get into things like Minos and the Minotaur and other areas of Greek mythology, that there has always been this human fascination, from a very early point on, with the idea of creating a replica of themselves, an idea of creating a human, or a being, in the image of a human. And it's not just the Greeks: in, I think it's Jewish folklore, there are these ideas of golems. If anyone plays Minecraft, you know, your iron golems in Minecraft that protect the villages.
That's born out of a very real idea of, you know, an artificial being that was there to protect the people. Mm, so yeah, it starts a long, long way back there.
Yeah, I suppose it goes from that. The interesting part as well, and I'm sure we can explore this as we go, is that human form element, something that always kind of intrigues me. When we start talking about AI and robotics and that kind of coexistence, and the uncanny valley where things look like people, I suppose in ancient times, in ancient Greece, things were sometimes in the human form and sometimes they weren't, you know. So it's kind of interesting to see how that's evolved as well, going right up to things like Frankenstein and Mary Shelley, I suppose.
Look, absolutely. And it's interesting as you go back and look at the initial conceptions of this idea of an automaton, you know, an autonomous human-like being. In the very early days, of course, there was a lot of connection between religion and belief systems that drove these kinds of ideas. And so they tended to be godlike, huge in stature, and almost better-than-human kinds of thinking. And as you referenced, as we've gotten closer and closer to the machine looking at us and us looking at the machine and seeing that similarity, we hit the uncanny valley idea. But then we get into ideas in the early part of the 19th century around Mary Shelley's Frankenstein, which is often positioned as the classic horror story, but it's also an early idea of where, in the general populace, we started to think about the idea that we could create something that is closer to human than perhaps, you know, the godlike ideas of the past. But you know, Dan, what we're talking about here is closer to robotics, I guess, and autonomy, and that leads us to AI. But the other thing I think we need to at least call out, because they often get blurred, and we talked about this on the last podcast, is machine learning versus AI. They have followed very different paths, and where we're at today, as we'll get to, is where they've really merged in a very impactful way.
Yeah.
Yeah. And just looking back as well, you know, when we started getting into that element of robotics around the Second World War, in the 40s, there was Isaac Asimov's writing and him coming up with the three laws of robotics and things like that. But then also, I suppose, the war itself in World War II produced a lot of people thinking outside the box, a bit like we're doing in current pandemic times. People think outside the box and try to solve large problems, and start to work out how you can get a machine, for example, when we talk about Alan Turing, to crack things like the Enigma code and some of those cryptography areas. It starts to get quite interesting there, where the invention has come out of those kinds of areas, unfortunately out of war, but I suppose that's driven quite a lot of this as well.
Well, look, yeah. It is often an unfortunate reality that a lot of human evolution, if you like, our huge steps forward, are often driven out of major global events, be they negative or positive. I mean, not to get too far off subject, but where we are today, sitting in this COVID-19 world as we call it, this is a major event that has changed the way a lot of organisations are using technology, and it's just another example of that point. But you talked about three things there, Dan, and let's pull them apart a bit, because I think Alan Turing deserves a bit of time on his own, simply because of his unique thinking. But if we think about the machine learning piece: fundamentally, machine learning as an engine to feed AI was this construct, this idea that a machine could do more than just be given sets of data and then tell you the output of the sum, the output of the mathematics behind it. That's kind of the precursor: mathematics was the foundation of machine learning.
Yes.
But what it became, and this was as long ago as the late 1700s, 1763, was this first idea, something called Bayes' theorem: the idea that if you have data sets of previous outcomes or experiences, then, even just for the purposes of having a mathematical equation, the outcomes of previous incidents can help you predict the outcomes of future ones. And it was the first instance of this idea of using mathematics to predict, versus simply telling you the answer. You know, if you remember back to your high school maths, it was always about the fact that maths is absolute. Maths can't be wrong; it's either right or wrong,
and machine learning is unique because it's not.
So Bayes' theorem then? What was that based on? Have you done any research on that?
Yeah, I've looked a little bit into it. I mean, there's a depth of mathematics to it, but it's essentially based on this construct of statistical inference: the idea that, statistically speaking, you know, you've heard the saying that 99% of statistics are incorrect, but it's this idea that if I can see the outcome of previous events, then I can predict the probability of an event. And it's a bit like, actually, it's a bit like heads and tails, or two-up.
Yes.
You know, when you flip a coin a number of times, there's a predictability that says if you flip that coin ten times and it lands on heads, the chances of it being tails next time, based on the logic of mathematical, statistical probability, are that it probably could be tails. But you start to unpack that, and this is where it gets a little interesting, because there's really no mathematical logic to that. It could be tails a thousand times in a row, but there's a predictability that it could also be heads. And what is the likelihood of that? And that's where you get into today's modern machine learning ideas. We talk about predictions and the accuracy of the model. And we talk about, you know, 80 or 90% accuracy being pretty good and pretty close to
human computational thinking, but it's always a prediction, never a certainty. So I think it's just an interesting one to keep in mind as we get through this journey:
that there's the mathematical world of machine learning, and then there's the sort of, almost the automaton, mechanical world of AI from the early days onward.
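As a rough illustration of that coin-flip idea, a few lines of Python can apply Bayes' rule to update a belief about whether a coin is fair after seeing a run of heads. This is only a sketch; the two hypotheses and their prior probabilities are invented for the example, not anything from the episode.

    # A minimal Bayesian update with just two hypotheses about the coin:
    # "fair" (P(heads) = 0.5) and "biased" (P(heads) = 0.9). Priors are made up.
    priors = {"fair": 0.5, "biased": 0.5}
    likelihood_heads = {"fair": 0.5, "biased": 0.9}

    observations = ["H"] * 10  # ten heads in a row, as in the example above

    posteriors = dict(priors)
    for flip in observations:
        # Multiply each hypothesis by the likelihood of this observation...
        for h in posteriors:
            p = likelihood_heads[h] if flip == "H" else 1 - likelihood_heads[h]
            posteriors[h] *= p
        # ...then normalise so the probabilities still sum to one.
        total = sum(posteriors.values())
        posteriors = {h: v / total for h, v in posteriors.items()}

    print(posteriors)  # after ten heads, "biased" dominates, but "fair" never hits zero

The point the sketch makes is the one in the conversation: the output is a probability that keeps shifting with evidence, not a right-or-wrong answer.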
And that went into the Asimov era then, I suppose, with robotics. That was the early 40s, right?
Yeah, Asimov was the early 40s, and even before that there was a Czech play in the 20s. I can't pronounce the original name of it, but it was essentially translated as Rossum's Universal Robots, and it was a play in the same way that you might watch a play for entertainment, as they did back in those days.
Yes.
But it was a play where there was a robot as a character. And again, it's that idea that to us seems so logical now; we have robots in our houses cleaning the floors. But this idea that you could present to the public the idea of a robotic human that talked and acted and behaved like a human but was a machine, it must have been like a "when we landed on the moon" kind of moment. It was one of those incredulous moments where you can't believe this is actually happening in front of you.
But yeah, Asimov as an author, again in the 40s, was really forward-thinking, with this idea that robots will be a part of our lives and we have to think about them and how they interact with the human experience.
Yeah, it's a really interesting turning point, because then it really rapidly accelerated, didn't it? So even though these ancient thought processes were there, it was almost like a perfect storm coming together. Some people were really out there thinking about some of these areas: you've got Asimov, at the same time you've got the war and all the technology being produced around cryptography and things, and then Asimov coming up with his three laws of robotics. Can you remember them? I can remember a couple.
I think it was something along the lines of, you know, a robot not injuring a human being was one.
Yeah.
And, you know, not allowing a human being to come to harm was part of that one, I know that. And then there was something around obedience: a robot must obey any orders given to it by human beings, except where those orders would conflict with the first one. We've probably read those before, but
yeah, exactly, it's quite interesting. And then there's the robot protecting its own existence, as long as such protection doesn't conflict with the other laws. It's all
that's right
in essence around you know
robots being there to serve people, not the other way around, which is where his philosophy came from. And I suppose we're still grappling with a lot of those simple principles now, even when it comes to quite complex AI and machine learning. So it's kind of been
It's quite interesting, isn't it, when you think about Asimov's laws of robotics and how long ago they were thought of. Even now, as we get into the challenges of ethics in AI and where we are in the modern world, we are still, foundationally, trying to grapple with this idea: if we build something that looks and acts like us, how do we make sure it protects us, that we don't become subservient to it, and that it doesn't become, you know, too close to us? Incredibly deep thinking. But I want to come back to the person you mentioned, Turing,
you know, Alan Turing in the 50s. I know it's something you've been putting some thought into, exploring what the idea of the Turing test was. Do you want to talk a bit about your perspectives on the Turing test?
Well, I suppose it fascinated me when I was teaching: one of the things that really got kids engaged with technology was talking about encryption and secret writing, and going right back again, like we've done today, into the past, where people were hiding writing in biblical times, then steganography, and then into cryptography. And then there's the way that Alan Turing, a mathematical genius from the UK, sat down and thought about how he could put his brain to trying to break the Enigma encryption using machines. There was the Colossus machine at Bletchley Park in the UK, which was used to do lots and lots of mathematical number crunching to try to break some of those codes, and there were several different things involved around that, because the machines that the Germans had in the submarines at the time, the Enigma machines, were very difficult to crack; the settings were consistently changing. And there are lots of movies made about it, I suppose. But then from that, obviously,
he learned a lot, and he also came up with some landmark papers. One he came up with in the 50s was the thing called the Turing test, which basically meant that you'd type some questions into a machine, and when you can't distinguish whether there's a person or a machine on the other side, that was his test; that's when you've got that kind of parity, I suppose the first foray into artificial intelligence. It's not really artificial intelligence, but when I was doing computer programming myself, and we all did it I suppose, we created code when we were learning BASIC and all the different languages, where you'd create a bit of an "AI" that you could type "hi" to and it would say "hi" back. My kids do that with Alexa, you know: "Hi Alexa, where are you? Are you happy?" All of those simple questions to test the barrier of: is there somebody on the other side? Are they listening to me? Are they making decisions? And, you know, Turing came up with that.
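That "type hi and it says hi back" exercise is really just a handful of pattern-matching rules, nothing that would trouble the Turing test. As a loose sketch of what those early BASIC experiments looked like, translated into Python, it might be nothing more than this (the canned replies are invented for the example):

    # A tiny rule-based "chatbot": no learning, just canned responses.
    rules = {
        "hi": "Hi there!",
        "how are you": "I'm just a program, but thanks for asking.",
        "are you happy": "I don't have feelings, only rules.",
    }

    while True:
        question = input("> ").strip().lower()
        if question in ("bye", "quit"):
            break
        # Fall back to a default reply when no rule matches.
        print(rules.get(question, "I don't understand that yet."))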
Yeah, absolutely. It was so long ago, and he was a very intelligent man. And even my daughter, you know, for a long time she would talk to Siri as if it was another person. I think she felt Siri passed the Turing test. It didn't, but she thought it did.
yeah,
But I think, and you raised this point, the Turing test is so often thought of as a really critical moment in the idea of AI. You also have to bear in mind that at this point in the early 50s, when Alan did that, we weren't talking about AI; AI wasn't really a term that was widely used. What the Turing test instigated, for me at least, is this idea that there's a philosophy behind AI. To have a test where a machine could be proven to be human-like is kind of testing the waters on that human willingness to accept that a machine could be like a human, that it could be, indistinguishable I think is the right term, between a human and a machine.
Yeah, that's right. And then I suppose from that, you know, it's very similar to the physics stuff, and I know we're going to talk about quantum later on, but there was just an amazing coming together of minds at the Dartmouth conference in the mid-50s then, wasn't there?
Yeah. Look, and that's really the inception of it all. The Dartmouth conference was exactly as you highlighted there. You've got to remember as well, back in the 50s, 60s and 70s, in the birth of computing as we think about it today, these were the times when we were building data networks, we were building computers, we were really just finding our feet and understanding this idea. There were lots of these kinds of collective mind groups where people would come together and think about big issues. And this was one; it was at Dartmouth College, obviously, the Dartmouth conference, in the US. Essentially it was a guy called John McCarthy, who is widely credited as the person who first coined the term artificial intelligence. So he took all of those historical reference points and saw this as the development of a new kind of intelligence, an artificial type of intelligence. And so it was John McCarthy and a range of other people who were at the conference, Minsky and a few others who are well known in the world of mathematics and science. And that's another interesting point: we're still, back then, in the domain of science and maths. We're not in the computing domain, we're not really in the general intelligence domain. We really are still deeply embedded in
Yeah. The mathematics of it all.
Absolutely. And that's where, you know, I was teaching computing at one point as well, and when I was looking through all of this history, I always used to talk about this and the way it then developed into expert systems, because it was very much in that theme: hey, maths is a pure science, and actually anything that humans do we can predict and we can model. The brain, yes, is complex, but underlying it all it's kind of like a binary computer; there's either electricity going through it or not. So it was very much underpinned by that kind of binary thinking. And out of that then developed a lot of expert systems, which weren't really AI, right, but I used to do them with kids in school. I used to get them developing simple algorithms, you know, so they could guess the type of beer they were drinking. No, not beer with the kids, obviously, that would send a mixed message.
Yeah, I know. But I used to say to them: an alien has landed, and you've got to pick a subject you're interested in, whether it's guitars or music or whatever, and then they'd create this almost like a decision tree, you know. And they'd do that in Excel; they'd just create little macros jumping between different pages. And it was quite good. So we'd look at things like, there was at that point one expert system developed in Switzerland which was a euthanasia one. So it was basically about euthanasia, and it would essentially ask questions, and I think that system kind of
exists now, and there was no AI in it, but it's almost there. There was a great video I used to show them, and it was essentially: how are you feeling, are you depressed, are you not; you've got like five or six questions, and if you were that way inclined, it would recommend the lethal injection. So when I showed the kids that, they came up with some fun examples, you know: the Pokémon one, is it yellow, is it blue, is it green, what type, all that kind of stuff. And those expert systems, I suppose, were born in that era. They were very mathematically orientated, right?
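That classroom exercise is essentially a hand-built decision tree: every question narrows the space until only one answer remains. As a loose sketch of the kind of yes/no "expert system" the kids were building (the Pokémon questions here are made up for the example), it could look something like this in Python:

    # A hand-written decision tree: each node is either a question or an answer.
    tree = {
        "question": "Is it yellow?",
        "yes": {"answer": "Pikachu"},
        "no": {
            "question": "Is it blue?",
            "yes": {"answer": "Squirtle"},
            "no": {"answer": "Bulbasaur"},
        },
    }

    def run(node):
        # Walk the tree until we reach a leaf that holds an answer.
        while "answer" not in node:
            reply = input(node["question"] + " (yes/no) ").strip().lower()
            node = node["yes"] if reply.startswith("y") else node["no"]
        print("I think it's", node["answer"])

    run(tree)

There is no learning here at all: every branch is authored by a person, which is exactly why these systems were expert systems rather than AI.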
Yeah, absolutely. And look, it's interesting that when you think about the way AI developed over time, it's also somewhat a product of its time, and a product of people's expectations at that time of what technology can deliver. So you see this almost constant one-upmanship between what people expect and what technology can do, and so on. But the other thing worth noting, just thinking about it, is that when we think about the Dartmouth conference, and again, I talk about the Dartmouth conference like it's a major event, there were about 20 people at that conference. This was not a community of thousands of people from around the world who understood this domain; it was a very small group of people who really understood what was being talked about.
But it was about that time, the same sort of time the conference took place, that one of the attendees, Marvin Minsky, who I mentioned earlier, first developed what we now call, and what at the time I guess was thought of as being, the first neural network. The first approach to saying, okay, machine learning is moving out of the mathematical and into the pseudo-biological, in the sense that a neural network is essentially a model referenced on the way in which our human brain synapses connect data points. That simple idea was happening at the same time, and I think, if we look at this on the AI journey and the history of AI,
around the mid-50s, the coining of the term AI and the development of neural networks was when we started to see the foundations of what we call modern AI today: this interconnect between models that learn, and artificial intelligence experiences that use those models to behave and react in the real world. So it's a really important point in time,
that sort of late-50s period, and there's a whole bunch of stuff that was going on there that was just new learning for us in these things.
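For a flavour of how simple those very first "neural" ideas were in code terms, here is a loose sketch, not anything from the episode, of a single perceptron-style neuron learning the logical AND function in Python; the learning rate and number of passes are arbitrary choices for the illustration:

    # A single artificial neuron: weighted sum of inputs, then a threshold.
    # Trained here on the AND function with the classic perceptron update rule.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    def predict(x):
        total = w[0] * x[0] + w[1] * x[1] + bias
        return 1 if total > 0 else 0

    for _ in range(20):                      # a handful of passes over the data
        for x, target in data:
            error = target - predict(x)      # 0 if correct, +/-1 if wrong
            w[0] += rate * error * x[0]
            w[1] += rate * error * x[1]
            bias += rate * error

    print([predict(x) for x, _ in data])     # expected: [0, 0, 0, 1]

One artificial "synapse weight" per input, nudged up or down after each mistake; everything since has been layering and scaling that basic idea.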
Yeah, absolutely. And then after that, I know we mentioned this very briefly before, we hit the AI winter, right? When did that come in? What stimulated it?
Yeah. So, look, that's an interesting one, and it's, I guess, unfortunately named: the AI winter is actually named after the nuclear winter, and it's the idea, as those of us who grew up through that period in the shadow of nuclear war know, that winter is that decimated period where there is nothing happening and no signs of life, so to speak. And look, it wasn't immediate. In the 50s we had this excitement around AI; there was lots of stuff going on. One in particular that always rings in my mind, partly because I mentioned it last time we spoke, around WarGames and the WOPR, is the idea of tic-tac-toe being the most fundamental of what we call a reinforcement learning model. That was in the 60s: they developed this idea of tic-tac-toe as a model by which you can teach a computer to learn, or to reinforce its learning, through continuous play. So things were exciting; I think we were getting to a point where the sky was the limit, so to speak. But then, in the late 70s, the Association for the Advancement of Artificial Intelligence was established. So out of that Dartmouth conference, many of the participants created an external body that really sought to create a broader, open market opportunity for AI.
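As a very rough sketch of that "learning through continuous play" idea, the snippet below lets a computer play thousands of random games of tic-tac-toe against itself and nudge a value score for every board position it visits towards the eventual result. The learning rate and game count are arbitrary choices for the illustration, not anything described in the episode.

    # Bare-bones reinforcement-style learning at tic-tac-toe via self-play:
    # every board state the learner sees gets a value, nudged towards the final reward.
    import random

    values = {}            # board (as a tuple of 9 cells) -> estimated value for "X"
    alpha = 0.2            # learning rate, picked arbitrarily for the sketch

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def play_one_game():
        board = [" "] * 9
        seen = []                                  # states visited by X this game
        player = "X"
        while True:
            moves = [i for i, c in enumerate(board) if c == " "]
            board[random.choice(moves)] = player   # both sides play randomly here
            if player == "X":
                seen.append(tuple(board))
            w = winner(board)
            if w or " " not in board:
                reward = 1.0 if w == "X" else (0.0 if w == "O" else 0.5)
                for state in seen:                 # nudge visited states towards the outcome
                    old = values.get(state, 0.5)
                    values[state] = old + alpha * (reward - old)
                return
            player = "O" if player == "X" else "X"

    for _ in range(5000):
        play_one_game()

    print(len(values), "board positions valued so far")

Positions that tend to lead to wins for X drift above 0.5 and losing ones drift below; a smarter player would then pick moves by those values instead of at random, which is the reinforcement loop the conversation is describing.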
But what was happening at the same time, as is often the case when academia and, I guess to a certain degree, religion and theology and other things start to intersect, was that there were competing papers arguing that this is not how the human brain works, particularly around the neural network piece. Out of the idea that we had developed this neural network, or that Minsky had developed this neural machine, came a book called Perceptrons, which was this idea of trying to explain how the human brain worked and how we married that up to the idea of a neural network.
But at the same time, there was a lot of content being written that basically debunked that whole idea and said it was wrong, that that's not how the human brain worked. And it really put a damper on things, and that was what kicked off the AI winter: this general pushback on the idea that the human brain could be emulated in computers.
It was almost seen, if you like, as a sort of travesty of the human experience, that you could try to explain human thinking, the humanity of it.
Yeah. And I suppose they were also limited by computational power at the same time. So you're trying to work through these things, you're limited in what you can do, and it kind of becomes a self-fulfilling prophecy, really, doesn't it?
Absolutely. Of course. I mean, all of the things that we now know as the reasons why we're reinvigorating AI were there at the time: there was not enough compute power to do it, and there was generally not enough understanding of the domain, so there was a lot of mistrust and distrust. And I guess there was also the question of applications: what were you going to do with this kind of capability? I think in those early stages, pre the first AI winter, it was this sort of "well, that's great, but what's the point?" People would question why you would want a computer that thinks like a human,
and it's so expensive to do that anyway. You know, when you look at all the statistics now, I was in a briefing recently where we talked about human vision, and it takes a certain number of images per second to understand and do that cognitive recognition, whereas if you wanted to do that in a data centre it would cost you millions of dollars, right?
Yeah, absolutely. I mean, cost was a huge inhibitor of that one, and that's led to some really interesting stuff we'll get to in the present day. But as you mentioned earlier, on the back of that AI winter, what was born out of it was the expert systems you mentioned, and that was
you know, because what we started to realise is that computers still had all this potential, but possibly the world wasn't ready for AI, I guess is how I'd quickly describe it. But expert systems really drove businesses forward, and as you said, it was something that really helped everybody understand how a tech system could help the human, you know, help us in business and help us in life, by simply crunching data at a bigger rate than we could, through expert systems.
Yeah, absolutely. So what was the catalyst to get us out of that AI winter? Was it a particular time frame? You know, I'm thinking back to when I was at uni: when I did my university modules in the 80s and 90s or whatever, there was one small module on neural networks, but it wasn't really a thing at that point in universities.
Yeah. I'm trying to think back now, because I'm like you, I'm of that generation. What there was, as you came out of that first AI winter in the very early 80s, was, as I say, the birth of these expert systems models. Generally, from what I can read around it, and I don't have a lot of detail on it, there were a lot of research cutbacks in the critical areas of AI innovation and technology. Particularly at that point in time, thinking back to the late 70s and early 80s, we're talking about the US government, DARPA, and the big education facilities like Carnegie Mellon being really the only places where this was happening. And so I think, because of that lack of funding, that's where it all died off. But then, as the 80s kicked off, we were building expert systems, and, as you and I would know from working for Microsoft,
those were, you know, the early days of Microsoft really growing, and technology in general becoming more normal, more acceptable, because we started to put computers into offices. I remember working in the 80s, when we were just moving out of this idea of mainframes and minis and getting into desktops, the Commodore PETs of the time and, you know, a whole bunch of other computers that were around in those early days,
starting to make computing
accessible again. And I think that's where we got the re-emergence.
I just remembered my first experience of it as you were talking there. There was a game on the Commodore Amiga in the 80s, it was called Speedball, I think. And there was a soccer game as well, called Sensible Soccer. And they were pushing heavily that there was AI in the game, and it was very much so that when you were playing games, I think
previously, before computing and gaming became quite popular, you used to have the handheld LCD games, like Donkey Kong and things like that on handheld consoles, and it was very much, you know, you get to the top. I think when Mario first came out it was in Donkey Kong, and you had to get to the top and rescue the princess or whatever, and then you just repeat that game for your entire life and it just gets faster and faster. And similar to some of the games on the Spectrum, a racing game, say, where you would just repeat quite a lot. Then they started to use "AI" quite heavily, so that when you were playing soccer against another team, you'd have AI involved in that. And I suppose that's where my interest in AI came in: when I was playing those games and they started to say, "we've got artificial intelligence in our game". It was a USP for the games. If you remember the soccer games, you used to be able to do specific techniques and win most of the games, you know, go diagonally and score a goal, for example. And then the AI got really clever inside that. And maybe, as war and things started to drive some of the computational and mathematical thinking, I suppose games also have a part to play in that, especially in the 80s where there was a lot of pop culture around, you know, Back to the Future and things like that, and people were interested in the different ways you could do things. And games, for me, were my first foray into AI properly.
Look, I'm a child of that period, and I remember the games a lot myself. And yeah, you've got, you know, The Lawnmower Man and Tron and all these movies that kind of brought that into the general mindset. But it's interesting, you talk about the games, and we should bring this back to the AI, because we're getting to the future now.
Yes.
But the games at that time, and if you think back now, you'll remember there were books written on how to win at Pac-Man. There are people who have these super high scores on Donkey Kong, because what they figured out was that it's not AI; what it was, was patterns. And as long as you have the time and the energy and the intelligence to follow that pattern, you can learn that Donkey Kong doesn't randomly throw the barrels to intelligently try to get you; he does it in a particular way, based on a set of pre-programmed logic. And in those early days, that perception of AI was good enough, because we could build enough logic into the code to make it seem like an AI experience. It really felt like the system was playing against you, predicting your moves and behaving in a way that would make the level harder or not.
Yes.
But it does kind of bring us, I guess, back to the point
Yeah, the modern day.
of where we are. So, post that time in the 80s, there was another drop, what we call the second AI winter, late 80s to early 90s, which was really more attributable to the fact that there was an economic bubble in the world; money just wasn't flowing into those areas of technology and AI. If you remember, actually, that was when the video game bubble burst, and, you know, the E.T. cartridges famously being shovelled into holes in the ground in New Mexico. The public fascination with computing was starting to die off a bit. But let's get back to the AI and where we're at today, because then what we hit, in I guess the early 2000s, is a couple of things happening: the birth of companies like Google and the Google search experience that we all know and see today, the broad development of cloud computing, as well as significant advancements in the field of mathematics, driven by access to exponentially larger computing and to general-purpose technology that was actually built for the mathematical computation of the many machine learning algorithms and models we now had. You had this confluence of all the right ingredients: the lowering cost of technology, Moore's law starting to come to an end while computational technology was getting more diverse and stronger in its different forms, and the algorithms getting better at predicting models and outcomes, and at creating speech and voice and vision and image recognition. It was all just getting better at this exponential rate. And that's kind of,
I think, the birth of what we have today. And, you know, there are a couple of really big milestones that we've talked about in the past. We largely all remember the stories of Deep Blue beating Garry Kasparov at chess, and again, that's early-stage AI, but it's about a computer being shown so many games of chess that it has a knowledge, or at least an understood construct, of how chess is played. And that's an early idea of machine learning, isn't it: I see a million pictures of things and I know what that thing is. And then,
2012, a tremendous moment in human evolution, that moment where we really peaked in our sense of purpose on this planet: Google invented an algorithm to detect cats in YouTube videos. That, to me, is kind of the beginning of the end, if you like.
Yeah, it's interesting when you go back and think about it like that, because that happened quite quickly, didn't it? And looking back, I remember the Deep Blue chess thing, and I think that's always been something, right from the beginning, where chess was seen to be this higher intelligence: if you make a computer win at chess, that's it. Whereas our viewpoints on different things now, image recognition, you know, have very much changed, but it's almost like the Turing test for AI as well.
Yeah, absolutely. There are so many similarities you can draw between those very early days of what Turing was trying to achieve and what we do today in terms of the way we use voice and vision and those sorts of AI services.
And that's brought us up to the modern day. I know we've got a couple, well, a lot of episodes coming up to really dig into some of these tools, but as the national technology advisor for Microsoft, where are we today? We've looked at a lot of this history now; where do you see us at the moment, and what have we got to look forward to?
Yeah, look, it's sort of the million-dollar question, and it's front and centre of what we think about today. So, you know, we talked a bit about some of those things; the Google cat thing is quite funny. But generally speaking, we're at that point now where there have been so many advancements in the three key areas of data, and accessible data, computational power, and just the richness and complexity and capability of the models, that we're seeing almost exponential jumps within months now. What was a 40-year timeline to get to the point where we've got an algorithm that can detect a cat, we're now, every few months, leaping forward in the incremental points of accuracy and experience we can create through AI-driven voice interfaces or image recognition. But it's important to note that what hasn't really changed is the fundamental mechanism: we need data to teach AI systems the basics of how we think, operate and behave as humans, in order to create that artificial intelligence. We still feed these models huge amounts of data, labelled or unlabelled. We still use mechanisms like reinforcement, supervised and unsupervised learning to have the systems figure out how to make the right decisions. And that's where we're at now: I think we're at a point where the technology is capable of making pretty much any decision we throw at it. And that's a topic we could think about, the general versus narrow AI mindset, of where AI can actually solve these very rich, detailed problems. We've got AI able to solve these problems; it can probably solve anything we want to throw at it in a specific area,
but we're now grappling with what it should solve, how it should solve it, what boundaries we want to build between us and the device as it makes those decisions, and how we make it an extension of our lives rather than a replacement for them. And in many ways, we're almost right back at that early thinking that Asimov and others at that time were having, which was: if we build this, we need to build a fence that protects us and it from each other. And I think that's a huge amount of the work we're doing now,
which is not just Microsoft, of course, it's the industry in general, but we're very proud of the fact that we're doing a lot of very specific work in this. In fact, at the Build conference that's happening this week online, we've been announcing some amazing capabilities we're building into our AI tools that let our customers try to understand some of the fairness issues and biases that can drift into the data we feed into systems. So we're really working on that one.
Yeah.
But for me, you know, where do we go next? And, as you mentioned earlier, we're going to talk about this in the future: we're almost not hitting the limits of the amount of data and the amount of computation we can throw at it, but we are hitting some economic limits, because whilst cloud is much more economically viable than it ever used to be, there are costs associated with huge amounts of data and compute. Where we're heading towards is the supercomputers and then the quantum computers, and the intersection of quantum and AI is probably the next big horizon. I would say to anyone listening, if you've come across a TV series called Devs, it's an Alex Garland story that explores this very idea: what happens if we build a quantum computer, feed it all the data we have, and give it the intelligence to predict, like that Bayes' theorem,
what would it tell us? And I think that's where we're at; that's that kind of amazing potential future we have in front of us.
Yeah, it's fantastic to hear you talk about that, and about the way things are moving rapidly. And the fact that cloud, like you said, is also making supercomputing and quantum computing available at anybody's fingertips is exciting for all of this. So this is like the foundations for modern-day AI and machine learning. I think it's great to just set the scene. This is interesting, and I think people have got their own opinions on where AI sits in particular businesses, and going back and thinking about some of the ethical issues that have always pressed everybody when we're thinking about new technologies, it's quite interesting to see the cycle starting to repeat itself again. It is a weird storm.
We do have a habit of repeating ourselves, but there is a sort of well-known, well-trodden path in this world of constant innovation, which is that in order to understand the future, you have to take a look at the past. And I think hopefully what we've done today is give a sense of the reality of the past of AI, good and bad,
and let's keep going forward now.
Yeah, I'm really looking forward to the next episode, Lee. So thanks very much for today, and I'll catch you in the next one.
Looking forward to it. Thanks.