Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

May 27, 2021

Dan and Lee take stock of the current climate for AI from security and chip shortages to policy and government strategy.

 

Kate Crawford podcast and commentary on the planet/sustainability cost of AI - The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence: Crawford, Kate: Amazon.com.au: Books

 

European legislation on AI - Europe seeks to limit use of AI in society - BBC News

 

Microsoft purchase of Nuance - Microsoft makes $20bn bet on speech AI firm Nuance - BBC News

 

Disney Project Kiwi - Disney Imagineering’s Project Kiwi is a free-walking robot that will make you believe in Groot – TechCrunch

 

Who wrote the Dead Sea Scrolls - Who wrote the Dead Sea Scrolls? Digital handwriting analysis and artificial intelligence offer new clues - ABC News

 

Building AI Partners - I created an AI boyfriend. Here's how it went - Hack - triple j (abc.net.au)

 

Call for $250m injection into AI economy - Tech industry urges $250 million AI budget cash splash (afr.com)

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 4
Episode: 6

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

Hi Lee, welcome to the AI podcast this week.
Hey Dan, how are you? How are you going?
I'm good, thanks. I'm looking forward to this week - I'm getting on my first flight, which will be fun.
I don't know if you've been. Where are you going?
I'm going to go to Melbourne.
I'm going to Melbourne, too. We probably should have checked this out before we started recording, but yes, I'm travelling as well. And I've actually had a couple of flights. It's still a bit different, you'll find, if you haven't been out there for a while. Masks everywhere.
Yeah.
But it is kind of normalish.
Yeah. I'm just going for the coffee. I'm not I'm not going to meet customers or anything like that.
It's a long way to go for coffee. You must be pretty desperate for good coffee.
Yeah, absolutely. And I've just signed up as well. New South Wales has just opened sign-ups for the COVID jabs - you can sign up if you're in a particular group, I think it's between 40 and 49 or something. So I've signed up for that, and it'll be an interesting one to see if I can get the old Pfizer jab, as a sign of the times.
It's Pfizer for you young people under 49, because - you know, this is an interesting week for me, I just turned 50 this week. So I sort of sit on the border of, you know, am I an AstraZeneca old person or am I a Pfizer young person? I don't know where I sit.
You've got to put your age against the vaccine now, right?
Yes. Well, technically I'm 50, and the rules are, you know, over 50 AstraZeneca is okay. But yes, big week for me turning 50 this week, which was, you know, a bit of a milestone, bit of a shock to the system, I can tell you, as and when you get there, Dan. But also Mother's Day, of course - great to spend some time with my wife and kids and talk to my mother, who's still in the UK and has had both of her AstraZeneca jabs. She's just desperately waiting for the flights to open, but unfortunately that could be a while away.
That's exactly the same here - my mum's had the AstraZeneca jabs as well. But, you know, it's going to be an interesting one, isn't it? And I think it's had a lot of impact globally, as we all know, but also across the technology fields. And today's episode, what we're going to do is hopefully draw a line in the sand, or set a flag down, to say: well, this is where we are at the minute, this is what the trends are. So we can reflect on that, and people listening to the podcast in a different year, in a different era, can look back historically and say, "That's what Dan and Lee were rabbiting on about a year ago," and we can see if things have moved on. Because you've been on the podcast a year now, right?
Yes. As you say, I was planning on this when I was looking back - it was April 2020 when I took over the reins from the, you know, the inimitable Ray Fleming, no longer with us on the show, to conduct these. But yeah, so it's been a year, which is a bit weird to me, because it feels like it's just shot by in some ways - it feels like it's been really quick. But at the same time, you know, it's probably been one of the hardest and slowest years for a lot of us. I can't remember now, Dan - you remember when I first stepped into this seat, season 3, episode one, we did a bit on AI technology,
and we would have done it like this, you know, over a virtual conversation. In fact, in the entire season 3 - 16 episodes - and now season 4, where we're up to episode six, you and I haven't actually sat down in the same room, have we?
No, that's true. Absolutely. I know. Yeah, it's bizarre.
It's crazy. When we've done these things, we've spoken a lot and we've spent a lot of time together, you know, and it's been interesting. Look, for me, it's been great. So, I mean, hopefully our listeners are getting something out of it, too. But for me,
it's really forced me to, you know, go and do a bit of research, to think about topics, and it's created for me the opportunity to have new and different conversations, because we've met some amazing people this season. You know, always amazing to talk to Michaela and Katie and Ema - amazing people we've spoken to, which I've really enjoyed. And then last season we met some great people too - you remember the session we did on research in AI?
Awesome, awesome stuff. But look, it's been quite a journey - thank you for letting me continue to do it. Hopefully we're still keeping some listeners listening in. And hopefully one day we'll get to do this in person. You know what, I've seen quite a few podcasts move to the video format, Dan. One I follow quite often is Major Nelson's Xbox podcast - bit of a sad gamer person myself - and he's moved completely to a video podcast.
Yeah, I've seen that recently. And there's lots of Easter eggs that he puts in his videos as well, which is great. I saw it - it changes things.
It does.
So maybe we need to think about that in the future. Maybe we need to be videoing in the future, right?
People need to see our faces.
Absolutely. Why not? You know, multimodal kind of input. And I suppose what I've learned as well, generally, and what I'm setting myself up for, is, you know, some of the trends that are coming up - actually focusing on examples and looking at what I'm going to learn more about. Because it does force you: when you're listening to this podcast, or definitely when we're creating it, we look and research and try to find interesting topics, and it allows you to really appreciate the entire facet of technology and the way artificial intelligence is appearing in different contexts. And we'll have a look at different areas today, from the legal aspects to some of the fun technological advances that have happened, and what we can do to reflect on that, I suppose.
Look, the one thing I've noticed - and you've probably seen this over the last couple of years you've been doing this - is there's almost a deluge of advancements, new ideas, new capabilities, technology stories, good and bad, about where AI is intersecting with our lives. You know, just this morning, interestingly enough, I was writing some content for a presentation I'm giving next week at a conference for the pharmaceutical industry, on AI and the use of AI in healthcare, and you just start to see it bleeding into every facet of every business. Everyone is looking to understand where AI can play a part, and also trying to understand: well, what is it - is it good, is it bad, is it right, is it wrong - how is it going to apply, how's it going to impact me? And so it's good to see these conversations, but it's crazy how much is going on, isn't it?
Yeah, no, absolutely. And if we think about this discussion as a bit of a road that we're going to travel, let's start at the beginning, I suppose - in reality, in terms of where this journey is currently moving. So, a lot of people are worried about security. People have been working from home now for a year, there's hybrid work coming into it, and security is jumping up a lot, whether you're an IT professional or a consumer. You know, people are moving to a lot more online shopping, people are working from home and connecting via their Teams meetings or Zoom calls. So security is now becoming really important, and there's a lot more telemetry coming in for those IT pros. So one of the first areas currently jumping to the front of IT professionals' minds is the use of AI to manage those signals coming in through the various tools that organizations are using, because you can't sit there and just monitor the logs that are coming in. I was looking at a tweet yesterday where somebody was talking about the amount of telemetry coming in that he can't deal with, you know - it's like, X product is sending thousands and thousands of signals per minute to his email box. So I think at the minute AI is definitely, you know, really holding its own in that security space.
And I suppose that's a reality. You know, when we talk about AI, people are thinking of - and we've covered a lot of - the kind of AI that's really out there, to do with facial recognition and things, which is starting to permeate. But security is happening right now and being useful, and people are highlighting some signals.
Yeah, and I mean, look, security is critical, not just important. And it's almost as much the privacy aspect as well, because you talk about that issue of the signals - obviously there's a security issue, in that that data is potentially a risk, or it highlights and indicates different people's patterns, work-life patterns, where people go and that kind of stuff - but there's also the privacy side of things. You know, how do people feel about systems that are essentially monitoring their activity? And you and I know, because we work in a company that builds technology that does that kind of stuff, to help businesses better understand work-life balance management. And this is always this interesting aspect with AI. In every conversation I've ever had, Dan, it's that sort of balance of good versus evil, versus the value that it brings. Because, you know, a tool like that is tremendously valuable in helping individuals like you and I understand the balance, and make sure we make time to do the things that are also important outside of work. But the flip side is that it's also a big data set that tells somebody how people are working, and whether they're doing everything they should do. And so you almost have to trust in the system to which you belong, to accept that.
So this is a really interesting point. When you look at the general worldview on technology, do you think that's shifting at the minute - currently, in 2021, looking back and at where we actually are? Is that worldview shifting around AI?
Look, it is, and I think many of our listeners would probably see this and feel this intrinsically as you watch the news and listen to, you know, what's going on around you. If you sort of put your ear to the ground and listen to the tone of international conversations, we're clearly in a very - I wouldn't say dark place, but we're in an interesting place. The issue is, what COVID did is create this sense that individual countries need to be capable of maintaining their operational capability in the face of a massive shutdown of the global supply chain. And all of those things are managed by technology - it's always tools and tech companies that help manage global supply chains and deliver services. And so these issues of sovereignty: what is Australia capable of doing by itself, without any other foreign influence? What is the resilience of Australian society's ability to cope in the event of something like a pandemic? And of course AI and technology - it's not just AI, but big data and the capturing of information - is what helps us understand these things. The more we learn about how we behave and how we operate, and what we need and what we don't, the more these help us build future plans. So look, there's a general sense, I think, around the world of nationalism.
We're all trying to become a bit more self-reliant and self-interested, and that's a terrible thing, but it's unfortunately a natural outcome - a human reaction to a situation like COVID. But at the same time, I mean, you've got amazing work and operations going on where people are connecting across borders and trying to help - you know, people in India right now, with the terrible situation going on there, many people trying to help. And look, underpinning all of that is technology. We always forget this - it's easy to forget that technology and tools and the interaction of data and computer systems are what enable a lot of these things to happen. And ultimately security becomes a risk, because AI is being applied to make these things operate smarter and faster. So yeah, look, it'll be an interesting reflection in a year's time to see if we've changed at all.
Yeah. No, absolutely. And you know, I can't re-emphasize that point enough. As you were talking there, so many examples were jumping into my mind about the way different countries are trying to become resilient on chips now, because of the 2021 global chip shortage, which is now apparently going to push into 2022, and which is having a knock-on effect on everything from cars to laptops to you name it. And then obviously there's a thought process also around electric cars - you know, the Tesla moves - and there's a lot of talk around crypto at the minute. So people are really trying to diversify and think differently and innovate. And a bit like - if we learn from history, one of the things we talked about was the Dartmouth conference and the AI winter. It almost feels like we could reflect on whether we're in, like, a new "AI summer", as COVID pushed us into that, a bit like wars do when they push you to innovate and you go through this cycle. I wonder if we're on one of these kinds of waves at the minute around AI, because it is becoming, like you're saying, a general talking point globally, and people are starting to focus on sovereign aspects. But then there's also the fun stuff as well, right? So, the AI being used in games - like Blackshark.ai and Microsoft Flight Simulator, where we're using 3D models of the world and real geographical information system data to bring weather patterns and things into games, and even plot that tanker that got stuck in the Suez Canal a couple of months ago, put that live in a game.
You know, there's this phenomenal use of cloud computing. And I suppose we've seen in, I think it might have been a year, maybe even six months, where companies have pushed things like cloud gaming - like Google Stadia, which is now being kind of mothballed to a certain extent, and you've seen Project xCloud from Microsoft come out, where you're using cloud compute. And all that kind of stuff - it's really interesting the way some of the commercial AI is also permeating into the consumer space, and they're all pushing each other at the same time at the minute.
Well, it's interesting. You used the example just there about how wars push countries forward, and, if you want to paint it that way as a positive outcome, there is that thing that we have seen a lot come out of that. And, you know, space travel - people always question the value of space travel, but of course our exploits in space, missions to Mars and the moon and other things we've pushed, have actually created a lot of tangible, practical things that we use today in our real world. I'm struggling to think of actual examples, but I know there are many little things we use today that are a result of the investments in the space program. But the Blackshark stuff - when you mention that in Flight Simulator, it's an interesting one. Yes, amazing technology to take that flat imagery and convert it into 3D. Amazing from a gaming point of view, but of course it's intelligent AI systems that interpolate the data and turn it into buildings, and it has amazing impact in other areas, like military usage as well. So it's always back to that yin and yang of AI. It's this kind of interesting tool, but where do you apply it?
Yeah, that's right. And I suppose the other element, when we're thinking about that connection with consumer land and that application, is that we've seen a massive push for AI and accessibility at the minute as well. So we've seen lots of examples of AI being used - and I think people resonate with stories, don't they? When you look at the stories we see a lot around AI, there are things like one Microsoft did the other day about saving turtles, and using image recognition to classify fish in harbours - lots and lots of things that capture people's hearts and minds, I suppose, when we're thinking about AI and agriculture and those aspects. But there's also that element of accessibility, where people are seeing AI in their lives supporting them with writing, supporting them with reading, connecting people with technology who might not have been able to before - people who are deaf, people who are visually impaired. So there's a lot of that connecting as well, from consumer land.
Yeah. Look, it is, and this is what we spend our time talking a lot about: AI can fall on either side of the rails in terms of good and bad, positive impact and negative impact, and we've always got to emphasize that positive impact and talk about it. But that said - and I'm just conscious of our time, Dan, we've got so much to talk about today - I wanted to touch on one thing, because I just want to bring it back to a really interesting book that I've seen these last few weeks; I think it was published in early April. You made the point around chip shortages, and obviously there are hardware shortages, but there's also this issue that in order to keep building this AI hardware - you know, the Alexas and Google Homes that we all love in our houses, or the other computers and devices that are generating all the machine learning - there is a requirement not just for the hardware, but for things like the rare earth metals. And this is a bit that's been really interesting for me to look into. You hear this phrase "rare earth metals" and it sounds almost mystical, but take lithium in batteries, for example: there is an absolutely finite supply of things like lithium and other materials in the world, which we use at an incredible rate to build the hardware, to build the technology that creates this world we live in - and we're running out of them, and there won't be a replacement. They're not things that we can just manufacture ourselves in a meaningful way, or an economic way, today. So this book that I was looking at, it's called The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. It's by Kate Crawford, who is herself a very distinguished PhD researcher in this area. She happens to be an AI researcher at Microsoft.
She's also, you know, a well-respected PhD in the field of AI herself, and has done some amazing work on this. But the book - we'll put the link in the show notes - is a really interesting book on this issue: the fact that we maybe need to rethink how we're doing AI, simply because of the physical cost of generating all this intelligence and creating the hardware. So yeah, that was not to be negative, but that was an interesting one that came up for me. But there are a couple of other, more fun things, I guess, happening in the world of AI.
That is a very interesting topic, though, because when we look at everybody jumping on, say, Elon Musk's bandwagon at the minute, and Tesla and electric cars, that's pushing up the need for lithium and the other supply chains around it as well. So we're kind of pulling our dependency off one thing and pushing it onto other rare metals, really, I suppose. And I need to read that book, because you've mentioned it a couple of times now.
Yeah, that's right, we're pushing dependencies. Kate Crawford, yeah. But look, it's a really fascinating subject, because as you say, we all sort of think a lot about AI from the experiential side of it - you know, I have an app on my device. What I don't think about is: what was the hardware required to build that device, to have it capable of running that app, to enable me to have that service? There's this supply chain behind AI, and it's not just in the scale of compute needed to do massive machine-learning model inferencing - to learn how to do speech recognition, for example - but also just the physical devices themselves: the Alexas and the phones and the tablets and the PCs that we seem to turn over at an alarming rate as a society. You know, it's a refresh-your-iPad-every-month, every-year kind of thing.
Yeah. No, you're absolutely right. And when we're looking at the consumer element again - I know, going back to the fun stuff, we keep moving from the real nub of the tricky topics to picking up some of the light-hearted things happening in the news at the minute, in 2021, as well - around the walking robot, the Disney Project Kiwi.
Yes.
The use of handwriting recognition for things like the Dead Sea Scrolls. I know you've been digging around in some of these examples - and interesting ones.
Yeah. Well, look, beyond the two you mentioned, there was another one - I think someone at the CSIRO had built, sort of recreated, the idea of an AI boyfriend, or an AI partner. So, how can we redress that issue by creating kind of an artificial life partner that will -
You know, the seriousness of that was: how do we stem loneliness and solitude, and the issues of people being disconnected from society? You can build these devices that help people communicate, and across the Asian countries they've been doing this for years - they've recognized the value of a robotic or automated communication device. But it's an interesting one. The one that stuck out for me, just because I'm a bit of a Disney geek and I think it's really, really cool, is the Project Kiwi stuff - we'll put the link in the show notes, but there's a great article on how Disney have taken this journey. I love Disney's idea of sort of tackling the impossible and then making it come to life,
and they've built essentially a fully autonomous, free-walking robot. Now, you've seen the Boston Dynamics stuff - you know, the dogs that can chase and run, and the robots that can walk around -
but they're not designed to emulate the human experience; they're kind of designed to emulate human tasks. But there's this one called Project Kiwi, which is essentially - you know, to spoil the surprise - Groot, from Guardians of the Galaxy,
which is, ironically for a tree, full of personality.
And they built the robot not with the intent of it doing something important, but just to connect with people - to show people that a robot can be kind of humanistic in its tone, or create a personality. And I think that's, you know - what's the value to the world? Very little. What's the emotional impact on human beings who see it? It's really connecting.
And it's good, that, because it's Disney's sweet spot, right? They bring characters to life - it's all about storytelling, bringing characters to life. And I suppose they're the perfect partner, you know? It's not Boston Dynamics-style robotics you're looking for there; you're looking for that reality coming forward, and that creativity.
Well, I don't know if you're much of a Disney fan, or if any of our listeners are, but
one of the very, very early things that Disney created as an experience, for you to go to at Disneyland at the time, was the Hall of Presidents. Now, this is obviously very American, very focused on how Americans value their ex-presidents, but it was a hall of presidents. It was Audio-Animatronics, which was Disney's early-stage development of those kinds of automated, robotic-type creatures. Very, very basic by today's standards, but in a sense, back then in the 60s, that was the first iteration of kind of an artificial intelligence: a version of Lincoln giving his speech again, a version of the other presidents giving famous speeches or communicating with people. And people believed, or wanted to believe. And that's an interesting point when you get to AI and robots: your head knows it's not real, but at what point are you willing to suspend that - at what point do your body and your emotions give in and say, I want to believe, it feels like it's real? It's an interesting dichotomy.
Yeah, no, absolutely. And now we're starting to apply some of these things. So, 2021: I think on this podcast we've highlighted quite a few things that are happening around legislation. Governments and global bodies - and even the Catholic Church; we mentioned the Rome Call for AI Ethics a couple of episodes ago - are starting to weigh in on this debate, around the European AI legislation and the Australian CSIRO's strategies to retain talent. So there's a lot of strategy and policy happening at the minute. Where do we start with that journey, if we're putting a line in the sand at where we are now?
Yeah, it's a good question, because there is a lot going on. There are two big things, I think, that I'm seeing happening right now as we speak - which is, what, May 2021. The European proposal on legislation for the use of AI is a really interesting one, in so much as - a bit like GDPR, which the Europeans put in place a couple of years ago, and which has become not so much a globally accepted standard, but certainly a globally recognized benchmark and model for how general data protection is done; you know, what privacy and rights we all have - they've attempted to do a similar thing with AI. Now, there's a lot in there, so we're not going to be able to get through all the moving parts of it. But essentially, think about the way GDPR defined the rights that are afforded to individuals, the process that needs to be put in place by organizations, and the responsibilities that lie within government policy and legislation for the application of general data protection. This is a similar kind of construct, but for AI - which is a big thing to do, and we should probably pull apart a few bits of it. I don't know what you think?
Yeah, definitely. And just to set the context of how this has come about: we've been talking about, you know, individual companies like Microsoft or Google coming up with their own AI strategies. Businesses were developing a lot of that technology and thinking about the ethical use of it - like any company might do around cars, where they come up with their own limits and safety tests and whatnot, and say, we'll come up with the ethical principles we might want to apply to some of this new technology. So you're saying that with the EU proposals, we're now at a point in 2021 where large jurisdictions are trying to grapple with this and come up with their own list of policies and frameworks. Is that right?
Well, yeah - actually, the car example you give is a really good one. If you go and look at any car manufacturer in the world: no car manufacturer wants to build a car that crashes, or a car that explodes, or a car that is unsafe for people to drive. Nobody wants to do that. So every car manufacturer will put in place processes in their own manufacturing, their assembly and design principles, to test it, and then they go and test a bunch of cars to make sure that they, you know, do what they're supposed to do.
And that's kind of good, but then what you need is some governing body, some legislation, and then some rules which mean you not only have to do that, but you have to meet a certain standard and be held accountable when you don't meet that standard. That's why you have these sorts of global bodies around the automotive industry. And it's the same now with AI, in that any company building AI - us, Google, Facebook, anyone building it - and I'm only speaking for our own company, but, you know, we don't intend to cause harm. Our intention is never to build a product that causes harm. What we have all recognized as individual companies is that there is a risk of harm - a risk that if used in the wrong way, or used prematurely, certain AI systems might cause damage. You know, I think we could agree that the social scoring system that is used in China has a major impact in terms of social damage - it causes massive issues around the haves and the have-nots; there's a scoring system that determines the life you will lead, and that's a big, big risk. But this is what's more interesting in what the European legislation sets out to do. Sorry, I shouldn't call it legislation, because it's not law - this is about building a proposal, which ultimately becomes things like legislation, the law.
but it covers things like thinking about what is a high-risk versus a low-risk AI system, building categorization so that people understand that, you know, a machine that is going to move people around, like a lift, or machinery that works in a certain way, or medical devices, these are high-risk AI systems. They have a risk of causing harm if done badly,
or biometric surveillance, that kind of stuff, there are risks associated with that,
and by recognizing the difference between high-risk and low-risk,
we can start to apply the right kind of process accordingly. Sorry I'll let you
Yeah, no, that's what I was thinking. So when you're looking at high-risk systems, it's not just identifying those, but then saying, well, if you've got a high-risk system like a medical device or whatever it may be, then let's make sure that the data sets are complete and high quality, there's validation in there, and the AI is open and transparent, not a black box. Yep.
So you're putting in guidelines around that. It's an interesting approach.
Well, look, it sort of makes sense in some ways, and my view is there's good and bad in this. What's great about it is that categorizing that way means we can apply the right level of focus, effort and responsibility to the right tools. We're not putting oodles of red tape and problems around an innovator that wants to build an AI light bulb that switches on when you walk in a room, kind of thing. That's low risk. But if you're building a medical device to measure someone's heartbeat and manage medication accordingly, you want to know that it's going to work, absolutely, and if it's using AI to infer something, you want to be sure of that too. Those two are different, so it's good to build that separation. I think that's an important validation of why this legislation, or regulation, needs to exist.
So, as a planetary CTO, Mr. Hickin, I'm just trying to digest what you're saying there. This is an EU proposal, and the EU is made up of multiple member states.
The globe is made up.
Oh, yes. Oh, no. That's sad. And then the globe is made up of, you know, all the countries. And going back to your point at the beginning, currently, in 2021, we're in this odd kind of juxtaposition of wanting to be nationalist and sort everything out ourselves, but also needing to work together.
How does legislation like this manage to propagate across multiple states, and globally?
Well, on the flip side of all that good stuff, this is where it becomes challenging, and GDPR suffered the same problem, insofar as the language is the same: like GDPR, this AI regulation affects persons located in the EU. That's easy to understand. If you live in the EU, you get the value from this and you are under the regulation; that is your world. But then it also states that it applies even if the provider, distributor or the users themselves are not in the EU. So now, suddenly, if you are not in the EU but you build a piece of software, a solution, that operates and interacts with people in the EU,
then by definition you have to build it according to the EU policy if you want to operate inside the EU.
Yeah.
So that becomes a challenge, yes, as you say, because that really blurs the lines. And this is always the problem, I think, with early-stage regulation around anything like this: when we use vague terms or broad-brush language, which is really the only way you can do these things at an early stage, what you end up with is interpretation. The letter of the law, it's always about interpretation. So, for example, a prohibited use case for AI in the new EU legislation is manipulating human behavior to their detriment. That is a prohibited use. Now, "manipulating human behavior to their detriment",
trying to define that in any way other than a personal one, what is considered detriment? What is considered their behavior? It's really vague, and that's always going to be a challenge, I think.
Yeah. No, it is a minefield, and I suppose each country is looking at this, you know, Australia is looking at this, the US is looking at this, and they're using
their own lens. And even when we look at privacy standards, that's a good example. I suppose people have navigated that and used GDPR as that good-to-have experience, a kind of minimum bar to entry when operating in those areas. But I suppose my worry, as you were just talking there, is if you do have a country that's, I don't want to say more liberal, it's not that, but one that may not have as many controls and may not want to impose them, say, somewhere like China, for example, though they want to use those tools and technologies. And I know we can't answer all of this on this podcast, but it does open those doors sometimes. And as technology companies, coming back to that adage that we always use, just because you can do something doesn't mean you should. Take the Microsoft example, where we stopped the use of our facial recognition in the US until such time as legislation was put in place to manage its use. You know, it makes it very complicated.
It does, and I think what it highlights is one thing, which is, look, you've got to admire what the EU are doing here: trying to build what I would describe as guardrails and control systems to ensure that everybody aligns to a particular way of doing things, and everybody understands the rules of engagement around the environment they want to build. I think that's a really good thing. It's critical when we think about AI legislation, or AI processing laws and rules, that we think about it on a global scale. We can't build rules in Australia that are different to rules in Europe, which are different to rules in the US, because we live in a global society, and we'd create artificial barriers to innovation, to value, and to opportunities for Australian businesses and so on. So that is an issue when you build something of this nature: you kind of assume everyone will follow behind you. Now, what was good about GDPR was that there was almost little to no negative value for anyone in society. I mean, you and I would all value the fact that we have rights to our data, rights to removal, rights to change, and that kind of stuff, and all it did was impose constraints and fines on organizations that didn't do that. So it was good for you and I. But in this world it's different, because it's putting rules on things, but then also defining public acceptance, or human tolerance, for technologies. What is considered to be, you like that term, "human detriment"? It's putting words around phrases that aren't as clear, and I think that's where it's going to be a little harder to work on.
Yeah, and I think the good thing about the EU, good or bad thing, which you've seen at the minute, May 2021, again just reminding our listeners, is we've seen a lot of antitrust cases coming up now around multiple types of technologies, involving multiple types of companies, which are the pointy end of some of the discussions we have in here. And this is one of the things about having a large body like the EU: they always follow it up with massive whopper fines. So inside this proposed legislation, or these legal documents, there are huge fines, you know, 4% of annual turnover worldwide. There were similar things in GDPR, where huge corporations, and even small corporations, were using this technology. We're talking about the serious pointy end of the thing, the very sharp end,
which is what makes it work, because at the end of the day, when the costs of not doing it are that high, it incentivizes many organizations to do it. So look, it'll be interesting to see how this plays out. It's early stages; it's only just come out as a proposed set of legislation, and it's going through lots of reviews at the moment. We'll see. Personally, I see this as a good step towards where we want to be, which is guardrails to protect citizens and civil society, but freedom for organizations to understand what they can innovate on. There's one thing that stood out for me, Dan, that I think you're going to like, because it's a real throwback to conversations we've had in the past, when at some point we always have to reference Isaac Asimov and the laws of robotics.
And there's one of the things in there around transparency which requires, and I'll read the words explicitly, all non-high-risk AI systems, so anything that wasn't in that high-risk category we talked about, that interact with natural persons, that's an interesting phrase, will have to explicitly inform them that they are communicating with an AI system. So your Turing test is dead. It has to tell you before you start: hey, by the way, this is an AI system. Which I think is a really interesting point, because it recognizes the issue that however good or bad AI is, and however much we follow the guardrails,
it's getting to the point where it's actually sometimes hard to know what's real and what's not. Yes.
You know, deepfakes on one side of the coin, an automated bot that may or may not be a person on the other. It's really hard to tell, normally.
I find with bots at the minute, if my question gets answered really quickly, it's a bot. If there's a lot of thinking going on and it becomes frustrating, it's probably a real person. Interestingly, we've talked a lot about policy, but I think it'd be good to put a line in the sand around strategy as well. Now, I know the government in Australia have come up with a digital economy strategy, and lots of proposals in recent budgets.
Can you give us a bit of an insight into what that is looking for and what it should deliver, so when we come back to this in a year's time we can see if it's delivered what it should?
Look, yeah, absolutely. It's really fresh, so I think we're just going to skim the surface, and maybe we'll talk about this more in the coming months. But yes, this was the new digital economy strategy, put down as a precursor to the budget, which is happening right now.
And look, it's great in many ways. It's a recognition that Australia is investing in our digital capabilities, our digital skills, and that we see digital growth as a priority for the country. There are those saying in the public forums that this is not enough. It's a good step, but when you compare it to what, say, an Australian bank might spend on technology, which is somewhere in the billions, it feels like not a lot of money, because we're still talking about millions of dollars. I mean, the overall spend is billions, but when it comes to the thing that's important to this conversation, which is the AI plan, we're talking in the millions.
yes,
Look, a couple of big things. The strategy is very broad. It's around building foundations for us to be a digital economy and setting growth priorities. It does recognize that we need to build capability in things like AI, blockchain, quantum and other areas, though it's a bit broad on what it means by capability. Is that skills? Is that physical buildings? Is that new investment in industry? We don't know.
Yeah.
But that's in there. And then there's the AI plan specifically, which is the one I'm most interested in for two reasons. One, of course, it's AI, and that's where we spend our time. But two, because a large chunk of the investment goes into the CSIRO's Data61 organization, whom I have an awful lot of respect for and work very closely with, and they're going to be the ones to essentially create this national artificial intelligence center to drive adoption of AI technology.
Right.
Very excited to see that. Frankly, fifty-something million dollars is probably not enough; it's probably good for year one, when you think about the scale of some of these things, but we'll see where that goes. There's a bit of money on skilling and graduates, so it's great to see that we are making that investment. I think, Dan, I would interpret it as step one of something we're going to need to keep doing, but I think the government's done it this way so that we can invest today, see what we get from it, build the value, and then invest further in future years. One thing I will say, and I wouldn't make you read through the whole thing, is there's a really interesting point in there around investments in aviation technologies in particular, one that I think many people would not really realize is quite important: funding of nearly $18 million into a national drone rule management system, to develop a system to coordinate low-altitude airspace rules and restrictions on drones. Who would have thought we'd need such an investment in drones? But then you look around the world. I think you tweeted the other day about drones in Shanghai building a QR code in the sky, hundreds and thousands of drones. So yeah, I'm not a big drone person, but I'm starting to realize that's becoming an area that is having a massive impact in aviation.
Yeah, and it's good to see Australia trying to take the lead with some of this, because of the things you mentioned there: quantum computing, and we've talked about that in the past with the Microsoft work at the University of Sydney and in other areas, and blockchain, crypto, these things are starting to innovate. It's good to see them called out specifically, rather than money being generically put into, you know, computer skilling or whatever it might be. They're actually focusing it a bit more, and focusing on the areas we've talked about on this podcast, which is great, and actually having the skills plan as well. And speaking to
what we discussed last week as well on the podcast, you know, there's
there are some really interesting points in it. Sometimes I get a bit cynical when we look at these strategies. There are a lot of strategies that happen, and I'm sure you've seen a lot as well, and government strategies always look great on paper. But putting budget against it and actually being able to think strategically might be the impetus we need to get out of an economy where we're still relying on things being dug up from the ground.
Look, absolutely, there's no doubt about it. We, like every country in the world, need to invest in this to get it right, to be building the tools and technologies we need for our own children to grow up in an economy and a society that isn't reliant on stuff we dig out of the ground. But, you know, even then we're intersecting what we're doing here with those industries, and however much we might see them as challenges for climate change, they are the industries that built Australia, the industries that made our country the economic powerhouse we are in the Asia region and globally. So it's a delicate balance, but it's great to see the investment going in, as you say, very specifically. And what I'm really pleased to see is that the AI action plan is both about creating a place where it can happen, and that's the one I'm really interested to see how, you know,
frankly, we can be of assistance in that, but then also investing in the skills, investing in the graduate programs, making sure that we don't just build something and then forget to staff it with the generation of people coming up who can actually make use of it.
Yeah, and I think there's a lot in here as well around small business and startups, which is great. I feel it's very vibrant in our area in Australia. The only disappointment when I was looking at the notes around this was around safety, security and trust. When you're looking at millions, you know, that is massive at the minute. Security is a huge area, going back right to the beginning of this podcast, and it feels a little bit light on that. There are elements in there, but putting a couple of mil into a national data security action plan seems a little bit light. But I know there are more projects.
Yeah. Well, look, I agree, it does look light on the surface. But what you've got to remember as well is that this is in addition to the cyber investment plan the government announced maybe three or four months ago, which was another $180-something million in cyber skills and so on. So it's not just what you see here. But you're right, I think overall
anyone would look at the numbers and go, look, it's good, but not enough across any of the sectors. These things just cost money and time. But, as I said, what I'm happy about is that we've started down a path and we're actually doing something tangible and meaningful, and I think CSIRO is the right place to be doing that to start with. So look, yeah, I think I'm excited.
Yeah, absolutely. It's a good start.
Yeah,
we should we'll be watching it. We should be watching it and talk more about it as it pans out.
Yeah. And it'll be good to look back at this episode in a year's time and see what has actually landed and where this is. I think it was pretty clear that we've seen a lot of this AI emerge, and our episodes have been pretty much on the money, really following the trends that are happening across the entire AI space. We can see that people are starting to put their heads above the parapet now and think, well, we do need to start to legislate and think about the uses of this technology, because it's becoming more prevalent. So this strategy is great. Ending on that is a fantastic piece, and I'm looking forward to catching up on this in a year's time. It should be great.
Absolutely. Absolutely, man. It feels like we've talked our listeners to death on this topic, so maybe we should call it a day and get ready for the next episode, eh?
Yeah, absolutely. Thanks again, Lee.
Good to talk, Dan. Take care, man.