Welcome to the AI in Education podcast with Dan Bowen and Ray Fleming. It's a weekly chat about Artificial Intelligence in Education for educators and education leaders. Also available through Apple Podcasts and Spotify. "This podcast is co-hosted by an employee of Microsoft Australia & New Zealand, but all the views and opinions expressed on this podcast are their own."

Oct 8, 2021

In this podcast, Dan and Lee speak to lead policy maker Aurelie Jacquet and discover how AI policy is created and how it has developed over the last several years.

 

Aurelie on LinkedIn: Aurelie Jacquet

ISO Standards: ISO - ISO/IEC JTC 1/SC 42 - Artificial intelligence

 

________________________________________

TRANSCRIPT For this episode of The AI in Education Podcast
Series: 4
Episode: 11

This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

 

 


Hi, welcome to the AI podcast. How are you, Lee?
I'm very well, Dan. It is good to be back again. Every time we do this, it feels like months since we've done one, but I think it's only weeks, isn't it?
I know, it is exactly. And it's very cold in Sydney at the minute, so it's good to warm up with a podcast. So, we've got a special guest today, Lee, haven't we?
We do. We do have a very special guest and and I think we've all we all seem to know our guest through various different uh mechanisms and ways in which we work. So, it'll be uh I'll let you do the introduction, Dan, but it's wonderful to have her here.
Yeah, fantastic. So, Aurelie Jacquet, thanks very much for coming on to our podcast today. I really appreciate you joining us, and I may pronounce your name wrong here because I've been out of Europe for so long, but that's French origins, right?
Definitely. Definitely. You can't get more French than my name; it's a typical one. Absolute pleasure to be here, Dan, Lee. Always happy to cross paths on many different occasions.
Fantastic. And I feel a bit left out here, because you guys know each other already through various boards and strategic standards conversations that you're having around Australia and globally. So this is really exciting; I'm going to learn a lot in this podcast today. But tell us about yourself, and how you came to this. I think you started off as a litigator, right? So what got you interested in moving from that area into AI?
Yes, for all my faults, I started as a lawyer. I actually like to change from time to time, or rather on a regular basis. What happened is I actually liked going to court, doing appearances and looking at international law, and I was getting into international arbitration. Then I did a quick change to financial services and ended up in the fun world of algorithmic trading at a particularly fun time. I got there in 2008, and I was part of one of the banks that got completely restructured.
So it was a good way to have a first jump into financial services, and I was looking after algorithmic traders, and, because at the time they were doing so well, high-frequency traders, which is pretty much automated trading, using algorithms to trade. So that was my first exposure to AI, let's say using the term broadly. And when I looked at my next step, I started to work with startups looking at blockchain, and somehow the topic of AI came along, from the advertising industry, actually, which was looking at ethics and AI back in 2016, and the topic stuck. It all made sense: algorithmic traders, AI, it was a very easy link to make.
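(Purely as an illustrative aside, not from the conversation: "automated trading using algorithms to trade" can be as simple as a mechanical rule over recent prices. The moving-average crossover rule and the window sizes below are arbitrary choices for illustration, not anything a real trading desk is claimed to use.)

```python
# Toy sketch of a rule-based trading signal: buy when the short-window
# average price rises above the long-window average, sell when it falls
# below. The windows (3 and 5) are arbitrary illustrative values.

def moving_average(prices, window):
    """Trailing average of the last `window` prices; None if too few."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' from a simple crossover rule."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma is None or long_ma is None:
        return "hold"  # not enough price history yet
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"
```

A rising price series produces a "buy" signal and a falling one a "sell", which is the kind of fully mechanical decision-making, executed at machine speed, that the regulatory questions in this conversation are about.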
That sounds fantastic, though, because that algorithmic trading element, the way that latency and speed of transactions have to work on the stock exchanges, and the AI, I suppose that's an interesting one; we've never covered that before. I suppose that's where some of the AI, and definitely some of the high-frequency algorithms, kind of were born, right?
Well, there were very similar problems that occurred, because that was the boom of algorithmic trading at the time, and, as you know, there had been a few market crashes, so there were no rules. So I had to look at the rules and at implementing the rules globally. Things like intent to trade: how do you prove that when you've got an algorithm? We also suddenly had bans in Australia on the use of high-frequency algorithms. There was very bad press about algo traders. So there were many of those stories that led us through it.
Yeah, as you were talking about it, Aurelie, a memory came into my head that there was a movie about this, and I was trying to remember when it was. This was back in 2001. I don't know if you were in Australia then, or how long you've been here, but there was an Australian movie called The Bank, with David Wenham and a few others, and it's exactly this issue, the impact of algorithmic trading. So we talk about AI as being a sort of newish thing, but as you say, the very foundations of it, the core of where we started to understand the implications of it, and I guess getting to the ethics of it, is now relatively old in terms of the real world we live in.
Definitely, I'd agree.
Yeah, it's fascinating, that journey. And so I have to ask as well, and forgive me for asking, but obviously you're originally from France, and you would have been working in the EU and in the French world, which is a really interesting world. I cross paths with my European colleagues quite a lot around what goes on in Europe around legislation and law and ethics and all these kinds of things. So when did you move over to Australia, and what drove you to come here to do the work that you do now? It's great that you're here, by the way.
Absolutely.
I actually moved quite early on. I got tricked by very good advertising.
My university in France was on the pathway to, let's say, the Australian embassy, and I was already doing a degree in common and civil law, so covering different jurisdictions. As I say, on my way I would pass the Australian embassy, and they had this amazing picture of a guy in a suit on a windsurfer, going to work with his briefcase, and this in the coldest of winter, with the snow in Paris, where it's really grey.
I'm thinking this sounds like a good lifestyle.
And um
definitely
I wonder where they got that photograph of you Lee.
Yeah, I don't know where that one came from. You couldn't make that story up, though, Aurelie. So that's true? That's what it was? You saw the dream, did you?
I did.
Wow.
A few times. You know, reinforcement.
Yeah, reinforcement learning occurred as you walked past this image. Wow. And that's what drove you to apply to come and live in Australia: the promise of warm weather and windsurfing.
Yes. Yes. In my family, we all went to different places. My brother's in the US, and we all travel on a regular basis,
but Australia seemed like the most promising land.
But it is, it is, and we're glad you're here. And
just thinking of that, I'd love to really explore this with you as well. The thing that wracks my brain around standards, and you mentioned global standards early on, and hopefully we get onto this through the conversation today, is that you are currently the chair for AI standards, and that is fascinating. Because the thing that always wrangles in my brain is that we seem to have laws and legal standards everywhere. Like when we took on the GDPR standards in Europe, for example, and the standards you've got in Australia, and Australia is not shy about adding its own standards on top of other standards. So I'm just wondering, without going straight into that: how do you see the role of the standardization of AI, both in Australia and then globally? Because you're the chair of that standards board, and I suppose that's one of the big questions.
Definitely. So for me, the ISO standards really provide a practical response to organizations and explain to them how to implement AI responsibly. That's what we do. We go beyond the AI principles; we provide repeatable, responsible AI processes that are recognized internationally, across over 50 countries, the biggest international initiative in this space. And effectively what we're here to do, and you see that with the EU, it's just getting started, is define international best practice that will inform and help guide regulation to the extent it's adopted, and we do see a move there. So for me, but I'm very biased, the standards are the most advanced initiative on the implementation of AI responsibly.
Yeah. And of course, Aurelie, as you and I both know, because we participate in a number of these activities, it's interesting you say that the standards are the most advanced, because there is so much going on today in the world of trying to get a grasp on what we need to do to do AI right. Whether that's the ethics, the standards, the governance, the process, the regulation, the law, all of these things coming together. And so your work in standards, you sort of see as the most forward-thinking. Do you think we're going to get to a point where we have a set of global standards for AI? Because this is one of the things I think Dan was alluding to, and I see as a challenge: we've got the European document that was recently released with their view of standards, we've got Australia building that, and I see little pockets of organizations all over Australia thinking about what's right for AI. What's your view on getting to a point where globally we can all agree, as the major contributors to this conversation, America, France, Germany, Canada, everyone? Are we ever going to get there? Is it something we can achieve, to have global standards like we do elsewhere?
I'll definitely do my best on that. My aim is to avoid a GDPR 2.0, right?
So, in a nutshell, if you think about it, at least the way I've seen the development, what makes me hopeful that standards are really taking off is looking at the work of the different governments. Whether it's the US, China or other countries, there's always a strong reference to standards as part of their policy. From a global perspective, even the OECD, as part of their AI principles, also refer to consensus standards. So it's not an afterthought; it's been thought through and made part of the policy. In Australia you see that too: in CSIRO's ethics papers there was already a reference to standards, the Australian Human Rights Commission is also talking about standards generally, and we've got the Standards Australia roadmap on AI standards, which explains the steps we need to take to promote international standards and benefit Australia. So from that perspective, I think we've got a good way forward. If I have to think about the EU now, I'd say it's actually quite aligned. The EU came to do a presentation to us about two years ago, when we were still allowed to travel, in Dublin, explaining the initial plan. And you'll see that while you've got the AI Act, what I found extremely satisfying is that you've got the certification piece that relies on the standards, and specifically on CEN, to actually mandate the certification process. So our work is definitely leveraged globally.
So far, really pleasing. And, I mean, obviously you live in this world, trying to manage this. The fact is, if you look around the world at the US, the UK, Australia, the EU, when we think about the baseline of AI, we get the principles or the ethics of it, and we all kind of align on some very core common things: fairness, equitable access, transparency, accountability and so on. But what I find interesting, and this is where I think standards come in, is how you apply it. So for example, in the EU case there is the high-risk and low-risk categorization. Here in Microsoft, we think about sensitive use cases, and we use those as our benchmark. And I'm sure if we look around the world, what you end up tapping into is what is acceptable within any particular governed country. For example, we have China at one end of the spectrum, whose threshold for what is allowed is far wider, and they would arguably say they still think about it as being fair and secure and transparent, but their definition of fairness is kind of different. So how is that going to get reconciled? Because a standard implies a common, consistent, regular approach and expectation. As a citizen, I expect the same experience when I'm dealing with something that is governed by a standard, but that standard is going to be different, because in AI worlds ethics comes into play, and ethics are very malleable. What's your thought? Is that a challenge for what you do, or is that just something that's being considered in that process?
Yeah, to be honest, again I put my bet on standards, because effectively they're the best organizations to drive consensus on very complex issues, and have been for years, for centuries even. If you look at governance of organizations, with standards like ISO 9001, or the security standards, like ISO 27001, you had many international participants, and it drove best practice globally on very tricky topics. And I think as standards evolve, we move away more and more from purely technical standards to more socio-technical standards.
We're looking at trustworthiness, as I say, across technology. And that's not something new for standards, because we've got standards on environment, on governance, etc. So it's just bringing this topic back into the technology world.
That's interesting that you bring up. I love that idea of socio-technical standards, because if you look at that trust barometer, and the way in which we think about things, it's not something that has a fixed, hard line; there's a sort of barometer of what's acceptable and what's not acceptable. And maybe that's the way to think about standards applied to things that have a socio-technical impact, as opposed to a purely implementation impact. With an ISO standard you just implement to the standard, and it's fairly strict and rigid, whereas something like AI, where you need to factor in a social element, where you need to think about what is acceptable in the social fabric, makes it a bit more challenging. So, as you were talking about ISO, the thing I sometimes hit on, and I'm keen to get your view on, is this: we have things like ISO standards, and here at Microsoft we build our services and operate data centers to ISO standards, as anyone else does, and they apply globally. But then we also find we have Australian localized standards, things like IRAP as an assessment framework, or the ISM as a standard for information security across government, which is different, of course, to what may be defined in ISO. And then we go a level further, and we have the Victorian privacy principles, which are different to the federal privacy principles, and we find ourselves having these layers of standards. Is that something you think is unique to Australia? Are we just a bit split apart in our state and federal thinking, or do you see that in other parts of the world as well, say where there are different member countries? Is that a similar problem they have?
So I only came to standards while I was in Australia, so I can't comment on before. You'll always need some degree of localization. I think, actually, the privacy standard that was defined just recently is a good example: it's actually helping translate the GDPR to the privacy principles, and that's a good example of how standards can help do the translation from, let's say, stronger requirements to, well, different requirements. And that's a good way forward. So at the moment, for AI, ISO is just putting in place the baseline, a best-practice baseline, and then obviously there will be a need for some adaptation, but that's what will be done at an industry level, to fit specific requirements.
That makes sense.
And from an industry point of view, our problem is that we're inventing technologies; we're inventing the bicycle before the roads are invented, in some cases. So if we're coming up with new technologies, how do we speed up the process of standardization before things go wrong? We could invent, well, we have invented, cognitive services that do facial recognition, and then often what we see is the use of those possibly for bad rather than for good, because there's nothing really regulating that entire cycle. So I'm just interested in your thoughts on the speed of standardization versus the speed of development in technology.
So, again, my personal view: I've found standardization a lot faster than the legal process.
That's a good point there.
So when it comes to best practice, it takes time to build best practice, and building international best practice also takes time. If I look back at the conversations we were having on ethics back in 2015, you were just seeing the US government and the Chinese government starting with their white papers; we were just at the beginning of the ethics principles. So to actually look at best practice at that point was not even possible, because they were just getting an understanding of the technology. There's always a way to go faster, but with technology, as we develop, it takes time. We've seen lots of different technologies, whether it's blockchain, IoT, smart cities, all those things; there's been a big uptake of new technology, and you see with standards that it takes time to get 50 countries to agree. I think, actually, three or four years is quite fast to get everyone to agree to a common best practice. But in terms of going faster, well, I think we'll become more used to this topic. I'm getting involved in the quantum conversation, around responsible quantum and what it means, and you see it's a similar topic that comes up, just in different aspects. So we'll become more and more familiar with this conversation and more adaptable, and of the changes we're making for AI, I expect a fair few will be able to be carried over to other emerging technologies, and we can have a process where we develop those standards faster.
And to Lee's point, I think Lee always explained that for AI you go through the life cycle, you need to manage through the different stages of the life cycle, and that won't be any different for other technologies, because that's the one piece that's changing
and that will have to change for any other emerging technology.
It's an interesting point that you bring up, Dan, in so much as Aurelie and I are involved in some committees where we work with Ed Santow, who's the now-departed Human Rights Commissioner, and he talks very much around this issue of regulation: we need to regulate, but law shouldn't move quickly. Law shouldn't be trying to move at the same pace as technology. You've got these three moving parts. You've got the tech industry moving at super high speed; we need standards to ensure that there's commonality, a common experience, and expectations of what is good and right; and then you've got regulation and law. And in the tech industry this has probably never been such a big issue, because tech systems were never so intrusive into the human condition. Whereas, as we move into things like AI, which has a very impactful role in the way we as humans live our lives and what we can expect from government systems and technology systems, and it'll get more so with quantum, it's an interesting challenge. And you're absolutely right, Aurelie, it is that kind of cyclical, ongoing process, because you'll never really end it; you'll always be looking to rethink the standards. But we've seen this before. You mentioned IoT, Dan, and I used to be in the IoT world at Microsoft, and one of the big challenges in that business was that everybody wanted smart buildings, smart systems, smart control environments, but nobody wanted to be the one that gives up their IP to create a standard, or adhere to a single standard that we could all work to. So we ended up with multiple different technical standards and
different building standards, and then it intersects with building codes. The world you live in, Aurelie, is a difficult one to navigate, I imagine, given there are so many moving targets around technology and ethics. And ethics is such an amorphous thing; I'd love to get your thoughts on ethics and how ethics really plays a role in standards and artificial intelligence. How important are ethics? How do we fit them in? How do we use them as a tool, versus something that's hard to define?
So I'll make a big proviso: I'm not an ethicist.
No, no, I know.
So, let's say I'll take a more pragmatic approach, from my perspective. I tend to remind organizations that when it comes to AI, you first have to comply with the law, and that's very much to Ed's point, because that covers about three quarters of the incidents. If you remember the Deliveroo algorithm that was on the front pages last year, the issue there was, to be honest, that they forgot about local labour laws and didn't embed them in the design. It was called a fairness issue, and it's definitely got an aspect of fairness, but that's also the labour laws. You see a fair few of those issues come up. So that's the first reminder, and again, back to your point, it's possibly because with AI you have to have a more multi-disciplinary approach; you have to act through the life cycle, and that's not very well embedded yet in organizations. So that conversation comes at the end, through a, let's say, more traditional project pipeline than the one we should use when developing those systems. We've seen the difficulty, not just for AI systems but for normal systems; it's tricky. So the law, yes, is the first part,
then
and then, definitely, reputation is another aspect. Reputation risk has always had its ups and downs. We see now that it's coming to the forefront for regulators; there's much more attention to it, with the conduct piece that's being developed by the AC in Australia. We also saw the social impact of the privacy regulation; the concept of fairness exists already in privacy. So we see the reputational risk increasing. And then, obviously, for me, there's what I like to call the cherry on the cake. That's where you start to get to the ethics piece, when you have companies that want to be an intrinsic part of society and play an essential role in promoting society. That's when you get there: you provide great services that actually improve the daily lives of individuals, and that's looking at making choices about the values you've got and how you promote them. And that, to me, is the cherry on the cake.
That's a really good way of thinking about it. I love that analogy of building up to it, and you make a really good point around the law, correcting what I said earlier: whilst we don't want to move quickly to regulate or create laws around AI specifically, the fact is many of the laws already exist to protect individual citizens and society, and those laws need to be factored into the way in which technology is applied to our daily lives, and that often isn't the case, given the example you gave there. So yeah, it's a really good approach.
Definitely. So, we've talked a lot about the standards and the legal aspects of this. It would be great to hear your thoughts, Aurelie, on young people, and especially young women, getting involved and starting out careers in AI and technology generally. Your story of moving across from litigation is fantastic, but what would you tell young people getting started on a career in AI or in technology, and especially girls out there who are keen, or who might be listening in to this podcast?
I guess, well, my path has not been straightforward. You can keep going on a very easy path, or sometimes you can find different opportunities on the side. So be curious and look at all those opportunities. At the moment, the AI world is booming. The reason I went to the standards was actually that I saw all the work that was happening overseas, in the US and Europe, back in 2016, on AI and the policy around it, and I was just going: I want to stay in Australia, so I need something to happen here. So that's why we formed a group of people and actually pushed for the standards. We said, look, we would like to do that, and put in a submission that got accepted. So I think definitely taking initiatives that go out of your comfort zone is always a good way to learn and grow. From my perspective, international initiatives are always worthwhile; it's not because you're in Australia that you can't do it. For most of the initiatives I'm doing, I'm working with the Responsible AI Institute and the World Economic Forum too, so it's a small compromise to participate in some amazing work. So that's an important piece. And I think when it comes to AI, what's particularly important is to have that multi-disciplinary approach, to actually learn about each other. For me, one thing from which I've got tremendous value in the standards is that you've got academia, industry and government all in this forum, and you have very different perspectives, and to understand how everyone sees and understands what AI is and what AI should be is extremely important. And possibly that links back: if I go to data scientists, I see the young ones tend to say, oh, how do I do fairness? And while it's important for data scientists to understand the impact they will have on society, it's not for them to be responsible for everything the system has to do.
Yeah.
So it's understanding which disciplines they can lean on to help them build the best algorithm.
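(Purely as an illustrative aside, not from the conversation: one narrow, quantitative notion of "fairness" a data scientist might compute is a demographic parity ratio, sketched below. The function names and the 0.8 rule of thumb mentioned in the comment are illustrative assumptions, and, as the point above makes clear, such a metric is only one input alongside legal and other disciplinary perspectives.)

```python
# Toy sketch: compare the rate of positive decisions (e.g. loan approvals,
# encoded as 1) between two groups. A ratio of 1.0 means equal rates; lower
# values indicate a larger disparity. The "four-fifths" (0.8) threshold is
# a commonly cited rule of thumb, not a legal or universal standard.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    lo, hi = min(rate_a, rate_b), max(rate_a, rate_b)
    return lo / hi if hi > 0 else 1.0  # both rates zero counts as parity
```

If group A is approved half the time and group B a quarter of the time, the ratio is 0.5, flagging a disparity worth investigating, though whether it is unlawful or unfair is exactly the multi-disciplinary question the conversation describes.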
That's a really important point, that multi-disciplinary piece, about the way in which you think about AI. It's obvious to me, Aurelie, as you're talking about it, that you have a passion for it, and a passion for standards, which seems crazy to people who aren't into standards, but I get it. You know Jeff Clark, who sits on the Microsoft side; I talk to him about it, and he will joke that nobody likes standards, they're boring, but for him it's a deep passion, and I see that passion in you. I think maybe one of the learnings for anyone thinking about this career is: find your passion in it, find something in it that you want to really make a change with, make a contribution to. Because I see such a difference when people have that passion, versus doing it for other motivations; when you're motivated by the passion to make a change, I think that really drives a different level of engagement, and I certainly see that in you.
Thank you. Well, for me it's quite an incredible job to be able to work with, like, 50 countries around the world, and you've got some amazing talent, Jeff included, always keeping me on my toes. And, as I said, maybe it's why I chose law in the first place: to be able to participate in and drive that conversation about what technology should look like. Maybe what frustrated me in the very initial conversations was: AI can be good, AI can be bad. And I was just going, ah, am I going to wait for that, or should I just do something? Because I don't want an algorithm to tell me what I should do, or how I should
Can I just ask, since you're speaking about those 50 countries: what is the representation, not precisely but generally, of women in that space globally? Is that something we need to address? Because obviously we know that, even at a small level, bias can creep very quickly into algorithms. What are we doing, in the standards and at the policy-making level, to make sure everybody's represented there, across genders and communities as well?
So standards are definitely pushing the initiative and encouraging more women on board. Correct. As I say, I'm the head of the delegation, and we've got some other delegations that are leading on that path too. So it's been really nice to see. I'm new to the world of standards, and I've seen there are many women involved in the expert committees, so I'm quite happy. But if I have to think about the women piece more broadly, to be honest, it's not an AI issue. We had it in law, we had it with the startups, and we had it recently with boards. Yay, change. So while you have it for AI technology, it's a recurring topic that hopefully is changing. For me, learning from those industries, what's really key is, first, to acknowledge the women who are there. There are lots of women I know doing some amazing work. The first event that got me to talk on AI and ethics, way back, was the Women in Data Science and AI, run by Stanford and UTS; that was Theresa Anderson.
Um so
yeah
and the room was full of women.
So
But I think, I mean, your point is well made that it isn't really just an AI problem; of course it's broad, across a whole range of industries, communities, boards and so on. And you and I are both involved in the Women in AI Awards that are coming up next year, which is great to be part of. And one of the things I've noticed, and I'm not saying the problem is in any way solved, is that actually a lot of the big thinking in the AI world, certainly here in Australia and, to the degree I can see, globally, is actually led by some very senior women, whether it be the Kate Crawfords and the Genevieve Bells and Ellen Broads of the world,
but my view is that Australia seems to be punching above its weight in terms of female contribution to the global AI narrative. Do you see the same, or am I missing something from the bigger picture?
I definitely see lots of women leading, especially in the ethics field. In data science too, though I've come across them less; more so in the ethics piece. Actually, sorry, I'll correct that: the Women in AI Awards are a testament that there really are some strong women in AI in Australia. That was a fantastic event. What's missing, to be honest, and that's what Angela and Beth from Women in AI are building, is a strong community that promotes their voices and showcases the talent; to really say, here are the amazing women we have, and this is the leading work they're doing. I think that's the most important piece. Again, if I think about my years in the startup world, one of the first things I heard was "oh, there are no women with the skills". So it's just, yeah, bring...
Put them up there, shine a light on them, like you say, and give them an opportunity to demonstrate just how big the contributions are that these women are making. I'm quite looking forward to it; hopefully we can actually go to the event next March and be there in person. So, we're getting close on time. I have one last question for you, at least from me, and Dan might have other ones. It's a bit of a hard one for you: the crystal-ball question, because someone asked me this the other day, so I want to ask it to you. Where do you see this future going? If you look forward to 2050, and I think we have to look that far ahead because 2030 doesn't seem that far away, where do you think we'll be in an AI world in 2050? Where do you hope we are in an AI world by 2050?
So, in 2050: no more pandemics, and AI is such that
I can travel anywhere, and I'm back from work in an hour because of my AI-powered plane. That would be mine. But jokes apart, first, yes, we're all looking to use AI for sustainable goals, and right now the news is a bit grim. So to have AI really help us out, to leverage technology for the pandemic, which is already happening, there are lots of good examples of work done in Australia to help with the pandemic, but also for the environment, the second one, is going to be really important to turn the tables around. That's where I'd like to be: really pushing for technology that can help us tackle those challenges. The second piece is very much hoping that those algorithms will be used to promote society. What worries me, and what still gets me in the conversation, is that you're limited to your data; you're only as good as your data. You see more and more options to sell your data for services; this is starting to come out more and more. So what about the possibility for growth? What about the fact that if you're the edge case, you might actually look like a very interesting candidate rather than just the edge case? To have algorithms that promote dignity and human rights, and that know the edge case and the exception can actually be the exception that drives progress, would be a very good place to be. And a more trivial improvement: if my music account were actually updating and not always suggesting me the same music, or my daughter's music, and instead sent me random pieces to discover, to promote creativity and encourage discovery, maybe even failure in discovery, that would not be bad news.
I definitely agree with the last one.
Yeah, I love that. I love that. Well, Aurelie, thank you so much for the work you do in Australia and globally. We feel very passionately about standards and ethics in AI; we talk about it a lot. You are leading the way and really paving the path to those outcomes you mentioned about the environment and that personalization. So thank you, on behalf of all our listeners and everybody who works with you, for the effort you put in, and thanks for joining us on the podcast today. You've been fantastic. Thank you.
Thank you, Dan. Thank you, Lee, for the invitation. It was an absolute pleasure talking with both of you.