Nov 19, 2023
All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week!
The publishing arm of the American Association for the Advancement of Science (they publish six journals, including Science itself) says authors can use “AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript” as long as their use is noted. But they've banned AI-generated images and other multimedia “without explicit permission from the editors”.
And they won't allow the use of AI by reviewers because this “could breach the confidentiality of the manuscript”.
A number of other publishers have made similar announcements recently, including the International Committee of Medical Journal Editors, the World Association of Medical Editors and the Council of Science Editors.
https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models
https://arxiv.org/abs/2310.20689
News Article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving
Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.
The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.
The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems. GPT-4 then identified errors in the reasoning, explained them and provided corrected reasoning paths. The researchers used the corrected data to further train the original models.
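To make the LeMa pipeline concrete, here is a minimal sketch of how the (mistake, correction) pairs might be assembled into fine-tuning examples. The function name and data format are assumptions for illustration, not the paper's actual code:

```python
# Hypothetical sketch of LeMa-style correction-data assembly. Each training
# example pairs a flawed reasoning path with an explanation of the error and
# the corrected reasoning (in the paper, GPT-4 supplies the corrections).

def build_correction_example(question, flawed_path, error_explanation, corrected_path):
    """Format one (mistake, correction) pair into a fine-tuning prompt/target."""
    prompt = (
        f"Question: {question}\n"
        f"Flawed reasoning: {flawed_path}\n"
        "Identify the error and give the corrected reasoning."
    )
    target = f"Error: {error_explanation}\nCorrected reasoning: {corrected_path}"
    return {"prompt": prompt, "completion": target}

example = build_correction_example(
    question="Tom has 3 bags of 4 apples. How many apples?",
    flawed_path="3 + 4 = 7, so 7 apples.",
    error_explanation="The quantities should be multiplied, not added.",
    corrected_path="3 * 4 = 12, so 12 apples.",
)
```

The key idea is that the fine-tuning target contains both the diagnosis and the fix, so the model learns to reason about *why* a path was wrong, not just the right answer.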
International Journal of Educational Technology in Higher Education
https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8
Looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers they studied.
"We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations."
Also, a fantastic list of references for papers discussing chatbots in education, many from this year
https://arxiv.org/abs/2311.04926
https://arxiv.org/pdf/2311.04926.pdf
Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence rather than producing the code from scratch.
"While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. … Our results show that GPT-4V solved 96.7% of these visual problems"
The research's findings have significant implications for computing education. The high success rate of GPT-4V in solving visually diverse Parsons Problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students or assess their true understanding in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding.
Interesting to note some research earlier in the year found that LLMs could only solve half the problems - so things have moved very fast!
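For anyone unfamiliar with the format, a Parsons problem can be sketched in a few lines of Python. This is an invented example, not one from the paper: the learner receives shuffled snippets, and a checker verifies whether a proposed ordering assembles into working code.

```python
# A tiny Parsons problem (illustrative example): order these shuffled lines
# into a working function. Indentation is given with each snippet.
snippets = {
    "A": "        total += n",
    "B": "def sum_list(nums):",
    "C": "    return total",
    "D": "    total = 0",
    "E": "    for n in nums:",
}

def check_solution(order):
    """Assemble the snippets in the proposed order and test the result."""
    source = "\n".join(snippets[key] for key in order)
    namespace = {}
    try:
        exec(source, namespace)  # a wrong order raises SyntaxError or fails the test
        return namespace["sum_list"]([1, 2, 3]) == 6
    except Exception:
        return False
```

Here `check_solution(["B", "D", "E", "A", "C"])` returns `True`, while orderings that put the loop before `total = 0` fail at runtime and return `False`. GPT-4V solving these from screenshots is what makes the paper's 96.7% figure notable.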
https://arxiv.org/pdf/2311.07361.pdf
By Microsoft Research and Microsoft Azure Quantum researchers
"Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks"
The study explores the impact of GPT-4 in advancing scientific discovery across various domains. It investigates its use in drug discovery, biology, computational chemistry, materials design, and solving Partial Differential Equations (PDEs). The study primarily uses qualitative assessments and some quantitative measures to evaluate GPT-4's understanding of complex scientific concepts and problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research.
https://arxiv.org/abs/2311.04929
Overall, the paper presents LLMs as powerful tools that can significantly enhance scientific research. They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them well and critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in Engineering where there's low tolerance for mistakes, GPT-4 can pass critical exams. This research is a good source of focus for researchers thinking about how it may help or change their research areas, and help with scientific communication and collaboration.
https://arxiv.org/abs/2311.06261
This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. The study provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education.
"We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool’s capabilities… From this exercise, we have learned two things in particular that we believe we will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly…. Second, we will need to consider how to teach concepts that need to be experienced from “first-principle” learning approaches and learn how to motivate students to perform some rudimentary exercises that “the tool” can easily do for me."
https://arxiv.org/abs/2311.07491
What this means is that AI keeps getting better, and people keep finding ways to make it even better, at passing exams and multiple-choice questions
https://arxiv.org/abs/2311.07387
Good news for me though - I still have a skill that can't be replaced by a robot. It seems that AI might be great at playing Go, and Chess, and seemingly everything else. BUT it turns out it can't play Minesweeper as well as a person. So my leisure time is safe!
https://arxiv.org/abs/2311.05019
Finally, I'll mention this research, where the researchers have proposed a new method of ChatGPT detection that assesses the 'energy' of the writing. It might be a step forward, but to be honest it took me a while to find the thing I'm always looking for with detectors: the false positive rate, i.e. how many students in a class of 100 it will accuse of using ChatGPT when they actually wrote the work themselves. The answer is a 4% false positive rate on research abstracts published on arXiv, although apparently it's 100% accurate on Reddit posts. I'm not sure that's really good enough for education use, where students are more likely to be writing in academic style than Reddit style!
I'll leave you to read the research if you want to know more, and learn about the battle between AI writers and AI detectors
And outside of research, it's worth taking a look at work from the metaLAB at Harvard called
"Creative and critical engagement with AI in education"
It's a collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments
There's way too much to fit into the shownotes, so just head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference
Link: Microsoft Ignite 2023 Book of News
________________________________________
TRANSCRIPT For this episode of The AI in Education Podcast
Series: 7
Episode: 3
This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.
Hi everybody. Welcome to the AI in Education podcast and the news episode. You know, it went so well last time, Ray, we thought we'd better keep this going, right?
Well, do you remember Dan? I said there's so much that's happened
this week that it'll be difficult to keep up. Well, there's so much
that's happened this week. It'll be difficult to keep up, Dan. So,
as you know, I've been reading the research papers and my goodness,
there has been another massive batch of research papers coming out.
So, here's my rundown. This is like Top of the Pops in the UK, you know, like a top 10. Here's my rundown of the interesting research papers this year. So,
interestingly, there's some news out that apparently it is okay to write research papers with generative AI. So, the publishing arm of the American Association for the Advancement of Science, now that is a mouthful. Fortunately, their top journal is called Science, which is not a mouthful. So, they've said authors can use AI-assisted technologies as components of their research study or as aids in writing or presentation of the manuscript. So, you're allowed to use ChatGPT to help you write a paper, as long as you note the use of it. The other interesting thing is, however, they have banned AI-generated images or other multimedia unless there's explicit permission. That's interesting, because some of the other publishers are saying they'll allow you to create charts using AI. And they've also said you cannot use AI as a reviewer to review the manuscript. Their worry is you'll be uploading it into a public AI service and it could breach the confidentiality.
Now that was a big one, because Science is a big, proper journal. But a bunch of other academic journals, also big proper journals, have come out with the same. So the International Committee of Medical Journal Editors came out with a policy in May, and the World Association of Medical Editors and the Council of Science Editors have all come out with policies. So it would appear that although there are many, many schools that won't let you write anything with AI, the official journals will, as long as you're declaring it. And maybe that's a good policy. There's a link in the show notes to that. So, whizzing on to the other research: apparently, learning from its own mistakes makes a large language model a better reasoner. This is interesting. This is research from the Microsoft Research Asia team, Peking University and Xi'an Jiaotong University. They developed a technique where what they do is generate some flawed reasoning using LLaMA-2, and then they get GPT-4 to correct it. And what they're finding is learning from those mistakes makes the large language model a better reasoner at solving difficult problems.
That is really interesting. Another bit of research, there was a really good one titled 'The Role of AI Chatbots in Education: Systematic Literature Review'. Now, what that means is somebody has spent their time reading all the other papers about the use of AI chatbots in education. They read 67 papers and they pulled out both the benefits and the concerns. So, the benefits for students: you know, helping with homework, helping with study, personalizing the learning experience, and development of new skills. They also found that there's benefit for teachers, so time saving and improved pedagogy. Are you a 'peda-goggy' or a 'peda-gojee' person, then?
Pedagogy. Okay. And they also then pulled out the challenges, things like reliability, accuracy and ethical considerations. So none of that should be a surprise to people. The paper is a good summary of all the research. It also has a fantastic list of those 67 other papers, many of which have come out this year. So, good paper. Really good if you're faced with colleagues who are going, 'I don't understand what this is all about. I don't understand why this would be good for teachers or students.'
Give it to them.
The next paper was titled 'More Robots Are Coming: Large Multimodal Models Can Solve Visually Diverse Images of Parsons Problems'.
You're picking all of the research articles with really long titles here.
No, Dan, that is just the title. That isn't the abstract or the whole paper. So, Parsons problems. Do you know what Parsons problems are, Dan?
Okay. Parsons problems are what they do in computer science, which is basically give you a bunch of code that's been jumbled up, and you have to try and work out what's wrong and where each piece should go to put it back in the right place.
Can you imagine that? So, it's like me returning to code I wrote when I was 16: no idea what order it should be in. So, that's a way they think of giving students interesting challenges, where it's more of a visual challenge, you know, structuring the code. Unfortunately, they thought that was a way to defeat large language models, and it isn't, because large language models, as you know, have now developed vision capabilities. So, they can actually look at the code, work out what's wrong, and tell you how to fix it. Statistic time: they can do it 96.7% of the time. That's using GPT-4 with vision. So, significant implications for computing education, because it's really good at solving Parsons problems.
And it's the multimodal effect, using images. When you were talking through that, I was thinking, okay, you know, you're analyzing text or code, but it's actually using the pictures to work that out. That's fascinating.
And it's been moving fast, because halfway through this year it could only solve half the problems, and now, towards the end of the year,
96.7%. That's
hugely significant. Did you ever get 96.7% on any of your exams?
No.
Okay, next paper. I promise you I'll only read the title: 'The Impact of Large Language Models on Scientific Discovery: A Preliminary Study'. You know this one as well. So this is... yeah, this is really exciting.
Yeah. So basically what they did was look at how good it is at helping in scientific discovery across a bunch of scientific domains. So drug discovery, biology, computational chemistry, materials design, and solving partial differential equations. I don't know what that is.
It's like a differential equation, but only a part of it, presumably.
Okay. Well, I'll never have to do one in my life, because ChatGPT can do it for me. So, what it found is it's really good at tackling tough problems in those areas. They say that the research underscores the fact that it can bring different domains of knowledge together, predict things, and help with interdisciplinary research. They were talking about materials and compounds that they'd found in a matter of weeks rather than nine months or so. So, you know, obviously in materials science this is going to have a profound impact. So, that's quite interesting.
So, the next one has got an interdisciplinary title. It's 'An... oh blimey, Dan... An Interdisciplinary Outlook on Large Language Models for Scientific Research'. I need a translator for the titles. So, basically it talks about how large language models can do scientific research, just like the last paper. It talks about how things are going to be faster, they're going to be more efficient, but it's looking at the research processes themselves. It also talks about the downsides, so things like integrity and ethical standards and how you manage that, and it deals with things like hallucinations. But it points out that even in something like engineering, where there really is not that much tolerance for mistakes,
it can pass the exams.
So it's great for researchers that need to think about how these models can help them in their own research, and help them with communication. I built a GPT to rewrite a scientific paper for the reading age of a 16-year-old, and the reason I did that is that, honestly, I find many of the papers quite inaccessible. So helping out with scientific communication could well be...
Yes.
...building a wider audience. Okay. Let's whizz through some others now really fast, as fast as I can read the titles. A paper called 'With ChatGPT, Do We Have to Rewrite Our Learning Objectives? A Case Study in Cybersecurity'. So basically they looked at how it changes the way that we both teach and learn about cybersecurity. The great thing they found was that ChatGPT, working alongside the student, helps them to learn more quickly and helps them to understand complex concepts much better.
But it then raises some questions, like: if ChatGPT or AI can do the early study stuff, will students just skip past it? And so the question was, how do we keep them engaged
in the simple to-do things that they can then build upon as they go further through?
Really good paper. I think it applies to other areas of learning as well. The next paper: 'A Step Closer to Comprehensive Answers'. Oh, sorry, that wasn't the whole title. The rest of the title was 'Constrained Multi-Stage Question Decomposition with Large Language Models'. So the whole paper, boiled down to a sentence, says AI is getting better, and people are finding ways to make it even better, at passing exams.
There seems to be a lot of that now, doesn't there? There's several you've just quoted there, all talking about the pass rates and the way they're actually getting more accurate.
Excellent.
Yeah. And also the question about how we change assessment, and I know we've got an interview coming up with Matt Esman where we'll talk about some of the assessment stuff. Okay. So, other things. Next paper: 'Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study'. Okay, so Dan, I know that you were playing Minecraft when you were a kid. I was a Minesweeper kid. Good news:
I have a skill that cannot be replaced by a robot. It seems that although AI is great at playing Go and chess and every other game, apparently it can't play Minesweeper as well as a person.
I've got a unique, can't-be-beaten-by-robots job. Now, there were two different papers. I'm only going to reference one. One's called 'DEMASQ: Unmasking the ChatGPT Wordsmith'. Now, that's quite a Reddit-friendly title for a paper. But basically, they proposed a completely new way of being able to do AI detection. And what do we think about detection?
They do not work. So, DEMASQ was demasked: they said this can detect things, and the next day somebody proved that it couldn't. There was another paper that came out that said, 'Oh, we've got a great way of detecting things and it detects everything.' I am ever so suspicious of this research whenever it doesn't talk about false positives. A false positive is where it says this was written with ChatGPT and it wasn't. And unless the false positive rate is super, super low, a teacher is going to be accusing a student of cheating when they have not. We're always talking low percentages, but say it's got a false positive rate of 1%, which is very low. That means if you've got a class of 30, roughly every three assignments you're going to be telling a student that they cheated when they absolutely did not. So, look,
pretty much we can be sure that they do not work. And then the last thing, this wasn't a paper, but something I thought I'd mention. Harvard have got a really, really good website called aipedagogy.org, all about creative and critical engagement with AI in education. It's some really good stuff from the humanities. There's syllabi, there's activities, assignments you can give to students. It's worth watching that as it develops.
Thanks for sharing those.
What about the tech world, Dan? Because it was Microsoft Ignite last week, and I know that on this podcast you do not officially represent the voice of Microsoft, despite the fact that that's who you work for day and night. But Dan, you'll have been watching Ignite. Tell me what's exciting.
AI infused through everything, as we know. And I do think there was quite a nice narrative to this. It started off with the hardware side: the partnerships that companies, okay, in this context Microsoft, were doing with Nvidia, but also the first-party chips that are being created. So there's an Azure Maia chip that's now being created, and an Azure Cobalt CPU. So there are several different interesting pushes and architectures, which are all meant to support all these AI workloads in the cloud. I think there was a lot of coverage in that section. You know, everybody was mentioned: Intel, Microsoft's own in-house stuff, Nvidia. Also, some of the Nvidia software, which is interesting, is now running in Azure as well. So it's very much bringing lots of the hardware acceleration together. I thought that was a good opening for Satya.
So it's not just new data centers being built around the world. There's new data centers with new computing capacity.
Yes, that's right. And even interconnection capabilities, down to the level of the individual fibers: there was a hollow core fiber that was introduced as well. It's always interesting to know the things that are going on in these data centers, with light being sent through a hollow, air-filled core rather than through solid glass. So very, very interesting technology on the hardware side. But then, spinning to the software side, there were a lot of things which came out. Some of the big notable things for the podcast listeners: Azure AI Studio is now in public preview. That brings together a lot of the pre-built services and models, Prompt Flow, and the security and responsible AI tools; it brings it all together in one studio to manage. A lot of that's based on the Power Platform, if people have been playing with that in the past, so there are a lot of drag-and-drop interfaces to help you automate a lot of this prompt generation, which, for the technically minded people on the podcast, is the kind of tooling people have been building bots with for quite a long time. So it's good to see that emerging out of the keynotes. So look out for Azure AI Studio, now in public preview and
definitely worth having a play with. There was also an extra announcement around the copyright commitment, which might not sound that interesting, but if you're a legal firm or a commercial firm and you use copilots to generate content for you, the copyright commitment has just been expanded to cover the Azure OpenAI Service, which means that Microsoft will cover any legal costs if claims are brought by third parties around that.
I love that, Dan, because I know that it's been there in the Microsoft Copilot, but I love that the announcement is now extended out to the Azure OpenAI Service, and the reason I'm excited about that is because that's what we build in my world. We're building on top of the Azure OpenAI services, so being able to pass on that copyright protection is really important for organizations. Hey Dan, as well as the CCC, the Copilot Copyright Commitment, they also mentioned the Azure AI Content Safety thing, which is what was used in South Australia. I remember reading the case study about that, which was about helping to protect.
Yeah, that's right. So that's a good call-out. There's so many things here. The Azure AI Content Safety is available inside Azure AI Studio, and that allows you to evaluate models in one platform, rather than having to go out and check elsewhere. There's also a preview of features which identify and prevent attempted jailbreaks on your models. So, you know, exactly for the South Australia case study, they were using that quite a lot, but now it's actually prime time, available to people who are developing those models, which is great. Lots of announcements around DALL-E 3 being available in Azure OpenAI, which is the image generation tool. There are lots of different models now: GPT-4 Turbo in preview, GPT-3.5 Turbo. So there's a lot of stuff which is now coming up in GA as well. So there's lots on the model front, including GPT-4 Turbo with Vision.
Yeah, I like that Turbo thing, because that seemed to add more capabilities, a bit like the OpenAI announcements.
They mentioned the Turbo stuff, but the other thing, just like the OpenAI announcement, was better price parity with the OpenAI costs that were announced at their Dev Day, as well as things around developer productivity. So there's the stuff announced for GitHub: Copilot Chat, and then GitHub Copilot Enterprise will be available from February next year. So for devs there were a lot of things; have a look at the Book of News, we put that in the show notes. One of the announcements that I was really excited about, I suppose, was that Microsoft Fabric is now available, and I know that doesn't relate technically to generative AI, but it's really good for a lot of my customers that are using data warehousing as one of their first steps into AI analytics, and then all of the generative AI elements on top of that: Copilot in Fabric, Copilot in Power BI. Lots of announcements there, including things around governance and the expansion of Purview. So that was really exciting. But then we went into the really exciting bits around the productivity platform. So then we talked about Microsoft Copilot. One of the first things to think about is that it has been a bit complicated with Bing Chat and Bing Chat Enterprise. They're now going to be renamed Microsoft Copilot, essentially. So that's the Copilot that you'll get inside any of your browsers, Safari, whatever, and also inside your sidebar in Edge as well. So Copilot is going to be the new name for what was Bing Chat Enterprise. So they're trying to make it a bit easier for people to understand. And then the good thing is as well that they've now announced Copilot Studio, which brings together all of these generative plugins and custom GPTs. I'm sure that's something that you're going to be working with quite a bit.
Right. So that's going to enable you to customize your copilot, and copilots within the Microsoft 365 organization. If you're an enterprise customer, you can create your own copilots: build them, test them, publish them, and customize your own GPTs in your own organization. So that'll be really exciting.
I'm excited also by the fact that I can't always remember all the names. I remember there being Viva, and Yammer, which I loved, has been renamed into something else, but now I only need to remember the product name Copilot, because there's Microsoft 365 Copilot, Windows Copilot, Bing Chat's been renamed Copilot, Power Apps Copilot. All I need to do is think of a product name and add Copilot on.
That's exactly right. Yeah, there's a lot of other new
interesting copilots that were announced as well, around new Teams meeting experiences with Copilot, with collaborative notes. I've been using quite a lot of these internally recently, and lots of the intelligent recap stuff is really good as well. So there's a lot of copilot announcements you can get lost in the weeds with, around PowerPoint, Outlook and all of those tools. But really, really good integrations, and I suppose, you know, we're going to see a lot more of that. The interesting element as well is that Windows AI Studio is in preview, coming soon as well. So that's the other thing I'm sure you'll be working on, Ray, which is being able to develop copilots and Windows elements for your Copilot ecosystem. So you'll be able to deploy SLMs, so small language models, inside the Windows ecosystem, to be able to do things offline as well. So there's going to be a big model catalog in there, which will be quite interesting. So you've got the Copilot stuff, and you've got the Windows AI Studio tools as well for devs. So that'll be quite interesting.
Great. So everything's in the cloud and everything's got a copilot.
Exactly. There's lots of copilot stuff included for security as well, and I've been playing with Security Copilot. That's essentially your generative AI for security. If you get an incident that happens in your environment, and there might be a ransomware attack called, I don't know, Northwind 568 or whatever it might be (that's probably something that exists, isn't it?), it'll then tell you where the origin of that ransomware might be, and give you information about what it actually does. So it's like a guide for security teams. So that'll be really, really interesting when that comes into GA, because it does get quite complex
in the security area. There was also a lot around the tools for Dynamics: Copilot for Service, Copilot for Sales, for enterprises who might be using Dynamics. There was a whole heap of copilot automations around the Power Platform, which is the citizen development platform that Microsoft releases. So Power Automate has a whole heap of things around generating scripts, and generating documentation for governance. There was a whole raft of products now available around, you know, your supportive tools inside app development, but also the way you can use Copilot to create things for you as well. So there's a lot of stuff in the Power Platform which is quite exciting. There were so many announcements; we put the Book of News in the show notes here. Very, very exciting, right from the hardware right up to citizen development. So, you know, I'm looking forward to seeing these coming.
So, if I'm in the IT team, I should go and read the Book of News.
If I'm outside of the IT team, I should just add the word copilot onto anything I'm talking about. Okay. So, Dan, we've just done the whole two weeks of news about research and the Ignite stuff and all the developments there. We've been talking for about 20 minutes. So, we just need to go and check the internet, because there has been one other piece of news going on, which is that Sam Altman may or may not be CEO, or chair, or not CEO of OpenAI. I mean, in the last 20 minutes...
Fascinating. The thing that really intrigued me, and obviously there's been a lot happening in this space, so I'd like your thoughts on this, is the board of OpenAI. I suppose it's about the actual structure: the board of OpenAI is a not-for-profit board. It's a 501(c), I think they call it in the US, which is your kind of nonprofit entity, and it feels like there's some tension going on with that not-for-profit entity, but nobody really knows. There are so many things going on online about this. But the interesting thing for me was that there are six people on the board, and even just doing some research and trying to understand who those six people were and how that all works was quite interesting. What does that mean?
Well, it's interesting, because I've been doing the directors' courses recently, and it's all about the strategic way of thinking as a director, which has tended to stay detached from the execution, so that you're setting strategy. So I find it quite interesting that the board have done something that gets really, really close to execution. You know, normally they're working on much longer time scales. But perhaps they're not working on longer time scales, because just before we restarted, I saw the news story that they might be talking to Sam about coming back
as CEO, because I had thought of it as a Steve Jobs kind of thing, when he left Apple and then came back, what is it, a decade later, and saved the business. It's kind of like that. I hadn't expected it to be over the course of one weekend, though.
So, we've got to try and get this out on Monday just so that we're vaguely up to date. I don't think I'd realized that OpenAI is a not-for-profit that is focused on how to achieve AGI. So, kind of achieving that general intelligence is what they're going for as the nonprofit. So, I don't think I'd really understood that everything they're doing at the moment is a step towards reaching that general intelligence position.
Artificial general intelligence. That's what it's for, Dan. Oh my goodness, we'll have to go back and edit it so we don't look stupid. But, you know, that is what OpenAI is all about: how do they reach that general intelligence level? I think this is a little road bump on the way for everybody, right? Because it doesn't matter how big you are as a company, when things are moving so quickly, whether you're a school or a university or a commercial customer or a large not-for-profit, you know, you have to be very careful about the direction, like you're saying.
I suppose there are things in place, like you're saying for the courses you've done, where you do stay strategically at arm's length so you can make long-term decisions, and there's a lot going on very, very quickly with OpenAI specifically. I can't think of any company that has propelled so quickly and had such an impact. So these things do happen, and they have a ripple effect; they do send ripple effects down through the communities. But it does give us a bit of a thought-provoking pause to think, okay, where are we going with this technology?
Would you ever have believed that news like the CEO of an AI company being sacked would be the number three or number four story on BBC News or the Guardian? For that to make it into mainstream news is really fascinating, in such a short period of time.
Dan, there has been so much news. We have been covering two weeks' worth of news; that's why it's taken us so long. But my goodness, we'd better stop, because this is supposed to be a quick snap of the news. But the key for everybody would be: find the links, find the papers, find the news in the show notes. We definitely won't put anything in about OpenAI. Just open your favourite website to find out what's happening on that, because you'll be more up to date than us.