What is the most effective way to bring AI into the classroom?


[MUSIC PLAYING] LAURA MCBAIN: It is a
real honor and pleasure to be sharing the stage with
such experts in education, learning, and design. I have been scouring
their profiles and reading their
interesting publications and just looking
at the great work that these two, in
particular, have been doing to really disrupt
in the most beneficial way and provide really interesting
and equitable opportunities for young people
and for teachers. What does AI in the
classroom look like? Is it a boon? Is it a curse? Is it exciting? As educators and as designers and technologists, what really is the right way to bring these kinds of experiences into classrooms to make sure
that classrooms are not just serving the students who
are always being served, but how do we serve the students
who have been most marginalized by an educational system? And so how this is
going to roll tonight is that we’re going to
have a conversation. I’ll ask them some questions. And then there’s going to be
an opportunity for y’all to ask some questions of them. Before we get
started up here, it’s important to find
out who’s in the room. Who are we in this space? You made it here to
Stanford somehow. So you might be an alum. Congratulations for coming. But let’s just find
out, a quick raise of hands, just so we know
what your perspectives are and what your
experience is so that we know how to gauge what we’re
saying and what we’re sharing. So if you are an educator– doesn’t matter
which grade level– maybe raise your hand. If you’re someone that works
in technology– a technologist, if you will– raise your hand. Great. If you’re an alum of
Stanford, raise your hand. Wow. Congratulations. Good for you. You made it back to campus. And before we jump into the
conversation about what, I also want to get
a sense of how we’re feeling about this question. When we think about
artificial intelligence and what it might look like
or what it should or should not look like in a classroom, as designers, we really think that learning can be an evocative experience, and how our emotions are connected to what we’re learning is really important.
the raise of their hands, feel optimistic about the
use of AI in classrooms? Raise your hand. By another raise of
hands, how many people are scared about the use
of AI in the classrooms? Great. So there’s a lot of optimism. There’s a few
pessimists in the room. That’s great. That’s always good to
have a balance of both. And then before we jump in,
as educators on the stage, we are going to
have an opportunity to connect with one another. And as someone who’s been a
teacher myself, sometimes when we sit at panels,
we actually don’t get to talk to the
person next to us. We show up, and we listen,
but we don’t actually engage with the
person next to us. And so in order to kind
of set your learning goal for this panel, I would love it
if you could take two minutes– maybe a minute each– to turn to the
person next to you and ask, why did you show up? Why are you here aside from the free food? What else are you here for? And what do you hope to learn? So take two minutes. [CHATTER] These are the educators. They see the hand up, and they know what that means. So thank you all. I really hope that
as you connect back with the person next
to you or around you, and you think about
why you showed up, really this is your question. We’re engaging in
a conversation. We’re going to have
thoughts and opinions. And we all have our own
thoughts and opinions that will emerge throughout
this conversation. So I hope that you
revisit at the end with the same person
about what you learned, the metacognition of this event. So let’s make sure we
spend a little bit of time at the end connecting back. So we’re going to kick
it off not in the future, as AI is often known
as the future– today, there was an
article for those of you who read The New York Times,
about a company out of Seattle that was able to– I think it’s called Aristo– that created an artificial intelligence that was successfully able to pass the eighth-grade science test– it was multiple choice– at a high success rate. And so this is
happening all over. What does this mean? I read it today. I’m like, whoa, what does this
mean for the future of assessment? What does it mean for
how we measure content? It’s happening. And they only started working
on this a couple of years ago. So the ramifications
of this are quick. So before we get
into this question, I want to take us back a
little bit as educators and think about– I’m going to pose to you both– thinking about a technology
that you used in the past. So what technology, when it
was first introduced to you, struck you as problematic,
but you eventually found it successful or
useful for learning? JAMES LANDAY: So I’d have to be old enough to say whiteboards or something. [LAUGHTER] I might look that old, though. My former TAs here will
say many classrooms in the quad don’t
have whiteboards yet, believe it or not. So, I mean, as a teacher,
computers and phones or whatever in the classroom
are always an issue. And many of us flip
back and forth. Is it that you’re not exciting enough as a teacher or a lecturer, so someone needs to be checking Facebook or Reddit or whatever it is these days? And so I’d say that’s always been, for me, an issue of how we should deal with that. Some people ban them. Some people say,
oh, in the back you can use them because maybe you
have a learning need for it. More recently, I’ve
been swapping out ideas of how do we use, let’s
say, interactive polling like Poll Everywhere
or other products that add a little
more interaction. But I think it really depends
on the size of the course. I teach courses with 150, 200 people. And doing anything that can get
people involved is important. So even a laptop,
I’d say, is one. KARIN FORSSELL: I went software
in my thinking rather than hardware. Online grade books–
when they first came out, there was a lot of
discussion about whether or not we should put
assignments and grades and all that stuff up there so that
parents could be watching minute by minute to
hound their students and see if they’d
turned everything in and what their grade was and
all that kind of thing– a lot of worry about that
from the teacher side and from some parents as well. It’s really interesting to see
how the culture has shifted. Now, it’s written into
the teacher contract. It’s like, you shall use
the online grade books. This is the one that
you have to post to. And you have to do
it every six weeks. And so the first
deadline is coming up because the school
year already started, and they’ve gone
that far already. And it’s interesting to watch
how the grade books and what they can do actually
has been driving some of the education we’ve
done with teachers about how to grade. And it’s uncovered all these
really interesting questions about, should you
really factor in a zero? What does it mean? It means you don’t know. It doesn’t mean that they
didn’t get any points. How do you handle
missing assignments? How do you handle– should you average? What if they were learning
across the course and so on? So there’s a lot
of discussions now that are happening on the
teacher side about how you should grade because of the
features in the online grade books. JAMES LANDAY: Or even
how it misleads people. KARIN FORSSELL: Oh, yeah. JAMES LANDAY: I know teachers
who use mastery grading. And their scale is
0 to 3 or something. But then it gets
converted to percentages. And they’re like, my
kid’s getting 50%. KARIN FORSSELL: A 75. JAMES LANDAY: What happened? KARIN FORSSELL: Yeah, exactly,
there are some of those issues that we’re working
through still.
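A quick sketch makes the arithmetic behind this exchange concrete: averaging in a zero for a missing assignment drags a grade down far more than any other single score, and a mastery rubric such as 0 to 3 maps awkwardly onto percentages. The numbers below are hypothetical, purely for illustration.

```python
# Hypothetical grade-book arithmetic, for illustration only.

scores = [90, 85, 88, 92]            # four completed assignments (percent)
with_zero = scores + [0]             # one missing assignment entered as a zero

avg_without = sum(scores) / len(scores)          # 88.75
avg_with_zero = sum(with_zero) / len(with_zero)  # 71.0: one zero turns a B+ into a C-

print(f"without the zero: {avg_without:.2f}")
print(f"with the zero:    {avg_with_zero:.2f}")

# Mastery grading on a 0-3 rubric, naively converted to a percentage:
mastery = 2                          # "meets the standard"
print(f"mastery 2/3 shown as a percent: {mastery / 3 * 100:.0f}%")  # 67%, which reads like a D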
someone who used an online grade book myself as an
educator, I couldn’t remember how– again, the zero
was like, do I put a zero? Do I add a one? What’s the scale I want to use? And then the student’s
constantly updating it. And also how many grades
could I put in there? And what grades? How many grades
should I put in there? [INAUDIBLE] 100, 200? And then it gets to
this multiplicity of how many grades really matter
if we’re learning for just growth and really deep content? But this question– and I’m glad
we got started with this one– because I think, Karin, what you suggested– and also James, you– is that the technology changes the way we teach. The pedagogical factors
that you describe– not only in just the
assessment, but also how we use a whiteboard–
what does it make us think about how we interact
or how we design learning experiences with students? And these technologies,
whether we like AI or not, there might be some potential– and we’ll just
discuss that tonight– about how does it
change the way we think about teaching and learning? And so, for us,
one of the things that we are interested
in is there’s just so much out there on AI– on Twitter and Reddit, as you said– and it’s sometimes hard to know what exactly it looks like. We talk about it. There are a lot of articles. There are fantastic developments. But we thought we would
really start our conversation with an example that James has
provided of one of his products to give us a look at what it could look like. So it will ground us in a shared experience of what AI might or could
look like from a student’s perspective. And so our tech crew in the back
is going to cue it up for us. Before they do that,
James, could you give us an overview of a little
bit of what we’re about to see? JAMES LANDAY: Right. So I do research in
human-computer interaction. And one of the techniques
we start with often is instead of just building a
technology for several years, we do design
techniques, including creating a video, a
concept video, a way for us to work through
the ideas and also be able to put it in a way
we can show other people and get feedback. So the concept video
that we’re going to see was made by three
Stanford undergraduates during the summer internship
three or four years ago. And it was in response
to a project idea that I was trying to
get off the ground. I was inspired by Neal
Stephenson’s novel The Diamond Age,
otherwise known as, Or, A Young Lady’s
Illustrated Primer. And in The Diamond
Age, this girl has this tablet-like
computer that is in the form of
a narrative that is a tutor to her
over several years. And when I first
read the novel when I was in graduate
school in 1995, we thought, wow, that would
be so great as a tutor, but that’s impossible. That’s what we would at
the time call AI-complete. Today, somebody would say it requires AGI. Roll forward all the way to
today’s tablet computers– the iPad came out– and we
started to think, oh, maybe you could build such a thing. And then we really started
the project seriously in the last two to three years. And this was one of
the kick-off artifacts. So it represents what we thought
maybe part of the experience would be for the child. You don’t see
what’s on the thing. It’s more of how
might you use it and how might we get
kids doing learning activities that are
embedded in a narrative outside of the home. Because as a parent,
the last thing I want is just another thing
where my kids are sitting behind a screen in the house. So this was our vision. It has changed over time. But this is kind
of where we were thinking about three years ago. LAURA MCBAIN: So I wanted us to
see this video because it gives a contextual place
setting for all of us of what it could
or might look like. And so I want to direct
a question to Karin because, Karin, you obviously run and direct the LDT
program here at Stanford, but you also work as a teacher. You’re in Palo Alto schools. And so when you
look at this video– and I say putting your
teacher hat on, if you will– what excites you
when you see this? Or, really, what
barriers do you see when you watch an example of this? KARIN FORSSELL: Well,
first of all, can I just say how awesome it is when
science fiction comes to life? That’s really cool. I mean, today– JAMES LANDAY: Not quite yet. KARIN FORSSELL:
Well, more and more, we’ve got communicators, right? And we’ve got replicators– 3D printers and printers,
for that matter, you can send things wirelessly. So it’s nice to see
that it’s coming. I really get excited. One of the technologies
that is in here that is so usable already
right now is augmented reality. It’s not that hard
to download an app and create an experience
where you basically create an overlay on
some physical experience and teach somebody and
give it depth, which you know I think of
that as helping students to become more expert. Experts notice things
that novices don’t. So you can see
things in new ways. So that’s super exciting. And I love that
this is a vision. And I can’t wait to
hear what actually works and what stumbling
blocks have been. Oh, another thing that I
really liked seeing in here was getting the use of
computers away from answers. Computers are really
good at checking answers. So we have lots
and lots of ed tech that just says,
yes, you’ve got it. Yay. But what I’m seeing in here
is more posing challenges. It’s like pushing kids
to think about things in their own heads, getting
it out of the computer, as you were saying,
James, out of the screen and into the world. That’s fantastic. I do worry a little bit
about framing regular school as detention, which, I
mean, I don’t see that when I go into regular classrooms. I know it’s sometimes not
as exciting as we’d like. All the teachers I know
want school to be engaging. So that’s awesome. But teachers are really where– the biggest question
that this raised for me was where are the teachers? Who’s writing that algorithm
that prompts things? And we can go there a little bit
in this conversation about all the questions of,
who’s designing the experience for whom? LAURA MCBAIN: Great. I mean, that is kind
of our next question. I think it goes to– and I wrote down,
does it belong? Does AI belong in the classroom? And I think you just
led to the question. It’s like, how does it? And who designs AI with students
and for learning experiences? And so, I mean, do you
want to jump in on that? Because I think that’s
obviously right up your alley as a learning
experience designer. KARIN FORSSELL: Well, I
think there’s no one answer. I think that the answer is
to always pose the question. Who did design this
particular experience? It’s not something
that just exists. Computers do not
invent themselves. So whenever you have a
curriculum, somebody wrote it. And there are values
embedded in that. And you want to just
acknowledge that and say, OK, are these my values and think
about who was designing this and who were they
designing it for? JAMES LANDAY: So, I mean, for
the thing we’re building now, I’d say we’ve
collaborated with people in [? GSC. ?] In fact,
there’s an LDT student here in the crowd who has helped with
some of those exercises that are in our current version. But to me, that’s a
research question also. How do you make
the tools that are going to allow curriculum
specialists or teachers to be able to do this without having
to understand the pretty big lift of the technology? And that’s still
where the research is. So one thing you don’t see
in that video is my vision for this has been,
oh, this is something a child might do outside
of the regular school hours to get people maybe more
engaged in learning, who maybe are turned off by
the traditional school system. And one of the problems
there is, well, what happens when
you get stuck when you’re doing one of these
learning challenges? You don’t have the teacher. Maybe you’re don’t even
have your parents around. And so one thing
we’ve been working on is chat bots that can help the
student when they get stuck. But that’s a huge
research question. How do we make a chat bot that
could understand the content that you’re learning right
now and how to get unstuck without writing– KARIN FORSSELL: In that context. JAMES LANDAY: –hundreds
of rules in that context. So there’s a lot of hard
PhD-level questions there. But back to the
original question of AI in the classroom,
to me, that wasn’t my goal for this project. But if I want to step back, I
would say that’s like asking, does water belong
in the classroom? Does electricity belong
in the classroom? Anyone’s going to say,
yes, as long as it’s being used properly, for something where people are going to learn. If you use it
improperly, it’s a waste. And right now, AI,
although the field has existed for 50 or 60 years,
it’s just starting to work. And so it’s going to
take some time for people to figure out what the good
uses are just like computers in the classroom, which
has probably been happening for 30 or 40 years. And as a parent
and PhD, I’m still unsure if they know what
they’re doing putting computers in the classroom. I almost feel like,
no, we don’t need that. KARIN FORSSELL: We’ve
had this conversation. You can keep going. JAMES LANDAY: So I feel
like, yes, it belongs. But you better find
the right applications. And it should probably
be in a way that’s going to somehow offload the
teacher from something that could allow them to give more
help to people who need it, when they need it. LAURA MCBAIN: It
begs the question– and I think there’s an addendum
to this question now– does AI belong in the classroom,
and under what conditions? And so I’m curious. Thinking about the schools
that you all have worked in, what are those conditions? You highlighted a few,
James, around thinking about: is it proper? When do we not use it? What are the conditions of
a really great classroom? And when do you see that? So, Karin, from
your perspective, what does it look like for you? KARIN FORSSELL: I was going
in a different direction when I was listening
to the question. It was like not
necessarily– well, I’ll come back to the
conditions in the classroom, but one of the things that
people gloss over a lot when they talk about technology
in education is, does it actually work? LAURA MCBAIN: That’s
a great question. KARIN FORSSELL: Because
there are so many things that people are going,
wouldn’t it be cool if–? And it’s like, yeah, it would. If 100% of my kids had
access to XYZ all the time or in class or out of
class, or if I as a teacher had access to a cart
of one-to-one computers at any moment that I
needed it in my classroom, I could do some
really great lessons. I could pull them
out just as needed. That might be five
minutes out of my day, but it’d be the
critical five minutes. But then you get into the
reality of wait a second, OK, so is that actually the best way
to deploy a bunch of computers at x hundreds or
thousands of dollars each? Is that ideal? We start to have the kids take
them home because actually that’s a really powerful move. Now do they all have access? Do they all have
the same devices? How do we manage that? It gets really
complicated really fast. So a friend of mine
once said, teachers will use technology
in the classroom when they’re as
reliable as the lights. Well, I mean, we manage. Most of the time, we don’t
have the lights go out. Electricity is pretty darn
stable in our classrooms. And we have some conditions. We have backup plans
for those unusual times when the lights go out. But we cannot be planning two
lessons for every class just in case the technology
doesn’t work. So that’s one answer
to your question. It’s a very different one. What are the other
answers to the question? You know what? Technology can support
any kind of teaching. It can even support
bad teaching. It amplifies. It transforms. It replicates. And so what I’d
like to see more of is technology that has a very
intentional philosophy built into it and that
the teachers are helped to understand what that
philosophy is and possibly even taught along the way. If you think about it, teachers
finish a teacher ed program and then they go out to teach. And they get– what– maybe
three days of in-servicing after that every year? How do teachers learn? Well, they seek things out,
and it’s all over the place. But if they’re working on these
devices and these systems– the online grade book I
mentioned– the online grade book could prompt
information about, hey, are you sure you want to
calculate a zero because here are some of the pros and cons. So we could actually do a lot of
teaching through these devices. LAURA MCBAIN: One of the
things that you both mentioned early in this
conversation is thinking about access and equity. And I think part of one of
my big fears in particular about AI is thinking
about how inequitably accessible it is within schools. And obviously there are schools that have lots of technology and access. And that is just rampant. And yet, one of
my big fears about AI, and I would say a lot
of emerging technology, is that it’s so inequitably
available across– as you said, if you can’t turn
the technology– I was one of those
teachers, Karin, that I had three
lesson plans when I taught one year because I was
like, do the computers work? Does the technology work? And I had A, B, and
C lesson plans when I started teaching at a school. KARIN FORSSELL: Too much work. LAURA MCBAIN: I had to. KARIN FORSSELL: Too much work. LAURA MCBAIN: It
was too much work. And so, I mean, regardless
of the tech conditions in the school, I think that’s
my biggest fear on this, is how do we ensure that the
folks who already have access to this type of technology– that this isn’t widening the gap of inequity for students? And I’m curious, how do
you all respond to that? What are your thoughts
on how this can really reduce inequities
that are already pretty rampant in our
educational system? JAMES LANDAY: Well, I mean,
I think any technology, whether it’s AI or computers
or anything that costs money, if one set of
people have it, it’s going to increase inequity. And so you want to pay
attention to funding, how we can make sure other
schools have their funding. Even at a place like
here in Palo Alto, where we have the parents give all
this money to [? Pi ?] and it all goes to Palo Alto. Maybe they should give
half of it to Redwood City and some other places– that might be a nice idea. But I think something
like AI, well, we need to step back and
go, what’s it good for? It’s good at finding patterns
in large amounts of data. So what could we use that for? For example, it could help
us notice a student who’s starting to slide
across several courses because something’s
going on in their life. And maybe today
we don’t as easily diagnose that early on when
we could maybe help them.
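At its simplest, that kind of early-warning check is just a trend scan over per-course scores. The sketch below is a hypothetical illustration of the idea, not a description of any real system; the data, names, and threshold are invented.

```python
# Hypothetical early-warning sketch: flag students whose recent scores have
# dropped in several courses at once, so a person can check in early.
# All data and thresholds are invented, for illustration only.

grades = {
    "student_a": {"math": [88, 85, 70], "english": [92, 80, 74], "science": [85, 84, 83]},
    "student_b": {"math": [75, 78, 80], "english": [70, 72, 71], "science": [80, 82, 85]},
}

def is_sliding(scores, drop=8):
    """True if the latest score is well below the average of the earlier ones."""
    earlier = scores[:-1]
    return sum(earlier) / len(earlier) - scores[-1] >= drop

for student, courses in grades.items():
    declining = [name for name, s in courses.items() if is_sliding(s)]
    if len(declining) >= 2:  # a dip in several courses at once is the signal
        print(f"{student}: worth a check-in, declining in {', '.join(declining)}")
```

A real system would use far richer data and an actual model, but the point James makes holds either way: the pattern-finding is only useful if a teacher or counselor follows up with the student.

So we might also think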
about where would we use this in a positive
way to close inequities for people who are more likely
to have these things befalling them in their education life. So it’s like any tool–
what we design it for, what it’s going to be good for. So if we care about
inequity, we can think about where might we use AI to
help and focus from that. So what’s the problem
we’re trying to solve? Otherwise, AI’s just too
generic to even mean anything. KARIN FORSSELL: Yes, and– yes, I mean, what he said. There are some good examples that exist out there. I mean, there are tools that
can predict when you’re going to have a lot
of kids absent because of massive amounts
of attendance data. And, wow, this time
of year you usually have a flu coming
through, and you can plan for things like that. There’s a company called TeachFX
that does voice processing. A teacher can
record their lesson, and it will code it for teacher
talk, student talk, or group talk, or silence. And one of the
things that I really like about this particular tool
is that it actually is not– the data belongs to the teacher, not to the district. So it’s not used for evaluation. But teachers can look at the
patterns in their own classes and see, wow, I have
a goal in this class to have kids talking more. Am I doing it? And so it’s pretty impressive
how some of the data can provide information to
help support better teaching. I would say one thing that I’m
quite concerned about as far as AI and equity is making sure that we have a diverse group of designers who each bring their own lived experience into the process of designing the tools so
we don’t miss some really obvious opportunities and
complications because we’ve seen that happen in
a lot of other places where tools were designed. JAMES LANDAY: Yeah,
I mean, I think that question about
diversity in the design team also applies to
diversity in the data. So we all know about
bias in algorithms. And that’s one place
where if we were to develop some AI-based
tool and we just did it in Palo Alto, that’s
not necessarily going to work well for a lot
of different school districts. So making sure that as
we turn these things into real technologies,
they’re not just trained on the elite people
who maybe could afford them at the beginning and
therefore they’re somehow substandard
for the rest of us who are trying to apply them. So thinking about that. Again, I think
these are problems where people doing AI have stepped into a lot of potholes over the last five years. But also they’re
getting educated because big mistakes
made by companies like Google and Facebook
get a lot of headlines. And so now people are starting
to work on these issues. Some of them are design issues. Some are technology issues. And some are just having empathy
for who you’re designing for. LAURA MCBAIN: In about
five or 10 minutes, we’re going to jump
into your questions, but I have a couple more
questions for you all. As you just mentioned,
I think part of changing and diversifying
the workforce who are designing AI starts early. I’m thinking about young people
that are in classrooms today who will be the designers of
tomorrow if not today already. And so, James, I’m curious,
because you’ve had all these really cool tools, how would
you imagine helping teachers– we have a lot of
educators in this space– train and teach
their young people about AI today in
their classroom? What would a lesson look like? What would that look like so
they can explore not only how it works, but then also the
ethical and equitable things so they can wrestle
with the challenges that we’re all wrestling
with right now on this stage today in a classroom
to get at not just computational thinking, but
all the ethics that come up when you’re really designing
these really technical tools? JAMES LANDAY: So I
would say there’s not really consensus in
the field of when certain topics in
computing should be taught. If you live in a place
like, again, Palo Alto, people think their kids
should be programming at age four or five. And I’m feeling like
that’s a little ridiculous. There’s other things
they could be learning. But what’s the right
time is unclear. People are like, oh, we got
to push AI into eighth grade. I’m like, they could
get it in college. It’s OK. But this idea of computational
thinking and trying to understand how does
computing, how does data, how does it affect
us as a society, those are the kind
of topics that I do think we want to push
earlier into whether it’s middle school or high school. And I think there’s been a
real step forward in that. For example, the new AP
computer science course that’s more about the
applications of computing rather than the traditional
just learn to program Java or whatever I think
actually gets into more of these societal issues. And I don’t know
if it’s perfect. But I think that’s the
direction of the kind of things you want to make
sure that somebody at the middle school or high
school started to think about, not necessarily just programming
per se because just programming won’t get you into a lot
of these kind of issues at that age. KARIN FORSSELL: I
would say that this is my one plug for where I wish
the curriculum were different. I wish we studied more
probability and statistics in school because
I think that those lay a foundation on the
whole idea of the assumptions we make that go
into data analysis, and that we might
be able to spot some faulty arguments and
some other things like that better if we had more
in that direction. JAMES LANDAY: Yeah, I
mean, I’ve been of the mind that this may be
sacrilege saying this in the School of
Engineering at Stanford, but that it might be more
important that kids take statistics and linear
algebra rather than calculus in high school just because
these are the technologies AI is based on. And I’m not sure you’re going to become a mechanical engineer– there are still some of those folks out there. All these computer
scientists, that’s not the key topic for them. And we’re still teaching
what they needed to put rockets on the moon. LAURA MCBAIN: So I
have one more question, and then we’re going to turn
to y’all and [INAUDIBLE] questions. There’s educators and
technologists in this room. And you guys are raising some really interesting stuff around computational thinking, [INAUDIBLE] and statistics. And as designers, all of us like to wrestle with questions. And so for the educators
in this room– and this is for you, Karin,
what questions do we need to be asking around
this topic in our schools, with our principals,
in our classrooms? What questions do we
need to be asking? Or, what do we need to be doing
to answer our own questions? And then the same
thing for you, James, as a technologist,
what questions that we need to be taking on
to really answer for ourselves what’s going on in this area? KARIN FORSSELL: In
the field of AI? So I have a standard
set of questions that I like to ask about
any kind of technology in a learning environment. And I don’t mean
this just in schools. And it gets to the
heart of what you were saying at the very beginning. You mentioned disruption. That’s the fun word that lots
of people like to throw around. And ultimately, the question
I had about disrupting is who is not being served
by the current system? So a lot of us actually
got to university with the current system. It worked OK for us. But there are lots of
people for whom this is not working, whether they
have special needs or didn’t have
access that we had or a number of other
possible things. So the questions
that I like to ask are: who are you
designing this for? Whose problem is it? What do you know about
them and their context and what’s really hard? What are they needing to learn? What are their goals? What do they want to accomplish? And what are the
barriers to that? What approach is likely
to help with that? And so going to the
learning sciences literature and understanding what we do
know about how people learn and how the technology can actually help do that. So it’s a
problem-focused approach of trying to understand
what the pain points are before we
start throwing technology at the problem. JAMES LANDAY: So
I think it’s more of a general answer in that any
kind of technology or lesson that we bring into
the classroom, we need to think about
what is the learning goal. What are we trying to achieve? And whether it’s
just some new way of doing content
or technology, it has to come back to that
fundamental question and derive from there. So I think that’s not different. And so we need to make sure
that at least those of us who are parents demand that
of the schools when they say, hey we’re giving
everyone a Chromebook. KARIN FORSSELL: Why? JAMES LANDAY: Why? So that they can
learn Google Docs? I think my kids
can learn that OK. I don’t think they
need that in school. So you have to tell
me why they need it and how you are going to use it and how you are going
with those learning goals and is used appropriately. And same thing if
we bring in AI, why? Not just because, hey,
AI is the cool thing and clearly it must be good. We need to come back to,
what’s it going to get us and is it worth it? KARIN FORSSELL: Because you’re
never going to be able to prove it works when you don’t
have a particular outcome metric that you can measure. There’s lots of noise. When you’re doing
education research, there are so many
different variables going on in any
particular– it’s hard to do an isolated anything,
definitely not in the wild. Randomized control
trials– oh, my goodness. I mean, you’re
not going to prove that things work that way. Are you smiling? JAMES LANDAY: Well,
I feel like I’ve gone on and off in my career
over 25 years of dabbling in educational technology. And I usually quit because it’s
too hard to actually evaluate whether it works. I mean, it’s
really, really hard. And so– KARIN FORSSELL: Especially
something like a Chromebook. JAMES LANDAY: I’ve said,
ah, I’ve had enough of that. KARIN FORSSELL: A
Chromebook is not designed to teach you something. It’s a platform. It’s agnostic as to content. In the LDT program, we tend to
design for a particular content outcome. And therefore we have much
smaller bite-sized pieces. LAURA MCBAIN: As a school leader– I was a principal and teacher– I was always thinking about
technology for not consumption, but for creation. How are my students using it? What are we creating with
it, and through the process of using the tool
to create something, we wrestle with the problems. My hope, I think,
for AI is how do we get students to
create something that allows us to wrestle with
it and play with something that never existed before. And I think there’s
some beauty in that, potentially allowing
students to play and create using these emerging
technologies that allows them to think
differently about problems. So I want to kick it. We have all these wonderful
people in this space. And I know people have questions
that they’re wrestling with. And I think we have a handful
of folks walking around with a microphone. And so I am going to ask for
folks when they do stand up– and if you have a card
that you want to pass in, those will get to us. But we’ll start up
here in the front. And if you could also
just really enunciate so we all can hear,
that would be amazing. AUDIENCE: I may have missed
something very important. This is a question for
Professor Landay, I think. Your film shows a
little girl wandering around having all kinds
of exciting experiences, in the bamboo, meeting a
guy, and all that stuff with the idea that
can we recreate that by programming and by an AI program so she doesn’t have to walk around and she can
have those same experiences? Or do I miss the point? What was the point of the film? JAMES LANDAY: So the
point of the film was trying to show the
experience that she would have with such a system. So that system is
actually a story she’s reading that
is encouraging her to go do those things
out in the real world as part of the story. So when she’s reading
the story, she maybe has some challenge about
finding a group of trees and writing about it or finding
there’s a wall if you navigate 100 yards in this direction and
learning about the earthquake. So that story is
actually driving her to go out and
do these learning activities in her world. AUDIENCE: What’s the connection
with artificial intelligence? JAMES LANDAY: So the
artificial intelligence is on several aspects. So one is that story
is customized to her. The story that this
child would have, the story this child
would have, and the story this child would have
is totally different based on where they live
and what’s going on there. The problems that
are posed to them are going to be different based
on what their current learning level is. For example, if there’s a
math question versus writing or science is going to be
based on what the AI system has learned about
their current level and what’s the most
appropriate kind of question to show them next. And then finally, when
they’re doing a learning task and they get stuck,
how could they ask questions of the system
to help them get through that? That requires a tutoring
agent, an AI agent, to help them get unstuck. So AI is infused in several
parts of this story. AUDIENCE: Hello? Yes. So you said something very
interesting, one of you guys. Are you familiar with a book
called The Mismeasure of Man by Stephen Jay Gould? It’s a critique about the
way IQ tests were designed. And it resonated very
highly with what you said. To keep the story short,
there’s something called factor analysis in statistics. And two people looking
at the same data using different
biased assumptions came up with two completely
different explanations about the nature
of intelligence. A British guy
imposed this notion of unilinear IQ that was
used in the British system for many years. The American guy came up
with a different thing. How do you set up
some sort of watchdog as you use data so
that doesn’t happen? KARIN FORSSELL: It’s
called peer review. AUDIENCE: And my background
is in signal processing. I got a PhD from [INAUDIBLE]. So that’s where the
background comes from. Thank you. KARIN FORSSELL: I
call that peer review. [LAUGHTER] I mean this is why we share,
or this is why we publish, is so that other people can look
at our assumptions and do it a different way. And one of the
neat things that’s happening now more
and more in research is that people are
publishing the whole data set so that multiple people
can do their own analyses on the same data
set, and we don’t have to replicate
all that messy data. But, yeah, it’s hard because
there’s no guarantee you got it right the first time. And there are lots
of assumptions that are embedded in any
research analysis, data analysis that you do so
you have to sort of put those assumptions in there
and then put them out for scrutiny and debate them. JAMES LANDAY: I would
also say different fields have different levels of when
something’s thought to be true. Fields like computing
and my field, I feel like you can get away
with stuff with one paper. On the other hand, if you’re
in, let’s say, astrophysics and you wanted to claim
some new phenomenon, it requires multiple
different analyses from different instruments
all saying the same thing before it’s true. So that field’s more mature,
and we need more maturity in other fields. And that includes replication
and open data, et cetera. And I’d say we’re moving
there, but we’re not yet to astrophysics level
in a lot of these areas. LAURA MCBAIN: I’m going
to poll on this question because I think it’s
a follow-up question. How do you think AI
can mitigate biases within the current
education system? Is that even possible? KARIN FORSSELL: Can
you have an algorithm that checks algorithms? JAMES LANDAY: Yeah. KARIN FORSSELL: That
would be really cool. [LAUGHTER] JAMES LANDAY: Whether
it’ll halt, I don’t know. So, again, algorithms can have
bias if the data is biased, but we can also try
to look for that. And there’s lots of great
research going on right here at Stanford with
people looking at how to remove algorithmic bias. And so you there are ways
that that’s going to change. And we could use AI to find
things that look out of whack and look into it. I mean, this also
gets into what’s AI versus what’s data science. So some of these things are
just purely, we have data, and we want to
analyze it and look for either positive or negative
trends that are going on. It’s just a tool. AUDIENCE: Following up on the
question about the transition time and what we should be
doing to prepare students, you talked a lot
about how students might be able to get the skills
to be behind designing AI. But if we know that
most people probably won’t be in that position and
most will be experiencing AI, and we have one
generation– let’s say 30 years– before artificial
general intelligence comes in, what do you think
we should be doing or could be doing to prepare
students to be interacting with AI at this point? JAMES LANDAY: I think that
assumption of 30 years to AGI is a big assumption, but it’s
as good as anyone else’s guess. It could be 100 or never. But I think, I mean, it’s like
what everyone’s been preaching over the last 20 years. You’re going to
have multiple jobs. And you have to be flexible. And we need to teach
people to do that. In terms of this, I think
understanding data– so statistics,
understanding data science, being able to analyze data. Those are skills that
people are going to need. And people will have
more of those skills. Then with AI,
people need to feed it. People need to train it. It’s not going to do
everything by itself. So I think the skills that– if we asked everyone here
what are the important skills at being successful
in a job are still going to be skills–
being able to communicate, being able to write,
being able to get along with others on your team. In the end, those might
still be the key skills. KARIN FORSSELL: Being able to
think critically, recognizing that things are designed. And it’s not just AI. I mean everything we
interact with is designed. And so having that sort of
just foundational understanding might help. LAURA MCBAIN: Both of you
mentioned data science, which I think is fascinating. We’ve been talking about AI, but
I also think that understanding computational
thinking– and there’s a question here about
NGSS, which is the Next Generation Science Standards. And so to what
extent do you see– and I’m going to
zoom up a little bit. We’ve been talking a
lot about classroom, but I’m going to
zoom up a little bit to a little higher
level on policy and state-level standards. Do you see data science
and computational thinking and this type of thinking coming
into what we call the classroom standards? Do you envision that
happening anytime soon for us as educators who are trying
to be the early adopters in education? KARIN FORSSELL:
So I’ll tell you, I taught middle school
computer programming and video production and web design
and broadcast media and a few other
things like that. It’s great. Teaching electives is awesome. I tell kids that that’s
what you do in colleges. You take only electives. The challenge for me was
that there were no standards. And this was a decade
or two ago when that was the big new thing. And so a lot of
other organizations have come in to
fill in the gaps. So there’s the International
Society for Technology in Education– ISTE. And they’ve developed standards
and that kind of thing. How likely are those
to become core? I mean, science, English,
math, history– those are the four core subjects. And I don’t really see computer
science or computational thinking becoming the fifth. I think it’s more
likely that it becomes part of those, in different places. Bits and pieces become part of those subjects. So I don’t think
we’re going to have a curriculum or a whole new
standards sort of thing. Yes, for those people who want
to use it as an optional extra, but for it to be more
central in our curriculum, it’s going to be, we’re going
to pick up some of this in math, and some of this in
science, and maybe English and social studies as well. My guess. JAMES LANDAY: I think it
could come in, but if it does, it’s going to be so late– it’s going to be an 11th or
12th grade– that it won’t impact all those other classes. So I do agree that
I think there will be a segment in
your science class where instead of
doing some experiment, you’ll get a data set and
talk about how to analyze it. KARIN FORSSELL: Right. Which is what just
happened, right? Next Generation Science
Standards have just been– there’s a whole engineering
unit in science classes now. JAMES LANDAY: And in math,
there will be a similar thing. So I see those units
being spread out. But then I think the bigger
change is going to be college. Everyone’s going to have this
kind of data science course. And they’re going to use it not
like computer programming where we think, oh, well, some
people need data science. It’s going to be used in
almost every other field. And so that’s why it’s going
to become a [INAUDIBLE]. KARIN FORSSELL: Everybody
has to at least understand what it means. LAURA MCBAIN: Questions? Couple more. I see a lot of questions. AUDIENCE: Teaching the
correct path to a solution is really fundamental to
engineering education. And so here’s why I’m a
little bit scared about AI. I mean, it comes up with
some great solutions. But we have no idea
how it got there. And so our students,
we want them– if you’re considering
reliability, if you’re considering all
sorts of social implications or whatever, there is a
correct path that we want the students to follow to get there. And you give a bunch
of data, it can come up with some really
amazing solutions, beat anybody at chess. But how did it do it? And so I think I’m
concerned that AI is incompatible with an
engineering education because we don’t know
how it got from A to B. And getting from A
to B is fundamental. If the student makes
a miscalculation, at least if they
followed the right path, they’re on the right path. So could you comment on that? That’s frightens me a
little bit about AI. JAMES LANDAY: So there
was that article today that you mentioned
in The New York Times where they showed at the Allen
Institute for AI in Seattle that they could have– this
AI-trained system could get, I think, 90% of the answers
correct for an eighth grade science exam. And it was based on this
natural language processing model called BERT that
came out of Google just a couple of years ago that’s
all the rage because Google has so many computers there, they’ve
trained it on massive data sets, and this language
model works well. Now, that system got 90%, but
it doesn’t know the answer. It has so much
language in it that it could get to the right thing. So you couldn’t
ask it, well, why did you say that the magnet
was the thing you needed to pull the iron out of
the rest of this material or whatever? LAURA MCBAIN: [INAUDIBLE] JAMES LANDAY: But
the key here is we need to distinguish between
AI, which is this idea that we can make intelligent systems
that have some autonomous behavior, and deep
learning, which is a particular algorithm that
is all the rage because it’s working really well on a lot of
these classification problems. Deep learning is what’s
made everything work so well over the last five years. But deep learning is never
going to get you your AGI. There has to be a lot more
progress in algorithms and AI before we’re going
to have systems that could show you the steps
and know why you did it wrong. That’s why there’s still a
lot of research here in AI. This is why we created this
new institute at Stanford for human-centered AI– because we think there has to
be really new algorithms that are going to push us forward. So deep learning is
not going to do it. And if it’s just deep
learning, then it’s great for fixing a
lot of our businesses and finding patterns
in a lot of this data. Great– there’s a lot of
money to be made there. But it’s not going to solve
the big huge challenges that are still out there. And that’s why there’s still
a lot of research to be done. AUDIENCE: Hi. I have a related question
about transparency. So as mentioned earlier
about the online grade books and that there were
challenges of things like if someone was using a
different scale of mastery, what did 75% mean? And my question is what is
being done to help the parents and the teachers who are
using these systems that are powered by AI to
understand, if not, again, back to the
question of what is really in the algorithm, how
exactly was this determined, understanding what does
it inherently mean? How do I use it? And can I trust it? KARIN FORSSELL: So
let me be clear that I wasn’t saying that the
grade books had AI in them. Right. OK, just to be clear because
that would be very different. I think in that example,
the teachers are still making the decisions, and they
are still communicating them to the parents. But I do think there is some
really interesting question there about trust. Why do we trust? I think about this all the time. Why do I trust when I’m driving
down the street that nobody is going to come over into
my lane and run into my car? There are rules. And we all know
what those rules are even though we know some
people break them sometimes. And I think that the
publication of those rules is a very important
piece for us to trust that somebody is checking it. I’d like to believe that
there’s a police somewhere. I don’t know exactly who
that is that’s policing all these algorithms
that are messing with all sorts of things. I mean, there are algorithms
that are messing with the stock market, right? I’d like to believe that
there is some organization– and maybe there
is; you tell me– that is checking on these
things and saying, hey, that one’s not cool. JAMES LANDAY: I mean,
I don’t think there is. And I think one
big difference is when we have public
infrastructure like the roads, then we have some
government agency that’s responsible for enforcing laws. Now we have this huge amount
of private infrastructure that’s controlled by people
like Facebook and Google and all of these companies. And right now, there really
isn’t a lot of control of it. Now the question
is, is that what it’s going to be in 20 years? Or is this just like
when they first started driving cars on the road? It’s like, hey, get your
horse out of the way. Or the horses were– we had to learn what did
we need to make this work. And I think a lot of these bad
instances of technology use– whether it’s using Facebook
to manipulate people in the election or just sell
you junk you don’t need– is going to lead to
us probably having more regulation in this
industry than we do today. And, of course, the
companies don’t want it. They want to make as much
money unfettered as they can. But it’s going to be some
amount of we need oversight. And I think we’re
still in early days, and nobody’s really sure
what is the right amount. KARIN FORSSELL: So
California actually did pass a law two
years ago, I think, about data privacy that
really locked things down. Companies aren’t allowed to
keep students’ information. I mean, we do have a
federal regulation around this. Just like HIPAA for
your health information, there’s FERPA for privacy
of educational records. So there is some of that. JAMES LANDAY: Yeah,
that’s just data. KARIN FORSSELL: That’s the data. JAMES LANDAY: But
nobody’s looking at the algorithm itself. KARIN FORSSELL: That’s
not the algorithm. That’s true. But at least the
companies that are likely to be running the
algorithms on the data have to be a little more– JAMES LANDAY: But this is where
a lot of controversy lies. California has passed
some good privacy laws. It’s not quite Europe
or anything yet, but companies are
pushing to say, oh, we need a national
standard so that we don’t have different rules. And we should think
about that just like we think about lowering air pollution standards for cars. It’s really about making the protection weaker, rather than about needing one standard. So we need to be
careful as citizens. If we really want this
kind of protection, we need to make sure we
don’t let it be diluted. We’re probably
going to need more. But I think we’re still early. And even people who are
expert in this area, which I don’t claim to be,
they’re not sure yet what are the right solutions. But, as a public, we should
be debating them and trying to figure out what they are. LAURA MCBAIN: So we have
so many questions here. And I’m going to
keep going to those. But I want to honor some of
the folks who wrote the card. And this one brings us
back here to this campus. Someone wrote, how do
you think AI is being used in Stanford’s classrooms? Given everything you just said
about privacy and data science, how do you imagine
it’s being used now? Or how might it be used
in Stanford classrooms with your students? JAMES LANDAY: Did
I say they still have chalkboards [INAUDIBLE]? [LAUGHTER] KARIN FORSSELL: So I know
that one of my colleagues has used some natural language
processing in a course that is focused on tutoring English
to non-native English speakers. And that would be more in the
context of providing a support tool for practicing a
particular learning outcome. But that’s not generalizable. That’s not like all
Stanford courses. JAMES LANDAY: I don’t have
knowledge about a lot going on. I mean, there’s obviously
things in digital humanities of trying to use data
science as a tool to look into humanities topics. I don’t know how much
of that is pushed into actual undergraduate
curriculum at this point. Now for sure, when we talk
about education at Stanford, it’s not just
undergraduate courses. It’s also graduate. So if you look at the most
popular classes on the campus right now, machine
learning is like this class that can’t actually fit in
any room I think they have. In fact, it broke the
registration system because some programmer
had hardcoded 999 as the maximum
number of students in a class at Stanford. Nobody who ever took a
class from me would do that. But anyways, who’s
taking that class? Well, sure, a lot
of undergraduates, but a lot of graduate students
because they know they need those tools
for their field. So that’s where for sure it’s
happening in their education. It’s in all of the graduate
fields of science, engineering, humanities and business. People are using AI
techniques for doing research. AUDIENCE: Here. Thanks. Professor Landay,
I’m particularly interested in hearing
about the world lab and how cross-cultural
collaboration may have presented some surprising
insights to the use of AI. Maybe [? Shelly ?] could
speak to that, too, with her experience in Japan. Thank you. JAMES LANDAY: So my
world lab experience came out of a 2-and-1/2-year sabbatical I did in Beijing. Usually a sabbatical
is one year, but we liked it so much we stayed. [LAUGHTER] And in working there
and collaborating with mainly native
Chinese folks, I’ve had this
interesting insight that I’m sure the design
thinking folks knew forever, which was sometimes we have
cultural misunderstandings that actually lead to new ideas. It’s almost the fact that
we see the world differently because we’ve experienced
things differently can lead to new ideas
that come out of it. Similarly, just living
in a different culture leads you to
question things that are almost unconscious
to you every day because you just have
done it your whole life, so you never even
think about it. It’s just automatic. But when you’re thrown
in a new culture where it might be done
differently, you see things that you didn’t see. So I realized that
for someone who works in design and human
computer interaction that having cross-cultural teams
could really lead to new ideas. And so I thought
that was interesting. In addition, I did
research on how people might understand
different interaction techniques differently
based on culture. And so we studied, for example,
giving positive feedback versus negative feedback in a
behavioral change application. And we did a large study. And we found that
if we gave a native Chinese person negative feedback, that led to more positive behavior change in this environmental application, whereas for Americans it was the opposite. Positive feedback
led to better change. Now that kind of result
would lead you to go, oh, if we’re going to
design some interface, it might need to be different–
not just translate the words, you actually have to
understand the difference in the cultural context. So that led that a
lot of collaboration. And we’ve brought students
from Stanford over to Beijing and did a four-week
intense class with Professor [INAUDIBLE]
in the music department. And the teams were all
cross-cultural– half Chinese, half U.S. And that really led
to a lot of interesting design. AUDIENCE: Two
questions to follow up on the points about
privacy and trust. On curriculum design, at least
today in California, there are ways for professional
educators, school administrators, school
board members, parents, and the general public
to provide feedback on state education
curriculum frameworks. And with AI, how do you
determine that there is an actual published framework? What are the students
supposed to learn? What’s going to be
published as what’s going to be in that algorithm
so that stakeholders can provide feedback before that
algorithm starts going into the classroom? And secondly, regarding
privacy, numerous and widespread nationwide recent information
security breaches, including ransomware compromises
that have locked down the entire networks of numerous
public school districts around the country, show that
for school districts that are already
underfunded as it is, cybersecurity is an
area of severe deficiency. As the amount of information
being kept about students and teachers skyrockets
exponentially, how are school districts going
to deal with cybersecurity when they already have enough
trouble dealing with it as it is? KARIN FORSSELL: Or how
are we, as a society, going to provide the resources
so that the schools can deal with it? JAMES LANDAY: I agree
that that’s a big problem. I’m suspicious that
what these schools will do is they’re going to
offload it to Google or these big companies, who they say will take care of it for them. And then we have
to maybe regulate those big players to make sure
that they do what we want. In terms of your first question
about specifying the curriculum, again, I don’t really think we’re going to have useful AI systems that don’t have some sort of outline of what the learning objectives are and what data is being used. And that’s going
to be similar, I think, to how people
probably are– and, again, you guys
are the specialists in approving textbooks and
other curriculum at that level. It’s not like it’s
just like, hey, we just trained this
on the internet. Here’s the system. That’s not going to
give us anything useful. There’s going to
still have to be some kind of curriculum
design that goes into any of these AI systems. So whether that’s going
to be at the right level to be able to approve it or not, or whether every possible answer that this system could ever give you has to be looked at– well, if that’s the requirement, then
these things will never work. So that’s going to be
a problem if they’re, for example, using deep
learning to train on a data set. So in my video example,
when we have this chat bot, one of the exercises
we have is a student learning how to estimate
the volume of a box because in the story they
have to get some candy or something across this river. So if they get stuck
on that problem, how does the chat bot know to
be able to answer questions about density or
volume, measuring the dimensions, et cetera? One idea we have is we
can train an AI system on a large corpus
of data that allows you to talk to the chat bot. We want to be able to
also then just say, I want to take the
Wikipedia page for density and train it on that, and have it be able to answer and ask questions about it. The question is, would just
knowing that, hey, part of it was trained on the
density page, would that be enough for someone
to approve it? Or is that like, well, I don’t
really know what’s in there? So these are open
questions that I think we don’t know the answer to. And I think now is the time to raise them, as the researchers who are building these systems will have to try to figure out what a solution might be.
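To make that idea a little more concrete for readers, here is a minimal sketch, an editorial illustration rather than anything the panelists described, of one way a tutor chatbot could be constrained to answer only from material reviewers have signed off on, such as a short approved text about density. The source text, the function names, and the keyword-overlap retrieval are all invented for this example.

```python
# Illustrative sketch only (not from the panel): restrict a tutoring bot's
# answers to passages drawn from a single approved source, so reviewers can
# see exactly what material the system is allowed to draw on.
import re

# Hypothetical approved text; in practice this might be a vetted article on density.
APPROVED_SOURCE = """
Density is the mass of a substance per unit volume.
The volume of a rectangular box is its length times its width times its height.
The density of water is about 1000 kilograms per cubic meter.
An object floats in water when its average density is less than the density of water.
"""


def split_passages(text: str) -> list[str]:
    """Treat each non-empty line of the approved source as one passage."""
    return [line.strip() for line in text.splitlines() if line.strip()]


def words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))


def overlap(question: str, passage: str) -> int:
    """Count words shared by the question and a passage (toy retrieval)."""
    return len(words(question) & words(passage))


def answer_from_approved_source(question: str) -> str:
    """Return the best-matching approved passage, or decline to answer."""
    passages = split_passages(APPROVED_SOURCE)
    best = max(passages, key=lambda p: overlap(question, p))
    if overlap(question, best) == 0:
        return "That is outside the approved material for this exercise."
    return best


if __name__ == "__main__":
    print(answer_from_approved_source("How do I find the volume of the box?"))
    print(answer_from_approved_source("What is the density of water?"))
```

A real system would use far more sophisticated retrieval and generation, but the reviewable part stays the same: the approved text is explicit, so stakeholders can inspect exactly what the bot can draw on before it reaches a classroom.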
LAURA MCBAIN: I’m going to put one more question, and then we’ll go back to
the audience in just a couple more minutes. This was an interesting
question that came up, and I don’t think we even defined it as a panel: how do we define the classroom? And what does that
mean for learning? A lot of the
technology that we’ve seen, at least in my
experience as well, has been for what we consider
non-traditional well-resourced classrooms. And so I’m curious how
we define classrooms, and how do we make
this technology just more accessible for quote,
unquote in their question “more traditional-style
classrooms.” There’s two questions
there, Karin. KARIN FORSSELL:
Yeah, I was going to say there’s a
few layers to that. Because one of the
things that’s interesting is that a lot of technology
designed for learning actually starts in an informal
passback device in the car kind of thing, like
[INAUDIBLE] or you can call it a lot
of different things. But basically it’s a lot easier
to design for a family unit as a learning
environment than it is for the complexity
of a school environment with
classrooms and 30 students and all those kinds of things. Then there’s also the
interesting in-between space of non-formal learning
environments like museums or zoos where families show
up and interact and things like that. And there’s a lot
of cool technology stuff going on there, too. Traditional
classrooms, these days we can assume that
they have internet. Yes? We can assume the
teacher has a computer. We can’t assume a whole
lot more than that. It’s a little early
yet to be talking about– but we can assume
that most schools have some form of digital student
management system and probably a learning management system. The student management
system organized and keeps track of the kids–
bodies, attendance, grades, discipline. And the learning
management system usually handles
things like homework, is more classroom focused. So I mean how do you
define the classroom. Is it one unit of teacher
with x number of students regardless of physical presence? LAURA MCBAIN: I don’t know. I think that’s why
it’s so interesting. I feel like I’m wrestling
with that question as well. As a classroom, it’s like– and it’s also reminds
me of how we reimagine learning because what you
highlighted which I think is really important is that
learning happens everywhere. It’s in the car,
as you describe. It’s in the museum. It’s in a quote, unquote,
“traditional space” where students are inside of
a building or brick-and-mortar institution as we are now. And yet, I wonder how AI
transcends that physical space, that temporal space,
and allows people to learn at different times and in different modalities, in ways that are really geared to their kind
of personalized learning. And I wonder, given
that potential, what is the role of
teachers in that space? How do they facilitate
those experiences? And that’s a wondering I
have as well, as a teacher. JAMES LANDAY: I mean, I think
in a really long-term view, we can imagine
that students can learn in a lot
of other spaces than just the
classroom building. We already know they do. But maybe that becomes a bigger part of education. But what I worry about
is when people think, oh, we’ll just create that smart
primer or that smart tutor, and you just go learn by yourself. And then we lose the whole
social aspect of learning, the importance of school. And there’s this whole movement that says don’t come to Stanford, just do this online learning, it’s a better deal– yeah, you might save money, but– I went to a public college up the bay, and I feel like I learned a lot
of stuff in the laundry room, on a road trip up to Tahoe. There’s a lot of other
places where that happened. And so I really worry that we’ll think, oh, just because we have the smart tutor, we can get rid of schools because at some point it’s better than a teacher. And I just think that social part is, to me, maybe what
the classroom is. LAURA MCBAIN: For final thoughts, I think we want to give you some takeaways, but I also want to leave you
all with some space. We opened with a question,
and I’ll ask it again at the end. I’m going to highlight
it right now. How you feel about AI now
might be different than how we started this conversation. But I’m curious from
Karin and James, what kind of final thoughts
or takeaways do you have for this audience
on this question of, does AI belong in the
classroom given what our friend [INAUDIBLE]. KARIN FORSSELL: Yeah, I
would like to definitely leverage Dr. Earnest’s
comment there and say the business
of education is about human intelligence. So we definitely want
to think about to what extent we want to replace
that with something artificial. And I think there
are places where it can augment our human
intelligence, absolutely. As James started out saying,
finding patterns in the world– that’s a big boon. I’m not in any way
planning on going back to life before computers, before all sorts of technologies that I use in my everyday life, because they let me do more. So the trick is to
align these technologies with what our goals are for
humans to develop as humans. JAMES LANDAY: And I would say
AI or machine intelligence as a field has been going on
for 60 or more years, with folks like Lester and the famous John McCarthy, who was also important in founding the Stanford AI Lab. And it’s only in
the last few years where the public’s
become aware of it because this one piece
of machine learning has been useful to people. I’d say we should
really not fear it as much as think of it as a tool. It’s a tool just like a laptop
in the classroom, projectors in the classroom, those horrible
smartboards in the classroom, whatever. And we should think about,
what do we want to use it for? And does it have value? Can somebody show it has value? KARIN FORSSELL:
What’s it good for? JAMES LANDAY:
What’s it good for? If it does, use it. If it doesn’t, don’t use it. But it’s not, I think, necessarily an evil technology. It doesn’t have values of its own. The values are what we
as designers embed in it. And we should make sure
that they align with what we want in our schools. And then we’ll [INAUDIBLE]. KARIN FORSSELL: That
puts the work back on us to actually
articulate our values. JAMES LANDAY: Yeah,
we need to articulate what we want as parents, as
learners ourselves, and then as a society, try to
hold our vendors, who we’re going to get this
stuff from, to the values that we want. LAURA MCBAIN: And whose
values are we promoting? I mean, it’s a very nuanced subject, how we get to what values we’re ascribing to, and whose values. So we started with a prompt
of raising your hand. And I’m curious just from
an emotional perspective how people feel about AI now in a classroom. So let’s just raise
your hand again. Do you feel optimistic about
AI in the classroom now? Raise your hand. OK, how many people
feel the opposite, feel a little bit
more pessimistic? OK, we’ve had a [INAUDIBLE]
shifts in this space. So what I want us to
do is take a moment and thank our really expert
panelists who were here. [APPLAUSE] And we’re wrapping up into a reception, I believe, but I do want to encourage you, as a teacher, to do something like your exit ticket, if you will, as we sometimes like to say in classrooms: find that person that you started this conversation with. And what did you take away
from this conversation? What did you learn? And what questions
have emerged for you? And so I encourage you to
talk to the person next to you about that, or find some people
in the next 20 or 30 minutes to really dig into
what you learned. I think all of us, as educators, leave with more
questions, which I think is always an interesting
thing and the best way to end a panel. So thank you all once again. KARIN FORSSELL:
Thank you, Laura. [APPLAUSE]
