Episode 283: Ken Shelton & Dee Lanier
June 19, 2024


Join me as I welcome Ken Shelton and Dee Lanier, authors of the new book "The Promises and Perils of AI in Education: Ethics and Equity Have Entered The Chat."

In this insightful discussion, Ken and Dee share their experiences writing the book, their critical analysis of the current AI landscape in education, and the urgent need to address issues of equity, access, and ethics as AI becomes more widespread in schools.

Key topics and timestamps:

0:00 - Introductions

2:36 - Dee's background in education and work as an author/speaker

3:28 - Ken's background as an educator and previous books

7:08 - The story behind writing "The Promises and Perils of AI in Education"

14:20 - Concerns about deploying flawed AI systems that enable discriminatory practices in schools

20:12 - The risks of solely relying on AI and automation without considering unintended consequences

28:57 - The importance of slowing down AI implementation in schools to critically examine the implications

34:46 - Issues of access and the digital divide in AI implementation

44:35 - AI and writing assessment - amplifying exclusion if applied negligently

53:45 - Ken and Dee share their "EduKryptonite" - their biggest challenges as educators

58:55 - Closing thoughts

This is an important and timely discussion that all educators and edtech enthusiasts should listen to as AI increasingly shapes the future of education. Be sure to check out Ken and Dee's insightful new book for a deeper dive into these critical issues.

--- Support this podcast: https://podcasters.spotify.com/pod/show/myedtechlife/support

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Transcript

Episode 283: The Promises and Perils of AI in Education with Ken Shelton & Dee Lanier 

 [00:00:30] Fonzy: Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day, and wherever it is that you may be joining us from around the world, thank you as always for all of your support, your shares, the likes, the follows.

Thank you so much for the wonderful feedback too, as well. And for following us on social media, we really appreciate all of your support. As you know, we do what we do for you so we can bring you some amazing conversations here in our education space. And today I am really excited because we have two amazing gentlemen that are going to be here sharing some amazing, amazing things.

And obviously they're going to be sharing this right here: their book that was just recently published, or released, I should say. And I am just really excited to dive in. So I would love to welcome to the show this morning Mr. Dee Lanier and Mr. Ken Shelton. How are you this morning?

[00:01:22] Ken: On point.

[00:01:26] Dee: Excited.

[00:01:27] Ken: Yeah, we both, Dee, I don't know if you have it.

Yep. We both, so we've got, Oh yeah. All of it. Oh, rise and grind. Bingo.

[00:01:36] Fonzy: I'm really excited, gentlemen. Like I said, also, I'm a longtime follower of your work. I was just speaking to Dee backstage too, as well.

And then of course, Ken, I know that you've also been very gracious to join me in my graduate school cohort too, to do some talking to just all our peers. And thank you so much for the work that you both are doing. And I'm really excited to dive into, obviously, the story behind your book, also just talking about the book and talking about some of the issues that are out there as far as the dangers, things to look out for, and just to educate all our wonderful friends in the education space about generative AI and its use.

So before we get started though, I would just love for both of you to give us a little brief introduction and what your context is in the education space. Uh, do this again for our audience members who may not be aware of your work yet. So we'll go ahead and start with Dee. Dee, if you can give us a little brief introduction and what your context is in the education space.

[00:02:36] Dee: Sure thing. Um, Dee Lanier, been in the education space for a little bit over 20 years now. Used to be a high school business teacher, uh, then moved into education technology, also teaching in middle school and elementary school contexts. Also have done some work within higher ed. Um, love to specialize in design thinking, equity work. Author of Demarginalizing Design, co-author of the book that we are talking about today, The Promises and Perils of AI in Education. Keynote speaker, workshop facilitator. I'm also a Samsung education coach, and, um, probably should end with: also co-host of the Liberated Educator podcast with Ken Shelton and Brian Romero Smith.

[00:03:23] Fonzy: Excellent. Thank you so much. Ken, now over to you.

[00:03:28] Ken: Yeah. So, uh, first of all, thank you. It's, uh, obviously you and I go back, you know, I, I think it's important for your listeners to know that you and I go back, um, quite a ways and like you just shared in your intro that I, uh, I had the opportunity to spend time with your graduate school cohort.

And I think the whole reason why you have us on this podcast kind of embraces or embodies even that whole situation around: how are we supporting each other, uplifting each other, recognizing that there are people within our respective circles that have areas of experience and expertise. That, that is the value of community.

That's just the best way I can describe it. And so I'm appreciative of, of the time that we're getting here and obviously of your support. So for those that are listening that don't know who I am, uh, as we shared, I'm Ken Shelton. I've been in education for, for a minute. Dee's at 20. I'm more than 20.

My work is, whether it be in the classroom, where I spent most of my time teaching middle school, I've taught, uh... the funny thing about my bio that I don't put in there, that I share with people, is if you count the time that I substitute taught, I literally have taught every subject. And I always say that as much as I'm a tech guy, my absolute favorite subjects in order: number one was art.

Then number two was physical education. And then number three was of course, tech. Now I have other subjects that are my passion points. Henceforth, some of the stuff that D and I were able to capture in the book. But at any rate, after I left the classroom for reasons that are not going to be a part of this podcast, but, let's just say for my own survival, , I've had the privilege of being able to do a lot of consulting, a lot of speaking, both nationally and internationally, working with, you know, government agencies, NGOs, nonprofits, and, I know we're going to dive more deeply into it, but I always say your effectiveness as an educator is only going to be as good as the people that are within your respective circles.

And so, uh, you know, like I said, I've been around for a while. This is for me, this is the second book that he's published. This is the first of three that I will publish. Right now I'm actually doing pretty good with my timeline. So the other two will be published within the next, I would say, nine months.

And then I'm hoping that Dee and I will have another project that we can collaborate on, uh, you know, in the works, because you'll hear more about it as you ask us questions about the process. But this was definitely a, a joyful and joyous endeavor, um, with Dee.

[00:05:59] Fonzy: Excellent. Well, thank you so much, gentlemen.

I really appreciate, like I said, for me, having you both as a longtime follower of your work and the work that you're doing is very admirable because again, being in this space, you always look up to those that have gone ahead and are doing some of that work and, you know, just like I always tell all our audience members that listen to the show, it's like, you're going to take some knowledge nuggets and then sprinkle them onto it.

You're already doing great. And thank you so much. You guys are really just wonderful educators, and just the work that you're doing. And so I want to definitely get started, because we definitely have a lot to talk about. As we know, your book has just come out and has been released. It says The Promises and Perils of AI in Education.

And for all our audience members, all the little stickers here you see are bookmarks, because they're talking points, you know, things that I definitely want to talk about if we do have enough time. But I know that with these gentlemen, definitely, you know, the conversation will be free flowing, and it'll be good, and I know we'll hit on some of those points. But I would love to know, and we'll go ahead and start with you, Dee, as far as the start of this book, the idea. Tell us a little bit about that story and, of course, how you linked up with Ken as to creating this project.

[00:07:08] Dee: Uh, I feel like this is the best part. This is, it really is. It really is cool, because, um, Ken and I are always trying to sync. You know, we got a three-hour time difference, uh, and then of course we got our producer for our podcast, uh, Brian Smith, who's in the middle with, with Central Time Zone.

And so, um, the challenge of us producing our podcast is really about scheduling. And then when you think about, you know, all of us traveling and doing all the stuff that we do. So, um, we've been talking a bit about bringing back the podcast, doing another season, um, talking about AI, because it was very much emergent.

And then Ken and I also had been talking about doing a book, and it just seems like this is where things go with us. We talk about it so much, it's like, okay, what do we have to do to just make this happen? And then, uh, eventually came up with the idea, just had an epiphany: like, what if we actually were to record our podcast, record episodes for the podcast, and then from that, take the transcript and then modify it for a book?

So that was the early idea. But what we weren't still considering was we still had a major challenge, and that was our time schedule and things like that. So, uh, it then iterated to: how about we just remove all obstacles?

What if we just, 'cause this was our original, original podcast idea, what if we were to just call one another, utilize Google Voice, record our conversations, and then see if the material is, um, salvageable by Brian, from a producer's standpoint, to still put out as a podcast? But either way, we will have that recorded conversation, because we just flow off of each other really well.

And so we, you know, developed our outline. We didn't quite have the title for the book yet, but we knew what topics we wanted to discuss. And then, uh, we intentionally recorded some podcasts, uh, not even 100 percent sure if they would ever get published, but that was our goal. And, um, I can say, thankfully, all of those podcast episodes are now up and the book is obviously finished.

[00:09:24] Fonzy: Excellent. Ken, tell me a little bit about your thoughts and, uh, obviously as the process is taking place.

[00:09:30] Ken: Yeah, Dee captured most of it, I think, for us. And this is something that, you know, I, you know, major props to you for not only having obviously a high-quality podcast, but maintaining it. Uh, it's the whole idea around, as Dee shared, what have been our barriers: there have been two significant barriers, time and schedule.

And so at some point, you know, Dee and I, when we were talking about artificial intelligence, uh, there's a couple of things that were the nexus from idea to execution to what you have in your hands. And I would say the first is, Dee and I have this conversation all the time, and I know you and I actually have briefly had exchanges as well, is we noticed a pattern, a predictable pattern, when it comes to really ed tech in general, but definitely with this. And it was that there were no voices being amplified that look like us or a majority of the students that are in public schools in this country, or, I'll even say, the global majority.

You had a few people, but we both were ultimately like, okay, we can either continue to essentially go through what I like to say is the type of therapy sessions that you do with your close friends, where it's like, I just want to vent, and sometimes we just listen, sometimes we vent and we ask for advice. Or we can say, listen, you and I have already done something like a podcast. And I don't know if it came up in the early conversations, but I know it's come up recently, where we were thinking in terms of, like, how Malcolm Gladwell did Talking to Strangers, and one of his podcast episodes ended up being a chapter in that book. And then The Bomber Mafia was another thing of, like, I have this whole podcast series that kind of gives you an idea, almost teasers, but then I'm going to put it in writing, and then now it's going to turn into a whole book.

And so that ultimately is the creative process that we went through. And we weren't, we were less concerned about quality, and we were more focused on content and what can we do to capture our thoughts and do it in an organized capacity. There's a reason why, if you look at the chapters, you might see a pattern in the first letter of the topics within each chapter. That's on purpose. And so, you know, then we were like, okay, well, let's use AI. We already have... all right. I can tell you this one is an

[00:12:01] Dee: Easter egg. You can see, you can see it. I was going to leave it as an Easter egg.

[00:12:04] Ken: But goatee. No, no, no, no. That was it.

I can see. So, so then we, we were like, all right, we have our idea. Let's use AI to refine our ideas. And then, and then let's, let's just start recording. So there literally would be instances where Dee would text me. I would say, I'm driving for the next two hours, do you have any time in this window? And I would say yes.

And then he would call me. He'd say, all right, what are our topics? And then we'd rattle off the talking points and say, all right, I'm, I'm recording. And then that's what we would do, because the whole idea around that was, we didn't want the idea to be stuck in our heads or stuck on, you know, like a Google Doc.

We're like, we, we've got to turn this idea into something more than just, um, you know, a discussion between the two of us. And we even were like, we'll tell Brian, but we want the idea to be even more than just a series of podcast episodes. And so that's ultimately what we did. And the other thing, as you may have already seen in the book: we are transparent in that book about when, how, and why we used artificial intelligence.

And it really fits the whole, uh, whole idea around where many of us, um, especially those of us who have a historically excluded and marginalized identity, are saying: we don't want AI to replace the human element. It can augment what we do, but we don't want it to replace that. And this is a way of Dee and I saying, here's how it augmented what we did, including maintaining the message, purpose, and overall thematic approach of literally our cover art, which, by the way, people love that cover art.

And I actually said to Dee, I'm like, I wonder if people love it as much as we do. And we keep getting the comments. I'm like, well, maybe they do. But that's even an example of where AI did not replace our idea or our voice. AI helped us take our, take our ideas, put it into a visual representation. Uh, we refined it, refined it, refined it, refined it.

Then we reached a certain point. We're like, okay, now we're ready to get it to an artist. We went to one artist and said, here's what we did. Here's our idea. You do you. Then we got it, got it. Then we had one part and then we went to another artist who was a very close friend of ours and was like, here's what we got.

Now we need you to go ahead and do some additional fine tuning, uh, detailed touches to it.

[00:14:18] Fonzy: Wonderful. No, I love this

[00:14:20] Ken: credit in the inside cover.

[00:14:23] Fonzy: There you go. Now, one, one thing that I love gentlemen, and before we kind of get into the questions and everything, because I'm really excited, but this whole process is amazing.

And for me, you know, conversations are data. And so just the way that you guys were going back and having those conversations. And believe it or not, right now I'm going through my doctoral process too, as well, in writing. So I'm doing, actually, it's going to be really just on all my conversations that I've done on my podcast.

So I've got over 100 episodes that I'm working on transcribing and just kind of seeing from where it started to where we're at now. And I think that that's just something amazing just to be able to document those things and just the way that you guys did it and listening to those episodes too as well, which we will link on the show.

So all our audience members can check those out too, as well. It's just amazing, the process, how you used it, how you refined it. And now you're putting out some amazing content. And most importantly, a lot of the topics that you cover are topics that I myself, you know, since March 2023, have kind of just held back on a little bit, just studying and seeing what is happening and what is out there. And again, learning from academics, learning from you all too, as well, and what you're seeing and things of that sort.

So I want to touch on something. Like, I know Ken kind of started talking a little bit about this, but in one of the chapters here, uh, we're talking about, you know, deploying these flawed systems that enable discriminatory practices in schools.

And of course, risking overlooking the needs of the marginalized community. So my thing is that oftentimes I feel that, as EdTech, or in the EdTech space, sometimes we push something, and we push something like in the name of progress: like, we're going to go in and put it out there, and whatever happens happens. But we may not be aware yet of what this can do and the harms that can occur.

So Ken, tell me a little bit about this, as far as maybe from November 2022 to now, what it is that you're seeing. Because I absolutely loved your, uh, ASU GSV talk, and I actually cut a clip from that and posted it up because of the content that you were sharing, because that was something that, to me, a lot of teachers don't hear.

I know there's a lot of hype. There's a lot of sparkle, glitz, and glamor out there and so on, but when you really get down to it and see the backside of all of this, we may be missing out on a lot of things. So, Ken, I'll start off with you. Can you tell me a little bit about that?

[00:16:41] Ken: Yeah. So one, thank you. I, I appreciate the love big time. You posted it, like, on LinkedIn and on several of the other socials. And, you know, the interesting thing about that talk, which, uh, for your listeners, it is on YouTube, is, I would say, probably about 60% of the feedback that I've gotten about that talk is similar to what you just shared.

This is stuff I either hadn't thought about, hadn't considered, or it's not being talked about enough. The other 40%? Oh, it's too radical. It's this, I don't like it. You know, it's, uh, you know, it's destructive, it's divisive. I literally heard it's divisive, and I'm like, okay. Um, getting you to think as being divisive?

Uh, how's that work? So the whole idea around that, and ultimately that talk, and by the way, the book was a creative catalyst for that talk, uh, because I only had, I want to say, 15 minutes, no, more like 10 minutes. And so I was like, you know, what are the ways I can hit on these points?

And so ultimately the book and that talk go into: let's look at what are the positives, the promises, and then also what are the perils. I.e., AI is not the utopian thing that too many people think it is, and it's also not the dystopian thing that people think it is. And what often happens within many of our social systems and institutions, education, healthcare, the financial industry, all these things, is that there are systems in place that can funnel us into a predictable outcome.

And what I don't want to see happen, and now this is the utopian side, is this whole idea that AI can, number one, one of the biggest selling points I see is it'll help save you time.

And I usually bring this up in my workshops all the time. And we're like, all right, check this out. I'm going to be real. Cause I believe in real talk. Let's say that you use AI and it saved you two hours. Are you going to get two hours of your time back? Or is somebody going to figure out a way to fill in that two hours?

Okay. So that's precisely why I say: don't buy that. The other thing is, my thing is, how are we using our time to do it in ways that allow us to perform the essential functions of what we do, uh, and be able to prioritize things, going back to, like, even our process, the human element.

So, for example, if I can leverage an AI to provide the right kind of feedback for students, then I can now spend time working directly with students, i.e., provide more personalization. Okay, it's not that I've now got two hours for personalization. It's, no, I'm going to make more efficient use of the time that I do have.

But then conversely to that is this whole idea around this reliance on automation and efficiency. And Dee and I have talked about this at length, and I say this all the time: automation and efficiency run a greater risk of being counter to equity and personalization than all the things that are being promised around them.

Uh, and so, so that's why, for me, we have to really, uh... and I don't know if you caught the quote towards the end of the book, if you've made it to the end of the book, but it's why I love sharing the quote from Coach John Wooden, the late Coach John Wooden, go Bruins: be quick, but don't hurry.

And it's this whole idea around, we do need to be quick to understand it, but don't hurry to embrace it, uh, without considering some of the intended and unintended consequences.

[00:20:12] Fonzy: Excellent. And Dee, how about yourself?

[00:20:15] Dee: I mean, Ken just touched on all of it, uh, in so many ways, which is how our conversations go.

Uh, we, I mean, really go, what are you thinking? And then we just rip off of one another. Uh, but I would even say, you know, we were done with our recordings. Uh, what'd you say, Kenny? Was it at least by December, maybe January?

[00:20:36] Ken: Because we were promoting that the book was coming, and that was this past January. And so,

[00:20:42] Dee: yeah, so we were, we were done with our recordings.

We had a full outline. We had every chapter and every subtitle already complete. We had all of these recordings done, and those recordings were very much our first reactions to what we were seeing, observing, and critically thinking about: what does this mean for education? But to say that we were done, or to say that, well, what we should do now is just put it through an AI generator and then pump this out, would be not only inauthentic, but the quality would have dramatically decreased, and really, the honesty around our self-checking, right?

So we could have first thoughts, and language often uses first thought, not last thought. And so first thoughts, which are based on our curiosities, based on our experience, based on our education, here are all the things that we are seeing as potential promises and potential pitfalls that really exist.

But then we even had to self-check and say, but is that correct? And so doing the hard work of then doing lots of research to validate or to modify some of our first thoughts. Um, like, there are a number of stories that are in our recordings, because they are live recordings, right, that we literally just took out of the book, because with further investigation, it wasn't that those things were inaccurate. It was like, but do we want to put that in writing?

Um, because once you do that, then you're sort of elevating an opinion in comparison to research, right? And so our cautiousness in doing that, um, you know, it just really elevates what Ken said, which is to say, you know, efficiency is all we heard everyone talking about. Efficiency, efficiency, efficiency. Like, efficiency for what?

Even to add to Ken's statement, not only will somebody else fill in your time with something else, no one is stopping and pausing and saying, what is the efficacy of all these worksheets in the first place?

Yeah.

So we're going to use AI in order to generate worksheets. We're going to use AI in order to look at standards and then modify them and put them on a, on a better reading level.

But no one's asking the question, is this relevant at all? Right. Uh, we're going to use AI in order to, uh, create lesson plans to create assessments to do, do, do, do, do. It's like, oh, so you're just going to press fast forward on all the things that we've been doing that have already proven to be ineffective.

So are we going to ever rewind the tape and play back? Is this effective? Or are we just going to use AI in order to automate more of the same? And so we're going to increase the outcomes that we already have. That makes no sense to me. Why are we going to do this? So this is where our pain points were.

And it just seemed like everywhere we turned, in conferences even that we were speaking at, we'd pop in at different sessions and just keep hearing the same thing over and over again. I'm like, no one is thinking about this in these ways. And then what Ken would always say, and it was really an encouragement, encouragement to, to keep writing, to keep modifying, to keep editing, to get the book out there and get it ready, was: dude, this is urgent. No one else is saying these things.

And so even noting that we began recording in, you know, late last year, completed at the beginning of this year, and then did not get the text out until last week, right? Still, no one was saying the things that we were saying at first take.

Right. And so really, for us, it's really a caution, um, of: we need to take our time, slow things down, and really take a more critical outlook on all of this, and not just look at how you can use it in order to replicate what you currently do, and then that will give you time to do other things. I'm like, no, but what you've been currently doing, it's been bad business.

Let's stop there and let's let's reimagine doing something different.

[00:25:11] Fonzy: I love it. I love it. Like, you guys really, like, you know, coming out, and there's so many people, and, you know, like, mentioning you, Ken, and many other people that I have followed too, that are finally, like, you know, kind of saying the quiet part out loud now, and really because they are really seeing what is happening, and just kind of slowing things down.

I mean, if I remember, when this first came out in November 2022, there were, probably by the first week or second week of December, already books out, like AI in the classroom and this and that and so on and so forth. And it's all great, you know, and I'm like, okay, cool. But then I was like, but what about the other side?

How about the other things that we don't see, you know, and things of that sort? So it's just really wonderful that what you have here is something that is coming from your experience, because you guys are practically at every conference that is out there, which is great, to be able to have that feel and see where everybody's at.

And like you mentioned, popping in to, um, sessions, that happened to me at TCA this last year too. Um, thank you, TCA, I got to be a featured speaker on that AI panel, which is great, to be able to share these concerns, because it was part of my doctoral research. And it wasn't until I started doing that research that I was like, oh, there is a dark side to this.

The data, the data privacy, the data rentership. And that's where, you know, it really got cemented in me that if something is free, we are the product. And that is for certain, you know, because of that data. Now, talking about AI, and you mentioned a lot of things, and in that clip, Ken, that I pulled from you, you talk about automation.

And I just want to quote something real quick here from Dr. Emily Bender, who says, you know, really what the AI is doing is just automation. So the questions you should ask are: what is being automated? Who's automating it and why? Who benefits from this automation? How well does the automation work in the use case that we're considering? Who's being harmed? Who has accountability for the functioning of that automated system?

Um, and what existing regulations already apply? And I think, like, what you shared there is really an eye-opener, or an ear-opener, for a lot of educators too. And through the message of this book, Dee, both of you are definitely capturing that. And that's something that I really enjoy, that there finally is somebody that is in kind of your space, with your voice.

And your voices are being amplified here through this book and what you're sharing. So I want to say thank you, because this really, for me, it just seems like the edtech space is really divided into two: like, whether you're all pro-AI, the glitz, the glamour, the sparkles and the shirts and the buttons and all of that, or you're on the other side, where, hey, let's just kind of slow things down a little bit and just kind of see where this is going.

And so I love that, you know. So the next thing that I want to talk to you about, because you guys are really in a lot of conferences: in chapter two, page 53, I hope you don't mind that I'm quoting here from the book, this really caught my attention, because this is, this is what I'm seeing. And this is what I'm hearing a lot.

It says: instead of professional development facilitators simply showcasing the "magic" of AI in schools, or employing it as a glorified version of an educator resource marketplace, such as Teachers Pay Teachers, educators should be equipped with the skills to empower their students as digital sleuths and advocates for their own rights.

So Dee, I'll start with you. All right. And it just so happened that this was from you. Um, so tell me a little bit about that, because what I saw at TCA this last year, and a lot of the sessions that I've been to, it's: let me just show you the magic, let me just show you the magic, but never really thinking about what's happening in the background of that magic.

So tell me a little bit about that, Dee.

[00:28:57] Dee: Yeah. Well, so this is one of those things. And by the way, when you have a friend like Ken, Ken is never going to tell you to pump the brakes on something that could be potentially controversial.

[00:29:10] Fonzy: I need more friends like Ken around me for sure.

[00:29:13] Dee: So there was a couple of parts where I'd write something in, and then I'd send it over to him, like, man, I might've gone a little hard on this one. He's like, bro, you didn't go hard enough. Okay. Um, yeah. So that, I think that was one of those mornings that I was looking back at some of what we had already written, and then added a couple more pieces, and was on one, because, you know, we're feeling that, right?

Yeah, everything. I think you just said it, you know, the buttons, the stickers and the like. Everyone wants to just, you know, blow kazoos over how awesome this is, again, for what purpose, to what end, and with no pause to consider who is being harmed. And how could we even potentially use these tools for a different purpose?

Like, can we stop and ask that question? And, you know, I hate that everything always comes back to this, but it seems like, you know, we look at education pre-pandemic and post-pandemic. And there was that time during the pandemic when, because we were forced to reimagine doing things differently, there was at least this question mark.

Will we do something different? Can we reimagine learning now, now that we're forced to? When we kind of reentered education, could we take some of our learnings? Could we start with the big reveal, right? Which wasn't a big reveal to most educators of color, most teachers in lower-income contexts, right?

Oh, there are inequities all over the place, starting with access to technology, right? Like, wow, big surprise. But for some, right, that was the big reveal. But then what did we see? As soon as we went back in person, slowly but surely, it was all about going back to normal. And, wait a minute, the very things that we were questioning two years ago, we're now trying to return to?

Those were the good old days. Like, seriously. And now AI is being looked at as this promise to do all the things that we currently do, right? I know I'm just repeating myself, right? So that's really where this book was born, out of frustration. And that comment, right, that you just read from that quote, was written out of frustration.

Like, when are we going to look at doing things differently, and not just looking at AI as, and let's be real, okay, Teachers Pay Teachers. That's what a lot of these different companies being started, all with AI in their name, are all for the sake of: just come to us, pay your subscription fee, and we will automate for you and generate for you this resource that you can then hand to your students.

You can also, with a higher subscription fee, utilize our services to then assess what the students did, and then you can just keep on outputting more. Okay, let me also say this, one last thing about that statement. It's not to vilify the teachers that use these tools. Rather, we've got to ask the question:

Why are teachers so tempted to go to these resources and to use them? It's because of the very point that Ken made a little bit ago. Their time is being completely obliterated, right? Exhausted at every point, right? So much is being asked of them in so many unrealistic, unhumanistic ways, all for the sake of reaching goals that are based on standardized tests, which are financially motivated, right?

Like, no one's questioning all of these things. It's just, well, how can we just do it differently? How can we just do it more "efficiently" or "effectively"? Super air quotes on that one.

[00:33:26] Fonzy: No,

[00:33:27] Dee: We gotta look at it differently.

[00:33:29] Fonzy: Yeah, and I agree with you on that, you know, cause it just seems like right now it's just: I just want to show you the magic of my tool.

But like you said, how does it work? How does it assess? What's the efficacy on this? Because oftentimes I bring up, you know, the engagement piece, because there's the, well, my students are engaged. Well, but are they learning?

[00:33:47] Dee: That's

[00:33:47] Fonzy: two different things. And they always say the engagement, the engagement. Engagement doesn't always equal learning.

I don't see any research at all from a lot of these companies stating that there are incremental changes and incremental learning. You mentioned the access part too, which I want to talk to Ken about, because I've always said on my show, my district gets priced out of everything, because I am in a very, very small district.

You offer me a pilot; I still can't even afford a pilot. And even though you give me a pilot now, the next year it's like, oh, now it's going to be $16,000 per campus for this. So now I'm thinking, well, why is that? And this is something that you mentioned in your book, which brings me to the next thing. I'm going to quote from page 62.

It says, quality AI access should never be a luxury reserved only for the well-off. So why should my district feel like, man, I can't get those tools, and I can't be where everybody else is at, because I can't afford it? So Ken, what are your thoughts on that?

[00:34:46] Ken: Yeah. You know, and your timing with that question is perfect.

I literally just did a webinar for the California Department of Education a few days ago, and the title of it was "The Digital Divide in the Age of AI." And the interesting thing is, to the point you just brought up with the quote as well, there's a reason why Dee and I literally dedicated an entire chapter to the digital divide in schools.

I would actually venture to say that we probably could have taken that chapter and made it an entire book. Not dropping that as a hint to Dee, because I like what we have here. But nevertheless, to your point, it's the whole idea around people not understanding that there are multiple layers to the digital divide.

And I always bring it up in this capacity. First of all, I'll even give you a little historical context, and I brought this up in the webinar the other day. I was on a task force for our previous state superintendent, back in 2011, Superintendent Torlakson.

And the theme of the task force was No Child Left Offline. Okay, that was back in 2011, and it looked at everything from infrastructure, whether the infrastructure is there and what the quality of the access is, so obviously broadband, to cell phone capability, so cellular connectivity access.

Then device access: what kind of device access, and how are those devices being used? How are we embedding this into grad school, more than just the one class for one semester I literally had to take to clear my credential, as opposed to having tech embedded throughout the whole program? And then leveraging things like partnerships. You know, I'm in California, so

So You've got Silicon Valley leveraging partnerships with many of the big tech companies that had their headquarters here in California, you know, among other things. And so to the point what you just brought up, it's, it, it, there's, there's again, there's a multitude of things here. So number one, when it comes to AI, uh, to your point, there are many AI systems that have already priced people out.

And in fact, I just recently saw an article, and I haven't seen it written elsewhere yet, but the article basically questioned the viability of both Anthropic and OpenAI with their $20-a-month subscription models, and the anticipation is that it's not financially viable or sustainable.

And the next thing that they're likely to do is try to get elements of their systems embedded into existing operating systems. That's probably why Apple announced that they were going to look at putting AI within iOS for your phone and your iPad. So my point around that is that that's where we get into the perils, because there's the promise:

oh, it can do all these things. For the sake of this, let's say that what you shared is good. And I don't think it's good or bad; I just think when you solely rely upon an automated system to create an antiquated form of assessment that has proven to be ineffective, that in and of itself, to me, should give pause.

But nevertheless, let's stipulate that it is a good thing. Well, what if it's not accessible to districts that are economically under-resourced, in areas where you hardly have any kind of access? Or let's say the school district does have access, but as we all know, because we've seen the stories, and I have personal stories with this, sometimes you can go from one classroom to the next and not have access. And that's digital equity.

Within one campus, within one building. It's this whole idea around, one, you cannot revolutionize. This is where I tell people: AI will never revolutionize education as long as it is not universally accessible, period. And that's based on how I define revolutionizing education. But then the other thing is all of those students. If you're going to consistently and constantly tell me that AI is going to be part of the jobs of the future, that the skill sets around using AI are needed in order to have,

I would say, viable opportunities post-educational experience, if you're going to tell me all of these things, then that's all fine and dandy, but then I want to know: how are you addressing it for the most vulnerable and the most affected? Because to your point, if you make it accessible to a school district like yours, then by default it will now become accessible both to districts like yours and to those that don't have financial restrictions with respect to the resources and the accessibility.

And that's where you get to the overpromise that we did bring up in the book: well, use the free version. Yeah, well, you are the product if it's free. End of story. So, you know, my whole thing is that it is possible to have tech and do good. And I've brought this up with several of the focus groups I've been on for companies.

I understand you've got to pay bills, you've got to pay people, you've got to maintain these systems. But at a minimum, then, do something like considering a tiered model. So, for example: for your school district, here's how we can make it accessible to you, by charging differently than we would, for example, a very, very well economically resourced district.

That's it. And by the way, that's true equity: modifying the accessibility component to what is accessible and viable to different school districts. But it does start off with that whole thing: how are you addressing the needs and the accessibility of the most affected?

And the most vulnerable to these systems? While not obfuscating the truth: yes, it is part of the future, but you are also leaving out a significant percentage of the educator and student population by creating these financial barriers. Excellent.

[00:40:42] Fonzy: Dee, I think you had a follow-up on that.

[00:40:44] Dee: Yeah. Yeah.

Yeah, cause this is how me and my brother go, right? He gets going and I'm like, ooh, yes, yes. And, you know, just adding to that real quickly: considering all of the digital divide issues that Ken stated, which, by the way, we've always been talking about,

[00:41:03] Ken: Correct.

[00:41:04] Dee: So when I came into the picture, it was: how is AI being added to the conversation around things that we are already critical about?

Right? So I just wanted to get that disclaimer out there, as someone online tried to jump out there and say something like, when did you start talking about AI? Almost like trying to say it as a jab at Ken, and then at me by proxy. And it was like, well, November 2022. When did you start talking about it?

As soon as generative AI hit the scene, and then considering, oh, wow, all the things that we've always been concerned about, all the things that we've been talking about, how is this a part of that conversation? So consider that the digital divide has always been about not just devices; it has also been about infrastructure.

In addition to that, there's also another level, and that is teacher professional development, right? Like, how are teachers being developed so that they are using these tools effectively? And then comes the next part, which I think is the most important part, because you have to consider the whole infrastructure.

An educational environment should be a learning community, not just a teacher who is an expert imparting their knowledge on these pupils and then having them regurgitate, on a test, what has been given to them so graciously from that vast knowledge base, right? I hope you heard all of that sarcasm, right?

How is it that we are learning together? That would require that students are also learning together, and that the teacher is also learning from and with the students. For all of those things to take place requires that they have access, right? And that we're not playing these games, which is to say: you're blocked.

You're not allowed. You can't. Or maybe even, teachers, you can, because we're going to give you access to these tools. But again, we're not going to give you professional development. We're not going to give you any critical lens on how to evaluate and how to utilize these things. And then we're going to punish students if they're caught "cheating" (more air quotes, more sarcasm), knowing that we have students who are going to be going home and using whatever tools they have access to.

And are they going to be using them? Are they going to be using them effectively, to varying degrees? Because it all just depends on exposure, right? And education, and so many different things. So how are we going to holistically involve our entire community, make sure that we have access, make sure that we have education, and make sure that we're also growing together? Because guess what?

These tools are continuing to evolve, and they're also not going away, which is probably my last point concerning all this. Once we started seeing massive investments by the Microsofts, by the Googles, and then, of course, by many others, Amazon, of course, it started making me think: wait a minute, these aren't going away.

So we have to figure this out. We absolutely just have to. We can't just put our heads in the sand and say, well, I think this is just destructive to our society, and so we need to just not use it. Well, guess what? It's being used. So what are you going to do about it?

[00:44:35] Fonzy: Exactly. Well, thank you so much for sharing that.

And again, obviously one of the things that really stuck out is, yes, there are people out there that will kind of just call you out on stuff, and I'm like, man, come on. You know, I hear "a learning community": we just want to learn from each other, and sometimes just stop and listen to one another. We're just trying to make the space a little bit better and amplify every voice that is out there.

So anyway, kudos to you guys. And this will probably be the last question; I probably had maybe two more, but we're going to do this last one, and it really sticks out.

[00:45:07] Ken: We can try to keep our answers brief.

[00:45:09] Fonzy: We'll try and do that, but okay. So this one is from chapter four.

All right. And it's talking about careful considerations: admissions, grading, and tracking. And as you know, or maybe you haven't heard, but I'm sure you have because it's been on the news too, Texas went to a grading system where they just use AI to grade essays. And, you know, in talking to people in my district, our content specialists, I had concerns and was asking them questions.

I said, look, have you seen the system? How does this work? Because for myself, what I was thinking is, okay, what if there is a personal identifier? Maybe it doesn't have the student's name on it, just a number, but then that student's name or ethnicity and things of that sort get picked up. Is it grading appropriately?

And is it grading, you know, to what scale and to what level? Because what if that student may not be at that level yet? It doesn't mean that the answer is incorrect. So I just wanted to read here real quick, where it says: myopic enthusiasm for artificial intelligence risks glossing over its potential to silently amplify exclusion and inequity if applied negligently in education.

Boom. I was like, yes. All right, so here we go. So I want to know who it is that wants to start off with this

one.

[00:46:26] Ken: Okay. So I wrote that. This is,

[00:46:30] Dee: I know, I know, I know.

[00:46:34] Ken: I'll be brief. So let's use that as your example. So, number one, for the audience, and we do expand upon this more in the book:

let's just start off with bare-bones basics. If AI is being used to grade essays, then what the AI needs is three things. One, a large volume of historical data to start from. Two, someone's got to code the algorithm to use that data. And then here's a big one. Three, it's going to need examples, a good example and a bad example, to basically compare.

Okay, I've got all this data, you want me to synthesize this data, now you've got to show me an example of what is good and what is not good. And then it will use that as the guardrails, or the guidelines, for everything that happens going forward. So the questions for me are always, well, first of all, who's doing that coding?

Are they representative of the students, of the representation of student experiences? And then here's the other thing, and we talked about this in the book: what data weighting mechanisms are in place? So, for example, for those that are listening, the AI detectors and many of these things like them are biased against students that are acquiring English as their second language, for sure.

Research has proven it, and we cite that in our book, by the way. So, number one, if you have appropriate data weighting measures in place, then that doesn't happen. But of course, if we were to ask the people who designed that, what are your data weighting measures, one of two things happens.

They're either going to say, well, we can't answer that because it's proprietary, or they'll say, well, no, we don't need to do that, it's the tech. Okay? So they hide behind this stuff. Then the other thing to consider with that is, I have yet to see, and I've tested it out, by the way, and it's not formal research on my part, but if people were to read many of the authors that we cite in our book, Ruha Benjamin, Safiya Noble, Joy Buolamwini, Timnit Gebru, okay, there are others, but those are the main ones for me,

you would see that the way these systems are designed is, again, to automate the status quo. So when it comes to these, they cannot capture nuance of vernacular, nuance of story. And I won't even get into, for time's sake, because I do want you to be able to get to your other question,

let alone get into the cultural implications. And so the whole idea around all of this is that what's happening there is what I wrote about, around things like grading papers, vetting college applications, all those things. The minute you take the human element out of it, I always say, what could go wrong?

And so that's an example of it, and you're seeing more of that. And again, you need a good example and you need a bad example, and the whole idea of the algorithm is to figure out what's good and what's not good. But the bigger question being asked is: you have no data weighting, no data cleansing.

And what are the things that you've done to account for the bias of the person who either authorized and/or wrote the algorithms to go through that process? What are your authenticity checks for those things?
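The three ingredients Ken lists, historical data, a coded algorithm, and labeled good and bad examples, describe supervised text classification. As a toy illustration only (a bag-of-words comparison with invented example essays, not a description of Texas's system or any real grading product), here is how the chosen exemplars end up acting as the guardrails he describes:

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words: the model only "sees" word counts, so vocabulary
    # and vernacular differences directly shape the score.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def grade(essay, good_examples, bad_examples):
    # Compare the essay to labeled exemplars; whichever side it more
    # closely resembles wins. Everything hinges on which essays the
    # designers chose to label "good": an essay whose vocabulary
    # differs from those exemplars scores low regardless of its argument.
    v = vectorize(essay)
    best_good = max(cosine(v, vectorize(g)) for g in good_examples)
    best_bad = max(cosine(v, vectorize(b)) for b in bad_examples)
    return "pass" if best_good > best_bad else "fail"
```

An essay that echoes the exemplars' wording passes, while one written in a different register fails, which is exactly the exclusion-by-design risk the quote from the book warns about.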

[00:49:55] Fonzy: Excellent.

[00:49:55] Dee: Yes. Yeah. That's it. Why does everything always come down to writing, right? Worship of the written word. The quiet part out loud is, you know, we're talking about characteristics of white supremacy culture, and that includes worship of the written word, right? This was one of our almost immediate reactions to what people in education in particular were saying around their concerns about AI.

Ken and I recognized we had different concerns and different levels of concern. When I speak about levels of concern: my concern was more around, what does this mean from a synthetic media standpoint? How could this lead to further incarceration? Another book that I would throw in there would be The New Jim Crow, right?

Like, how technology has been and continues to be utilized in particular to criminalize Black and Brown folks, right? What does AI have the potential to do there? Okay, that was just a disclaimer to say: why are we, again, just going back to the old thing? Why do we go to the same thing?

Why do we go to this as the only thing? Even how our book was formulated, even how this podcast is set up, is us being able to just talk, right? And so, if you consider, historically and culturally, oral tradition being what is native to us, right? Why are you limiting the ways in which I communicate how I know, learn, and understand certain things only to it being written?

And oftentimes, especially in the K-12 space, a one-draft submission, right? So again, not taking a critical look at what is being produced and why that is being looked at as the highest level of knowledge assessment opportunity, right? It has to be in written form. What about someone who not only communicates about something, but is able to showcase their ability to give a convincing argument

based on their inflection, based on their eye contact, based on their performance of something, right? That is a part that should be added to the whole mix. But again, we're thinking it's either got to be writing or it's got to be multiple choice; it can't be anything else. And I'll stop there.

[00:52:48] Fonzy: Excellent. Well, thank you, gentlemen. It's been an honor and a pleasure to have you both here. Dee, Ken, you've definitely made my morning. Thank you so much for filling my bucket and for being out there and putting this out there. I mean, guys, if you haven't picked up the book, please make sure you do. I will definitely put a link in our show notes as well, and that art is something that is amazing.

But gentlemen, before we close up and wrap up the show, we always ask all of our guests one last question, and I want to go ahead and start with Ken first. Well, first of all, here's the premise of the question: we always say every superhero has that kind of pain point or weakness.

So for example, for me it's Superman, and looking at him, kryptonite was that pain point, or something that just caused that weakness and so on. Mainly a pain point; let's go with that. So I want to ask you, Ken: in the current state of education, what would you say is your current EduKryptonite?

[00:53:45] Ken: Oh, good question.

Can I add a qualifier? Kryptonite actually isn't a weakness. It made him normal, because he couldn't fly on Krypton.

[00:53:54] Fonzy: Ah, okay.

[00:53:58] Ken: Here we go. I, I, I, I, I, for, for the audience, let's just say, please don't discourage students from reading comic books. I read the DC comics and the MC, uh, the, the Marvel cinematic, uh, MCU.

I'm a big MCU fan; I read all of those comic books. So it's a common thing, I would say. If you think about Superman, if he's the protagonist, then his antagonist would be, like, Lex Luthor. Okay. Kryptonite just made him quote-unquote normal. In the original stories, well, the movies make it look like he's weak, but he lived on Krypton.

He wasn't weak on Krypton; it just made him quote-unquote normal. And there's their red sun versus our yellow sun, which gives him strength. Anyway, that's a whole nother podcast for us, to talk about storytelling and comics and things like that. Oh, I could go all in on that about literacy. But nevertheless, in the spirit of your question, if I were to identify some sort of weakness, given the context of your question, I would say that it's that I don't get enough time in classrooms with students.

That's it. And it's because I'm not in the classroom anymore. But it's also, to be honest with you, why I am very intentional and diligent with the school districts I work with to either meet with student focus groups or go into classrooms. Like, honestly, in the context of our conversation, in some of the AI trainings I've done, I've said, hey, I want to go and work with the teachers in the classroom with the students.

So at least it's my acknowledgement that there could be a potential gap between the things that I say and what's actually happening. I recognize it, and I'm intentional about trying to make sure that whatever gaps exist there don't increase.

[00:55:54] Fonzy: Excellent. Thank you so much, Ken. I really appreciate that very much.

And D next you're up.

[00:56:00] Dee: Sure. Well, so I may have misinterpreted the question, and now I've also reinterpreted it based on what Ken just said. So contextually, I gotta say, I'm going to answer this how I feel about it. And that is knowing that, not only do we have the research that supports this, but personally I live on this: having freedom of artistic expression, being surrounded by art and sunlight and music, are things that give me life.

And knowing that in many of our education settings, those are things that are stripped away by design. And so, thinking about what Ken said: you know, my superpowers are stripped from me when you take away the things that get my imagination and creativity flowing. And unfortunately, that is the very sterile context in which many classrooms are set up, right?

You're in a cinder-block room with little to no natural light. No art, or no real art, around; music is not allowed; discussion is limited. Right? Like, all the things that allow for your creativity: if you take away the creative opportunity, then you've taken away my powers. And if you make me do a multiple-choice test, or make me have to write within that context, you will never get the best Dee Lanier.

And I know that I am just a representative of many students that are in the same space.

[00:57:51] Fonzy: Wow. Thank you so much, gentlemen. This has been a wonderful learning experience for me, and I thank you again for taking the time to be guests on this show, because for me, this is my hour of professional development too, for the betterment of myself.

But now this also gets shared with the world and all the educators that are listening to our podcast. So I really want to thank you, because I value everything that you shared today, the work that you're doing, and the work that you continue to do to help, day in and day out, as you go out there to various conferences, really posing these questions and these ideas and thoughts that people have not yet considered.

And being that we've already been a year and a half in, or more, you know, now these questions are coming up and now the voices are being heard. And, you know, like you said, Ken: be quick, but don't hurry. It's such a wonderful quote, so thank you for sharing that. I really appreciate it.

You guys really filled my bucket today. Thank you both for the work that you're doing. I appreciate y'all.

[00:58:55] Ken: Thank you for having us. And as always, we appreciate the love and the support.

[00:58:59] Fonzy: Yes, you're very welcome. And for all our audience members, please make sure you visit our website at myedtech.life, where you can check out this amazing episode and the other 282 wonderful episodes with creators, founders, and educators. We've got a little bit of everything for you, so you can take some knowledge nuggets and sprinkle them onto what you are already doing great. And please make sure that you're following us on all socials. If you haven't done so yet, follow us @myedtechlife.

We are on all socials with that handle. And if you haven't done so yet, jump over to our YouTube channel, give us a thumbs-up, and subscribe to our channel. Our goal is to get to 1,000 subscribers this year, so please make sure you do that. And as always, guys, we do what we do for you, so we can bring you wonderful conversations and amazing guests like we did this morning,

so they can continue sharing their knowledge with the education space, obviously for the betterment of all of us, and amplifying voices as well. That's what we want to do: connecting educators one show at a time. And, my friends, until next time, don't forget: stay techie.

 

Dee Lanier Profile Photo

Dee Lanier

Author/Speaker/Creator

Named by EdTechMagazine in 2023 as a top 30 K-12 IT Influencer to Follow, Dee is the Lead Education Experience Designer at Lanier Learning, LLC and is an Education Coach for Samsung. Dee is dedicated to co-designing equitable learning experiences for school leaders, staff, and students. Lanier Learning also partners with nonprofit and for-profit corporations that are committed to community good.

Dee is the author of Demarginalizing Design and co-author of The Promises and Perils of AI in Education. Dee is a passionate and energetic educator and learner with over two decades of instructional experience on the K-12 and collegiate levels. Dee holds Undergraduate and Master's degrees in Sociology with special interests in education, race relations, and equity. Dee is an award-winning presenter, TEDx Speaker, Google Certified Trainer, Google Innovator, and Google Certified Coach who specializes in creative applications for mobile devices and Chromebooks, low-cost makerspaces, and gamified learning experiences. Dee is a founding mentor and architect for the Google coaching program pilot, the Dynamic Learning Project, and a founding coach of Our Voice Academy, a program aimed at empowering educators of color to gain greater visible leadership and recognized expertise. Dee is also the creator of the design-thinking educational activities Solve in Time!® and Maker Kitchen™️ and co-host of the Liberated Educator podcast. Dee practices self-care by reading, playing percussion, and roasting, brewing, and drinking coffee.

Ken Shelton Profile Photo

Ken Shelton

Speaker/Author/Consultant

Ken, a multi-award-winning educator with a Master's in Education (Educational Technology and New Media Design and Production), has dedicated over two decades to teaching and advancing educational technology. He is co-author of the best-selling book "The Promises and Perils of AI in Education: Ethics and Equity Have Entered the Chat." His expertise in educational leadership, organizational change, equity, and inclusion has earned him global recognition as a thought leader and highly sought-after keynote speaker. Ken's contributions and extraordinary commitment to advancing digital learning experiences have garnered advisory roles for various organizations, both public and private, shaping the future of education and technology worldwide. Recognized by leading educational institutions and publications for his unparalleled dedication to creating equitable and inclusive learning environments, Ken continues to make a significant impact in the field through his speaking engagements, trainings, and consultancy work. He is currently the Founder and CEO of Elevate Education. More information about Ken can be found at kennethshelton.net.