Episode 269: Tom Mullaney
March 11, 2024


Hey everyone, it's Fonz here, and I'd like to welcome you to another exciting episode of My EdTech Life!

In today's show, I sit down with the brilliant Tom Mullaney to tackle the hot topic of AI and its potential impact on traditional teaching dynamics. As an educator, Google Innovator, and frequent conference speaker, Tom brings a wealth of knowledge and experience to this critical conversation.

Throughout our discussion, Tom and I explore the complexities surrounding AI in education, from the marketing tactics preying on teacher burnout to the importance of examining AI companies' terms of service and data privacy policies. We also get into the Eliza Effect, the dangers of anthropomorphizing AI chatbots, and the systemic changes needed to address educator burnout rather than relying on AI as a quick fix.

Tom shares his initial thoughts on ChatGPT and explains why he has adopted a cautious approach to AI implementation in K-12 classrooms. We consider the potential impact of AI on students' critical thinking skills and discuss best practices for teachers interested in using AI tools like AutoDraw.

Timestamps:

0:00 - Introduction

2:45 - Tom Mullaney's background in education and journey with AI

6:50 - Is AI being marketed as a solution to teacher burnout?

9:41 - Tom's initial thoughts on ChatGPT and his cautious approach to AI in K-12 education

15:49 - The importance of looking into AI companies' terms of service, age restrictions, and data privacy

20:25 - Eliza Effect and the dangers of anthropomorphizing AI chatbots

26:17 - Systemic changes needed to address teacher burnout instead of AI Band-Aids

30:03 - Is AI polluting its own training data through recycled outputs?

33:05 - Best practices and recommendations for teachers before using AI tools like AutoDraw

38:05 - Will AI help or hinder critical thinking skills in the classroom?

40:20 - Key AI experts to follow for insights into the technology and its implications

44:10 - Tom's edu kryptonite: The overly narrow focus of current EdTech conversations on AI

45:08 - Billboard message: "Predictions are not facts"

45:48 - Dream job: Professional wiffle ball player or full-time NY Knicks fan/blogger

49:09 - Closing thoughts and invitation for a part 2 with Tom Mullaney

--- Support this podcast: https://podcasters.spotify.com/pod/show/myedtechlife/support

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Transcript


[00:00:27] Fonz: Hello, everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this beautiful Monday morning. And that's right, we're here Monday morning. We are on spring break. So what do we do when we're on spring break? Get some rest?

Nah, that's not me. We go ahead and we podcast, and we definitely find the opportunities to bring you some amazing guests to continue the amazing conversations that are happening in the education space, to help us as educators continue to grow and be well informed of all the new technology that is out there.

And so I'm really excited about today's show and the amazing guest that we have this morning. But before we dive in, I definitely want to give each and every single one of you a big thank-you for your continued support. Thank you so much for following us on socials. Thank you so much for following us on YouTube.

As you know, our mission is to get to a thousand subscribers, and we're at about 280, so we're still some short. So hey, if you haven't jumped onto YouTube, please give us a thumbs up and subscribe; we would definitely appreciate it. And we definitely want to thank our sponsors: Lucid for Education, thank you so much, and Content Clips.

We definitely want to give a big shout out to GooseChase as well, and our newest sponsor, EduAid. So thank you so much for supporting and believing in what we're doing here, as we try to amplify educator, creator, professional, and teacher voices all in one space. Thank you so much. And we have a great conversation today.

We're going to continue our topic. As you know, over the past year and a half, and a little bit more, it's all been about AI, AI, and AI in the classroom, and certain things that maybe as educators we may not have thought about, because we tend to jump in and dive into the newest trends. And we want to make sure that we are keeping ourselves safe.

We're keeping our students safe. So today we're going to be talking about AI and whether it possibly threatens traditional teaching dynamics. And today we've got a great guest, and I will let him introduce himself, because if you're watching this live, you know him very well. He is out there; he's a Google Innovator as well.

You see him at a lot of conferences. He posts some amazing stuff. So Tom, how are you this morning?

[00:02:45] Tom: I am great Fonz. How are you doing?

[00:02:47] Fonz: I am doing wonderful. Thank you so much for hopping on, maybe on short notice. I know that you were on my friend Daniel's podcast, and you were talking with him, and Daniel is an amazing, amazing podcaster, a wonderful person.

As soon as I saw that you were on his show, I was like, hey, I've got to get Tom on my show too. So thank you so much for responding real quick and saying, yeah, I'm available.

[00:03:11] Tom: Yeah, absolutely. No problem. And before I talk about myself, I just want to say Ramadan Mubarak and Ramadan Kareem to all those who celebrate. And let's see, I can give you the quick story about my ed tech life and what brings me here today.

There's one story about my career, and then there's another story about my journey with AI. And I will say, I look at the title that you've chosen for this episode, and I'm a little curious about what we mean by traditional teaching dynamics. So maybe we'll establish that.

And then we can talk about whether or not it threatens whatever traditional teaching dynamics are. So my background is, let's see, I grew up on Long Island. It might as well have been Dawson's Creek for all the whiteness, privilege, and proximity to water that we had. And then I went into public relations.

I actually went to college at George Washington University. You know, forty-something me looks back and kind of rolls his eyes at eighteen-something me, but that's okay. And then a couple of years later, I decided I wanted to go into teaching and started doing some special education in the South Bronx. It was a real learning experience, and I met my wife, who's from the Philadelphia area.

We moved to the Philadelphia area, and that's where I spent most of my teaching career. I was a middle and high school special ed teacher, and then at Springfield Township High School (go Spartans, shout out to the Spartans) I transitioned to social studies. And there, before my last year, the principal came to me and said, Tom, you're teaching eight, nine, and ten next year.

So the dreaded three preps, and I was going to have to go between two buildings. And the principal also said, you're going to have Chromebooks for your eighth and ninth graders. And I immediately said, we are not going to not use these things. You know, no one's coming into my classroom and saying, well, there are the Chromebooks under the desks, right?

And so that really lit a spark and got me excited. We wound up moving down here to North Carolina to get out of the winter, and sure enough, we're moving right back up. So hey, anyone in the Delaware Valley want to connect later this month? I'll see you. And I was an ed tech coach; I did that for a little while.

I moved to San Francisco for a hot minute and worked in their district office. And since the onset of the pandemic, I've been consulting, working with schools, districts, and even ed tech companies to provide professional development. So that's kind of my story.

[00:05:51] Fonz: I love it. And that's wonderful because we all start off somewhere.

So, you know, I always love these introductions, Tom, just because I love for my audience members who listen to the show to make a connection with our guests. So thank you so much for sharing your journey in education and what it is that you're currently doing. And I know I've seen you, and I'm a longtime follower, especially after VIA20, where we were doing the Innovator Academy, and a longtime follower of your blog and work, probably since about 2018. The stuff that you put out there is great, always helpful to the community; you put out some amazing resources, how-to videos, and things of that sort. So we definitely appreciate what you do, because those resources are ones that I use not only for myself but also to share with the teachers in our district as well.

We try and learn all the new stuff that is coming out as far as technology is concerned. So thank you so much for being here, again, for this conversation.

[00:06:50] Tom: Hey Fonz, can I just interrupt you for a second? I'm sorry. So yes, I remember coaching in VIA20, and I know your show. I was actually watching it back this weekend, getting ready for my appearance on My EdTech Life, and you had Katie Fielding on. I mean, shout out to Katie.

This was 2021, of course, a great episode. And I actually dropped comments in the chat, you know, following Katie: oh hey, she's on My EdTech Life, let's check this out. And I will say, and I know we have to change, we have to evolve, but the intro that you had back in 2021... your intro now is cool, don't get me wrong, but you walking the city streets?

Oh, it's so dope. I don't know why; there are creative decisions that are made all the time. But I just wanted to say that I really thought that was so cool.

[00:07:41] Fonz: Oh, well, thank you. I appreciate the feedback. You know what, I'll probably bring that back; or actually, one thing I have been thinking about is redoing it and doing something different now. But you're absolutely right: you make little creative decisions and stuff, and I appreciate that feedback, Tom. That's awesome. And that was a great show with Katie too. All right, well, Tom, listen. I know that you have been very vocal, writing your blogs and sharing things, talking about AI, and I myself have been a very cautious advocate of AI since probably about November.

Or December. My last coursework, before I got into my dissertation, was about data privacy, and that made me pause a little bit as far as how to introduce AI into the education setting: obviously, terms of agreement, terms of service, data privacy, all of those things.

And then we did see, of course, a lot of AI coming into classrooms, because there are a lot of wonderful educators out there who really took to AI, adapted it, and brought it in to enhance learning. But there was never really a lot of research behind it. Like, what can we do?

What should we do? It was just kind of like, bam, here it is, just go with it and roll. And then it was almost like, let me just go ahead and do it now because it's easier to just get it done. Or, what's the term? Do it now and ask for forgiveness later.

But I've always been very cautious, as in, let's not get into that; I don't want anything harmful to happen. So walk us through your thought process a little bit. I would love to ask you: how can educators integrate AI into their teaching practices to enhance inclusivity and engagement?

So let's kind of start with that and get the ball rolling.

[00:09:41] Tom: All right. Well, do you mind if I just tell my story about AI, my journey, just to kind of set the table here? So ChatGPT dropped in late November of '22, and my initial response was kind of like, that's a chatbot.

You know, when you engage with a customer service chatbot, you want to get to a human ASAP. So I wasn't that impressed with it. And as people were saying, wow, how is it doing these things, I always kind of had the thought, well, it's just plagiarizing from its training data. I just didn't understand why this was so exciting. But early on, I remember hearing things about how this could potentially amplify bias, and I immediately said, all right, shut up and listen. I started reading things, and I started reading about AI itself. And I noticed it was incongruous: what I was learning about AI and the approaches in K-12 were very different, because of what all the experts were saying.

And I'll tell you, before all this I didn't know what a computational linguist was. But just reading these folks talk about it, the last thing they're saying is, hey, put this in front of K-12 children. That's like the last thing they're saying. And so in May, I went to a conference with some other ed tech people, district office people, real heads.

Right. And actually, around that same week, I was listening to your show, where you had Adam Juarez on (shout out Adam, really smart guy), and he said it would be like the Model T. And that just did not square with what I'm hearing from the experts on AI. The Model T is such a huge innovation.

It's such a huge step forward. That's a high bar, and this is just not clearing that bar. You know, I hate to disagree with people, but that's this guy. So anyway, I brought this up at this conference, and it was just met with... it was not well received. Let's put it that way.

So I just decided, all right, I'm just going to keep my mouth shut. I went to ISTE, and I was doing something there where I knew there would be sessions on AI, so I just said, hey, that's my lunch break, you know? And so I just kind of avoided it. But the good thing about that is that sometimes, when you wait to say something, you just keep learning. You just keep reading, you just keep catching up. And so I didn't say anything publicly about it until my first blog post on February 20th, 2024. And I think the benefit of that is that I really took a long time to listen. I just wanted to share that. And you were talking about how it could affect engagement in classrooms. Is that right?

[00:12:35] Fonz: Well, yeah. But you know what, let's just continue with that story, because it's very reminiscent of my own. I've been very quiet too, but now I'm picking up a lot more and learning, and like I said, being a very cautious advocate. And it just happened incidentally, because I'm one of those early adopters.

I'm a speedboat: you give me something, and I want to share it, put it up on Twitter, and be one of the first ones to use it, and so on. But after writing that paper and doing research, and posting that blog post here on my website, I was like, ooh, there are some things here that I never considered about my whole ed tech life, not the podcast, but being in the classroom or working with teachers. It's like, hey, I go to conferences, and someone just comes in and says, oh, don't worry, just sign on with your personal account, or no, don't worry, just sign on with this, don't worry about the terms of service.

Just check yes, and you're good. And now I'm very cautious, because to me the biggest thing is the age restrictions. And I've been very vocal; I've even asked companies, hey, anybody out there who wants to answer, I want to do a show with a panel of companies.

To just say, hey, walk us through your terms of service, walk us through that privacy aspect. How do we know that the data is safe? Because the more I read and the more research I do, when I'm looking at the terms of service for a lot of companies out there, and I'm talking about very popular ones right now.

One of the things that scares me the most is that it still says, we still plug into OpenAI's API and we still give them information. And then they say, well, as a district, if anything were to happen, we're not held liable; you'll have to deal with a third party. And I'm like, wait a minute.

So you're passing the buck; where's the responsibility there? And so those are some of the things that kind of worry me, along with what you were talking about, the bias, because it's only trained on a certain data set, and that data set has already been injected with a lot of bias. So how do we know that what our teachers are receiving, in the form of, you know, "create a story for me," is going to be acceptable or not acceptable? And who's judging and policing that? The other thing you were talking about, I forget the one point you made, because I'm just so excited about this, but let's continue with what you're seeing. Walk us through that blog post you wrote about bias; can you expand on it as well?

Just for our educators that are out there. And again, educators, this is not to say that we're anti-AI; we're just asking you to be cautious advocates and really look into what it is that you're using and the outputs you're getting. So Tom, can you tell us a little bit about that blog, maybe some of the feedback you got from it, and what it's leading into?

[00:15:49] Tom: Okay. So one, I would not describe myself as a cautious advocate. I would say I'm a critic, a critical thinker about it. My whole point is that we should be learning about what AI is from the AI experts before we proceed. The blog post you're talking about, the first one I did, the one that's really gotten big traction, was called "Pedagogy and the AI Guest Speaker, or What Teachers Should Know About the Eliza Effect." Subsequently, I did one about the 100 percent ethical AI app, which is AutoDraw, and I go into why it's ethical. I also go into Google's history with AI, which is a little dicey.

And then the third one I did was "Follow These Experts," because I'm hearing all this stuff about AI in K-12, all this intro to AI, and nothing like the Stochastic Parrots paper, which is like the seminal paper about how these large language models work and their harms. I'm not hearing a word about it. Today I have one coming out, if you're watching this, at about four o'clock Eastern, called "AI Vocabulary for Teachers." And again, I'm just quoting a lot of PhDs and smart people, and it's going to ruffle some feathers for sure. It'll push your thinking. Now, as far as the guest speaker, let's talk about that real quick, because you talked about how the terms of service, especially for ChatGPT or Google Gemini, are very restrictive: below 13, they're not allowed to be used, right?

And so a big strategy that's out there right now is the AI guest speaker. Now, in my post I talk about the reasons that's problematic: let's say we're talking about the deceased, or we're talking about people from marginalized communities, and you have this AI industry whose backers are mostly white and, until recently, exclusively male. You look at the board of directors of OpenAI: they finally put a couple of women on there this weekend, but before that, it was exclusively male. And one of them still on there, Lawrence Summers, former Treasury Secretary and former president of Harvard, said out loud, publicly, that men have more aptitude for science.

So these are some things that really concern me. Now, as far as the AI guest speaker, it's suggested: do it with your fourth graders, do it with your fifth graders. And I understand there's no data privacy issue if a teacher's doing it: hey kids, what are your questions, let me ask ChatGPT. At the same time, the terms of service say not for use by that age. So that doesn't sit right with me. Is this a tool we should be using with children?

[00:18:30] Fonz: Yeah, no, I agree with you 100 percent on that aspect, because obviously there are platforms out there that are already putting things in children's hands, and this is student-facing.

And I'm like, how do you get through the terms of service if this is what it's saying specifically? And I go and read through it very cautiously, but of course, with the language they use, you have to really dig in deep and go through the subsections and so on. But I agree with you; one of the things that's always said is, oh, it's okay, we use it just as long as you're driving it, just ask the students the questions. And then I've seen other platforms out there where it's like, hey, even a kindergartner can use this, and they're really just chatting with the chatbot. And so, like you mentioned, Adam said, oh, this is going to be like the Model T.

I had a conversation yesterday, or the day before, with a gentleman who has a podcast from France, Moses Sholly, I believe the last name is; forgive me if I mispronounce it. And I was saying the same thing: I haven't really seen a lot of augmentation or redefinition. It just seems like, as a teacher, I'm faster at creating worksheets, faster at creating ten-question quizzes, and just translating and changing a Lexile level for a reading. Which is great on the Lexile stuff, don't get me wrong; coming from a small school district, sometimes getting those resources can be very difficult.

So maybe changing that Lexile level, I get it. But what is the innovation? What is the redefinition and the augmentation? What is it that we're doing that we haven't done yet? So I agree with you on that comment too. But on that Eliza effect, can you tell me a little bit more about why that would be something very dangerous within the classroom, especially working with children?

[00:20:25] Tom: Yes. So I neglected to mention the whole point of that post. With Eliza: Joseph Weizenbaum, a computer scientist in the 1960s, created one of the very first chatbots. It was a therapy chatbot, and as he watched folks interact with it, he noticed that they were attributing human characteristics to it.

He was horrified by this. Later, researchers coined the term "the Eliza effect": the idea that humans have a tendency to attribute human characteristics to text-generating computers. So this gets real dicey as far as anthropomorphization; I almost pronounced it correctly. My friend Stacy Lovedahl in North Carolina, I saw her at a conference last week, and she just pronounced that word gangbusters. I was like, how do you do that? Anyway, it's very dicey. I had a quote in that blog post from Colin Fraser, a computer scientist at Meta. And he said they're designed to make you think there's someone there who's not; I may have butchered the quote. But I want you to think about it this way.

When you interact with ChatGPT, when you interact with Gemini, those two programs... and by the way, I think with kids we have to be so, so careful with the words we say; even as I'm stumbling over them, I'll say something and then realize I'm making a mistake. We have to avoid this personification of chatbots. The designers have chosen to have them generate text that reads "I think" or "I can't answer that for you." There's no "I" there. There's no person there. And so my point was, students are not only confused about whether it's a person or not, they're also intuiting judgment and credibility.

And then you add that to the inaccuracy and the bias, and I think, pedagogically, that's a real big problem. Now, you mentioned adjusting Lexile scores and adjusting text. First thing, if you do that, you have to edit it with a fine-tooth comb, because you don't know what's going to come out. And the other thing is that it strikes me as a technological response to a systemic issue, the systemic issue being lack of funds and teachers being overworked. And that does not sit right with me either.

[00:22:46] Fonz: No, I agree with you, and I just kind of wanted to amplify that a little bit. That's one of the things that a lot of the experts I'm following, people with PhDs who are working on this, are saying: how dangerous this is, and how we should be very cautious and very critical about what we're using. But one of the things they're saying is that these companies are preying on teacher burnout. They're preying on that feeling of, I'm going to make you more efficient, I'm going to help you, I'm going to put you at ease, and so on. And I had a guest, Kip Glazier, who's amazing. And if you don't follow Dr.

Kip Glazier, she's wonderful. And she came on here, and I don't know if she coined the phrase, but she used the term "tech chauvinism": the tech companies saying, I can do this better, I can teach better than you, in the sense of, you just give me your prompt and I'm going to give you something better than you could ever even think of.

And, you know, it's that feeling of, okay, I'm going to give this machine, this chatbot, my work. We say that robots will never take your jobs, chatbots never will, but we are outsourcing our work to them in that sense. And it all comes down to that preying on, you know, giving you time back, we're going to help your burnout. But like you mentioned, it is a systemic change that needs to take place, from the very top all the way to the very bottom: dealing with the funding, dealing with more teacher professional development, taking some things off of their plate instead of continually adding more and more and more.

My biggest fear with that is the bad habits, and I tell my teachers I'm very honest and concerned about this. The bad habit that can come up with teachers using AI is the fact that I'm going to believe my very first output and I'm done. I can walk in Monday morning, and in 30 seconds I have a ten-problem worksheet that I'm probably not even going to look at.

I'm just going to go ahead and make copies of it or share it with my students, whatever it is. And so with that bad habit, like you mentioned, it's important that we go through these things with a fine-tooth comb to make sure there isn't any erroneous information. But also, I fear that the more efficient they see you be, the more work they're going to give you anyway. So those are some of the things that concern me, and it ties to what you were saying: it has to be a systemic change to relieve that burnout, and companies need to stop preying on educators and selling that. Because one of the things that bothers me also is that equity and access piece, using a lot of buzzwords: hey, pay us $2,000 and we'll open this up for you for half a year.

But next year, it's going to cost you $16,000 per school. Well, a school district like mine can't afford that, but the neighboring school district might. And now where's the access, and where's the equity, if this is what you're championing? Why can't I get access to it like everybody else does? So I don't know, I might have gone off on a little tangent, but what do you see, and what are your thoughts?

Do you feel the same way, that maybe companies are really using that burnout, the whole teacher situation right now, to market and sell themselves in that way?

[00:26:17] Tom: Well, this is being marketed as a huge time saver, and to me, that is maybe a little less pernicious than it's being marketed as creativity.

That, to me, is where I think we have a real concern. Now, teachers may be wondering, Hey, I've used this before. It works great. What's the issue? It's literally just predicting the next string of characters from its data set.
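Tom's "predicting the next string of characters" point can be made concrete with a toy sketch. The following is only an illustration of the statistical principle, not how production systems like ChatGPT actually work (those use neural networks trained over tokens, not simple counts, and everything in this snippet is invented for the example): it counts, in some training text, which character most often follows each short context, then generates by always emitting that most frequent character.

```python
from collections import Counter, defaultdict

def train(text, context_len=3):
    """Count which character follows each context of length context_len."""
    counts = defaultdict(Counter)
    for i in range(len(text) - context_len):
        context = text[i:i + context_len]
        counts[context][text[i + context_len]] += 1
    return counts

def generate(counts, seed, length=20):
    """Greedily emit the most frequent next character for the current context."""
    out = seed
    context_len = len(seed)
    for _ in range(length):
        context = out[-context_len:]
        if context not in counts:  # unseen context: nothing to predict
            break
        out += counts[context].most_common(1)[0][0]
    return out

model = train("the cat sat on the mat. the cat sat on the hat. " * 3)
print(generate(model, "the"))
```

There is no understanding anywhere in this process, only frequencies from the training data; greedy generation like this also tends to fall into repetitive loops, which is one reason real systems sample rather than always taking the single most likely continuation.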

It has no consciousness. It has no connection to reality. It has no intelligence whatsoever. So the idea that it's an information source... I was listening to Dr. Bender the other day, talking about how people are very excited that, oh, this will summarize meetings for me. Okay, so let's say we use it to summarize meetings. Whose voices is it missing? People stop attending because they're relying on the summary, and that summary is capturing some voices and neglecting others. Whoever does the note-taking to summarize a meeting is making decisions, and now we have ChatGPT making those decisions. So whose voices are excluded when we do that?

So, yeah, as far as the actual companies, it's really hard for me to say, because, like I said, my research has just been on what people are saying about what AI really is. "AI," and I'll put that in quotes; that's in my blog post coming out today, the AI vocabulary one.

It's a marketing term; it's not based on any science or anything. So yeah, I see what you're saying, that there are systemic problems, and all these companies are coming up. I will say, Ken Shelton (I don't know if Ken Shelton has his doctorate, but he's a brilliant guy, a really smart guy) was on a podcast, and I'm forgetting the name of the podcast; it just dropped today. And he made a point: when you're a school and you purchase an AI app, look at that company's board of directors and the diversity of their lived experiences, ask to look at the data, and look at the diversity and lived experiences of their chief officers, right?

Because you have a board of directors and your chief officers; look at that. The other thing I'll just say real quick, for every IT person watching this: access through your firewall is a privilege. And so if you look at a company, say OpenAI, and you see theft from artists, and you see Kenyan workers exploited, and you see the problematic practices of their board, and you say, hey, that just doesn't fit with our mission and our goals, then you can have a conversation with students.

It's not that we don't trust you; it's that we don't trust them, right? So, like I said, access through your firewall is a privilege, and "we're not going to ban it because we want to be innovative" is not a sufficient rationale for your decision about whether or not you ban ChatGPT, right?

[00:29:26] Fonz: And, uh, you know, I like what you said about how important it is to look at the board, look at its makeup, and look at all of those things, the lived experiences.

Because oftentimes we overlook that. It's kind of like, hey, we just want the tool, we want to make sure that we don't fall behind. And to me it's like, hey, this is a marathon, this is not a sprint, in that sense too. And I had a guest, it was Evan Harris, who mentioned the exploitation of the workers, you know, the moderators that are out there getting paid cents on the dollar just to be there moderating and doing all of this, to be...

[00:30:03] Tom: Traumatized, looking at the worst things on the whole Internet. I mean, can you imagine that?

[00:30:09] Fonz: Yeah, absolutely. So there's that aspect that people don't think about, what happens on the back end. The other thing for me is that if a platform does not openly say what model it is that they're using, you know, to me, I'm very cautious with that.

You know, usually I always like to ask, okay, I want to see what model you're using. And of course, with the data, like Ken Shelton says, and all of that. So those are some great tips for all the leaders that are out there, all the CTOs, the superintendents, stakeholders, you really need to have those meetings.

And as uncomfortable as it may be, you know, to get in there and get in deep, you want to make sure that you're doing right, not only by yourself but by your student population. I mean, don't think of it as just, oh, I'm going to invest in this, and all my students are going to be future ready, and we're good to go.

We've already checked that box, let's move on. But no, I mean, there's a lot more to it, and there are those intricacies. Because like you mentioned, you know, AI to me, it's a function, it's just math, really, and like you said, a word predictor. And I don't know if you've seen that Netflix documentary, Coded Bias, about really how it started.

Yeah, you should see it. It started with a group at Dartmouth, all white males, saying, hey, this is what we're going to do. And it was just fancy mathematics, really, what it is. And then of course it's come this way and this far. So those are just some of the things to really look into, because again, the bias is there.

Tom, one other thing that I wanted to share along these lines is that I was reading the UNESCO report yesterday, and I'll link that in the show notes as well. And they were saying that there's going to come a point in time where, with all the outputs that we're getting, people will say, okay, I got this output, or my ChatGPT summary of my meeting, and then, oh, I'll just take the summary.

And then I'm going to pop it back in there and just go ahead and get a summary of the summary. And so you're simply recycling all that information, but you've got to keep in mind that you're training the model on that. And the model is really a black box; it doesn't really open up everything, you know. Like you say, they have control over where the limitations are, but then it's just going to start regurgitating everything that you're continually putting in.

That's really going to pollute a lot of the data that is in there already, too, because it just continues to piece these together, like you said. And so there's a lot of things to think about and consider, but I know that we want to get our students future ready. So let's maybe shift the conversation a little bit into what teachers can do. What are your recommendations?

I know you mentioned AutoDraw, but what are your best tips that you can share with teachers on what to think about before they use AutoDraw?

[00:33:05] Tom: Okay, so a couple things. First of all, I do want to address this because you brought this up a number of times, how you and people in our community, our spaces, our circles love to just dive right in, go full bore.

And I've been blogging for a while; my first blog post about technology was in August 2014. And in almost ten years, I never blogged a negative. I never blogged, hey, don't do this, or be concerned. Sure, with digital escape rooms, I've said publicly and in sessions, hey, don't gamify the Holocaust and slavery, but I've never done an entire blog post about it.

So that Eliza Effect blog post was the very first time I blogged a negative, where I said, hey, I have concerns about this approach. I don't like doing that. Um, and then my last post, about the experts, that's all positive. It's just resources, it's information. But this is all anyone's talking about right now.

I will say, you know, I've partnered with Figma a lot. I've worked with them. I love their product FigJam. So I'm not an unbiased person there, because I partnered with them. I love that stuff. I love a lot of the Google Docs innovations we've seen in the last two, three years. And I've just never had anyone with a Ph.D. say publicly that collaborative whiteboarding or word processing can amplify bias.

I'm sure it can, especially used in the wrong ways. So that's my point: once I heard, hey, this might amplify bias, that was like, okay, then I'm not going to be hyping using this with children.

So what I would say to teachers now is that this is your learning time. This is your time to get in there and understand what this is. Understand it is not superhuman, it is not intelligent. Understand how it amplifies bias. Understand how and why it's inaccurate. I talked about it being a stochastic model that's just pulling text from its data set.
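As an editor's aside for readers: the "stochastic model pulling text from its data set" idea can be sketched with a toy bigram predictor. This is a deliberately tiny, hypothetical illustration (the corpus and function names are made up, and real large language models use neural networks over billions of tokens), but the core move of sampling a likely continuation from training data, with no understanding involved, is the same.

```python
import random
from collections import defaultdict

# A toy "next-word predictor": learn which words follow which
# in a tiny corpus, then sample continuations from those counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which words follow each word in the "training data."
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def predict_next(word):
    """Sample a continuation purely from observed frequencies.
    There is no understanding here, only counts."""
    options = next_words.get(word)
    if not options:
        return None  # word never appears with a successor in the corpus
    return random.choice(options)

# "the" was followed by cat, mat, cat, fish in the corpus, so any of
# those may be sampled, regardless of meaning or truth.
print(predict_next("the"))
```

The output is plausible-looking text chosen by probability, not by any model of reality, which is exactly the distinction Tom is drawing.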

And so to me, my advice to teachers would be to learn, and engage your kids about the ethics of it. Oh, absolutely, all day and night. Talk with them about it if you're doing current events, if you're doing critical thinking. Absolutely dive in and give them diverse perspectives. Don't just put up what Sal Khan and Sam Altman said about AI.

Come on. These people are biased; they have a vested interest in this. So get in there and do that with kids. Talk about the harms; the harms should be the whole conversation with kids, right? Um, so that's what I would say: right now, this is a great opportunity for critical thinking.

I can talk, Fonz, about some apps that use AI and that I think are totally fine to use, if you want to do that. But right now, I would not run into teachers' rooms and say you have to use this large language model or you have to use this image generator. We're just not at that point yet.

[00:35:52] Fonz: Excellent. Yeah. And, you know, I agree with everything that you said there, and I wanted to add, you know, just also the scary part about it. Like you said, people that have vested interests in this. And I wanted to bring up one question that I love, you know, as far as personalized learning.

And kind of your response there, because I know we talked a little bit and saw a little bit of your show notes. But one of the things that I wanted to ask you is, I don't know if you had seen it, I think it's Jensen Huang from NVIDIA, and even Sam Altman of OpenAI, and we're talking about critical thinking, which you mentioned and which I think is very important, but they mentioned, it's like, ah, you don't need to code anymore.

You don't need to learn coding anymore; that's going to be gone. We already have the technology that can do that, and you won't have to worry about it. So to me, that kind of took me aback a little bit, and I'm bringing that up just because you mentioned critical thinking. And to me, that's a big question; when I went to TCEA and a lot of other conferences, people were thinking, how is this going to affect critical thinking in the future?

And so on. So for me, with coding, there's definitely a lot of critical thinking there, and a lot of thought process, and a lot of creativity, having to learn a different language, you know, and then being able to code and put things together to program and so on. But then you have this very powerful company, NVIDIA, and you have Sam Altman from OpenAI stating, eh, you know, don't worry about it.

You don't need to learn coding anymore, and so on. So I'm thinking, oh my gosh, okay, so they're telling coders and people that may be in college, hey, pretty much we're not going to need you around anymore. You know, we've got OpenAI, and now all I need to do is just speak it into existence.

And it's going to work out that way, because I can program it just via speech-to-text and it's going to do everything. So now, going back to that critical thinking component, what are your thoughts on critical thinking and the use of this technology, artificial intelligence, in classrooms?

Is it something that's going to help, or is it something that's going to hinder the critical thinking aspect?

[00:38:05] Tom: Okay. So, again, my alarm bells go right off when the people who have a vested interest in this technology say it's so good that it will replace coding, right?

Where is the coder? Like, can we get an expert coder to weigh in on that? You know, you see articles about, oh, it's going to be an amazing therapist. And by the way, for any profession where you have to get a professional license, such as teacher, lawyer, therapist, if AI is doing that, the people who are behind it are literally breaking the law, and you should report them to the local board or, you know, the bar or whatever.

A lot of what I hear is: enter a prompt into ChatGPT and then evaluate it. That's critical thinking. And again, I say, well, what about the Eliza Effect, where we have a tendency to believe it? Students aren't exempt from that. And the other thing I would just say, and I put it in that Eliza Effect blog post: do you know anyone who thinks a falsehood that they read on the internet is true?

So why are we so sure that analyzing responses from ChatGPT will build critical thinking? And the other thing I would say about that is, what are you already doing about online reasoning, and is it working? If it does work, then wouldn't there be transfer? And if it doesn't, then, one, revisit it, and two, why would it then work when we're analyzing responses from ChatGPT?

So that's what really comes up for me with that stuff.

[00:39:36] Fonz: Excellent. So, you know, this has been a great conversation, and we could definitely keep going on and on. But I want to ask you now: we talked about the Eliza Effect, and then you talked a little bit about your blog post on, you know, follow these AI experts.

So right now that we have you live, if you can just share a couple of those experts and a little bit about the work that they're doing. And this is obviously for our audience members, too, that are going to be catching this either live right now or on the replay. I know I will be linking your blog there, but I just want to make sure that you get a little bit of time to share what you've found through your research and why it is that we should be following these experts and reading their stuff.

[00:40:20] Tom: All right, I'm going to do a quick sampling. So the Stochastic Parrots paper, again, a seminal work, got two of these experts fired from Google, because Google didn't care for what they had to say. One is Timnit Gebru; as I said in the blog post, when you Google the words "truth to power," you should see her picture. That should be the very first result.

That should be the very first result. Uh, Timnit Gebru. Margaret Mitchell and then Dr. Emily Bender, Dr. Emily Bender does a lot of podcasts. She's on a lot of things and she really is good about breaking it down. She's again, she's a computational linguist. And I would initially think I couldn't hang with this person, but I think I could because she's really good about explaining it.

So those three, Gebru, Mitchell, and Bender, were three of the co-authors of the Stochastic Parrots paper. Two big issues they identified were the systemic bias and the environmental racism. That's a big one: not only are there tremendous environmental impacts, but environmental racism.

And again, you have to read the paper; we don't have time for that. Then I would say Dr. Joy Buolamwini. She does a lot of great work about AI and bias. She has a great video I put in the post, "AI, Ain't I a Woman?", where she performs this great poem and talks about all these instances where AI is labeling Black women as, you know, animals, apes, monkeys. She's showing that in real time, and it's just like, oh my God.

And then the third person I'll mention is Dr. Gary Marcus, and his big thing is he's very good at debunking hype. If you look at his Twitter feed, you know, a lot of people think that this stuff is going to become sentient and be smarter than humans.

And he just, you know, literally documents examples where, no, we're nowhere near what's called AGI or the singularity. We're nowhere near that, and really, it's theoretical. I personally don't think there's any evidence that that would ever happen. But he looks at it and just says, look at all these things it can't do.

And these algorithms, these mathematical predictions of the next characters, are not intelligence. That's not values. That's not any of that stuff. Um, yeah. And when Sora came out and everyone was just blown away by Sora, he said, look, it's a 20-second clip. What are they not showing us? There's no object permanence.

And, you know, its ants have four legs. This is not going to replace the movies and television you watch. So those are five there; the post has more. Dan Meyer, he's actually K-12, the only K-12 person I put in there, because again, my point is, let's learn about it before we put it in classrooms.

Um, but he for a while has been doing amazing blog posts. He's a math guy, and he's been doing some amazing blog posts really about large language models and AI. So that's a quick sampling; I hope that was quick.

[00:43:17] Fonz: Yeah, no, no, that was great, Tom. Thank you so much. I really appreciate this conversation, Tom.

Thank you. And you know, thank you for all your shares and the work that you're doing, and obviously through the blog posts, too, which I will definitely be linking in our show notes so all our audience members can go ahead and make sure they check that out. And please make sure that you connect with Tom on all socials.

You'll definitely find those in the show notes, too. Well, Tom, before we end, I always love to end the show with the last three questions, and I always send them to my guests in the calendar invite. So my first question to you is as follows: we know that every superhero has a weakness or a pain point.

For Superman, his weakness was kryptonite. So I want to ask you, Tom, what would you say is your current edu kryptonite?

[00:44:10] Tom: Oh, um, I mean, to be perfectly honest, the fact that the conversation is so focused on AI.

[00:44:21] Tom: You know, I don't really like being the "hey, don't do this" person. That's not what I like to do. I like to have fun and do cool things. That's why I love FigJam so much, right?

Yeah, I think that might be it. You know, I've struggled with this for a long time now, that this is what we're all talking about, and I just feel like there are so many more important, fun, engaging things we could talk about. So I guess I'll just say that: the conversation, the discourse, is so focused on one thing, and one thing that I think is kind of harmful. But anyway, yeah, we'll say that.

[00:45:00] Fonz: That's good. That's a great answer. Thank you so much. All right, question number two: if you could have a billboard with anything on it, what would it be, and why?

[00:45:08] Tom: Here's what I'm gonna say. It's in my blog post later today.

Predictions are not facts. So just those words: predictions are not facts. A lot of the arguments for going full speed ahead with AI in K-12 education hinge on predictions. They might come true, they might not. Predictions are not facts. Base your decisions around AI on the facts.

What are the facts, not the predictions?

[00:45:32] Fonz: I love that. That's very powerful. Thank you so much for sharing that one. All right, Tom. And the last question is, if you could turn one of your favorite hobbies or favorite activities into a full time job, what would it be?

[00:45:48] Tom: Okay. And by the way, before I answer that, you know, you asked me about the feedback I've gotten for this and I never, I never answered that.

I will say it's been mostly positive. People have been very supportive, and Fonz, when I saw your comment about my appearance on Daniel's podcast, and you said, oh, Tom's been doing great stuff, that meant the world to me. I know we're all going to agree and disagree, and I'm going to put things out there that folks can disagree with.

Uh, but just that, that was awesome. So thank you, Fonz. Um, well, I'd say two. One, if I could turn my Knicks fandom into a full-time career, that would be pretty amazing, to be able to go to the games and blog about that, create content about that. That would be awesome. But the other, I'll just say, is one that I haven't been able to engage in for a long time: one of my favorite YouTube channels is Major League Wiffle Ball.

They started off as, like, middle schoolers, and now they're adults, in Brighton, Michigan, and they have a whole thing. So, oh, my dream in life: if I could wake up tomorrow doing anything else, wiffle ball would be a professional sport, and I'd be, let's say, the Mariano Rivera, although I'd be happy to be a middle reliever, too.

To me, the thought of getting warmed up in the bullpen of a professional wiffle ball game... and I know I'm in my forties, like, I'm not, you know, but oh, that would just be amazing.

[00:47:13] Fonz: Hey, but you know, there's longevity in that, too. I mean, wiffle ball, those games are intense, you know. Sometimes I'll go down this rabbit hole with just the amazing pitches.

And I'm like, how do you hit that? How did you throw that? It's just so much fun. I don't know, it brings out the child in me. I remember growing up, doing that with my dad, too, playing wiffle ball, getting a bat, and just going out there, having fun and enjoying it.

The other one, you know, with the Knicks, I can definitely see that, too. I can imagine being a paid fan, in that sense, going to every single game and every activity, writing and blogging about it and so on. Yeah, I can definitely see that. That'd be great. Well, Tom, thank you so much again for taking your time on this wonderful Monday.

You know, I just started spring break, and now I'm energized for the week. I definitely have a lot of live shows coming up, but thank you so much for getting this started for me, and for your shares and the work that you're continuing to do. And, you know, obviously, like you mentioned, many times we will say things that people may agree with.

Some people may not agree, but we just need to have those conversations, bring things up, and talk about it, have that discourse. That's important. So thank you for what you're doing through your blogs, obviously, too. And thank you also for being on Daniel's podcast, because that's great.

Daniel's got a wonderful and amazing audience, and thank you so much for being here in this space with my audience, which is pretty much the same circle that you and I run in. So thank you so much for what you're doing, man. Keep doing it; I appreciate it, and I look forward to more blogs.

And this is an open invite for you as well. If you would love to come back for a part two, or there's something where you say, Fonz, you know what, there's something that I definitely want to share, or just to continue the conversation, you always have an open invite. So please feel free to reach out at any time, and we'll definitely make it happen.

So thank you, Tom.

[00:49:09] Tom: Thank you, Fonz. One, thank you for having me, and thank you for what you're doing. That's so great, and I think you're helping to interject some critical thinking into this discourse, so I could not be more grateful. So thanks again. And yeah, down the road, like, give it a few months.

We should definitely reconnect and just kind of see where we're at with this. So...

[00:49:30] Fonz: Absolutely. And for all our audience members that joined us live: I see the numbers here, the upticks, you know, on LinkedIn, on YouTube, on Twitter, wherever it is that you're joining us from.

Thank you so much. If you're not following us on those platforms that you're viewing us on, please make sure that you give us a thumbs up, subscribe, and follow. Make sure you check out our website at myedtech.life, where you can find this amazing episode and the other 268 wonderful episodes with educators, creators, professionals, and founders.

You know, we've got a little bit of everything for you, and I promise that you will leave with some knowledge nuggets that you can sprinkle onto what you are already doing great. So please make sure that you check out our website. And again, guys, help us out with our goal to get to 1,000 subscribers on our YouTube channel.

Please feel free to jump over to YouTube, give us that thumbs up, and subscribe as well. So thank you all, as always, and my friends, until next time, don't forget: stay techie.


Tom Mullaney

Consultant

Tom Mullaney (he/him) is a former teacher who uses his Special Education and Instructional Design background to help teachers design inclusive lessons with creativity, collaboration, and fun. Tom’s public education experience includes Special Education, Social Studies, educational technology coaching, and digital design. He is an Adobe for Education Creative Educator Innovator and Google for Education Certified Innovator and Trainer who has spoken at national conferences including SXSW EDU, the National Council for the Social Studies, and ISTE.