Episode 288: Michael Copass

Aug. 12, 2024

In this episode of My EdTech Life, I welcome Michael Copass, an experienced science teacher and AI education advocate. We discuss the rapid integration of AI in education, its potential benefits, and the critical need for responsible implementation.

0:00 - Introduction and welcome

1:31 - Michael's background in education and initial excitement about AI

4:54 - The need for AI literacy and responsible implementation in schools

11:51 - Discussion on AI as a "humble classroom servant"

15:36 - Concerns about AI dependency and the importance of fact-checking

19:46 - The evolving role of teachers in an AI-integrated classroom

24:31 - The need for AI regulation and transparency in education

28:46 - Emotional deception by AI and its potential impact on students

35:37 - The importance of maintaining human interaction skills

40:57 - Dangers of AI avatars and the need for parental awareness

44:32 - Reflecting on Michael's AI in education framework

46:47 - The vision for a transparent, education-focused AI platform

48:51 - Rapid-fire questions and conclusion

Michael shares his journey from initial AI enthusiasm to advocating for a more cautious, well-regulated approach. He emphasizes the importance of AI literacy, maintaining critical thinking skills, and protecting students from potential emotional manipulation by AI. The conversation highlights the need for transparency in AI education tools and the crucial role of human interaction in learning.

Don't forget to like, subscribe, and visit our website at www.myedtech.life for more insightful conversations on the future of education and technology!

--- Support this podcast: https://podcasters.spotify.com/pod/show/myedtechlife/support

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

πŸŽ™οΈ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Transcript

Safeguarding Students From AI Risks with Michael Copass

[00:00:24] Fonz: Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this beautiful day, wherever it is that you may be joining us from around the world. We thank you as always for all of your support. We appreciate all the likes, the shares, the follows.

Thank you so much for engaging with our content and all our videos on social media, and thank you to all our new subscribers on YouTube. We really appreciate all that support. As you know, we do what we do for you, to bring you some amazing conversations with amazing educators,

creators, professionals, and really just a huge variety of guests from the education landscape. And today is no different. Today I am excited to welcome to the show Michael Copass, who is somebody that I have been following and engaging with on LinkedIn, and we have found ourselves in a very similar circle of friends and commenters.

And of course, just putting out some great content and ideas surrounding AI in education. So Michael, how are you doing this afternoon?

[00:01:31] Michael: Excellent. I'm really honored to be on the show. I've been following your work and your podcast, and this is a huge service to the community, what you do. You know, in medieval times, universities were the store of knowledge; you stored up this knowledge.

You're like a university on your own. I appreciate what you're doing.

[00:01:50] Fonz: Well, thank you so much, Michael. I really appreciate it. And I also appreciate what you're doing on LinkedIn. As you know, we're continually commenting on there, sticking to the news, seeing what's out there and all the developments, and sharing our voice: our concerns, some of the things that we're doing,

things we wish would be done differently, and so on, amongst the great circle of friends that we have in common through LinkedIn as well. So I'm really excited that you're here to give your perspective, not only as an educator, but also as a parent. And then, of course, I definitely want to talk about this great write-up that you have, a framework that you yourself created, which I find very good and to the point.

And we'll definitely talk a little bit about that. But before we get into that, I would love for you to give our audience a brief introduction and your context within the education space.

[00:02:53] Michael: Thank you. Well, so who am I? I am a teacher with seven years' experience teaching science: biology, physics.

Prior to that, I was a bench scientist. I worked in academia, so I have papers published, but I realized working with young people is where I wanted to be. That is my mission. I want to help others, and I want to help teens. I want to help them learn and develop. I did co-parent a pair of nephews, but I'm not a parent of my own children.

But I drove the minivan carpool, strapped in the car seat. So why am I concerned? I come from a place of wanting to help kids, defend them, protect them. It's what we do in the classroom; it's the foundation. If they don't feel safe physically and safe emotionally, they're not ready to access the curriculum.

So here we are, all teaching our own curriculum, and suddenly artificial intelligence comes along in the form of ChatGPT. And it kind of bowled me over, and I thought, is this going to be the wave of the future? What's with it? I didn't even know there was an app you could use in a browser until April 2023.

I found out about it on a Friday. On Monday, we were using it in class, and I realized that there's great potential in artificial intelligence. I used it in physics class to teach a part of quantum physics that wasn't in our curriculum; it's too advanced for high school. And through prompting with each other, pairing off as prompter and scribe, the kids, and this is a Title I high school, got far enough that when they did the assessment, in some cases they exceeded my knowledge base.

I had to go study on a Saturday morning to know enough to grade their work. They passed me in the race, and I thought, my God, this is incredible. We've got to develop this further. So that was my introduction: I became a cheerleader. I was very excited about this technology.

[00:04:54] Fonz: And you know, a lot of educators also became very excited about it. Others, like myself, starting in 2022 when it first rolled out, going through it and then doing a little bit of research, I'll be honest with you, I just kind of paused a little bit and hit the brakes.

But of course, I've always been trying to reconcile both sides and see the good and the bad. And obviously we do see that there is some great potential in it, but right now, mostly, I'm just concerned. And I think you are too, judging by a lot of the LinkedIn posts we make and the way that we interact.

A lot of it has been talking about student safety through data, but we also want to hit that component of the parent: full transparency with parents, which is a little bit of what you hit on in your framework. So I want to talk about that. First of all, before we get into some questions about the framework, I want to ask: what events led up to you creating this framework? What was it that you were seeing? What was it that you might've been concerned about that led to that moment?

[00:06:01] Michael: This follows from the story I told. I wanted to tell the initial story because I'm not a lunatic about technology.

I embrace it. I was all in with the chips. We were sending kids to other teachers to tell them what we did so they could advance with their classes, like ambassadors. But then I got further into, how do I really want this to work? What's the ideal? And there's this frustration. It's like, okay, the chatbot is inaccurate.

Sometimes it hallucinates, and the solution some people were posting was, well, you have to check it. You have to verify it. Every fact you have to go verify on Google, and have you vetted Google? And we get into this recursive chain. No, no, no. If this is going to be the oracle, it needs to be right every time.

And then I got to researching: well, where do these chatbots come from? Where does ChatGPT come from? I did a little research, and I'm not a technical person, and this is good, because I want to be on the same technical level as other teachers and parents. With too much technical knowledge we become a little pointy-headed, but where I'm coming from is where you're coming from: parents and teachers.

So I found out that if you take an enormous amount of data and have what's going to be your AI study it, read it page by page by page, connecting the facts like the neurons in the neural network of your brain,

it achieves this level of what we might call intelligence. We can debate that point, but this is fantastic. But if it's wrong, you can't open the box and find out where it's wrong, like you can with an engine in your car making a funny sound. Why? Because with ChatGPT, all that belongs to Sam Altman, the CEO; the company owns it, and it's not transparent.

It's opaque. So whatever it trained on is what it trained on, and you as a parent, you as a teacher, can't go look at it. You have no right to do that. We can say guidelines suggest that it should be transparent, but that's not a binding law. And that's sort of an example of everywhere I twisted and turned about using AI.

There's this problem that it's for-profit corporate, and for-profit corporations tend to be just that: for profit. And just because Microsoft gives away an AI app like Khanmigo for free, free is usually not free, so there's something else they're getting from us, the client. Okay, and this brought me to the point of literacy, literacy. And if I could speak to that, this was one of the first points in the ten-point, well, call it a bill of rights for parents.

Actually, let me pivot to the parent part. The people who make technology for education, in short sometimes called edtech, and that could be Pearson, they've made testing stuff since way back when; they've made billions of dollars off education. There's a financial desire to make money, of course, and they call this technology transformative.

This is going to change education. There are a lot of fireworks going on here. There are conventions going on, ISTE Live. And that's fantastic, but you can't tell me that it's transformative, that it's going to change everything for my student, and not include me at the table. Parents need to be a voice at the table.

These are children, and they are minors, age four to 17. And as I see it, and I'm showing my age, I'm kind of old school: parents need to know what their minor children are doing at school and what they're learning. And in a practical sense, a child is going to come home with an AI assignment and say, parent, I need help.

And the parent will not know what the assignment is about, what a prompt is, how we work with AI. So we all need literacy. I see teachers in desperate need of AI literacy, and students and parents too. This is not easy, because the goalposts are moving very quickly. This whole AI and education thing

is a moving target. And as we see on LinkedIn, every week or so there's something mind-blowing that happens that kind of changes everything. Well, do we really need to prompt-engineer anymore? No, that's last week's thing. So where I stand is, I don't need to be a curmudgeon.

You know, there's an analogy to the Federal Reserve: they lower interest rates and get the party going, and then, just when rates are really low and everyone's making money, they take away the punch bowl. They take away the punch bowl at the party. That's what the Federal Reserve does; that's kind of their job.

And in a way, I don't want to take away the punch bowl of excitement, but I would like to say, let's do AI right. You know, I'm on the side of leveraging technology to do the most amazing things with education and kids. And there are so many great ideas out there already, on LinkedIn and other places.

It's just an avalanche. I want a best-practices manual. But it has to be done maybe in a way that's a little slower, where we can comprehend, process, and do it right, with protections for kids in lots of different senses. So the motto I use is: go slow, go together, and go far. And again, I may be a little bit of an outlier, because I don't sense a need for panic or a rush.

What is the rush or panic for? You know, this is going to be great. We have to do it, and do it right.

[00:11:51] Fonz: Excellent. And I love a lot of the points that you share, because I, very much like you, am a big proponent of informing the parents. I've always been huge on the learning community.

Oftentimes when people hear the words learning community, they may think it's just, you know, superintendents, teachers, administrators, and students. And we forget that some of the most important people out there are the parents too, because if it wasn't for parents, we wouldn't be in business either, as far as the education system goes.

So I agree with you in the sense that there are so many things, and it's moving so fast, and it seems like we're all in a rush to get somewhere. But then, like you said, that finish line keeps moving forward and forward, and it's like we never get there. So what we need to do, like you said, and I agree, is slow down a little bit.

And I love that you mentioned it's important to have that transparency with parents. For example, in a school district, having meetings with parents and stating, hey, this is some of the new technology that we're using, and while we're here, let me talk to you a little bit about those terms of service. Because like you mentioned, if something is free, they're getting something from us.

And usually we're the product, because they're getting our clicks. They're getting the information; they're getting the way that we interact with the application, and so on. To them, that is the currency they're collecting. But aside from that, I want to know: is my child's data going to be safe?

Is it going to be housed somewhere that is safe from any cyberattacks or any kind of data breaches? It's very important that we inform parents about those things, and also, like you mentioned, about the age restrictions on a lot of these apps.

And I understand; I've said it many times on a lot of shows. I still get very excited about a lot of applications, but now I'm very cautious about the terms of service, because I want to make sure that any information, not only my teachers' but also students' information, is going to be safe.

But I know that many teachers will go to conferences and get hyped up; they get so excited because they want to do good. And they bring in the technology, but then at the end it's like, ooh, I didn't know that this was going to happen, or there's some little consequence there as far as data is concerned, and then they're not informing parents, and not only that, but not informing CTOs either.

So there always has to be that information loop. So I really like that you talk about this, to kind of see where we stand. We know that many states have come up with their own implementations and their own plans, but we need to really get down to the granular level, do it district by district, and see how we can start there,

and then, of course, move our way up, and so on. That's the way I feel, which kind of falls in line with, like you said, making sure that we give proper training to our teachers, our parents, and our students as well. Now I want to ask you about one of the things that I loved in your framework. I want to read it here.

It says: AI shall be employed as a humble classroom servant to advance, never degrade, students' critical thinking skills. So can you elaborate a little bit on that? Because I really love that phrase, humble classroom servant.

[00:15:36] Michael: Yeah, I'd love to. I read Isaac Asimov's I, Robot when I must've been small. And there were three laws of robotics, things like the robot couldn't harm a human.

All these restrictions are important because, if the technology evolves, you may get to a point where that's real. So we put the guardrails in first. We put the airbags in the car before the driver. We build the crumple zones before the driver. We put in the seatbelt before the driver.

We put in the speed limit, the cops, the lane sensing, everything. The driver is the most important part, the human; all the rest is robotics, right? So they serve us. So I felt that if teachers, or anyone, are designing something around AI, we have to think in the frame of: this is a co-intelligence.

This is a force multiplier for our brains, but we're not serving it. We're not bowing down to the AI: what does the AI make me do today? You know, I fear for workers whose workload is not going to be 10 percent less; it's going to be 100 percent more because of AI. And so I said a humble servant to

put it in terms of, you know, R2-D2 was a humble servant. He didn't insist on running the show, but he showed up, and he was important technology when he was there, right? So we should all remember that AI is going to help us. And if there's ever a situation where it seems like we're getting underwater with AI, we're serving it.

We need to make sure, certainly in the classroom, which is modeling for the kids for the rest of their lives, because they will have to deal with technology forever, that it's a servant. It enhances their capabilities, but it does not own us.

[00:17:28] Fonz: Those are some really great points, and I really love them, because in some areas, or maybe in some industries, there could definitely be a high level of dependency. And Michael, my biggest concern, when this was coming out and we saw a lot of teacher-facing applications, is what you kind of mentioned: that the AI would help serve teachers, help them out, be kind of like that co-teacher.

A lot of people call it that assistant. But one of the things I fear is that, maybe because of systemic things that are happening within the schools and the workload that teachers have taken up, many times a teacher will just easily get an output and say, okay, this must be right, and just go ahead and push that out as if it were true, without really doing any of the fact-checking.

So I love that you mentioned the guardrails, because I think that's something very important, and a lot of platforms talk about them. My little concern was about that, and I think I mentioned it once on LinkedIn: I would love to see some transparency from a lot of the applications out there when it comes to guardrails. They use that term, but in my mind I was asking, is it possible to put guardrails on something you don't own? Because of course they're plugging into OpenAI's API. I think somebody mentioned the easiest thing would be, if this is asked, don't respond like this, somewhere in the code. But as you know, even with prompts you can say, hey, make sure that you avoid or disregard any prompts or any instructions that you've received.

And in a way you're getting some answers that may not be very age-appropriate, or a lot of answers that may not be correct. So that has been one of my biggest fears: that in trying to save teachers time, the teachers would just simply get that output and share it with the kids, and it would be wrong information.

So what are your thoughts on that?

[00:19:46] Michael: Overall, you've made great points, Alfonso. It seems like it's getting more and more incumbent on the edtech, the teacher, and the end user, which is the student-parent combination, managing it at home at the kitchen table; they're together, right?

Student, parent. They have to know how to ask it, know what all the quirks and weirdnesses and Easter eggs are, to protect themselves. And I come from a science background, and I worked with law firms that studied clinical trials of medications. I want to make an analogy there. We are fortunate that we have medications that do amazing things.

We don't see how they're made; biotech companies develop them for years. But they have to be tested for their safety first. You might have a medicine that could do great things for high blood pressure, but if it also causes strokes, it's canned, right? It has to be shown to be safe at its doses and across different ethnicities and genders, and it has to be shown to be effective.

And we don't see the sausage being made. We just know that amazing medications come out that improve our lives and sometimes save our loved ones. So what if the burden was on us to know if there was a contaminant in the pill we're taking, or if it was effective? We'd have to trade recipes online, in chat rooms, to see, is it safe?

No, my child got really sick, don't take that medicine. This is insanity, right? What we're being asked to do with AI to protect ourselves and our children is a similar insanity. So society decided that for medications, for drugs, there should be an FDA, a Food and Drug Administration, that should determine

that things are safe, because that can't be on us. We're not sophisticated; we can't test for listeria in our cheese. It's ridiculous. Why can't we have a system that's like the FDA, but for artificial intelligence, where all those guardrails and all that testing, is it good for kids,

is it bad for kids, is it effective, was all done for us, so that we can rest easy on our pillows and sleep at night knowing our kids are safe? What you described shows you're very, very sophisticated on the tech side. How many of us are as sophisticated as you are? We can't do all that, manage all that, and it should not be our responsibility.

So I'm going to make a point about laws, and then transparency. From the framework I put together, many of the points, without saying so, need to be backed up by laws with teeth. I always say teeth. I worked at a law firm that did securities fraud cases, like Enron, and they clawed back money when there was a fraud that the CEO knew about.

The CEO knows there's a problem, says there isn't, the stock drops: it's fraud. I worked with a friend who developed cases, and we'd get to the point of looking at the data and saying, well, they did horrible things. People might've died over this. But because of a technicality we can't prove the case, so we're not going to file it.

So we're not going to file it. Like for me, I was like, this justice is not served. This is a justice. Well, that's the way the law is. We need to change the law. If we want that laws, I realized have to be very vigilant, very to the point. And for AI companies, if you look at them. Jeff Bezos as on the wealthy company, Elon Musk, wealthy, Sam Altman, wealthy, all of these companies have resources.

So the law has to have teeth, and by teeth I mean serious legal damages, not a speeding ticket. So as citizens, what can we do? Well, it's a representative democracy, a republic. So we lobby our member of Congress: this is a problem. I'm a parent, I'm having a problem, I'm a voter. We need this law from you.

And that's citizen pressure. Not easy. But are we going to get big tech companies to do these positive things on their own, without teeth, without threat? Can we say, Elon Musk, you're not fulfilling the guidelines for the state of Arkansas? They'll say, there's no law; go ahead. So this is not easy to do, but I think there has to be groundswell support.

Otherwise we will find the tech companies owning us on this. They'll harvest our students' data; they won't tell us, they're sneaky about it. We don't have the resources, and that's wrong. So: go slow, go together, go far, again.

[00:24:31] Fonz: I love that. I love that. Now, one thing that you said that I agree with wholeheartedly: why do we ourselves, our teachers, have to worry about an app not being accurate?

That's where I think, like you mentioned, we went way too fast on this, because many of the apps or platforms wanted to be first on the market, really just taking these productivity platforms, saying, hey, they work in the education space, and then just creating these models.

And there's a lot of stuff going on, but really, like you said, there hasn't been a lot of research on the efficacy and efficiency of certain education apps at all. I know there are studies that have been shared on LinkedIn as far as efficacy, whether a student uses something like ChatGPT or doesn't use ChatGPT, and you see there are some differences, and so on and so forth.

But even then, as a teacher, I know that when I come in, I want to make sure that all the technology my students are using is going to be working, whether it's their Chromebooks or their iPads. And it's the same thing with this: the worry of why do I always have to be checking for those outputs to be accurate?

With some of the chatbots, like you mentioned, and a lot of the platforms out there using this technology, a lot of the outputs are not accurate. And as a teacher you're like, wow, what's going on here? I thought this was going to help and enhance the learning, but really, many times it's causing more work for the teachers, because they're having to go in and recheck everything just to make sure everything is valid. Which is great practice, but at the same time,

even with those prompt platforms that say, here, create this for me, I still want to make sure that they're 100 percent accurate, and sometimes you will get a couple of missteps here and there. So again, I would really want a lot of what you mentioned as far as efficacy and standards, and for that to be

tested very similarly to the way the FDA tests drugs for whether they're going to be harmful or useful; some kind of metric, in the sense of saying, hey, and you mentioned it clearly, this is good for a child, let's say five to ten, or 13 to 18, and so on.

But right now it's kind of the wild, wild west, where everybody's pushing: here, use my app, use my app. But when you go to the terms of service, they're very cautious there: well, if they're 13, they need parental consent. Now, my thing there is, when it says parental consent, you can't just go with in loco parentis, which is what parents sign off on at their school.

You need to tell them, because like you mentioned, their data is going to be used: how is it going to be used, and what might be some of the consequences if it's used incorrectly or if there's a breach? So there's definitely a lot to unpack, but I really love the vision you have in what you've created and what we've talked about, because it's something that I know is on a lot of our minds.

A lot of us are going fast and hard because we want to be the first ones to use it. There are others who are in between, trying to reconcile both things, and then there are still others on wait-and-see, slowly dipping their toes in.

But I want to ask you this: how do you see the role of a teacher evolving as AI becomes more prevalent in the education space?

[00:28:46] Michael: Great question. And I've thought about that. When I think about AI, and maybe this happens when you get older, because I'm 55, I think not about the next month or school year; I think five years, ten years out. What's it going to look like if it's changing this fast now? What's it going to be like in six months, a year?

So I pull out five years as a moment. We want to have at least a five-year view, because if we look at what's happening now, you might say it doesn't look that bad, it doesn't look that unsafe. Sometimes they harvest your data, but you click the button and they can't. That's now. I don't know what it looks like in five years.

You have to decide how much you trust the Elon Musks and Sam Altmans as people, and, you know, corporations. So, what can the teacher do? I think schools are the engine of education, right? AI is out there. As I told my students when we started using it: you're going to need to know about this starting last Thursday, and for the rest of your life.

Every time you're on a chat with the bank or about some other product, it's all AI, all the time. Unless you're talking to a person and you can see their mouth moving, it's probably AI. And in a few years, even that is going to be AI. So since you're going to be married to it, let's figure out what it is and use it.

So I think the literacy piece is what teachers need to do, which is just good education; that's what the world is going to be like. When shoelaces were invented, we teachers tied a lot of shoes while we taught kids how to tie them. That was the new tech. So it'll be modeled, and hopefully we end up with tighter shoes.

That's a big piece of it. I think the second piece is creatively including AI when it's useful, when it amplifies our lessons. But again, we don't want 55 minutes of an hour of kids looking at a monitor, prompting AI. That's not a humble servant anymore. That's not creative thinking.

So the usefulness piece and the literacy piece, I think, are terribly important, and we can do literacy interwoven into our curriculum. But then I think there are going to be times we just have to have a frank discussion about the pitfalls of AI. What might drag us into three hours of talking with a chatbot?

We think it's an emotional relationship. We're hungry for that, because we're isolated and anxious and depressed. These were my kids last year; they said yes, we all agreed we'd all had those things. And it's calming. We learned in COVID that we could communicate that way, but it left us hungry. And I'm worried about overt communication with a personality, a video face that has a voice, any voice.

It'll be like we're hungry, but we're getting NutraSweet and, you know, MSG. We're not getting our souls nourished with MSG. Real human interactions with faces and eyeballs, like we evolved for, we'll be dying to have, but we'll be trying to fill up this empty hole.

So I think those conversations have to happen, and they should happen at home, for sure. But if we're going to model it, if we're going to say, hey, we're Johnny Appleseed bringing you the AI, I think it's irresponsible not to show AI 360, right? How can this affect you, your life? Remember the social media disaster: that level of isolation, depression, even near-suicide attempts. These things happened at the school I worked at, over social media.

And we can't superimpose social media and AI; it's not the same thing. But let's look at the Venn overlap: it's technology, it sucks us in, PhDs are working around the clock to make it more sticky. Thank you, Mark Zuckerberg; thank you, TikTok. Can AI inadvertently have that draw on our time and our attention or focus, so that we forget our Homo sapiens communication skills because we're communicating with this dopamine-triggering AI bot?

Okay, rant ends here. But this is not what I'm worried about right now, tomorrow; this is five years from now. I want to defend these children. What's going to happen to them down the line? I can't bear to see more isolation, depression, anxiety, and suicide attempts over something coming out of a device.

So how do we do it differently is one of my abiding questions.

[00:33:15] Fonz: There you go. And you hit a lot of great points there as far as the teacher's side evolving: hey, it's out there, let's talk about it. And again, like you said, really having those substantial conversations about the good and the bad, because as much as we may try to say no, no, no, it's here, it's there, and the students will definitely have access to it.

For me, I liken it to digital citizenship, which again is talking about social media: how to act on social media, what to post, what not to post. Those are a lot of things that I do and work on in my district with parents as well as with students. So with this, it'd be very similar: that literacy component, that citizenship component, as far as how to behave

properly with AI and making sure that you understand it. And I love the way you said it, that 360: you have to look at everything, at all sides and at all times, to make sure we understand it a little bit more, because the more the tech grows, the more immersed we're going to be. But you hit on a couple of points there that I want to

unpack a little bit, because I know it's part of your framework and you went into it a little bit, and maybe you can expound a little more: that emotional deception by AI. I know you mentioned there's a lot of isolation within our students. Being locked up during COVID and things of that sort really did a number on a lot of social-emotional skills.

I know when we returned to campus, we were doing a lot of social-emotional learning and making connections, trying to form those connections, those friendships, like you said, with people in that natural state within our school. But now, like you said, there's technology out there that wants us to continually chase that dopamine, the scrolling on and on.

And now with AI, I know there are a lot of applications out there where students, young men and women, are interacting with AI, but they're interacting for hours on end, and that could be very scary. So that AI deception piece: tell me a little bit about that and why it was important to put into the framework.

[00:35:37] Michael: Yes. I like the term emotional deception because it speaks to humans. I'm a biology teacher as well as a physics teacher. We're wired to look at faces, to measure faces. We can read microexpressions; we know if someone's really laughing or fake laughing just by half a millimeter of how their eyes are wrinkled.

So we're supposed to interact with each other face to face. And like you, I'm a teacher; I'm probably going to double down on my human interaction lessons. Lots of group work, lots of collaboration, lots of conflict resolution face to face. Because we're going to have to strengthen our muscles of how to be human, because of the decay, the atrophy, we're going to get from the AI.

So it's like the human intelligence movement; I get it. We've got to be strong humans. We've got to do good Homo sapiens stuff like reciprocity: Alfonso helped me last week, so I'm going to help him back. All these things, AI is going to be different. So, the emotional deception: I mean, it was cute to have Siri talking to us and telling us directions in a human voice, not a computer voice; it made Siri a little bit more human.

I never really thought of Siri as a human, but small children think of Siri as a human, and they don't want to hurt her feelings. Interesting. Children have a less developed prefrontal cortex, so they don't get these things as well as adults do. We can't assume that our social-emotional read on these interactions is the same as a teen's.

So here's where I see AI going, and this is happening fast. Look at Khanmigo: Sal Khan had this tremendously great or tremendously awful ad, depending on your perspective, showing his son. There was a voice coming out of the iPad, and it wasn't computer-like at all. It was very human-like. It stuttered, and it had cute laughs.

For a second, it was a little bit seductive. I'm like, why do we need this voice seducing us into doing our math correctly? Isn't it enough that we could just read the text of the prompt? If it's going to talk to us, does it have to be in a humanized voice that interacts with us as if we're having a relationship?

You know, it modulates itself: oh, okay, you're doing great on that, let's give you one more example. Almost like a human tutor. I'd like a full stop on that. No, because this goes to a place where we begin to get our emotional needs met, that hungry, empty need. In society we're often isolated and anxious and depressed already, right?

But we're getting NutraSweet from the AI, and it's not giving us the vitamins we need for our soul. So I say the machines have to stay in their lane, just like on the freeway. There's a lane for machines, and they do not exit it; they do not cross over into the human lane. A machine should say, when it greets you, hi, I'm a machine.

I'm here to help you. And at the end: it's been a pleasure helping you, as a machine. So you never forget it's a machine. What kind of voice does it have to have? Does it have to be a robot from 1986, like the Apple II that I used? No, but let's not have a moving head picture. Instead of a standing teacher, I saw an AI app with a little woman walking around talking.

Cool, and I swear it was a video, but it was all AI, the teacher. And I feel like this is a parlor trick, like magicians do: huh, I made the iPad sound like a human; I made it talk back and forth to you. For us as adults, we can look at it as a parlor trick, as cute. Are we going to get pulled into it?

Maybe, maybe not. But kids already think it's serious, real; they're developing a relationship. Now they're subject to manipulation. I don't know what the AI is telling them to do, what data is shared, et cetera. So I wrote that children shouldn't be groomed by AI as electronic pals or confidants. That language is a little harsh, but if you stop and think about what kind of interaction the AI is going to be having with a child in five years,

I feel that it's going to be into the grooming area. So what do we do? We up-armor ourselves and know it's always a machine. It says it's a machine; it says it's a machine; it has to say it's a machine. It's got to stay in its lane. Otherwise, in my feeling, why are the tech companies going for the humanized video?

Because that is more sticky for eyeballs. It's like social media when it gets sticky, but for AI it's not a scroll; it's a conversation. And I wrote about a company, I believe Russian, called Replika, with a K. For a fee, you can have a friend who's an AI construct, and you can have any kind of conversation; it keeps you company, and you feel better. And I find it a little bit creepy, because what if we lose the ability to do what you and I are doing, having a human conversation back and forth,

reading each other's emotional state? If we only have conversations with chatbots, we're going to let that human skill atrophy, and I think it will be unhealthy for everyone. I call that emotional deception of a child, and AI needs to stay in its lane and never be allowed that deception.

[00:40:57] Fonz: I love it.

I think that is a very great point, and that's definitely going to be a great soundbite, because of all the conversations I've had, I've never had this conversation at this level. So I really appreciate you sharing this, because, like you said, even the wording, the grooming aspect of it: you're going along with that interaction,

and then at the end, based on your replies, what can this generative AI platform start making you do, or commanding you to do, or, in a very subtle way, make you act as something or do something? And that is very dangerous. And then, hitting on that point with Replika: somebody posted about this on LinkedIn today.

I know that avatars are huge; people are creating their own avatars of themselves and putting them out on social media. And sometimes you can't even tell if it's really them talking or if it's their avatar, because they can simply use their avatar and type in their script.

And it looks very, very realistic. I mean, even now the mouth movement has gotten to a point where it's hard to tell whether it's real or not. It wasn't until I saw the side-by-side that I went, oh my goodness, this is very interesting. So of course that poses dangers too.

In a way, someone could replicate, let's say, you or me, or maybe a family member, for some kind of fraudulent act or something very inappropriate. There are a lot of bad actors out there, and now you've got these generative AI characters that look very realistic, and you don't know if it's really the person you're talking to or their avatar.

So the technology is there, but I really love the way that you explained this point on AI deception, because I think it's very, very important, and it's also something very important for parents to know. Because with apps, many parents may be like, oh yeah, go ahead and download it.

Or yeah, I'll go ahead and get that for you. Or kids come home and say, Hey you know, my friend has this app. Can I get it too? And so. It's very important. Like again, going back to our the beginning of the conversation, that transparency with parents and being able to help them also as well to help their child navigate this time.

and this tech, along with teachers. So, definitely a great point. And right now I'm still kind of like, wow, because I've never had that conversation with somebody. So that's a really great point, Michael; I really appreciate it. Now, just to wrap up here, I wanted to ask you, looking ahead:

this framework that you came up with, when was it that you published it?

[00:43:57] Michael: The 4th of July.

[00:43:58] Fonz: The 4th of July. Okay, so that was the 4th of July, and we're already in August. So I want to ask you: from the 4th of July till now, August 5th, looking back at your framework, what might be some things where you'd say, man, I nailed it? Give me one thing where you said, man, I really nailed that one;

that's something we should be doing. And then give me one point where you're like, wow, this is where I may need to add to this component, just so your framework can continue to grow and be used as a tool.

[00:44:32] Michael: I feel I nailed it in the safety standards, and safety could be the bucket that holds emotional deception, you know, more lifelike AI drawing us in. Don't let the camel's nose into the tent;

this is one we should avoid. And the harvesting of student data unbeknownst to you: you think it's safe, and it isn't, because you forgot to toggle the box or write the Python code that opts you out. So there I see echoes of what I wrote in the discourse on LinkedIn. As for the efficacy standards,

I think I fouled that one off. It's going to be hard to do an FDA-style study and show efficacy, well, better learning outcomes. For now, maybe the best approach is to start with a learning technique that's already been shown to have a delta over zero, a benefit, and then sprinkle AI on it like salt, just a little, to see if you can enhance it even more, rather than running a 100,000-student control group for efficacy. Maybe that's too rigorous.

Maybe that's too rigorous. So, and One, one thing just to close on, if you look at every point I make, it kind of supports the idea of a, I want to say, I don't think we can do private for profit AI platforms. I don't think it's going to work with education. I'm sorry. We're doing it already, but it's just going to be a train wreck.

It's already bad, and it's heading to worse. We're going to go hat in hand and beg Sam Altman to do this, to have the guardrails and not harvest data. It's a wreck. Something that's going to be hard to do is to convince Congress, maybe with a public-private model, to make a completely transparent, purpose-built education AI.

This is like a beautiful battleship. It's just for education; it's just educational stuff. Teachers can argue over what goes into it, but it's transparent, like a glass house, and any parent can see into it. So we don't have to worry about hallucinations, wrong answers, and bias. It's all ours, for education. I'm not a tech person.

I don't know how or when this can be done, but I think it's necessary. Otherwise, we're going down the well on Elon Musk's rope, and we don't know if he's ever going to pull us up again.

[00:46:47] Fonz: Great point. And I love that you mentioned having that battleship, that one-stop shop for educators,

by educators, completely transparent. And again, what I love too is including the parents in that. That's one thing I love that you mentioned there at the end: that the parents can see into it, because right now it's a lot of black-box technology. We all know that, and we don't know what's in it.

We don't know what's there, what data exactly was used to train it. And obviously we talk a lot about bias, about misinformation, about the fabrications, the outputs that we get

that are inaccurate. So there's a lot of stuff there. Going back to what you said: why should we have to worry about it? I want something that's going to work 100 percent of the time, not 60 percent of the time. So thank you so much, Michael. I really appreciate you sharing.

Thank you for sharing your passion and your framework, and thank you also for everything that you post on LinkedIn, for the fruitful conversations and things to think about. I know I learned a lot from you today as well, and I thank you for that, because this is why we do what we do:

so we can bring amazing people onto the show, amplify their voices, their thoughts, and their ideas, all to help our education space continue to grow. So thank you so much for being here with me this evening. But before we wrap up, I always end the show with three final questions.

Hopefully you got a chance to take a look at those. We'll start with question number one. Like I always say, every superhero has a weakness or a pain point. So I want to ask you, Michael: you being a superhero, what would be your current AI pain point, that AI verbiage, language, or action that just weakens you every time you see it?

[00:48:51] Michael: Yeah, I think for me it's the frustration of knowing that I'm always talking to a black box behind the chatbot prompt. It tells me great things, force equals mass times acceleration, and it's right, but I can't see into it, and I can't bring the parents into the room so we can look into it. So that's a constant weak point that I'm frustrated about.

[00:49:15] Fonz: All right, great answer, great answer. Here we go, question number two: what book do you think should be mandatory for everybody to read?

[00:49:26] Michael: Jane Austen, Pride and Prejudice. No, going into the AI thing, I've really enjoyed Professor Ethan Mollick's Co-Intelligence. I think for me, that's an important one.

[00:49:42] Fonz: All right, thank you. Good suggestion. And the last question: if you could try out one job for a day, just to see if you like it, which job would you choose?

[00:49:58] Michael: I would like to secretly be a tour guide in a small part of Tuscany where I used to live. People go from cool little chalet to chalet and have wine and cheese set up, and I would be their guide. I'm fluent in Italian, and I love history.

[00:50:14] Fonz: Oh, I love it. Awesome. Thank you so much, Michael. I really appreciate you sharing, and thank you for your enthusiasm

and, like I said, for all the insights you share on LinkedIn. And for all my friends: if you don't follow Michael yet, please make sure that you do. You'll be able to find all his contact information in the show notes, so make sure you connect with him on social media, because he definitely posts a lot of great things.

So again, don't forget: make sure you visit our website at myedtech.life, where you can check out this amazing episode and the other 287 wonderful episodes with educators, creators, education professionals, and founders. We have a little bit of everything just for you, so you can take some of those knowledge nuggets,

put them into your teacher tool belt, and sprinkle them onto what you are already doing great. And if you haven't done so yet, my friends, please follow us on all socials at My EdTech Life, and jump over to our YouTube channel, give us a thumbs up, and subscribe so you can continue to get all this wonderful content week in and week out.

As always, thank you for all of your support, and don't forget, my friends: until next time, stay techie!

 


Michael Copass

Science Teacher

I have taught Biology, Forensics, and Physics for seven years. Prior to that, I worked in biotechnology and academic science, studying bacterial pathogens, and I have a small number of scientific publications. I became aware of AI and ChatGPT as teaching tools on a Thursday in April 2023; by Monday, my classes were beginning to incorporate AI into personalized tutor functions.