Episode 274: Sofia De Jesus

April 17, 2024

Exploring the Complex Landscape of AI in Education with Sofia De Jesus

In this episode of the My EdTech Life Podcast, I sit down with Sofia De Jesus, an associate program manager at CMU CS Academy, CSTA Equity Fellow, and author, to discuss the intricacies of AI in education. We dive deep into the critical issues surrounding data privacy, bias, equity, and the responsible implementation of AI tools in the classroom.

Sofia shares her wealth of experience and research in the field, offering valuable insights into the potential risks and challenges educators and students face as AI becomes increasingly prevalent in educational settings. We explore recent developments, such as the LAUSD chatbot release, and examine the need for transparency, proper teacher training, and careful consideration of students' unique developmental needs.

Throughout our conversation, Sofia emphasizes the importance of reading terms of service carefully, ensuring platform transparency, and thoroughly researching and testing AI tools before adopting them in schools. We also touch on the problem of grifting in the AI education space and the dangers of selling potentially flawed AI tutors as a solution for underrepresented students.

Don't miss this essential discussion on the complex landscape of AI in education and the critical work being done to ensure its responsible implementation.

Join us for an informative and engaging episode with Sofia De Jesus on the My EdTech Life Podcast!

Timestamps:

00:00 Introduction

01:48 Sofia's background and expertise

05:05 LAUSD chatbot release and associated risks

12:44 Transparency in AI training data and potential harm to students

24:00 Grifting in the AI education space and the rush to adopt tools

39:42 Resources for learning about AI ethics

51:50 Equity in AI education and the issue of accessibility vs. availability

59:43 Closing questions with Sofia

💡 Elevate your brand by sponsoring My EdTech Life, the award-winning podcast that captivates educators and tech enthusiasts, and shape the future of education together. Contact us for more information.

 

--- Support this podcast: https://podcasters.spotify.com/pod/show/myedtechlife/support

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Transcript

AI in Education: Navigating Data Privacy, Bias, and Equity with Sofia De Jesus 

My EdTech Life Podcast Ep. 274

[00:00:19] Fonz: Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day, wherever it is that you're joining us from around the world. Thank you so much for making My EdTech Life what it is today. We appreciate all the likes, the shares, the follows.

Thank you so much for all the re-shares of our content, too. As you know, we do what we do for you: to bring you some amazing conversations that will build you up professionally and personally, to increase knowledge, and, of course, to bring you some hot topics in technology, educational technology, and AI.

And I'm really excited for the amazing guest that I have here today, who I have been following on LinkedIn and recently found on Twitter as well. So now I'm following her in all the spaces, because she is sharing some amazing information and content as far as AI is concerned, and really taking a deep dive not just into the practice of AI and bringing it into our education space, but also into the data privacy side.

And of course, those things that sometimes as educators, or really as anybody, we may overlook: the terms of service. So I would love to welcome to the show today Sofia De Jesus. Sofia, how are you doing today?

[00:01:46] Sofia: I'm doing great. Thank you for having me.

[00:01:48] Fonz: Oh, I'm excited to have you and excited to talk to you.

Like I said, I'm a huge fan of what you're sharing within the space here on LinkedIn. As you know, for over a year and a half now I've been talking a lot about artificial intelligence, and now that it's making its way into the hands of our students, there's a lot we may need to consider, or maybe we just need to hit the brakes a bit and step back as we're starting to see a couple of things pop up.

And obviously, in the news today there was something very interesting that we'll definitely get into right now. But before we get into the meat of the conversation, for all our audience members joining us today who may not be aware of your work yet: can you give us a brief introduction and your context within the education or AI space?

[00:02:40] Sofia: Sure. So my name is Sofia De Jesus. I am a Latina with silver hair and tan skin, and I'm currently wearing a blue patterned shirt and a gray cardigan. I work for Carnegie Mellon University: I am an associate program manager for CMU CS Academy, which is a curriculum provider. We offer a free computer science curriculum for middle school and high school.

At the same time, I'm also a CSTA (Computer Science Teachers Association) Equity Fellow. This year I've been working on my fellowship; my project is geared toward creating a framework and giving information to administrators and leaders in K-12 spaces for the use of AI.

And I am also an author. I published Applied Computational Thinking with Python, the first edition, in 2020, which contains chapters for developers on AI and machine learning. I have a co-author, Dayrene Martinez, and we published our second edition in December of 2023.

[00:03:42] Fonz: Excellent. Well, thank you so much for joining us today. Like I said, I'm really excited and just really thankful for your time. And I know that our audience members watching this on the replay or listening to the podcast are definitely going to gain some knowledge, or take some knowledge nuggets and hopefully sprinkle them onto what they're already doing.

So let's get right into it, starting with what we saw in the news today. If you don't mind, share a little bit about your thoughts on LAUSD (is that correct?) releasing a chatbot for parents and for students. As we know, in the education space, districts are trying to find a solution for things like burnout, to help teachers, to give them a co-pilot, somebody to help. And through the use of artificial intelligence and these chatbots, they're hoping to maybe relieve some of that stress and be able to help students and help parents. But I want to know what your thoughts are on that, because I'm a big proponent of data privacy and reading terms of service, and to me this can get very blurry very quickly. I'm just worried about any kind of underlying dangers. So I just want to ask what your thoughts are on that.

[00:05:05] Sofia: So generally speaking, I don't like that there's a chatbot that includes all of the information for students and all of the information for parents. It's a risk, and it's a massive risk. And then there's the piece that I worry about, which is: do they have parental consent? I do think that they think that under in loco parentis, which means "in place of parents," teachers or the district can give parental consent on parents' behalf.

But in consulting with some of the lawyers who work in privacy, policy, and cybersecurity for K-12 schools, every single one that I've spoken to has said that that doesn't apply. And the reason it doesn't apply is that this is more similar to social media. Because of the biases and all the risks, the terms of service for social media say that kids 13 and under cannot use it, even with parental consent.

For 13 and above, I don't know exactly what the parental-consent terms are, but let's say that a student under 13 could use it with parental consent. Even so, the teacher can't say to her fourth graders or third graders, "Hey, let's all create Facebook accounts." The privacy policies in the district don't include that kind of use, because there's a lot more risk there.

And a lot of the lawyers who work in K-12 policy agree that this is more in line with social media than with other edtech. Because there are known risks, there are known biases, and there's also the known factor of questionable content. And by questionable content we're talking everything from sexually explicit content to guns, et cetera.

These models can contain every piece of data that you can possibly imagine; it's going to be in there. We don't know the exact contents, because for the most part they haven't released that information, which is another concern, right? We want transparency. The EU AI Act does ask for transparency in training data; the U.S. has not gone there yet, although there is some talk about it. But there's information in the training data that's questionable. How that shows up, we don't know; sometimes it just happens. And so, if we have students interacting with these bots, we don't know when it's going to go awry.

A lot of these organizations will admit that it goes off the rails. Something one of the organizations said at a conference in November was, "Let's see if it works today." And every time somebody says that, I'm like, why would you put that in front of students? And then part of my main concern, and I will say this, is that a lot of them are saying: we're going to do this and we're going to release it because it's necessary, because there are some kids who don't have access to tutors.

And there are some kids who don't have access to mental health services, and we want them to have access. But why are you releasing these bots into the schools that have no services? If these bots are so great, why aren't you releasing them in the top schools, and why aren't you putting the humans in the lower schools? That's a disconnect for me that I have to question. Because if it's for equity, which one's better?

And I don't think chatbots should be used for mental health at all. It sounds human, but it doesn't have the capacity to understand. In instances where it's been used, and there's some research out there, it has been awful, because they'll start agreeing with the student even when the student is having thoughts about harming themselves. The bots don't know. They don't process. They're just regurgitating, just paraphrasing things. Even if it sounds human, it isn't. It's a thing. And so I don't think it is healthy or good to have a bot that sounds too human, because students might end up making connections as if it's a friend or a trusted advisor, when it isn't. It doesn't have any capacity to have their best interest at heart.

[00:09:29] Fonz: Yeah, there's definitely a lot to unpack there, and we'll come back to it. Let's hit on some of those points, because there are things you said that I want to highlight, connections that I've also made in what I've been sharing on LinkedIn and in my research. I know that what you share is research-based, because I've seen it, and the things that I share are research-based too. These are not things where we're just saying, hey, stop, don't do this.

And like you mentioned, you and I have talked about how sometimes people may see us as alarmists: oh, they just don't want us to do this or use that. But it's simply that we're sharing with you what the research states. So, going back to the chatbot for education that we saw in the news from LAUSD: having that information out there, and so much information. You're talking about student data, which means all kinds of identifiers. At least in our school districts there are several identifiers: your student ID, student password, Social Security number, ethnicity, home language, all of that. And I can't believe all of that is put in one place.

And now this chatbot is going to help students navigate what they need to do, as far as gradebooks and things of that sort, but it's also getting that parent information. One thing I do want to highlight: last week I talked to Dr. Nneka McGee, and she's phenomenal. She was talking about the technology and parental consent form that districts do at the very beginning of the year. I think that's something a lot of school districts really need to go back and revisit with all stakeholders, and I'm talking superintendents, CTOs, network people, curriculum people, parents most importantly, and students as well, because of the dangers there.

I want to touch on a couple of things from my research. About a year ago, when I was doing my last course in my doctoral studies and this had just come out in November, I was looking at the pluses and the cons of this, and I dove deep into the privacy settings and what companies can use all of that data for. And obviously there's data rentiership, which is a way that they're making money off of the data. I've always heard that if the product is free, then you are pretty much the product. Those are a couple of things that are very scary.

So I want to ask you, with the experience that you've had, and I know you talked a little bit about data privacy: what other dangers are there? I want to get into the specifics, because sometimes we just say "bias," and then "data breaches," and so on, and people just hear the words and think, ah, they're saying the same thing again. Can you go a little deeper, through your experience, into the dangers you have found are out there?

[00:12:44] Sofia: Yeah. So part of the issue is, and there's a study about this, that AI can infer where you're from based on how you speak. Even if I don't give data to the AI, the AI can infer where I'm from. And that's because of speech patterns: the way we talk, the way I translate things. Because, of course, my first language is Spanish, one of the things I do sometimes is translate things, even though I mostly think in English; it depends on how tired I am, and sometimes when I wake up, my brain takes a minute. There are things about the way that we speak, the words that we use, right?

So, in Puerto Rico, people sometimes use "wepa" to greet each other, and in Costa Rica everybody says "pura vida." And there's the way that we refer to people, right? Chicos, chicas, chavos, ticos, ticas. We talk about people using specific terms based on our regions.

And so if I talk to the AI the way that I speak, the AI might be able, after a few prompts, to say, oh, you're from such and such. And that was mentioned in a research study as a concern, because that's data that now exists in the models.
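(To make that idea concrete, here is a toy sketch in Python. It is purely illustrative: the marker lists are invented for the example, and a real language model infers region from far subtler statistical signals than keyword matching.)

# Toy sketch of dialect-based inference. A real model picks up much
# subtler signals (syntax, idiom, phrasing); this only counts a few
# illustrative regional markers like the ones Sofia mentions.
REGIONAL_MARKERS = {
    "Puerto Rico": {"wepa", "chavos"},
    "Costa Rica": {"pura vida", "ticos", "ticas"},
}

def guess_region(text: str) -> dict:
    """Count how many known regional markers appear in the text."""
    lowered = text.lower()
    return {
        region: sum(marker in lowered for marker in markers)
        for region, markers in REGIONAL_MARKERS.items()
    }

print(guess_region("Wepa, necesito chavos para el cine."))
# Prints: {'Puerto Rico': 2, 'Costa Rica': 0}

(Even this naive version flags a likely Puerto Rican speaker from two words; a model trained on the whole internet has vastly more such signals to draw on.)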

Now, some of the models say that they don't save the data, et cetera, but they have different wording in different places. Maybe GPT-3.5 doesn't remember anything you say, but maybe a newer version does. And maybe the things that you're prompting with are then going to be used in training; different models have different verbiage on that. We have to be careful, because if we don't look at those terms, we might not know that the model is actually recording what we're saying and putting it back into its training data. So if we accidentally give it personally identifiable information, it's going to have that in the training data.

Mind you, it already has quite a bit, because they scraped the internet. But again, it's one of those concerns. In schools this is even more of a concern, because this bot has access to student grades, their schedules during the day, their attendance records. So there's exploitation that might happen. They're like, "We're pretty sure nothing's going to happen," but we know hacks happen, and we can't just say, oh, nothing's going to happen. The data is there, and now that data includes every part of the student's information.

And it's not only the district: they're running that on top of some model. I tried to find out which one today and couldn't. I'm assuming it's OpenAI, but it might be something else. It might be Anthropic, it might be Gemini, it might be some other large language model; it has to be one of them. And even if they built their own version of it, who are they partnering with? The reason we want to know is that the provider's terms of service apply, and their privacy policies apply.

So if they're using OpenAI, which I think is who they're probably using, because they do have the "no users under 13, so 13 and above" rule, then the privacy policies for OpenAI would apply. And those are iffy at best. They do say, you know, it's on the user: if anything happens, if there's harm and bias, et cetera, it's on you, not on them. So it would be on the district, not on them. And with a lot of these apps, and I'm talking about other apps now, it would be on the teacher.

So if the teacher chooses to use something and the district has said no, or the district doesn't have a good policy, you're putting yourself in jeopardy. Especially because, when we say "biased," a lot of people are like, oh, that's never going to happen. Hey, if the apps themselves are saying it's biased, it's going to happen. Not only that, you don't know the amount of damage that bias does to a human being, especially a middle schooler, because they're developing their sense of self-worth. If they're bombarded with information that makes them see themselves in a way that's not great, they're going to sense it and see the effects of that for years.

Students absorb bias as young as preschool age. And we're talking about things that we know are biased: if you ask it to tell you about flight attendants, it's always a woman. Housekeepers, always a woman. Nurses, always a woman. Tech people, white men. Those are biases, and that affects how people see themselves, or whether or not they see themselves in certain spaces.

So I think it's harmful. I think it's harmful even if the use is limited, because again, we haven't done the research; the research is just now starting, and it's important for us to slow down. I've said this in my posts: I'm more for teacher-facing tools than I am for student-facing tools. I don't think it's fair to use students as our test subjects.

[00:18:31] Fonz: I'm with you a hundred percent there. And I do want to clarify: my concern is really on the student side, because as teachers, you're 18 or older and you can give consent. We choose to give consent to the applications we're using. But students don't get to choose whether they give consent if a teacher, like you mentioned, comes back from a conference so excited and says, "Hey, I want to use this, go ahead and log in," without any notification to a parent stating, "This is what I would like to use in the classroom," or, even more so, without any notification to the district. Where does that liability fall? Like you said, you're really jeopardizing yourself, because if that bot or application gives an output that is not favorable and is harmful, whether through language or something suggestive, you're going to be in a lot of trouble.

That child has been harmed, because they can't unsee what they saw or unread what they just read in those outputs. And then, of course, the parents get involved, and it can get out of control really quickly. You're absolutely right: everybody is rushing. It's like the gold rush right now, and everybody's trying to be that first app to do things. But like you said, it's okay to take a step back and do things right. I've had a lot of founders on here, a lot of people who come and share applications, and some of them are really doing it right: they're taking their time, going very slowly. And like I always say, this is a marathon, not a sprint. I would rather they take their time, really do the research, see how effective this is, and really listen to the experts, as opposed to just blindly saying, "Hey, we're plugging into OpenAI's API, we've created this app, let's put it in the child's hands."

The other thing you mentioned: when I go into a lot of platforms and look at the terms of service, many times they'll give you the verbiage that any district needs to see.

A district will just glance at it, because nobody reads through the whole thing. I think now the practice needs to be: look through this with a fine-tooth comb and a magnifying glass, because what I have seen has been very scary. You'll see all the wonderful acronyms that a school district requires, and so maybe a CTO out there says, oh, okay, I'm good, as long as I see FERPA and COPPA; let's go ahead and sign off on it, and we'll even agree to a three-year deal because you're giving us a cheaper price. The scary part is when you actually go a little deeper into the terms of service and they say: you will not hold us liable for anything that should happen, whether it's a data breach or this kind of output or that kind of output; we are not liable, you'll have to deal with a third party. And I was like, no, this is unacceptable. There needs to be complete transparency.

The other thing that you mentioned that really resonates is that you can't assume it's not going to break.

You know, if I'm going to put something in my students' hands in the classroom, or even in my teachers' hands, I want to make sure that it works 100 percent of the time, every time. I don't want it to be like Ron Burgundy, where it works 60 percent of the time, every time. I want it to work all the time. And that's the scary part: none of these platforms to this date can tell you that it works 100 percent of the time and that you will not run into any issues. We're starting to see things as people post certain outputs online, like, whoa, this is not right, we need to put some guardrails around here. I think we just moved too fast, too soon.

That's why I always tell teachers: if you're going to use it, use it for yourself. Use it to help you be productive, use it to help you translate certain things, but be very cautious if you're changing Lexile levels, because it can change certain verbiage. And don't rely on that output. Something that I've been saying is that I don't want teachers or educators to get into the bad habit of simply accepting that very first output as gospel because, hey, it came out of ChatGPT.

I heard somebody today compare ChatGPT to Ask Jeeves, saying, oh, this is perfect, this is the next Ask Jeeves, and they're using it as a search engine, even though the data only goes up to 2021 for some applications that plug in. Why would you use this as an internet search? Based on the information it has, it's really grabbing pieces and trying to find something that's just going to make sense for you, but you're taking it as truth. So that's the scary part. The other thing I kind of... oh, I'm sorry, go ahead.

[00:24:00] Sofia: So I was going to go a step further on that. I recommend that people use ChatGPT and other similar tools only for things they know. Like, I want to automate something that I already know, because then you'll start noticing that your output is wrong a lot, since you know the content. It is not a search engine. If you prompt it for things you don't know, how are you supposed to know whether or not the output is true? You then have to go do a regular search anyway, which is more work, not less. So don't use it for things you don't know, because chances are you're going to get wrong information.

I like to ask it, "Who said this quote?" and it'll give me the wrong author 99 percent of the time. Supposedly it doesn't know, but instead of at least saying so, it gives me a wrong author. Those are the types of things. Students assume it's right, and so do a lot of adults. I have seen LinkedIn posts from some of these AI experts in education saying, "Oh, I no longer use Google, I use ChatGPT." I'm like, what?

[00:25:10] Fonz: Yeah, and that really scared me, because I saw this person on TikTok, a very reputable educator with a doctorate, very well educated, and they're like, oh, this is like Ask Jeeves, go ahead and use it and do your searches. I'm like, what is going on? To me, it just seems like there has been so much excitement around this because of the possibilities, how it may change or help enhance learning. But the thing is, it is not there yet. And it's taken the education space by storm; it's been a craze, with lots of people doing it and pushing it out.

My biggest fear, like I told Daniel Lopez when he interviewed me, is this: I hope I end up as that bad meme that says "that didn't age well" because everything worked out. But right now I'm seeing that there are some things that aren't working out, and what really scares me is that something could happen and somebody could get harmed due to this.

Which brings me to the next point you mentioned, as far as the chatbot is concerned. You have districts that are adopting platforms for mental health, and I'm thinking to myself: that's why there are licensed professionals out there to help. We understand that maybe there won't be resources, because of the school district that you're in or a socioeconomic disadvantage for students. So the vendors will say, hey, we'll sell the platform to your school district and everybody gets access as long as they're a student, and now students chat with a bot to get that help and that support.

But like you mentioned, the student may come to see that bot as an actual person: hey, this is my friend, and anything it tells me is going to be right. And again, that is the scary part, because there are licensed professionals whose job is to help. So why are we doing this, with the possibility of it not working? That really scares me too.

[00:27:33] Sofia: Yeah. And we've already had some that were removed from the market. There was one that was tested for eating disorders, and it went awfully, because it started kind of agreeing that it was a good practice, you know, to be bulimic or whatever. These bots don't understand. One of the biggest philosophical debates sometimes is the whole "but it does reason, and isn't that what makes us human?" thing. And I'm like, no. (I'm sorry, my dog is on me at the moment.) Humans are able to process; we can reason.

And I hope I'm wrong. It's one of those things where I'm warning people based on what I see in the terms of service. I don't have any skin in this game, nor do I want to. And I've written some algorithms, right, for machine learning and all of that stuff, so I know how it works, I know what it looks like, and I know what the algorithms look like. I know that it's just an algorithm putting information together and spitting it out the way it's been told to spit it out. Just because it sounds human doesn't mean it is. Otherwise we would have a whole lot of things coming to life just because they've been observing us, and to me, that's just not a thing. There are some people who think it'll become sentient. I get it; I understand the reasoning for that; I don't agree. We may just need to come to terms with the fact that we have such a disagreement, because it's philosophical at this point. It isn't sentient at the moment. It cannot reason. It cannot critically assess what you're prompting. That's one of the reasons why, I don't know if you've noticed, the use of "prompt engineering" has dropped off a lot: because there is no such thing as prompt engineering.

Just because you write something in a different way doesn't mean that it's going to interpret it differently. It really won't. I can just say, "I need the names of five lakes," or I can say, "Can you please name five lakes in alphabetical order?" It's going to give me five lakes anyway. There's no need for the prompt-engineering piece; there isn't, and that's already been shown in some research studies, and a lot of people are like, oh, let's stop talking about prompt engineering. But because it was mentioned, and because somebody came up with the concept, now a lot of apps go, "and we'll teach you how to prompt-engineer," and so on. And that's what I mean: there's a lot of grifting going on. I will say that word outright. And why do they need us? They need us for data.

But going back to a while ago, when we talked about "free," I will say there's a caveat to that. For example, my project is free and we don't get any data at all; we're just a service project. But those are few and far between. Most products that are free do want something out of it, and it's not money, it's data. Right now what they want is data; they need the data, because nobody has been able to prove the effectiveness of these things.

And, you know, bad outputs aren't just about bias. This is another thing that I talk about when I'm talking about the tutors. I hate AI tutors; I will say that with my whole chest. Because even if it only gets it wrong one out of twenty times, that twentieth time can cause damage to that student for a long time. The student is told, "You have to assess whether the output is correct." But if I'm using a tutor, I'm using it because I don't know the content. And if I've gotten 19 correct answers, I'm going to assume the 20th is also correct. What we know from research in education is that unlearning something that was taught wrong takes a heck of a lot more time than learning it correctly the first time.
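(To put a rough number on that one-in-twenty figure: this is simple extrapolation from Sofia's example, not a statistic cited in the episode. If a tutor is wrong 5 percent of the time, a student who takes 20 answers from it faces

P(at least one wrong answer) = 1 - 0.95^20 ≈ 0.64,

so roughly two-in-three odds of absorbing at least one error they cannot detect.)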

And, you know, I was talking about this with somebody recently, and they're like, well, yeah, because sometimes ChatGPT or another chatbot will say two plus two is five. I'm like, oh, maybe that's my fault, because I've written about that quite a few times with my students. I used to ask my students: let's say that you're not in mod 10. Can you prove that two plus two is five somehow? I would do that, and I would write it up. And I know that I'm not the only one; there are a lot of people who say two plus two equals five in a whole bunch of different writings. So the data exists that says two plus two equals five. Sometimes, when you're trying to get a tutor to tell you how much two plus two is, it's going to say five. And then what are students supposed to do? How are they supposed to know it's wrong, if that's what they're using the tutor for?
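(For a flavor of the "proofs" Sofia is describing, here is one classic fallacious derivation of the kind that circulates in recreational math writing; it is not necessarily the exercise she gave her students. The trick is to complete the square on both sides and then take square roots while quietly ignoring signs:

-20 = -20
16 - 36 = 25 - 45
4^2 - 2*4*(9/2) = 5^2 - 2*5*(9/2)
4^2 - 2*4*(9/2) + (9/2)^2 = 5^2 - 2*5*(9/2) + (9/2)^2
(4 - 9/2)^2 = (5 - 9/2)^2
4 - 9/2 = 5 - 9/2    <- the invalid step, since sqrt(x^2) = |x| and -1/2 ≠ 1/2
4 = 5, so 2 + 2 = 5

Pages that present derivations like this without flagging the bad step are exactly the kind of "2 + 2 = 5" text that ends up in training data.)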

[00:32:47] Fonz: No, I agree with you. I think one of the biggest things, Sofia, is the grifting you mentioned. To me, it just seems like this is money, money, money: who can get the most, very quickly, off this new technology. And that really irks me, because you have people selling courses, selling "here are the prompts," selling things just to live off of this. Unfortunately, that even includes a lot of platforms in the space where I reside, which is the edtech space. But like you mentioned, I think it's very important to clarify, and the same goes for me: we're just going by what is there in the terms of service, which is the scary part.

Going back to what you mentioned with the tutors, that is definitely very harmful. And I feel that many are taking advantage of the term "teacher burnout." Here's the solution to teacher burnout! Teachers, are you tired of doing this? Are you tired of doing that? Well, guess what, I have the solution for you: just pop your prompt in here and we'll give you your lesson plan, we'll give you this email, we'll give you this, we'll give you that. For productivity, I get it. But sometimes there comes a point, and I'm going to quote a previous guest who is amazing; what she said when she was on my show has never left my mind. The term she used was "tech chauvinism," where it almost outright seems like these platforms may be saying, "I can do teaching better than you."

"Just use me, and that's it; I'm the only thing that you'll need." And then I start thinking: too many times people think, oh, the robots will never take our jobs. And I'm thinking to myself, but wait a minute, you're already outsourcing your work to a bot to help you create this worksheet a lot faster. And then they prey on that: well, now you can multiply yourself many times over by creating these chatbot tutors. But those may not be very effective because, like you mentioned, they just say, "Hey, you got this one wrong, figure it out." "But I need you to help me." "Sorry, you may want to figure out your output."

And there are a lot of cases like that. I was reading, or actually I think somebody had posted, that they were using a really well-known math platform and it was doing exactly that, giving the student the same output continually. The student just grew frustrated, because it was never really helping them; it just took them around in a circle, as opposed to, "All right, let's see what you tried. Did you try this first? Did you try that first?" It was just, "Nope, you got this wrong. Go ahead and retry." So I understand that we're trying to solve the problem, which is teacher shortages and burnout and so on.

But again, I think we could definitely have done a little bit more research, really taken our time to test this out and actually see what those outputs are and how this is going to benefit our students. Because today, like I was mentioning to you, I read an article from Oxford researchers. It says that AI ethics is ignoring children altogether. A couple of highlights that I want to read from the article; let me see where I left off. Okay, here you go. Number one: the lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, including age ranges, development stages, backgrounds, and characters. Then it mentions minimal consideration for the role of guardians (e.g., parents) in childhood; for example, parents are often portrayed as having superior experience to children, when the digital world may need to reflect on this traditional role of parents.

And it goes on and on. In response to these challenges, the researchers recommend increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves. And I think that's the conversation we've been having: we need to get everybody in one room to have these conversations and really see what it is that we're doing.

I understand that there are states already adopting AI policies, and districts that are already putting out AI policies. However, I'm still thinking: are they really going deep into the terms of service? Or is it just, hey, we created this cohort, they're testing things out, this is what they like, okay, let's sign off on it and use it, without really going deep into the terms of service? I just don't want there to be any consequences. Even without these bots, like LAUSD's for example, districts already see a lot of malware come in. A lot of districts, even here in my area, have been hit with ransomware. There are a lot of hackers, and it's frightening how quickly they can get that information.

I mean, even in esports: there was a huge tournament, and during the final round, I guess it was a championship round, they hacked live into the players' computers and ruined the event. So imagine, like you mentioned, having one place, one repository, one large language model that has all of this information in one place, and then getting hit. It could be devastating, and it's something we don't want, because we need this to work. But again, we need to do our due diligence, and I think it's okay to take a step back and rethink the situation. I think some people are far too deep in already, and for them, moving back a bit might look bad; the optics might be, oh, what's going on, and it's going to start raising some questions. But sometimes you need to take at least 10 steps back to move 30 forward.

So that's my concern, and that's why I'm glad you were here to share your thought process. As far as what I brought up from the article: what are some of your thoughts on AI ethics currently, from your research, from the news, or from what your peers and colleagues are sharing?

[00:39:42] Sofia: So, a couple of resources on this. Dr. Alex Hanna and Dr. Emily Bender have a podcast, which is amazing: Mystery AI Hype Theater 3000. They also have a newsletter that takes on a lot of the hype and the ethics of AI. I've also been participating in Women in AI Ethics, because they share quite a few resources, so we get to hear from a lot of women who are doing work in AI ethics. And I recommend people read Dr. Joy Buolamwini's Unmasking AI; I think it should be required reading.

But I'm going to address this in a couple of different ways. Here's the thing: ethics is a broad term, and not everybody agrees on what the ethics of things are, or what should or shouldn't be ethical, et cetera. But one of the things that seems to be assumed, for some reason, is that teachers know about bias and have bias training. That's not actually the case. Not only that, there are actually states now that don't allow you to teach about equity, discrimination, and bias in teacher programs, and there are many bills currently being considered in other states as well.

So not only do teachers in the classroom not always have bias training, but new teachers won't have it in some states either. Yet we are requiring that these teachers use these tools without any formal training in bias and discrimination, or in how those might affect different students in their classroom. To the point where there's an app, for example, that records students just to give teachers one little data point: how many minutes a student participates in class.

Number one, there are teachers in Texas and in California using that, but California has a law that says teachers are not allowed to do that except for professional development, and not every day, only in specific cases. This tool uses everyday data, and it's recording at all times. It used to say which large language model they were using; it doesn't anymore. It doesn't want to say. So we don't know what they're using to process this data. And it says that the data belongs to the teacher, not the school or the parents or the students, and that they don't even have to tell the parents or the students that they're being recorded. That's not true, because any data that I use for my classroom as a teacher in a public school system in the United States belongs to my district or my school.

So there are all sorts of ethical issues like that. We are putting tools in classrooms that go against rules and laws and things that we already have in place, but we're not well versed in this information. I make the comparison between implementing these things and getting a promotion: sometimes, yes, you get a promotion because you've been doing the job, but a lot of times you get promoted and then you learn the job. Some people think that just because you have a title, you now know everything. You don't. It doesn't magically happen; there's training necessary, and a two-day training on AI isn't going to do it.

Look, I've been studying this for 12 years now, because I was a math teacher who got thrown into a CS class, and then I started falling in love with all of it again. I would buy books just to study all of this; I don't know how many books, over 20, because I like to study. I am one of those people that just studies and studies and studies.

That's 12 years, and I'm still going to tell you: I'm not an expert. I am a dabbler who likes to play with the algorithms, and who loves everything that has to do with equity and accessibility, because those are my wheelhouse. Those are the things that I've been fighting for and advocating for, for the past 24 years. But AI specifically is too new for anybody to say, "I'm an expert." There are some people who might be able to say that, but even among the people who've been studying AI, this is not just "AI"; it's not all of artificial intelligence, as we've talked about. It's the same as how predictive text is not predictive AI: predictive AI is a specific type of artificial intelligence, and predictive text uses other, different mechanisms. It's also AI, but not the same as predictive AI. There are a lot of nuances in the terms and how these things work, and magically, now everybody needs to be an expert, and teachers have to be, on top of everything else they do.

I was a teacher up until 2021, so not so far away, but I'm also disconnected from what they're doing right now. They have too much to do: they're counselors, and they're also teachers and mentors and advisors and lunch monitors and all sorts of things. And on top of that, now we're adding: you need to be an AI expert.

Oh no, just dabble; all you've got to do is dabble. But then we're talking about all the pitfalls that happen when somebody's not trained. Because they need to know about ethics. They need to know about the impacts on children. They need to understand that the impacts on an adult are not the same as the impacts on a child.

And then the other piece of this is that a lot of people are dismissing the bias issue. Well, they're dismissing it because it doesn't affect them. And sure, it might not affect the majority, but it isn't them I'm worried about; it's the people who will be affected by the biases. We need to consider all of the others.

And I will also add, on the point of accessibility and such: AI has done great work, but I'm going to tell you now, I think that's because we as humans have done horribly. There is such a gap in access that the AI tool seems phenomenal for it. That's because we've fallen short. AI seems amazing because we failed, and so it seems bigger than it should, and I will say that. But I do think that there are some really great applications, and for accessibility I think it's done some great things. I like use cases: if I'm going to use an AI application, I'm going to say, I'm using it because it might help me with this particular thing, in this particular way, to solve this particular problem. Not "let's just write with AI." If everybody writes with AI, we're going to get the most boring stories, writings, everything, because it doesn't do it differently for each of us. It has one voice: the AI's.

So there are a lot of concerns about children. They should be learning how to assess things, how to critically look at the things around them; that part I do agree with, but I don't think AI is the way to do that. It might be a tool later: here's the output, the teacher provides some of the output, and then they have a problem that they discuss together. That's fine if the output is given by the teacher, because the teacher can sift through the output, determine whether or not it's appropriate, and then present it to the students. But I don't think it's okay for students to be playing with that directly. And I am somewhat more okay with ages 13 to 18, but let's talk about consent for a second.

If there is the opportunity for consent, then that means I should have the option to opt out, and if that's not an option, that's not true consent. And if applications like OpenAI, et cetera, say it's for ages 13 through 18 with parental consent, then we have to provide the ability for parents to say no.

And that's the thing that I don't think we're doing. And that's the thing that concerns me as well.

[00:48:27] Fonz: Yeah, I think for me that's one of the biggest things: at least getting those conversations going with all the stakeholders. You often hear the term "learning community," like, oh, we've got to get the learning community involved, but oftentimes it's parents who are left out, because the learning community ends up including just superintendents, directors, teachers, and students. But what about the parents? They need to be involved.

And like you mentioned, what scares me the most is that many educators get excited, they want to bring it back into the classroom, they want to try it, but they're not going about it the right way. They're not reading the terms of service. And the applications won't tell them what's right; they just say, oh no, it's okay, just do Google single sign-on. Teachers log in the way they would with their school email, just click here, and so on, and everything's fine. So they assume, okay, if we can sign in that way, then I'm good, everything is fine. The other one I've heard is, well, the students don't actually sign on; the teacher creates the activity, and the students just put in a join code to get there, so everything is safe and nothing data-private is exposed. And they feel safe. And I'm thinking to myself, wait a minute, no, that's not the way it works. But that's what they're told by the companies, and that's one of the things that's really scary.

The other thing I wanted to talk about, since you mentioned terms: I think there was a post that Jason Golia put up a while back asking, what's the AI term or word that makes you cringe? Everybody was posting answers. For me, it's always when they use the word "equity," because I'm in a small school district of 14 schools; we're very small.

So I'm thinking to myself: how do these platforms make it equitable for my district or my teachers to have access to some of these tools when we won't have the money to pay for them, as opposed to the school district next door that has that money? They'll say, oh, we'll offer you this pilot for so much. But then the next year it's, well, it's now going to be about $16,000 per school to use this. And where am I going to get that money? That's the other practice that bothers me: they hook you, your teachers are using it and they love it, but now you can't afford it. So where's the equity there? If you want to make this accessible to everybody, why not make it fair for everybody?

And I know that's a little bit more on the business side, but it does affect learning outcomes, because now you'll have school districts that are a lot bigger than ours, maybe with more funds, who are going to be able to get all of those tools, and here we are, not having access to them. You see that disparity: all of a sudden their grades are better, their state testing is better, and we don't continue to grow. So that, to me, is bothersome as well.

[00:51:50] Sofia: Yeah. So on equity, there are so many things I want to hit on there, but I'm going to try for two. One is the use of "equity" to mean available. That doesn't mean equitable; that means it's available. Something can be available and still not usable by a whole lot of people, because of disability, because they don't understand it, because of language. So we need to listen carefully to the people who are saying "equity," because a lot of the time they're saying it because they think we're going to buy in, but their definition of equitable is really just available, not accessible.

And then there's, again, the grifting, which is: we're going to say you need to do it because everybody needs to have it; otherwise we're going to widen the achievement gap and the access gap, et cetera. But there is some research, I think published yesterday or today, that shows the opposite is true: making it available to everybody and saying, okay, now we all have this, is actually widening that gap, because, again, being available doesn't mean it's accessible or equitable.

And then there are the kinds of resources that are available to some: the money that might be needed to pay for a tool, how it's used in different schools. Dr. Dequan Bashir does a flash talk on this, and I'm hoping he does it at the CSTA conference this year, about how your zip code affects the type of education, the quality of education, the resources you have. And why is that? Why are we allowing that to happen? And yet districts who can pay for it have these bots and are trying these tools. And, you know, I commend them, because they are trying to do this for their students. I do think they're rushing into it.

But I think they're trying to do well by their students. Again, though, in equity work, intent is not as important as impact, and we need to do better. We need to make sure that when we talk equity, we're talking about actual equity. Is a student with a vision impairment going to be able to participate? Is a student with mobility issues going to be able to participate? Is this going to address the needs of all students? Is it usable by all students? I don't think that's been thought through. And a lot of the companies are using equity as a pitch: oh, now everybody can have a tutor, even those who don't have tutors in their schools. Well, your tutor is super flawed. And the response I get a lot is, "Humans are also flawed." I don't care. I know that humans are flawed. I'm flawed; I'm going to make mistakes; I'm going to put my foot in my mouth a thousand times before next Friday.

A bot does not know those things. It cannot form a relationship. It cannot know that the student is more tired today than, than, than the time I saw him. Unless it asks, but it's not going to understand it. It's just going to be a data point that it then uses in its algorithm. But it's not actually going to be able to say like, Ooh, there's a pattern.

Or maybe Wednesdays is not a great day because you know, observations that you make as a human being with the student that you're tutoring. They're not going to be able to address because unless it asks again, very personal questions, it won't know if the student has eaten or not eaten. How is your home life?

How's your, and all those things we know tutors get a feel for and teachers get a feel for, for better or worse, but we know that those, like the personal relationships affect outcomes. So don't feed me a tutor and tell me, tell me that it's the same or better than a human. Because it can't be. And why are you selling that to me as a solution for the underrepresented?

Because that's what they're doing: "It's for equity."

[00:56:38] Fonz: Powerful stuff. Wow, Sofia. Well, this has been an amazing, amazing conversation, and I know that we can continue. Maybe we'll do a part two, you know, in the not-so-distant future. Maybe we can make that happen. But again, I just want to thank you for really opening up our eyes and sharing your expertise.

Well, like you said, we don't call ourselves experts, so maybe just sharing the experience that you've had over 24 years, and obviously the work that you're doing, being a published author too, and what you're seeing.

So thank you so much for sharing that experience with us today. I'm leaving with a lot of things to think about, and I'm making some mental notes of how to help here locally, at least in my district. We know that this is not going to go away, so now we just really have to be very smart about how it's used, really look into those terms of service, and also ask platforms to be completely transparent: come talk to us, share with us what is available, what it is that you're working on, and how you're improving.

That way there is that safety there, so we can make sure this is going to work a hundred percent of the time and be something dependable, so that teachers don't have to be stressed about those outputs or treat them as gospel, and students aren't saying, nope, it's got to be true because that's what the bot told me, and things of that sort.

So there's definitely a lot of work to do, but I want to commend you for being very vocal, for being willing to share, and for the work that you're doing with various groups. And I know that you have a webinar coming up next week with Kip Glazier, Scott, Mary, and Wyman Q. I hope I'm pronouncing his last name right.

I'm so sorry. But yeah, for all our audience members who are going to be checking this out, make sure you check that out. I will put the link in the episode as well, so you can go check it out too. It's on educational leadership in the age of AI, and if you thought this conversation was great, I'm willing to bet that that webinar is just going to be awesome as well, as we continue to navigate this space.

So Sofia, thank you so much. But before we go, I don't know if you're familiar with the show. I know you're a first-time guest, and hopefully you won't just be a first-time guest, but you'll come back. I always like to end the show with the last three questions.

Hopefully the first one doesn't put you on the spot, because I don't try to. The first question is the following: we know that every superhero has a weakness or a pain point. For example, for Superman, it was kryptonite that weakened him. So my question to you is, in the current state of education, or maybe we can just say in the current state of AI in education, what would you say would be your current kryptonite?

[00:59:43] Sofia: One of the things that I worry about is that I'm no longer in the classroom, and I am disconnected from that, and therefore I am always trying to get information from teachers. That's a weakness when it comes to this type of thing, because I have a lot of opinions, but those opinions are based on years of experience from a different time.

And I will say that outright: once you leave the classroom, you no longer have a pulse on what exactly is going on. I think that's a weakness in terms of the work that I'm trying to do. In order for me to be successful in the work that I'm doing, I need to be able to communicate more with teachers, because their voices are critical.

And I don't think that they're involved in the organizations where they need to be in order for all of this to be successful.

[01:00:37] Fonz: Well, thank you so much. I really appreciate that answer. Good answer. All right, question number two: if you could have a billboard with anything on it, what would it be, and why?

[01:00:50] Sofia: It would be my favorite quote, which is by Miguel de Unamuno, and which in English is: he who wants to understand will understand.

I remember the first time I read it, and it's one of those things where, if we want to, we'll look deep into things and understand them, but if you don't want to, you won't understand anything.

[01:01:18] Fonz: That is great. That is deep. That's really a good one. I can definitely see that one as I'm driving, reading it and reflecting on the way home.

Or wherever it is that I'm going. That was a really great one; thank you so much for sharing it. And the last question, Sofia, aside from what we've talked about, and hopefully it's not work-related or anything like that, is the following: do you have a favorite hobby or activity that you wish you could turn into a full-time profession?

[01:01:50] Sofia: Too many. So, no, I work in my hobby. I feel like CS was my hobby; it was that one elective that I first started teaching. So I work in my hobby, and it was one of the hobbies that I had. I like to play with circuits. I like to do jigsaw puzzles, some of which are on my wall.

But I am actually learning to work with jewelry. I'm taking classes on working with metals and such, and I think that would probably be my next choice if I had a choice there, because I find it fun and engaging, and it shuts off my brain, which doesn't usually shut off.

[01:02:31] Fonz: That's great.

Well, that's a wonderful answer, so thank you so much for that. Again, Sofia, it's been an honor and a pleasure. Thank you so much for taking some time, really sharing your experience and your passion for this subject, and going back and forth on this, knowing that we're just trying our best to help educators and to educate not only ourselves, but also to bring some more knowledge, ideas, and thoughts into the education space.

So our educators can, if they need to, just take a little pause. Don't fall into the FOMO, that fear of missing out. It's okay to take a pause. And to our educators who haven't even tried anything yet because they're overwhelmed: it's okay to be overwhelmed. Those feelings are acceptable.

There are many teachers out there who feel the same way, so please don't feel that you are behind. Don't feel that you're being left out. I see you, I hear you. We're here, and we're just trying to help and bring more knowledge into the education space. So thank you so much to all our audience members.

Thank you so much for all the shares, the likes, and the follows. If you haven't done so yet, please jump over to our YouTube channel, where you can give us a thumbs up and subscribe. And you can check out this episode by visiting our website at myedtech.life, where you can find this amazing episode and the other 272 wonderful episodes, full of knowledge nuggets you can sprinkle onto what you are already doing great.

So thank you all for being with us. Thank you all for your support and my friends until next time, don't forget, stay techie.



Sofía De Jesús

Associate Program Manager

Sofía De Jesús is the author of Applied Computational Thinking with Python, 2e, and a computer science education curriculum designer and developer with over 20 years of experience including more than 10 years in the classroom. She has a BS from the University of Puerto Rico with a focus on mathematics and an MS from the University of Dayton in Teacher Education and Allied Professions. She also completed all doctoral credits (64) toward an EdD (ABD). She is currently the Associate Program Manager for CMU CS Academy, an outreach program from Carnegie Mellon University, where she leads the efforts with the Spanish translation of CS curriculum and expansion into Latin America as well as accessibility and equity initiatives for the project.