Episode 286: AI Literacy in Academia
July 22, 2024


AI Literacy in Academia

Join me as I welcome Jessica Parker, Ed.D., and Dr. Kimberly Becker, co-founders of Moxie. In this episode of My EdTech Life, my guests unpack the complexities of AI in academic settings, from AI literacy to the concept of a "post-plagiarism era." Jessica and Kimberly offer great insights into how AI is reshaping academia, the critical importance of understanding large language models, and the potential biases lurking within AI systems.

Whether you're an educator, student, or simply curious, this episode provides valuable perspectives on navigating the AI landscape in academia.

Timestamps:

0:00:30 - Introduction and welcome

0:01:59 - Dr. Jessica Parker introduces herself and Moxie

0:03:07 - Dr. Kimberly Becker shares her background in linguistics and AI

0:04:54 - Explanation of how large language models work

0:07:07 - Discussion on AI detectors and their limitations

0:11:23 - Concerns about AI implementation in education

0:14:44 - Hesitations and misconceptions about AI in academia

0:18:37 - The importance of critical thinking when using AI

0:21:34 - Exploring the concept of a "post-plagiarism era"

0:26:17 - AI as a collaboration tool in higher education

0:28:48 - Recent study on dialect bias in AI language models

0:32:01 - Examples of bias in AI-generated images

0:35:22 - Considerations for AI use in grading and assessment

0:37:37 - The story behind Moxie and its mission

0:40:13 - How Moxie is improving academic rubrics through AI

0:43:14 - "EduKryptonite": Challenges in using AI for education

0:45:44 - Book recommendations from the guests

0:46:57 - Fun question: What job would the guests like to try for a day?

0:48:24 - Closing remarks and how to connect with the guests

Don't miss out on more insightful conversations at the forefront of educational technology! Follow My EdTech Life on all social media platforms, subscribe to our YouTube channel, and hit that notification bell to stay updated on our latest content.

Join our community of educators and tech enthusiasts as we explore the future of learning together!

--- Support this podcast: https://podcasters.spotify.com/pod/show/myedtechlife/support

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Transcript

Episode 286: AI Literacy in Academia

[00:00:30] Fonz: Hello everybody and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this beautiful day, wherever it is that you're joining us from around the world. Thank you so much for all of your continued support. We appreciate all the likes, the shares, the follows. Thank you so much for engaging with our content.

And I want to give a special shout out to our sponsor, EduAid. Thank you so much, EduAid, for believing in our cause and bringing some amazing conversations into the education and edtech space. And ladies and gentlemen, I am excited to be here with you all today. I have two amazing guests. I have Kimberly Becker and Jessica Parker, who are joining me this morning so we can have a great conversation, obviously centered around the work that they are doing, what they're seeing, and just to get a little bit about their perspectives on AI.

And of course, that's what we've been talking about throughout some of these episodes here. So I'm really excited. Ladies, how are you all this morning?

[00:01:32] Jessica: Hi, we're good.

[00:01:34] Fonz: Excellent. Well, thank you. Thank you so much for making it here today. And before we get started, I would love my audience to know a little bit more about you.

So we'll go ahead and start with some brief introductions and what your context is within the edtech ecosystem and education space, you know, and so I'm really excited. So we'll go ahead and start with Jessica. Jessica, tell us a little bit about yourself and your context in the space.

[00:01:59] Jessica: Sure. Yeah. So I'm Jessica Parker.

I'm the co-founder and CEO of Moxie. We're an AI company. We predominantly work with graduate students, postdocs, and faculty. These are all soon-to-be researchers, students learning how to conduct research, or actual faculty who are currently conducting research and writing grants or publishing. In terms of my background:

So I have around 15 years of experience in higher ed. I've been teaching in doctoral programs. I was a research director at Northeastern University. And about 8 years ago, I started an academic consulting company. And so I continue to teach, I continue to consult. So I get to see higher ed from sort of both sides, as a faculty member and as a consultant, which definitely informs our work at Moxie.

[00:02:46] Fonz: Excellent. That is wonderful. I love that range of work and that experience. So I'm excited to really get your insights as far as what you're seeing, again, obviously in this space. And we interact a lot on LinkedIn too, as well. So definitely looking forward to that. And Kimberly, can you tell us a little bit about yourself and what the context is within the EdTech space?

[00:03:07] Kimberly: Yeah, my name is Kimberly Becker, and I have been either a teacher or a student my whole life until really recently, when I co-founded Moxie with Jessica. And I've taught at just about all levels: high school, community college, and university. I got my PhD in Applied Linguistics and Technology, which is very related to AI because I studied corpus linguistics, and so I'm quite interested in data sets and, you know, the back end, basically, of a large language model.

I'm very interested in the models themselves.

[00:03:49] Fonz: Excellent. Well, I'm excited about that, too, because that's something, too, that is of interest to me, especially with so much that has transpired since November 2022. And I know for all our audience members, I know you've been getting this type of content since November 2022 when this broke, but I'm really trying to bring in so many people that have different experiences in their fields.

And then, obviously, as AI continues to progress, the work that is being done. And of course, we'll learn a little bit about Moxie too, as well. But one of the things that I did want to talk about here, Kimberly, is your background, the linguistics that you're talking about, and how large language models work.

So if you can, can you just give us, just for any audience members still who may not be familiar, how that works? If you can give us just, you know, the easiest, most simple definition that you may have as far as how a large language model may work. I know that there is a lot that goes into it, but just for our audience members, K-12 space and all around, something easy that they can understand.

[00:04:54] Kimberly: Sure. So a large language model basically is a big collection of data from the web. So it includes blogs, websites. It's just a big data set. And machine learning allows the machine to find the patterns, learn the patterns, and then predict what the next word or phrase is going to be from that data set.

So when you interact with the chatbot, it's basically outputting a prediction of what it thinks is the next word, and the next word, and the next word. And it's quite good at it, as we know. But it's tricky, because it's not really thinking or communicating in the way that we think of. So the verbs we use to describe its output are a little bit deceiving, because that's not what's going on.

Really it's just a predictive mathematical model of communication.
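To make that concrete, here is a minimal sketch of next-word prediction, assuming a toy bigram (word-count) model rather than the neural network a real LLM uses; the corpus is invented for illustration.

```python
# A toy illustration of next-word prediction: count which word follows
# which, then predict the most frequent follower. Real LLMs learn these
# patterns with neural networks over subword tokens, not raw counts.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word again".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in the training data."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> 'next', the most common follower of 'the'
```

Chatbot output is this same move repeated, one predicted token at a time, just at a vastly larger scale.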

[00:05:51] Fonz: Excellent. And I love that definition. And again, just because of so much of the hype that we hear, like you said, I think with a lot of the ways that we describe AI, or the way AI is being described by a lot of people, there's this notion and misconception of, like, oh, this thing, you know, is something that is live; it's almost like a brain that everybody's connected to, and that they're getting the most accurate answers.

And I know that it sounds very confident, you know, or I should say the output that it gives looks and reads very confidently. But like you mentioned, it is really just doing a lot of that predicting. Now, one of the things too is that I'm a big fan of Dr. Emily Bender and the work that she's put out.

And then I know I saw some posts that you put out recently too, as well, I think it was yesterday, where you put out a graph talking about those large language models and so on. So can you tell us a little bit more, just as far as some of the research that you have seen? Maybe a couple of bullet points, positives and maybe negatives, within the space for, I guess, education, and how this can affect it.

[00:07:07] Kimberly: Jessica, do you want to take this one?

[00:07:10] Jessica: Yeah, so there's a lot to unpack there. I'm going to just talk about a few things that are top of mind. So we're spending a lot of time right now unpacking how AI detectors work.

I think it's an interesting topic because they're out there, people use them. There are press releases showing the uptick in the use of AI detectors. I know Turnitin put out a press release stating that, you know, their AI content detector had by now scanned over 200 million papers, and 20 percent of them were found to be AI generated.

And when we think about how an AI detector works, it uses a large language model. So it's the same concept as what Kimberly described. It's predicting the next word. It's a mathematical model of communication. And so there's this, I think, myth out there that it can detect its own generated text accurately, but, at the most basic level, I mean, it's more complex than this.

It is still just using a large language model in order to distinguish between human and AI generated text. But the input to large language models is human generated text; it's taking stuff off the Internet that humans have produced and written over time. And it's designed to produce human-like text.

So it's like we're asking it to detect that accurately, and it's just prone to false positives. At the end of the day, it's still using that predictive mechanism that large language models use, so expecting it to behave accurately is not realistic. And so when I think about what's happening in the overall higher ed space, I think the use of detectors is a sign of low literacy.
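To see why false positives are baked in, here is a minimal sketch of one statistic detectors commonly lean on, perplexity, i.e., how surprising a text is to a language model. The per-token probabilities below are invented placeholders, not real model outputs, and real detectors combine more signals than this.

```python
# Perplexity: exp of the average negative log-probability per token.
# Low perplexity (predictable text) gets flagged as "AI-like" -- but
# fluent, formulaic human writing is also low-perplexity, which is one
# reason these detectors produce false positives.
import math

def perplexity(token_probs: list[float]) -> float:
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities a model might assign to two texts.
predictable_text = [0.4, 0.5, 0.6, 0.5]    # flows like typical web prose
surprising_text = [0.05, 0.1, 0.02, 0.08]  # unusual word choices

print(perplexity(predictable_text))  # ~2.0, low -> flagged as "AI-like"
print(perplexity(surprising_text))   # ~18.8, high -> judged "human-like"
```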

And I think this is a problem across K through 12 and higher ed, where faculty, teachers, students, when they're using these large language models, I don't think that they're intentionally and appropriately using them. I think it's just a sign of low literacy. We haven't really received the professional development that we need to know how to use them.

And so we see people anthropomorphizing them, which we tend to do at Moxie too; we call our AI chatbot Moxie. It's challenging, because I was just talking to a faculty member at the University of Rhode Island about this, and Kimberly and I have noticed it too: people tend to use it, and especially if they're novice users, they'll get that initial output and they might stop.

But the real benefit is in that meaning negotiation, that iterative use of the chatbot, just like when you communicate with a human. So on one side, when I'm teaching my students, I'm like, I want you to engage in conversation. I want you to have follow-up questions. Just like if you went to a human, you wouldn't ask a question and then walk away when you get their first response; you would ask follow-up questions.

So in some ways I think anthropomorphizing it is good, because it reiterates the true nature of a chatbot, which is that iterative conversation, that meaning negotiation. But then the pitfall of that is this tendency to attribute, you know, thinking to it, that it's actually thinking when it's giving a response, when it's not.

So it's like two sides of the same coin. I think other pitfalls that we see are this binary sort of tendency to either ban it or just totally give in to it and accept it, instead of teaching students how to position it as, say, a collaborator. I think that these one-size-fits-all approaches don't work.

And we know that in education, as educators, it depends on the learning outcomes. It depends on what we are actually assessing. And so it goes back to this larger field of study that's evolving around assessment, which is painful: to have to rethink how we're designing our assessments and what we're actually measuring. It's forcing us to rethink all that.

It's forcing us to rethink all that. So just some of the advice that I give faculty I talk to is resist the urge to take this binary response. Become a super user yourself. And then at the end of the day, you're an educator. You know that it comes back to the learning outcomes. So how can you use it to help students achieve those outcomes?

Or how do you tell them to not use it because you're worried about it prohibiting learning for that specific outcome? Sorry, that's a blended answer, but

[00:11:23] Fonz: no, no, actually, that's great. You definitely unpacked a lot, especially just, I wouldn't say concerns, I guess, but I mean, it's just what we're seeing.

I mean, obviously there's two sides, you know, and, like I said, thanks to Dr. Nico McGee too as well, I always use the term cautious advocate. There's many, you know, people out there, and I really try and remain in the middle, but then all of a sudden you'll see something. Of course, what happened with LAUSD. You see some other things, you know, as far as privacy, or you see, you know, news about entities and the way that they're scraping without giving any knowledge of what they're scraping.

And then sometimes you're like, wow, what's going on here? And then obviously that educator side of me, I'm like, wow, you know, this can be so helpful for teachers, for some of those menial tasks that they do. But what I'm very cautious about, though, working in the K-12 space personally, is obviously the introduction of this with the younger students, with the chatbots, and all of that. And especially, like you mentioned right now, I think there has not been enough training

to appropriately use the tools. Because I think in the K-12 space, and I don't know if you see this in the higher ed space, usually it's: teacher goes to conference, teacher is wowed by shiny new tool that does this, and then immediately just comes back without, you know, letting CTOs know, like, hey, I'm going to be using this AI tool, and really with no regard to terms of service.

And then of course, data, and where's the data stored, and so many questions and so many variables there. And I think what happens is that when I go to LinkedIn, you see so many people that have so many great ideas, or you have states that are coming out with, you know, just great write-ups for teachers, for educators, and so on, and regulations and policy.

But at the end of the day, it's like that kind of stays out there, and it hasn't quite trickled down and filtered down to the teachers, to, you know, use the tools appropriately and really think about, like, is this a healthy way of teaching? You know, are the students really going to gain from, you know, talking with a chatbot, where maybe a student has to see things differently or be taught differently?

And they often talk about personalized learning, but we just see how, you know, Dr. Becker was just saying, how this is statistically just, you know, predicting the next best word. And so those are just some of the things that are kind of a little scary, but I am excited about it, because I do use it and see the potential that it has.

But, I guess, it's just the terms of service and of course the safety of that. And I know that it's going to get better. But for now, those are the little stumbling blocks that I run into that I just want to make sure I have the best answers to, because we all want what's best for our students as well.

So, you know, Kimberly, I want to ask you, what are some of the things that you may have seen, as far as your experience? What are some of the hesitancies from professors, or maybe from anybody you hear from or work with in the K-12 space? What are some of those things that they're hesitant about when talking about AI, or using AI, whether for themselves or with students?

[00:14:44] Kimberly: Yeah. Well, Jessica touched on this a little bit. We're very much advocates for AI literacy, and we published a white paper with a framework for AI literacy that goes through three levels, or aspects, of what it means to be literate.

And we're hoping to continue fleshing out that kind of theory around what it means to know how to use it well, because you have to know how to use it well in order to teach it well. I think the main thing is, and this is why the framework starts with functional AI literacy, which just means you know how to interact with it: you understand what's happening under the hood.

You don't have to be able to fully understand it in the way that, like, a machine learning engineer would understand it. But you need to understand the predictive technology. So one of the things that I have found people believe is happening is that the chatbot is somehow going into its data and pulling chunks of text and then displaying them directly from some source that has been harvested from the Internet.

And that's not what's happening. It's a pattern. It's just predicting the next word based on multiple examples of that word in context. It's never taking exact chunks of text and dumping them out as output.

And so that's one thing, I think: once you understand how the predictive technology works, and you do have to at least look at some of it, I don't know, we have a very easy slide that explains kind of vectoring, and what a vector space is, and how that works, then I think people can wrap their heads around it. And then they're like, oh, so it's not actually taking anybody's exact wording.

No, it's not. In fact, if it were, then every plagiarism detector would work on AI generated output, right? I mean, that's why we can't detect it in the same way we can detect, you know, plagiarism, like Turnitin.com has always done. So that's one thing that I think is really important. And then the other thing is just to be critical about it, just to look at it closely and to iterate your interaction with it.
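For readers who want the vector-space idea Kimberly mentions in miniature, here is a minimal sketch with invented three-dimensional vectors; real embeddings are learned from data and have hundreds or thousands of dimensions, but the geometry is the same.

```python
# Words live as vectors; "meaning" is proximity in that space, not stored
# sentences. Similar words point in similar directions, so the model can
# generalize over patterns without ever copying a source text.
import math

embeddings = {
    "professor": [0.8, 0.1, 0.3],  # made-up coordinates for illustration
    "teacher": [0.7, 0.2, 0.3],
    "banana": [0.0, 0.9, 0.1],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["professor"], embeddings["teacher"]))  # ~0.99, close
print(cosine_similarity(embeddings["professor"], embeddings["banana"]))   # ~0.15, far apart
```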

People think that it's going to be a quick fix for something, and it's not. You have to try and try again to get the best output. And that may mean that it actually takes longer than it otherwise would. You know, there are some things that Google search is still better for, and then there are other things that it's not.

So, a critical approach, and not only to the biases. You know, when we think critical, we often think, oh, it's going to produce these biases and it's going to enhance those in the real world, and I do think that is a concern. But just on a practical level, the first critical aspect of interacting with it has to be iterating, and not just taking it at face value, but, you know, really pushing it to give you more, because that's what it's trained to do.

It's trained to sort of please you. And that's what people, people will often text me, just friends who know I'm, you know, interested in this, and they'll be like, it's so human! It apologized to me, you know. Or, I said thank you, and it said you're welcome. And I'm like, well, of course, because it knows predictively that the phrase that follows "thank you" is "you're welcome."

And if you kind of scold it, then it knows to apologize and amend itself in some way. So, yeah, it's really interesting how we want to anthropomorphize it, but we can't, because then we lose that critical stance.

[00:18:37] Fonz: Excellent. Yeah. And you know, I like that you mentioned that, because in the very beginning of, you know, trying this out myself, I was of course getting replies and, you know, getting these responses and interacting with it.

And what worries me, with the workload and of course all the stuff that occurs within, you know, the K-12 space, my biggest fear is that this could turn into just a crutch. Like you said, immediately it's like, oh, you get the first response, and teachers may take that at face value without, you know, just: okay, let me look through this very carefully, let me just make sure that it works with my content objective, with the goals that I have for class, and so on. And so that's kind of my big concern, too.

And also because of the knowledge cutoff dates that it may have. In many states, like, for example, in our state, they have updated what they call the Texas Essential Knowledge and Skills, you know, or the TEKS, here in Texas.

And then of course it may not be accurate. So my thing is, like you mentioned, as far as the AI literacy aspect of it, I think it's just. Getting that information out there in the student's hands, and actually in the teacher's hands, I should say, so they know, like you mentioned in your paper too as well, just understanding, and you mentioned it here earlier, being that co collaborator, but really working together At modifying and prompting correctly to get that output that really lines up.

And so I think a lot of teachers may have that misconception of, like, hey, once I pop this in and say, hey, 4.4H, the student will learn how to divide with decimals, and then they get that first output, they're like, okay, I'm good to go. Let me go ahead and just give that to my students or just introduce that into our lesson.

So I really like that you did provide this framework, which I did notice has been downloaded close to 300 times, or maybe even more, just from the website that you have. And that's wonderful. So we'll definitely link this in the episode show notes too, as well, so people can go ahead and read, you know, the white paper that you've provided.

And of course, to give us and help us get some better insight. Like I said, working at the district level, I really need to hear from, you know, people with your experience all over the world, you know, people that are immersed in it, to help us make those better decisions and just kind of feel a little bit safer and tread this landscape very carefully too, as well.

So I want to ask, because I know in your paper too, you introduced the concept of the post-plagiarism era. So can you tell us a little bit about that? And we'll go ahead and start with Jessica. Tell us a little bit about the post-plagiarism era.

[00:21:34] Jessica: Yeah. So that's based off the work of Sarah Eaton. I believe she's at Calgary University.

Is that right? In Canada? Yeah, I think it's Calgary. Hopefully I got that right. But yeah, she wrote a textbook, and post-plagiarism is a term that she used, and Kimberly and I really latched onto it, because from day one the term plagiarism comes up quite a bit. And it's about educating folks: again, if you understand how an LLM works, like Kimberly just said, then you would understand that it's not plagiarizing.

It's not pulling a sentence or a phrase from a single source. It's just predicting the next word. So therefore our definition of plagiarism does not apply when we use large language models to help us with writing tasks. So you can't plagiarize from it. First of all, it's not an author, so you're not taking the ideas of a person.

You can't cite the LLM and give it attribution as a source. And it's also not plagiarizing from a source. So this definition of plagiarism just doesn't exist in the context of using LLMs. Instead, for folks who are really in this frame of mind of, well, how do we address it if a student is using it in a way that they're calling plagiarism?

I'm like, well, that falls under just academic honesty and integrity and academic conduct, not the plagiarism policy. So that's one aspect of it. But going back to this idea of being in a post-plagiarism era, as Sarah Eaton says, it is this recognition that we are in a new era where, as humans, we are going to be collaborating with AI in a multitude of ways.

And so what does that actually look like for our output as humans, whatever that output may be, whether it's writing a research paper, developing a course, or writing a LinkedIn post? These outputs that typically have been a hundred percent human generated, now we're collaborating with AI on.

And so her stance is that the traditional rules of plagiarism no longer apply, that human-AI collaboration is going to become the norm, and that it's actually going to enhance our creativity. But this all hinges on learning how to use it appropriately and effectively.

And what you'll notice in her framework, and when you hear Kimberly and me talk about AI, is that we don't focus on efficiency and speed. And I think that's a major issue that we have right now: what's being sold and marketed to us. I know that Microsoft uses the term teaching speed. And I wrote something about this the other day, where it's like, when did speed and efficiency become, like, so important to us as educators?

And so when I think about this idea of human-AI collaboration and being in this post-plagiarism era, it's not about producing something faster or more efficiently. I think that we can get to the point where our creativity is enhanced and ultimately our output is higher quality when we're not focused on speed, because often it can take longer to produce something with AI, to truly collaborate in a way that stays true to your original thoughts and ideas. Not passing it off as your own, being transparent and honest about how you used it, puts us into this new era where we're going to start to see a lot more acknowledgement of how AI was used in the writing or the creative process.

So it's no longer just acknowledging a human's role in the development of a product; it's also acknowledging what the role of AI was in it. And I think central to this is us developing a better understanding, and I see more researchers focused on this, of what it looks like to truly collaborate with AI.

Like I've seen some studies coming out where they're trying to look at what's happening in the brain, in this collaborative process. But I think in a more practical way, it's like, what does human AI collaboration look like for a specific task? Because it's going to look different for a task where writing is being evaluated versus a task where you're creating a video for a marketing class.

And so I think, for educators, that is the level of thinking that's required. It's like, well, what am I teaching? What do I want the students to learn? And what does AI collaboration look like within that context? And how do I reframe this idea of creativity and collaboration, and even plagiarism and academic honesty, based on the sort of expected collaboration that I think we're going to see, at least in the higher ed space?

I think it's more difficult for children, where you're thinking about child development and brain development. But in our space, in higher ed, I do think this idea of AI collaboration is very much going to become the norm, and that's really what Sarah Eaton's framework is about.

[00:26:17] Fonz: Yeah, no. And I love a lot of the things that you said, especially about higher ed, with myself currently working on my doctorate as well, you know, believe it or not.

I tell people, you know, I know that there's two sides, whether you're in the K-12 space or in the higher ed space; there's always two sides, the ones that are for, the ones that are against. Luckily for myself, you know, I'm going through educational technology, and that's what my doctorate is going to be in.

And obviously I'm working on AI, and actually these episodes are all my data sets, all the episodes and interviews, because, you know, the discussions, you know, it's data. So I was talking to my professor, and he's like, oh yeah, here's what you can do: just download your transcripts, create a PDF, go to this app and this app, get an overarching theme, think about some research questions, and then of course write. And he's all pro-AI, because he understands, you know, in his forward thinking, it's like, hey, the tools are there.

Rather than fight against it, you know, say, okay, how can we leverage that? Especially for myself, going through that process, it has really helped to be able to be, honestly, my collaboration buddy in doing all that and putting my ideas together as I start chapter one and chapter three and so on.

And it's been wonderful to see. Obviously, you know, the K-12 space is a little different; obviously terms of service and things like that come about. But at some point, you know, at least the teachers should be able to understand that on their side, because our district keeps ChatGPT and Claude and access to that for teachers only.

So at least for them to be able to experiment, to really see it as a co-collaborator in a lot of the tasks, and understanding the way that it really works, helps out a lot. And I think a lot of what you shared right now are some great soundbites that I know I can go back to when doing a PD session, to be able to help our teachers know what AI is and what it isn't.

And then of course, like Dr. Kimberly was saying, you know, some of the limitations that are there. So, going back a little bit, as far as the limitations, I just wanted to go back to Kimberly. I know on LinkedIn, I believe it was yesterday, you had posted about the new study that reveals the dialect bias in AI language models.

Can you tell us a little bit about that bit of research that you found, and of course, some of the implications that it may have moving forward?

[00:28:48] Kimberly: Yeah, so that study used a sociolinguistic research method to analyze how the AI would predict, like finishing a sentence, if the dialect was standard English versus if it was African American English.

And it found that it definitely predicted less educated occupations, occupations that did not require a college degree, when they used the African American dialect. And I think they pushed it further. I can't remember exactly, but I know that you can push it to see how far they could take the bias.
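For a sense of how such a probe is built, here is a minimal sketch of the matched-guise style setup, holding content constant while varying only the dialect. The paired sentences are modeled on the kind the study used; `score_association` is a hypothetical stand-in for reading occupation probabilities out of a real model, not an actual API.

```python
# Matched-guise probing sketch: same meaning, two dialects; compare which
# occupations the model associates with each version of the sentence.
PROMPTS = {
    "standard_english": "I am so happy when I wake up from a bad dream because they feel too real.",
    "african_american_english": "I be so happy when I wake up from a bad dream cus they be feelin too real.",
}
OCCUPATIONS = ["professor", "engineer", "cook", "guard"]

def score_association(prompt: str, occupation: str) -> float:
    """Hypothetical stand-in: in a real probe, return the model's probability
    of `occupation` completing a template like
    'The person who says this is a ...' after the prompt."""
    raise NotImplementedError("wire this to an actual language model")

def probe(prompt: str) -> dict[str, float]:
    return {occ: score_association(prompt, occ) for occ in OCCUPATIONS}

# Comparing probe(PROMPTS["standard_english"]) against
# probe(PROMPTS["african_american_english"]) shows whether the model
# systematically shifts toward lower-status occupations for one dialect.
```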

And of course, you know, in every study I've seen where the research question is, is this output biased, the answer is overwhelmingly yes. And, you know, you referred to Emily Bender, and she's one of the first people who raised those ethical concerns about the reinforcement of biases and the propagation of, you know, stereotypes, and even falsehoods as truth.

So I think, yeah, it's really concerning. And if you're working with students, especially young students, I think even beyond text generation, it becomes very, very obvious when you start generating images. You know, of course, a picture is worth a thousand words. And so that can be a really good way to introduce students to this concept at any level.

But I think especially children, because then you see it immediately. You know, if you say, for example, we did a professional development workshop, and I demonstrated how I asked it to generate an image of a group of CEOs. And the first image was a bunch of skinny, I'll just say it like it is, a bunch of skinny white people.

You know, dressed up in gray and black business suits. And so then I asked it to, hey, let's add some diversity to this group. And it got better, but it was a process. And I show, you know, the images, and that's easier than asking people to read, because the truth is that it's very covert. And that's what the study that I posted about on LinkedIn yesterday was looking at: you know, the more covert things that you have to really look for, because ultimately these systems will be embedded into human resources software. And, you know, they're already accepting CVs and identifying, like, a shortlist of people to interview based on keywords in resumes. And so, as these technologies, you know, become more and more embedded, and sometimes we have less choice as to whether we're even making a decision to use them or whether it's just automatic, we really need to understand what kinds of biases these are propagating.

[00:32:01] Fonz: Yeah, no, and I agree with a lot of what you said there, because, especially, you know, with some platforms that are out there that do a lot of text-to-image, things of that sort, I've had similar experiences with a lot of them, even the ones that are accessible in the education space.

I mean, I went in there, and I know that there was a huge video on TikTok that got millions of views where, you know, there was a lady that put in, you know, an image of, I think it was, I forget, it was like an ankle bracelet or something like that. And then of course the images that came out. And I said, well, let me go and try something.

So I went to that platform and I typed in "janitor," and for all of the first 10 pages that I regenerated, and regenerated, and regenerated, it was the same kind of image, and of course the same type of ethnicity and background. And I'm thinking to myself, wow. You know, even for myself, when I try and create, and I see a lot of teachers that'll put themselves out there and say, like, hey, you know, I created this with something.

It's oftentimes very, kind of, one of them mentioned, it's like, hey, somebody said, this made me very voluptuous. For myself, when I'm typing in a prompt for myself, I just put, like, Hispanic male. And I put, you know, kind of two words. Like chubby, I put chubby, you know, I admit it.

And then I put "with a faux hawk" and everything. It always brings up a Hispanic male that is very obese and very overweight. And then it always gives me a goatee, and I don't even put that in the prompt. But it's just kind of interesting to learn about those things, like you mentioned, because, I mean, with language, what it's understanding and the output that you're getting is very interesting.

So those are, I think, some of the concerns too, obviously, not just in the K-12 space, but in all spaces, because we're seeing, you know, a lot of that happen. Now, also, going back to this, it's very interesting, and I wanted to ask for that feedback from you, because I don't know if you have heard, but I know one of the big things this past year in Texas: our state exams, our writing exams, are starting to be graded no longer with, you know, actual human verifiers.

I think there's a certain percentage that'll still go through human verifiers, but a certain percentage will go through these AI, you know, readers that'll go straight through. And my concern this year, when I was talking to our content specialist, was, hey, you know, did y'all notice anything different? You know, how is this graded?

Because I don't get involved in those conversations, since they get more contact and access to TEA, my concern was: is there some kind of identifier that goes along with this test? Like, maybe, hopefully not including a name, maybe it's a number, because of the language.

And so that was some things that came up, and I know probably those will be some questions that will come up this year as teachers are going through grades, you know, for the state and as this moves forward. So it's very interesting that you bring this up because there are some things that obviously there's still a lot of work to be done in that area.

So thank you so much for sharing that.

[00:35:22] Jessica: Yeah, I think it's important for leaders to also determine what is an acceptable level of inaccuracy. And that can be a difficult question to ask, but I was reminded of it: we have a webinar today that I've been prepping for, and right after Turnitin released its AI writing detector, Vanderbilt University put out a press release saying that they were no longer using it, because to them a 1 percent inaccuracy rate is too much. They gave the example of, well, we have around 750,000 papers that will be pushed through this AI writing detector, and if even 1 percent of them are incorrectly accused of cheating, of generating their content with AI, that's not acceptable to us.

And so I think of the example you're giving. If the AI has a 1 percent rate of inaccuracy, is that acceptable? And for many districts, it might be, when you compare it to how much money they might be saving and where that money can be put elsewhere in the budget. But these are all, I think, challenging decisions that stakeholders are having to make, where they're looking at the decisions in front of them. Also, these, you know, companies are doing their own internal research, so you have to question that level of bias.

And I would not want to be in that decision-making seat, because I think it's very challenging. So I'm not trying to be critical of that decision. But I think it is something that you just have to ask yourself as a district: what are the pros and cons? What are we potentially losing here by making this decision?

And maybe it is worth it. Maybe the benefits outweigh the risks.
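The back-of-envelope math behind that Vanderbilt example is worth making explicit, since a small rate becomes a large absolute number at scale (figures as cited above).

```python
# 1 percent of 750,000 papers: the error rate sounds small until you
# convert it into the number of students who could be falsely accused.
papers_scanned = 750_000
false_positive_rate = 0.01  # the 1 percent figure cited above

falsely_flagged = papers_scanned * false_positive_rate
print(f"{falsely_flagged:,.0f} papers potentially flagged in error")  # 7,500
```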

[00:37:11] Fonz: Excellent. Thank you so much. Well, as we kind of start wrapping up, I do want to give you an opportunity to share some of the work that you're doing through Moxie. So we'll go ahead and start. Jessica, if you want to tell us a little bit of background, how Moxie started, the services that are being offered and how people may contact you, you know, to be able to, you know, get some consultations or, you know, maybe collaborate with you.

So tell us about Moxie and Moxie's vision.

[00:37:37] Jessica: Yeah, so Kimberly and I were previously working on another business. We were working on a virtual academic writing center, and we were running into a lot of the same problems that our university partners were having around why their brick-and-mortar writing centers weren't working.

We tried to solve that with a virtual writing center. And we were kind of in the midst of trying to figure that out, because we were truly solving a problem, but we couldn't scale the solution. So we were in the midst of that when we started using ChatGPT-3, and we did our first study together: we looked at how well it performed in terms of automated writing evaluation. We were particularly interested in how it could be used for formative feedback on academic writing, 'cause that's exactly what we were doing with humans at the writing center. And we immediately saw its potential and published that study. And that's kind of how Moxie was born, in the midst of that.

And it goes back to our philosophy, where just because tech can do something doesn't mean that you use it. So our framework is: if the solution to a problem cannot be humanly scaled, that's a good opportunity to think about technology solving that problem. And so we had the problem of formative feedback on academic writing in this virtual writing center, and saw the potential for generative AI to be a scalable solution.

So we started building formative feedback tools for various aspects of academic writing. I started using them with my doc students in our academic writing course, got great feedback. We continued our research, and then we expanded our tools to include research tools to help researchers make decisions around research design, data analysis plans, ethical considerations.

So, all of Moxie's tools are designed to provide guidance and feedback, not to generate the work for you. So not to write your data collection plan or write your methods chapter, but rather to act as a collaborator. And so Kimberly and I infused all of our domain expertise into the development of those tools.

And right now, we've had around 4,000 users this year, and we're partnering with multiple universities to pilot our tools with their doctoral students. And so that's kind of what's happening right now and how we got here.

[00:39:58] Fonz: Excellent. Kimberly, so tell me a little bit about that journey too as well.

What are some of the exciting things that you're seeing? I know you just mentioned that you're partnering with universities, but, you know, tell me a little bit about your side, you know, with Moxie and the work that you're doing.

[00:40:13] Kimberly: Yeah. Well, one of the most interesting things that I think we've noticed in partnering with universities, and in building these tools ourselves, is, you know, the importance of a rubric, the importance of the clarity and the robustness of that rubric to really flesh out what you're looking for.

And so, through, you know, telling a machine what to look for and how to give feedback, you have to know exactly what this is going to look like on the back end. And how is the machine, you know, once the student puts in what they have written, how will the machine identify in that writing whether it's meeting a goal or not, whether it's achieving the outcome or not? And so one thing that we found is that this is a great way to improve your rubric, because it's like giving the instructions of your rubric to the machine to see if the machine can then interpret and apply them.

And it's been fascinating just to see how much better our rubrics have gotten, and our understanding of what we're really looking for, because then we can translate that into the teaching, right? Like, how do we tell a student what good writing looks like, and how does good writing accomplish the communicative goals that we're looking to achieve there?

So it's really given us a lot more understanding about the language. And for me, I'm always looking at the language. And so that really was a surprise. We didn't expect that, and that's something that I'm really curious about.
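To picture the pattern Kimberly describes, here is a minimal sketch of turning one rubric criterion into instructions a model can apply to a student draft. The criterion, the draft, and the wiring are invented for illustration, not Moxie's actual implementation; the resulting prompt would go to whatever chat model you use.

```python
# Rubric-driven feedback sketch: make the criterion explicit enough that
# a machine (or a student) can apply it. Writing it this way is also a
# test of the rubric itself -- vague criteria fail immediately.
RUBRIC_CRITERION = (
    "Thesis clarity: the opening paragraph states a specific, arguable "
    "claim and previews how the paper will support it."
)

def build_feedback_prompt(criterion: str, student_text: str) -> str:
    return (
        "You are a writing tutor giving formative feedback, not a grade.\n"
        f"Rubric criterion: {criterion}\n"
        "Quote the relevant sentence(s), say whether the criterion is met, "
        "and suggest one concrete revision.\n\n"
        f"Student draft:\n{student_text}"
    )

draft = "This paper is about social media. It has many effects on teens."
print(build_feedback_prompt(RUBRIC_CRITERION, draft))  # send to your LLM of choice
```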

[00:41:52] Fonz: That's excellent.

Well, thank you so much, ladies. It's been wonderful to have you here today and to share a little bit about not only yourselves and the work that you're doing, but your insights. Like I said, you know, one of the things that I love about this show is not only amplifying the voices of all my guests and the work that they're doing, but also just contributing to the education space with these conversations.

And so everybody from K all the way to higher ed, you know, that is curious about AI, or maybe this particular, you know, subject that we're talking about today, can just get some information there and roll with it. The more knowledge that we have, the better the decisions that can be made.

So thank you so much. And before we end, I always love to end the show with the following three questions, just as a kind of lighthearted way of ending the show. And I know I put those on the calendar invite, but just in case, you know, we'll kind of take it slow, and we'll start with Kimberly first.

So my question to you, Kimberly, is: we know that every superhero has a weakness or a pain point, okay? So, you know, I want to ask you, in the current state, and I want to say maybe education altogether, K through, you know, higher ed, at this point in time, what would be your biggest education pain point, or as I like to call it, the EduKryptonite?

[00:43:14] Kimberly: Mm.

Well, for me, so, you know, one thing I've noticed about what other people do when they use AI is that they sort of stop after they get the initial output. They think, well, it did its job, and now I have the answer. But for me, the kryptonite is, like, well, I can ask it again to improve that.

And then I get into this loop of, like, never-ending wordsmithing and improvement-seeking and optimization. And it's just maddening, because a lot of times done is good enough. So it's just a loop my brain gets into, and it does not serve me well.

[00:44:02] Fonz: All right. Good point. I like that.

Jessica, how about yourself?

[00:44:06] Jessica: Oh gosh. Kryptonite, I,

I don't know. I keep going to the positives in terms of, like, kryptonite. I think, I don't know, I'm similar to Kimberly. I'm a perfectionist, and I tend to wordsmith and so on. On one side, the way I use AI is in a good way: I overcome writer's block, so I don't have writer's block anymore. But then I sort of make up for that time that I would normally be stuck staring at a page by perfecting the output.

And so it's like I'm shifting that time from the beginning to time in revision, which I don't think is a bad thing, but I do think it sort of pushes against this misconception that anytime you use AI, it's going to make you more efficient. I don't think that. I think it can lead to inefficiencies, but I think the reality is, our time is just shifting.

So, like, not falling into that planning fallacy of: I'm going to do this faster this time.

[00:45:04] Fonz: Excellent. Great. And I'm with you on that, ladies, since November 2022. You're absolutely right, the time just kind of shifted. Now it's like, no, no, I've got to keep working on it. And so that's one of the things that I started learning too: you can be efficient, but then there's that point of diminishing returns where I'm continually trying to perfect and work and so on. And then it's like, you know, sometimes good is already done, you know, or, like, I think that's something that you mentioned. So yeah, definitely getting caught in that. All right.

Now we'll start with you, Jessica. First, I want to ask you, what is one book that you think should be mandatory for everybody to read?

[00:45:44] Jessica: The Alchemist.

It's an allegory. It's a story about finding your purpose. And it's a short read. I probably read it two or three times a year. It's really easy to read. So, The Alchemist.

[00:45:58] Fonz: Excellent. There you go. Great insight. All right. And Kimberly, how about yourself?

[00:46:02] Kimberly: Well, can I give one fiction and one nonfiction?

[00:46:05] Fonz: Yes, by all means.

Go ahead.

[00:46:06] Kimberly: Okay. Now I have to complicate everything. Well, the nonfiction book that has most affected me is Atomic Habits: learning how to set little goals throughout a day so that, you know, things add up over time. And that's really how I finished my dissertation: 30 minutes a day. And I never would have believed that I could do it.

And the fiction book that I think everyone should read is called The Poisonwood Bible, by Barbara Kingsolver. It's my favorite novel.

[00:46:40] Fonz: Excellent. All right. So great, great choices there. So for all our audience members listening, make sure that you check out those reads too. And the last question, we'll start with you, Kimberly, and we'll end with Jessica.

If you can try out a job for a day to see if you like it, which job would you choose?

[00:46:57] Kimberly: Oh, it's really hard. I love plants, and I would love to be a botanist. I love to read about plant theories. I just learned about this thing called the root-to-shoot ratio that I got really excited about. Yeah, I think some sort of botany, or maybe just working in, like, a landscape,

yeah, like a garden. Yeah, like a garden center. I mean, not like Costco, you know,

[00:47:30] Fonz: like a

[00:47:31] Kimberly: high end.

[00:47:33] Fonz: There you go. I like it. Yeah, here in our area in deep South Texas, we have these, they call them just nurseries. And I mean, so much lush greenery there, and it's so beautiful, way better than what you'd be able to find at, you know, the bigger stores that are around here. It's just beautiful and so nice.

So yeah, I could definitely see that. That's wonderful. All right, Jessica, how about yourself? What would be one job that you would love to try out to see if you like it?

[00:48:01] Jessica: Well, I've been saying for a few years now that I really want a farm, but I feel like I should probably work on a farm for a few days to make sure that I'm up for it.

I really just want animals. So maybe I should just go into a vet office or something. But yeah, I want some land with some pigs and cows and goats and chickens, so maybe I should work on a farm for a few days to test drive that.

[00:48:24] Fonz: There you go. I love it. Just kind of breaking up the routine and going to try things out and everything.

That's wonderful. Well, this has been a wonderful conversation. Thank you so much for joining me this morning and sharing a little bit about yourselves and about the work that you're doing with Moxie. And for all our audience members, please make sure that you check out the show notes so you can go ahead and connect with Kimberly and Jessica and learn a little bit more about Moxie.

We really appreciate all of your support. Thank you all for listening. Please make sure that, if you're not following us on all socials yet, you go ahead and follow us @myedtechlife. Jump over to our YouTube channel, give us a thumbs up, and subscribe to our channel. Thank you so much, as always.

And again, my friends, until next time, don't forget: stay techie.



Kimberly Becker

Co-Founder of Moxie

Kimberly is an applied linguist specializing in disciplinary academic writing and English for research publication purposes. She has a Ph.D. in Applied Linguistics and Technology (Iowa State University) and an M.A. in Teaching English as a Second Language (Northern Arizona University). Kimberly’s research and teaching experience as a professor and communication consultant has equipped her to support academics in written, oral, visual, and electronic communication. She has taught at the high school, community college, and university levels. Her most recent publications are related to the use of ethical AI for academic research. In her spare time, she enjoys practicing yoga, gardening, and walking with her two poodles.


Jessica Parker

Co-Founder & CEO of Moxie

Jessica is an educator, researcher, and serial entrepreneur (Dissertation by Design, The Dissertation Coach, Moxie) and a Lecturer in the Doctor of Healthcare Administration program at MCPHS University. Jessica’s research interests are at the intersection of technology and education. She is particularly intrigued by the potential of generative AI for academic purposes, exploring how this technology can revolutionize research, teaching, and learning.