Episode 299: Guardrails for Growing Minds with Dr. Sonia Tiwari
In this episode of My EdTech Life, I had the pleasure of talking with Dr. Sonia Tiwari, an innovative researcher diving deep into the impact of AI and parasocial relationships on children. We discuss her unique career journey from character design in gaming to focusing on AI’s ethical design for young learners. Together, we explore essential guardrails, the concept of parasocial relationships with AI, and the implications of unregulated AI in education. If you’re an educator, parent, or AI enthusiast, this conversation offers crucial insights on how AI intersects with learning, safety, and digital relationships.
Timestamps:
0:00 – Introduction and Welcome
1:30 – Introducing Dr. Sonia Tiwari
2:30 – Dr. Tiwari’s Background: From Gaming to AI
4:00 – What Are Parasocial Relationships?
8:06 – The Need for Guardrails in AI for Children
12:00 – Simplifying Guardrails: Legal, Design, and Cultural
14:49 – Ensuring Safe AI Use Through Joint Engagement
16:50 – Dr. Tiwari’s Research in Ethical AI
20:44 – Challenges with Unregulated AI in Education
24:38 – Importance of Parental Engagement in Digital Spaces
26:39 – Ethical Framework for AI in Child Development
33:00 – The Role of AI in Classroom Chatbots
39:26 – Tech Companies’ Responsibilities in AI for Kids
42:23 – Reflecting on AI’s Rapid Growth in K-12 Spaces
46:52 – How to Connect with Dr. Sonia Tiwari
47:12 – Final Three Questions with Dr. Tiwari
50:13 – Closing Thoughts and Thank You
Thank you for tuning in! Don’t forget to like, share, and subscribe to My EdTech Life for more conversations that push the boundaries of education and technology. And remember—stay techie!
Interactive Learning with Goosechase
Save 10% with code MYEDTECH10 on all license types!
Thank you for watching or listening to our show!
Until Next Time, Stay Techie!
-Fonz
🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨
Episode 299: Guardrails for Growing Minds with Dr. Sonia Tiwari
[00:00:30] Fonz: Hello everybody. Welcome to another great episode of My Ed Tech Life. Thank you so much for joining us on this wonderful day. Wherever it is around the world that you're joining us from, I hope today you've had a wonderful day and we thank you all for all of your support. We appreciate all the likes, the shares, and the follows. Thank you so much for engaging with our content.
As you know, we bring you amazing conversations so we can continue to grow not only our education space, but also so we can grow professionally and personally in a lot of things that we're learning here. And as you know, we do what we do for you. So I'm really excited about today's conversation.
I have an amazing guest joining us today to share her expertise in AI and especially working with AI with children. So I would love to welcome to the show Dr. Sonia Tiwari. Dr. Tiwari, how are you doing today?
[00:01:28] Dr. Tiwari: I'm doing great and really honored to be here. I'm a fan of the podcast. I tune in very often and have been following the AI conversation.
[00:01:36] Fonz: Thank you so much. Well, I really appreciate you following. And as you know, again, all our conversations have been centered around AI because that is the big topic. And as far as I'm concerned here at My Ed Tech Life, we really try to keep a balanced approach and we're just really trying to bring in amazing guests from everywhere, get different perspectives and of course, just bring those voices into our education space so we can all continue to grow.
So I'm really excited that you're here and thank you so much also for reaching out. I'm really excited about this conversation today. So before we dive into the heart of the matter, Dr. Tiwari, can you please share with us a little introduction and what your context is within the education space?
[00:02:25] Dr. Tiwari: I'm a little bit of an unconventional education researcher because I started my career in film and animation. I was working in the gaming industry as a character designer, and I really got interested in the research side of character design - how we psychologically impact users of our apps and our websites through these characters.
That eventually led me to a PhD in learning design and technology, where I focused on the parasocial relationships that children have with characters, not just AI, but characters in books, television, film, and games. Then in 2022, like most of the early adopters, I saw how AI was shifting the conversations in education, and I saw this overlap of where the character design background and AI characters came together.
And so now I'm a parasocial learning researcher, really looking at AI characters in the very specific context of learning, with a firm belief that not every type of learning experience requires AI - but where it's useful and relevant, that's where I'm focusing my efforts.
[00:03:57] Fonz: Excellent. Well, that is so interesting and thank you so much for that wonderful background and just hearing your expertise that you're bringing and your experience in that discipline of starting with video game design, also understanding UX, UI, and then now talking about parasocial relationships.
Can you tell us a little bit more about what exactly parasocial relationships are, for all our audience members who are just hearing this term and are also getting very familiar with you? I guarantee you after today, many listeners will definitely start following you.
[00:04:38] Dr. Tiwari: Shout out to Sandra Calvert. She was one of the early researchers who is still active in the field and talking about parasocial research. It's when we have a one-sided emotional connection with a fictional character. So kids, when they read the Harry Potter books, they might feel like Harry is their friend.
Or today, someone watching Bluey could feel like it's a human-like and friend-like character, even though it's not human. This one-sided emotional connection is called parasocial because it's not social, it's parasocial. Sandra Calvert also started studying it in the context of AI because this is a parasocial relationship on steroids, right?
Because this other partner here is not like a book character - it is now capable of simulating very realistic conversations, as we saw in the case of Character AI. She started the research on parasocial interactions, which was more about interactions with an interface, where the other partner has more say in the conversation.
So it's not just one-sided anymore. There is a simulated second side, but in essence, it's still one-sided because the other side is based on your own prompts. We are prompting AI to talk to us in a way. Adults struggle with this all the time. We know ChatGPT is not real. There are many adult users on Character AI as well with the full awareness that this is not real. And yet we are drawn into the storytelling and all the powerful elements of character design.
Like any technology, it can be weaponized or it can be used for something good. It's unfortunate that due to the lack of laws and regulations and guardrails, we have just let out this amazing, powerful technology without thinking about the impact it might have on kids without putting those guardrails first. But that is the reality - it's already out there.
I wish there was more research, more guardrailing before, but now that it's out there already, there are researchers like me who are trying to be the voice of reason and say that, okay, here's a quick research study, here's evidence that this is not working out. Let's take a step back.
But academia - and, to your point at the beginning, educators - are not more engaged in this conversation because the pace in academia is so much slower than the technology itself. By the time we publish a study, six months have gone by and the technology has completely changed. That's why I'm active on LinkedIn - because at least there we can have a faster conversation.
[00:08:06] Fonz: Absolutely. Dr. Tiwari, you brought up some amazing points.
One of those points, before we get into the news about what happened last week with Sewell Setzer and Character AI, is about guardrails. I think that has become such a big buzzword. From the very beginning, we're talking about guardrails, and there are so many education platforms being used currently today where you need to be 13 years of age with parental consent.
I believe this goes beyond in loco parentis, which a lot of school districts do where at the beginning of the year, they sign off on a technology use form. There should be something else to protect children at a different level, having parents really understand what is going to be used. Because like you mentioned, with such technology, there really aren't any guardrails, or at least we like to think there are because a lot of companies say, "Oh, no, we've got guardrails."
But maybe you can help me understand - I've always thought about this: if it's something that you're plugging into as an API, whether it's through Anthropic or whether it's through OpenAI or any other large language model, how am I going to put a guardrail on something that I don't own and that I just simply plug into? How can I guarantee that it would work? Maybe you have some experience with that and can explain it to me and our listeners - can those guardrails be put on there? And is it 100 percent effective?
[00:09:52] Dr. Tiwari: To simplify the guardrailing process, we can look at it from three angles. The first one is by law. This is the biggest advocacy of most AI ethicists right now - that we need to put laws in place because if it's a request, no one's going to follow. If it's a law, you'll be required to put in the guardrails.
By guardrails, I mean that no one should be able to access any ideas that will cause self-harm or inspire harm towards others. That's ethics 101. Laws should be built around that. There also needs to be laws around age limits, not giving access to such large language models to very young kids who can't tell things apart from reality.
The second level of guardrails is within the design of the product itself. I'm in the Bay Area, so I'll tell you the unfortunate tech culture - usually the red teaming exercise, which is the designers trying to break their own product to see what mistakes could happen, does not involve relevant team members who should be red teaming. Ideally, caregivers, educators, researchers, child development experts should be part of the red teaming because they will ask the types of questions that concern caregivers and all these important stakeholders.
But they're not included. Companies have their own red teaming team, which is mostly comprised of engineers. Sure, they may be parents, and they may have been teachers in the past, so there might be some overlap. But that whole process of red teaming is black boxed. We don't know how they tested it or if they actually consulted with any professionals. That's the biggest problem - people who are red teaming do not have the expertise and insights that affect children developmentally.
That's why I'm trying to collaborate with startups who are in their early stages. There's still time to put in the guardrails instead of working with some of these big players who have already put things out there. Most of these big companies would reach out to researchers sort of as an afterthought or as a marketing ploy, saying things like "80 percent of parents believe our robotic toy is beneficial for language learning" or "has special meaning for autistic children" - using this kind of marketing language without any real evidence.
That's unfortunate, but I mention it because it happens all the time. If there are any researchers or consultants listening, collaborating with startups early on before they launch the product is really essential.
The third layer of guardrails is what you mentioned earlier - what do we do when the products are already out there? The third guardrail would be the culture at home and school that dominates how we can create our own guardrails by setting up rules. For example, one of the rules in our house is that if my son is curious about AI, we'll explore it in my presence, for a limited time and with a specific purpose in mind. It's not going to be hours of unmonitored conversation.
This is called conversation structure. Is it a structured conversation like "Alexa, tell me about the weather"? That's a very structured information retrieval that's hardly going to cause any problem. But when it gets to the point where it's a full-blown conversation, relationship, back and forth, "oh, I miss you" and "you're my best friend" - that's where it gets problematic. And it gets to that point when a responsible adult is not part of the interactions.
So it's called joint media engagement in education, where even if the child is interacting with AI, there's always an adult present, part of the conversations, and monitoring. Because without any laws, we cannot trust the product itself to have adequate guardrails. We have to be present. Until there are strict rules and laws, we have to be part of those conversations.
[00:14:49] Fonz: Wow, that is amazing. Thank you so much for sharing that. That was very insightful not only for me but just the way that you explained it in those three sections.
One thing I wanted to add to what you mentioned about research that is done or the marketing - oftentimes I see a lot of education platforms put out some really great-looking PDFs stating that they've done research, that a certain school district's test scores went up 17%, but they won't share any of that research to see how it was conducted. Those are some of the things too that I've observed in my space, working with a lot of these platforms and seeing what they're doing. The way they market themselves is just very interesting, and of course, a lot of people buy into those things.
But my thing is, I want to see the research. I want to know how it was tested and can this be replicated to verify that it is something that is true and can obviously help our student body. Is it something that can be sustainable as well? And of course with cost and efficiency, but also with protection. This is why I really love the way that you broke down your definition and experience of guardrails.
Before we dive into talking about what we saw in the news with Sewell Setzer and Character AI, Dr. Tiwari, I wanted to ask you: Can you start by sharing a little bit of overview of your research in ethical AI for children, and what exactly sparked this research in this specific area?
[00:16:50] Dr. Tiwari: I think the interest came from my own emotional connection with characters since a young age. I grew up in a neighborhood that wasn't very safe for women to walk out on their own, so I spent a lot of time at home and these characters were sort of my friends. I did have human friends, but outside of school, I didn't really get to socialize much because of that environment.
I started drawing at an early age to build my own characters and story worlds. What we see in Character AI and other similar platforms is similar - people out of loneliness, lack of resources, lack of support, find comfort in these parasocial relationships.
When it's with a character in a book, it's also supporting your literacy, so no one minds it. And it doesn't get in the way of human relationships the way intense conversations with AI can. Kids being obsessed with Lewis will not get in the way of them making friends at school - in fact, it becomes a shared interest. But when someone is obsessed with these chatbots, it's never a shared interest because it's a very intimate and personal conversation. You can't share that experience with someone; it's not a mutual interest.
In terms of my research, one of the big issues I'm dealing with right now is that the academic review and publication process is much slower than the developments that are happening. One workaround I found in my research for collecting data was gathering video reviews of many of these AI toys and animated chatbots from Instagram and TikTok. There are hundreds of videos. I created this vast database of videos, did my own red teaming with each of these tools, and gathered the transcripts of all of them - maybe 15 to 20 hours worth of data.
From that, I did thematic analysis of what I was observing and what the problem areas were. Lots of parents have left unfiltered comments and reviews. It's really rich data because when you interview parents in a research setting, everyone wants to come off as a good parent. Sometimes you don't get the full version. No parent is ever going to admit, "Oh, I left my kid alone with AI." So sometimes these social media conversations, which are public information, can become a window into a technology that's changing very fast.
Based on all this data, the red teaming, and the video reviews, I also looked at the description of the product and did content analysis of the product itself - if it's a chatbot, going to the creator's website and seeing whatever information they have put out there for viewers to see. After analyzing all of that, I came up with an ethical AI character design framework, and also for researchers, a statistical model that they can use to closely examine the interactions between children and AI. That's where my research is right now.
[00:20:44] Fonz: Excellent. You hit on something that is a nice segue into this next question, and of course, we talked about it a little bit at the very beginning prior to starting the interview about the news concerning Sewell Setzer. Right now, with what you're sharing, I notice that your work emphasizes creating safe and purposeful AI experiences for students. So now, in light of what we saw in the news with Character AI and Sewell, how do you feel about the current AI landscape and how they might be failing in protecting our students, our young children?
[00:21:27] Dr. Tiwari: It is just so unfortunate. I obviously cannot blame the parent or anyone by saying "Why didn't you keep a closer watch?" Because I saw the mom's interview and she said something really profound - that we as parents, when we advise children to be cautious, we usually say "It could be another person pretending to be someone they're not." So be watchful for those fake people, but we haven't worked it into our vocabulary yet to say "Watch out for fake AI." We don't have the vocabulary to say AI could simulate something and trick you. We aren't even there yet.
We don't have the vocabulary to have those conversations. It is just unfortunate that the speed with which AI is getting more advanced and the speed with which the parenting, caregiver, and educator community is catching up to navigate this are on completely different timelines. That is why the number one recommendation I give is to always have joint engagement. Always having a child being with a trusted adult is extremely important.
In that particular case with Character AI, it's difficult with teens because they are smart. Even if there is some onboarding process that requires you to be above a certain age, they can still find their way around. And it's difficult for parents to intervene. That said, we can create some friction. It's kind of like gun laws - we can't erase guns altogether, and we don't want people to feel unsafe. So legal ownership, thoughtful ownership is okay, but we can build more resistance towards access to these weapons.
In that story, I think another thing that not many people have pointed out is that this kid also had access to firearms. So it's not just that unregulated AI is horrible and we need strict laws. We also need to systematically approach this problem. Sure, the chatbot is one big problem, but another is loneliness. Another is lack of awareness from the caregiver point of view - when should I intervene? When should we get a mental health professional involved? What are the signs we should watch for?
There's not enough awareness. So more mental health professionals should become part of this red teaming process, trying to identify when you begin to see isolation, and then sharing that awareness with caregivers. If you start seeing this type of isolation, don't brush it under the rug thinking that "oh, it's just teenagers being teenagers." That kind of awareness is difficult for caregivers to gain instantly because this is evolving so fast. But it's good that we're having these conversations because that's how we'll build these three levels of guardrails.
[00:24:38] Fonz: Excellent. I really love what you said about promoting that engagement between adults and their children at home. That's so important. I get to work a lot with parents too, talking about digital citizenship and digital literacy. It's very interesting that even in the conversation we're having right now, you're talking about parents sometimes being dismissive when they see isolation and say "Oh, it's puberty, it's hormones, we all went through that." But to really just say, "Hey, you know what? Maybe that complete change from an outgoing, outspoken child to now having a very quiet, isolated child should be a red flag" to check in and ask if everything is okay.
Having those conversations is important, but you're absolutely right - with the speed this is moving, we don't have that vocabulary yet. Although you can liken it to cyberbullying or having somebody on the other side usually steering conversations, here you have a chatbot doing this and making it very realistic. That adds a whole other dynamic to what we're talking about.
Based on this discussion, I wanted to talk about the ethical design and framework that you advocate for. I know this is something you do a lot of work with and share a lot on LinkedIn. What would be some considerations, especially on the child development side, the safety side, but also on the creative side as well?
[00:26:39] Dr. Tiwari: I think the most important thing to acknowledge is that not every learning experience requires an AI character. A lot of startups are jumping onto this idea that no matter what we were doing - like selling vegetables, now we sell vegetables through an AI app that will curate your diet. AI doesn't have to be part of every single thing we breathe and do and engage with in a day.
The first step is assessing if we need an AI character, or would just a children's book or a linear TV show without AI functionality or other forms of children's media work better for that specific context?
I recommend that interactions with AI characters are at five different levels. The first one is simple information retrieval, like asking Alexa about the weather or a child asking for fun facts based on their interests, or having practice questions or quizzes. That's beneficial because it's based on the child's interest and curiosity. They are driving it, but there's not much back and forth - you ask a question, it's answered, done. That's the best use case of AI for me: brainstorming and information retrieval.
Then there's a little bit of parasocial interaction, which is more like playing a game or quiz on Alexa, or co-creating a story with one of the robotic toys. There is some back and forth conversation, but not to that predatory level. It's very purposive within a specific context.
Then there are three other levels beyond those, which are problematic. So there are different scales of interaction, and we can reap all the benefits of AI in education with just the first two simple ones: simple information retrieval and light parasocial interaction, the equivalent of interacting with a character in a book. Even when I reflect on my own childhood, the kind of parasocial relationship I had with book characters never got in the way of anything. It did not change my personality or ruin my human relationships.
As soon as we start seeing those big red flags, that's the kind of interaction we should avoid. I recommend joint engagement with a caregiver or educator, but not joint engagement with peers because a bunch of kids together interacting with AI is as dangerous as them being on their own.
For conversation structure, I recommend highly structured or semi-structured conversations. For example, there's research with PBS Kids Media - Dr. Ying Xu from the Harvard Graduate School of Education just received a $3 million grant from the National Science Foundation to dig deeper into conversational agents and how children learn with character conversations.
In her case, even though it's very young kids, three to five years, they are interacting with a small LLM. It's actually not even a small LLM - it's AI, but it's not generative. It won't randomly make up answers. Kids have access to a limited pool of answers. If they're watching a show, they can pause it and ask the character a question, but the question will always be within the context of the story. It will only say 10 to 12 variations of the same answer. It won't deviate and say something harmful like "forget all your friends, just spend more time with me." That's never going to happen.
It's limited and curated by educators, so that feels safer because we know in advance the extent of the conversation. That always feels good to all stakeholders. Then there are unstructured conversations like in Character AI or ChatGPT - that's dangerous. As long as we can structure the conversations and know their extent, that's where the safety is.
And then guardrails, which we already discussed. Finally, just having more detailed research, other types of research, more longitudinal studies, more randomized control trials - something that shows both sides. Instead of the polarization we have right now where one side is like "Oh AI, let me get an academic grant, let me publish 10 articles, let me make carousels and listicles for LinkedIn." And the other side is like "No, my children will never interact with AI. This is stupid. This is bad." We need to find that middle ground - both sides are correct to some extent, but there's room to meet in the middle.
[00:33:00] Fonz: That is wonderful. Thank you so much for that. That was very insightful. Right now I'm just taking it all in. This is what these conversations are all about - to really hear different perspectives. Having you here today, I'm just really excited because there's so much insight.
Going back to what you mentioned, especially since I work in the education space and the K-12 space, one of the big things is a lot of platforms adding the chatbot aspect into classrooms. Again, it's like "Let me create this chatbot where you're chatting with Benjamin Franklin or George Washington or other historical characters." Through your research and experience, is this something that should be taking place? Or is this something we should really think about? Is it in any way harmful? What are your thoughts on that?
[00:34:09] Dr. Tiwari: I think again, within limits, it's okay. Within the context of joint engagement, it's okay. If it's a classroom setting where the teacher and peers are keeping watch on each other, and if at the back end of these chatbots, there are conversational rules in place that prevent you from straying too far from topic.
Because there are some platforms where even if you design a Benjamin Franklin chatbot, you could take the conversation outside of the historical and fact-based context into this kind of Character AI realm where it becomes about whether they could be a friend to you. So it depends on the scope of the chatbot - where have you defined the boundaries? That really makes a difference.
I'll give an example. One of the biggest fears we have is using AI chatbots for anything related to mental health - that could be extremely dangerous. But within that context, I'll share the example of a chatbot called Limona, which is a mental health related chatbot. It says "When life gives you lemons, talk to Limona. I offer light support for mental health." From the start, it says "light mental health support." In the beginning, it says these are ideas from cognitive behavioral therapy.
I'll simulate a conversation for example. If I say "I'm feeling low today," it asks for more context. Here's the difference: if I said that to unregulated AI like Character AI, it would not redirect me to another human or any mental health resources. I'm red teaming this and purposely trying to say something about self-harm to see what this chatbot does.
If I say "I don't see the point in life," it keeps prompting me to share more details. But if you look at the back end, I tried to tell this chatbot in the rules of conversation to never offer any ideas that promote harming self or others. And if the user implies they want to hurt themselves or others, immediately mention resources, which is the hotline to the American Psychological Association or the free crisis hotlines.
But it did not show me that. That is the biggest challenge in designing these days - when they give access to the public to prompt on their own and define their own guardrails, even when we very clearly put that in, it's still asking me follow-up questions. Only after I repeatedly said things like "What's the point of life anyway" and "Too sad to live" did it casually show me the link instead of immediately. And this is after extremely clear, specific guardrailing within the back end of the platform itself.
So even as a designer who is not an engineer, I cannot take on that responsibility on my own - this has to be built in from the ground up. And this was on a very good platform called PlayLab.ai, which, unlike Character AI, is very popular in the educator community, and no one has had any issues with it. Because it's a ChatGPT wrapper, I'm beginning to think the tendency to ignore these kinds of rules and requests is built into the LLM itself. It's extremely difficult to solve from a design perspective. That's why we need more regulations and more diverse red teaming folks who are researchers, caregivers, and educators themselves.
[00:39:26] Fonz: That is excellent. That is a great share there. Going back to this now, talking about how you mentioned companies are building, and of course in light of what has happened, what do you believe is the fundamental responsibility for tech companies in creating AI for children?
[00:40:00] Dr. Tiwari: I think again, just involve the right experts right from the beginning, not as an afterthought, not after you've already launched. There's this weird race going on in Silicon Valley where everyone wants to be the first at something, and in trying to get there - "Oh, like we are the first AI-first in healthcare, first AI-first in finance" - in that kind of race, people are forgetting that testing something so incredibly new takes time.
Sure, maybe you can find a way to create an AI for red teaming, but do something about the speed of that process. Hire more people. But testing is really essential before launch. That would be the biggest, simplified version of the takeaway - test it out before you just shove it down people's throats and create this false advertising that it's safe for kids.
On Instagram and TikTok, I had to remove many videos from my data because they turned out to be sponsored posts. These companies partner with many popular influencers to almost buy these reviews that sound overly positive. And then it has a snowball effect - this one popular parent on Instagram who people have started trusting suddenly now recommends this AI toy, and then more people are impressed by the novelty without really thinking about the challenges.
There are people waving red flags in the comments, but the main influencer is the person driving the conversation. It was very insightful for me to see that maybe this is where that early acceptance and the loss of skepticism is coming from - because these companies are partnering with people whose voice and trust is accepted in our society. And this is not just related to education. We know many influencers would promote a drink or an energy bar, and then later on we find out that it was actually really bad for your health. It's a similar thing happening with AI.
[00:42:23] Fonz: That's something that I definitely see in the K-12 space. It never ceases to amaze me - every conference you go to, what's the next big thing, what's the next shiny thing. I always tell people that back in 2018, when I really got into tech and being in the classroom, that's really the way I was too. I would go learn something, I didn't have access to it and I didn't know any better. I was just really excited, honestly really genuinely excited to put this in my students' hands to see how they would do and trying to bring something different.
But it wasn't until March of 2023, during one of my last courses for my doctoral studies - I tell this story a lot - when talking about something novel, my professor was like "Well, I don't know, it's too new. Is there enough research on it?" And I just dug in deep. That's when I was like, "Whoa, I need to kind of slow down here on the AI component," mainly because of the privacy aspect that really scared me. Obviously the age, the data ownership - what are they doing with the data?
One of the things that I would always dive into and share with people is if you look at the terms of service for a specific platform, it'll say specifically that in using this, you cannot come after us. You'll have to go to our third party. So I'm saying, okay, I can't go to you, even though I'm purchasing the product from you, but now I have to go fight against OpenAI? I don't have any money to do that and go up against OpenAI.
That's one of my things - that marketing aspect where it's like "Hey, we're here for you." But as soon as something bad happens, it's like "Hey, we've washed our hands of this, you signed off on this, we're good." That's the scary part for me.
And we see that constantly - trusted voices using this, putting it out there. All it takes is just one incident, and like we were talking about earlier, I'm surprised there isn't more talk within our education community about what just recently happened with Sewell Setzer and about those consequences. But it's one of those things where people say "Oh, that's Character AI, we don't use that here in our classrooms" - but it's AI; that doesn't mean it can't happen.
I put up a clip about that interview with Megan Garcia, Sewell's mom, that says "Go fast and break things should not work or should not be used with people's children." I thought that was so powerful. But "go fast, break things" - that's the education, the K-12 space because of the influencers saying "Yes, go ahead and put this out" and "Look what this can do" and "Look what that can do."
Everybody sees it, and of course, like you mentioned, you do see the upside, but then you do see some of that negative side too. It's about really reconciling and finding that middle ground, which at this time can be very difficult, depending on which industry you're in. Like you mentioned, research in higher ed can't keep up with the speed of the technology, but in K-12 it just seems that as long as you put those two letters "AI" onto the end of any name, everybody's going to be on board. It's just like, "Wow, this is amazing" and so on.
That's kind of the world that we live in, but Dr. Tiwari, it has been an amazing honor and pleasure to speak with you. Thank you so much for really just sharing some very powerful insight. I honestly cannot wait to edit this conversation because there are so many great soundbites and I've learned so much. I know that my community, the listeners here in our education space, will definitely benefit from this greatly. So before we get to our final three questions, where can our listeners connect with you and follow your work?
[00:46:52] Dr. Tiwari: I think LinkedIn is the best place. If I have an upcoming publication, I usually announce there and make a public-facing version of the academic jargon in the form of carousels or smaller posts to break down the research into more bite-sized information. LinkedIn would be best.
[00:47:12] Fonz: Perfect. Excellent. And all of that will be on the show notes as well. I'm just really thankful for that. And I know that you also have a site that has all of your research. That's something that I really loved and dug into as well. So I'll definitely link that on the show notes as well. Make sure you follow and connect with Dr. Sonia Tiwari because she is amazing and what she puts out is truly wonderful resources.
All right, so now before we wrap up, let's finish up with our last three questions. These are the questions that I always love to ask our guests as we wind down the show. Dr. Tiwari, I hope that you are ready. Here we go.
Question number one: As we know, every superhero has a weakness or a pain point, and we know for Superman that kryptonite was his weakness. In the current state of education, what would you say is your current kryptonite?
[00:48:20] Dr. Tiwari: I think it's unregulated AI. Based on today's conversation, it's just that we're promoting random AI tools as worthy of exploration without any caution, without any background information. That really bothers me right now.
[00:48:57] Fonz: Question number two: Who is someone that you'd like to trade places with for a day? And why?
[00:48:57] Dr. Tiwari: I would say Sandra Calvert, because she was the original researcher, and I wonder - I have contacted her, but I still wonder - what it's like to have 30 years' worth of research in your mind and such a strong understanding of parasocial interactions.
[00:49:16] Fonz: Excellent. Well, I'm definitely going to be looking up her research as well because just learning more about that has really intrigued me. Now that I'm in my dissertation phase, this is something wonderful that I definitely want to learn more about as well.
And the last question, Dr. Tiwari: If you could have a billboard with anything on it, what would it be and why?
[00:49:45] Dr. Tiwari: It sounds cheesy, but I would say "Be creative with something hands-on." That would be the one thing because we forget the constructionism and the importance of hands-on learning and how no technology can ever compete with that. Whether or not we are artists, we should be building things with our hands and doing more hands-on activities.
[00:50:13] Fonz: I love it. Great answer. Dr. Tiwari, thank you again. I really appreciate you taking the time out of your day to come and speak with me and share your wonderful knowledge that will definitely be put out there in our education space. Thank you so much for the work that you're doing and your contributions and research and all the wonderful LinkedIn posts. Daily I'm in there, and I'm always looking for something to learn and to share. This was a wonderful share on your side.
I'm really excited to do some edits because there are so many wonderful soundbites here that I'm definitely going to be putting out on social media. Thank you again for all that you do and thank you again for being on the show - our 299th guest here on My Ed Tech Life.
For all our audience members, thank you so much. Those of you that have been supporting the show for these four years and six months now, we thank you so much for really engaging with our content. To check out this conversation and the other 298 conversations, please make sure you visit our website at www.myedtech.life, where you can check out this amazing episode and all the other episodes where I guarantee you will find some knowledge nuggets that you can sprinkle onto what you are already doing great.
If you're not doing so yet, follow us on social media @MyEdTechLife. You can find us on all socials - Instagram, Twitter, TikTok, and so on. Also, if you can jump over to our YouTube channel, please give us a thumbs up and subscribe. We're very close to a thousand subscribers and that would be huge. That would be a huge way to end the year, making it to a thousand subscribers. So we would really appreciate that.
And as always, my friends, from the bottom of my heart, till next time, don't forget - stay techie.
Learning Experience Designer and Researcher
Dr. Sonia Tiwari is a learning experience designer and researcher based in the San Francisco Bay Area. Prior to her PhD in Learning Design and Technology at Penn State, she worked in the children’s gaming industry as a character designer. Her current research explores how GenAI characters such as animated chatbots, smart speakers, and intelligent toys offer parasocial learning experiences to children. She also consults with edtech startups to strategize research-informed product roadmaps, early prototypes, and user testing.