Nov. 6, 2024

Episode 300: Navigating AI and Human Cognition with Benjamin Riley

In this special 300th episode of My EdTech Life, I sit down with Benjamin Riley, founder of Cognitive Resonance, to explore the intersection of AI and human cognition in education. We discuss the hype surrounding AI, the challenges of automating learning, and why understanding human cognition is crucial to navigating new educational technologies. Join us as we question the assumptions about AI’s role in schools, dig into the biases of large language models, and look at the responsibilities educators face in this tech-driven world.

Timestamps
00:25 - Introduction to the 300th Episode and Guest Introduction
01:33 - Benjamin’s Background and the Founding of Cognitive Resonance
06:01 - Initial Thoughts on ChatGPT in November 2022
11:45 - Comparing AI Hype to Past Tech Predictions in Education
16:15 - Why Effortful Thinking is Essential for Learning
20:03 - Limitations of AI as a Tutor and Khanmigo
25:06 - The Risks of Taking AI-Generated Content at Face Value
29:35 - Influence of Tech Companies and Education Influencers
34:05 - Real AI Literacy vs. Learning Prompt Engineering
39:02 - Addressing the Pressure to “Keep Up” with AI in Education
44:59 - Practical Frameworks for Cautious AI Adoption in Schools
47:47 - Closing Questions: Benjamin’s Edu Kryptonite, Role Models, and Billboard Message

Thank you for joining us for this milestone episode! Don’t forget to check out the Cognitive Resonance website and Benjamin’s must-read paper, “Education Hazards of Generative AI.” And remember, stay techie!

Interactive Learning with Goosechase
Save 10% with code MYEDTECH10 on all license types!

Support the show

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨


Transcript

Episode 300 Navigating AI and Human Cognition with Benjamin Riley

[00:00:25] Fonz: Hello, everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining me today. I hope you've had a wonderful day, wherever you're tuning in from around the world. Thank you, as always, for your support. We appreciate all the likes, shares, and follows. Thank you for being part of My EdTech Life. Today is a special day because it’s our 300th episode, and I couldn't think of a better guest to have on for this milestone. I'm thrilled to welcome Benjamin Riley to the show. How are you today, Benjamin?

[00:01:10] Benjamin: I'm doing great. Thanks for having me. It’s an honor and a privilege to be here for episode 300.

[00:01:17] Fonz: Thank you so much.

Before we dive into today’s discussion, could you give our audience a brief introduction and tell us a bit about your background in education?

[00:01:33] Benjamin: Sure, I’ll try to keep it brief. My career in education was unexpected. In April 2024, I launched a venture called Cognitive Resonance. Its goal is to help people understand human cognition and generative AI side by side: how our minds work, and how this new set of tools that has entered our lives operates.

Before that, I spent about a decade founding and being the first executive director of a U.S.-based nonprofit called Deans for Impact, which focused on improving teacher preparation. The work I’m proudest of there was bringing insights from cognitive science into teacher preparation. In some ways, I see Cognitive Resonance as expanding on that work.

Prior to that, I worked for a “venture philanthropy” organization that supported edtech entrepreneurs, and before that, I was deputy attorney general for California, working on education policy. I've had many different perspectives in education, though I’ve never been a classroom teacher, which I always like to acknowledge. My role is as an advocate—for education, for great teaching, and for improving learning.

[00:03:36] Fonz: That’s great. I love hearing about the different perspectives you've had. Although you mentioned not being a classroom teacher, you’ve clearly been tuned into the challenges and changes in education. I know you share many insights on social media, which have been valuable for me and others, offering a different perspective on education.

Today’s discussion will focus on AI, and I’m eager to delve into the many great points you've raised in articles, interviews, and social media. So, let’s start with November 2022. What were your initial thoughts, and what’s changed for you since then?

[00:06:01] Benjamin: Great question. I was stunned when I first used ChatGPT 3 or 3.5. I’d loosely followed deep learning, and a while back, I’d connected with Gary Marcus, who helped me wrap my mind around some of this. Reading his work, I learned a bit about AI and its potential. My focus has been more on human cognition, so when I tried ChatGPT and saw it could converse in a very human-like way, I was stunned and even a little scared. I remember texting my father, a neuroscientist, and asking, “Dad, have we done it?” He was also blown away.

This kicked off a learning journey for me. At first, it was a side interest—just something intellectually baffling that I wanted to understand. Over time, through conversations and reading, I began to grasp it. Eventually, I wanted to share what I learned about human cognition, using AI as a way to start conversations about how our minds work and how this new tool operates. That led to the creation of Cognitive Resonance.

[00:10:19] Fonz: There’s a lot to unpack there, especially regarding Gary Marcus. I follow him too, and you’re right—he’s often misunderstood. Before we started, we discussed some of your past podcast appearances, where people might misinterpret your views, but I love your honesty and the thought-provoking points you raise.

In an article for The 74, you compared today’s AI hype to Thomas Edison’s 1913 prediction that movies would make books obsolete in schools. What do you think drives these overhyped tech predictions? Beyond tech companies, what else contributes to this hype?

[00:11:45] Benjamin: That’s a great question. I think we're living in an age of technological transformation. It’s funny—I barely remember life before smartphones. Technology has changed so much, and we’ve all witnessed the rise of the internet, social media, and other powerful developments.

The optimistic view is that if technology has positively impacted other aspects of life, why not education? However, I’m cautious. Ten years ago, smartphones in schools were widely accepted; now, they’re often seen as a distraction. AI might follow a similar trajectory, where initial excitement is tempered by real-life challenges. There’s a long history of people wanting to use technology to solve educational problems, but too often, the tools don’t truly address the issues.

[00:15:38] Fonz: Excellent segue into my next question. I read your Substack on Cognitive Resonance, where you discuss AI in education. In one article, you say, “AI is not doomed to fail in education.” You mention that AI automates cognition, reducing the need to think, but since learning requires effortful thinking, how do you respond to companies marketing AI as a learning enhancement tool?

[00:16:15] Benjamin: I push back on that notion. The term “AI” is broad, and even if we narrow it down, how it’s applied matters. For example, AI chatbots that use large language models—like ChatGPT—aren’t ideal tutors. From a cognitive science perspective, learning is effortful, and AI shortcuts can hinder that process. Using AI to tutor kids is concerning. Tools like Khanmigo, for instance, won’t transform education as advertised. They may offer some benefit, but they lack the depth and human understanding that real teaching requires.

My concern is that students today, already impacted by the pandemic’s disruption to social and educational experiences, are now expected to navigate this new tool with little guidance. OpenAI reports that many of its users are students, which makes sense—AI provides a shortcut to effortful thinking, which is appealing but not educationally beneficial. While I’m confident we’ll eventually establish a balance, I worry about the current harm to students who may be deprived of essential learning experiences.

[00:19:21] Fonz: I see your point. Dan Meyer, who was on the show recently, shares similar concerns. He emphasizes the importance of teachers guiding students through the learning process. What, in your view, is missing from tools like Khanmigo that would make them viable in education?

[00:20:03] Benjamin: Great question. Dan and I often discuss this. Humans have a unique ability to understand “theory of mind”—we can interpret and empathize with others’ thoughts and feelings. Teachers use this skill to assess students’ understanding and adjust their approach. Large language models can’t do that. They only process the input they’re given and generate responses based on training data, with no understanding of a student’s mental state. This limitation makes AI unsuitable for nuanced educational roles.

For example, Khanmigo struggles with basic algebra and can’t diagnose why a student has a misconception. It lacks the ability to build on a student's existing knowledge, validate their efforts, and foster motivation—all essential parts of teaching. Digital tools, especially large language models, fall short in these areas.

[00:22:54] Fonz: Exactly. I’ve had guests like corpus linguists explain that chatbots and large language models are just predicting the next word. But people often don’t realize this and take their responses as factual. You’ve mentioned the term “stochastic parrot” from Emily Bender, who calls these tools “synthetic text extruding machines.” One of my concerns is how many take AI-generated responses at face value, especially in education, where teachers might use ChatGPT to create reading materials or lesson content without verifying it. What are your thoughts on that?

[00:25:06] Benjamin: I’m glad you asked. There’s an essay I want to write about this idea. We’ve become accustomed to digital tools, like calculators and Excel spreadsheets, providing accurate results. So, we tend to trust digital outputs. But AI models are probabilistic, not deterministic. A calculator will always give the correct answer to “2 + 2,” but when you ask AI something complex or subjective, it’s guessing based on patterns in its training data, not on factual knowledge.

This difference means we can’t trust AI outputs as authoritative. In the Cognitive Resonance document, “Education Hazards of Generative AI,” I emphasize that because AI is probabilistic, its output needs careful review. Asking teachers to double-check everything generated by AI is unrealistic, though, so we have to be selective and thoughtful in how we use it.
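The calculator-versus-chatbot distinction Benjamin draws can be sketched in a few lines of Python. This is a toy illustration with a made-up word distribution, not any real model’s API: it contrasts a deterministic function, which returns the same answer on every call, with sampling from a probability distribution, where repeated calls can return different answers.

```python
import random

# Deterministic: a calculator returns the same answer on every call.
def calculator(a, b):
    return a + b

assert calculator(2, 2) == 4  # always 4, no matter how many times you run it

# Probabilistic: a language model samples the next word from a learned
# probability distribution, so repeated runs can differ.
# (Toy distribution; real models score tens of thousands of tokens.)
next_word_probs = {"Paris": 0.7, "Lyon": 0.2, "Marseille": 0.1}

def sample_next_word(probs):
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Each call may return a different word: plausible, not guaranteed.
print(sample_next_word(next_word_probs))
```

The most likely word usually wins, but not always, which is why an AI answer that looks authoritative still needs review in a way a calculator result does not.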

[00:28:11] Fonz: Yes, and Dan Meyer discussed similar issues. AI-generated lesson plans may look good but often leave teachers with more work. I wanted to ask you about the influence of tech companies and education influencers. What are some of the risks you see in how they market AI as a “one-size-fits-all” solution?

[00:29:35] Benjamin: I see many concerning trends. One of the most discouraging moments in my day is checking the “AI for Educators” Facebook group, where teachers discuss using AI in ways that could be harmful, like letting AI replace traditional writing tasks. Writing is challenging, but that’s what makes it essential for cognitive development. If teachers encourage students to skip that effort, they’re missing out on a key learning experience.

As for influencers, companies, and even organizations like Khan Academy, there’s a troubling pattern of promoting AI without fully understanding its educational impact. I’ve openly criticized leaders like Sal Khan and Bill Gates because their predictions on technology in education have been consistently wrong. We need to learn from past mistakes and approach AI with caution. The document I co-authored, “Education Hazards of Generative AI,” aims to provide a guide for educators to think critically about these tools, rather than blindly following the hype.

[00:32:11] Fonz: That document is excellent, and I’ll definitely link it in the show notes. You cover potential hazards in lesson planning, tutoring, assessment, and more. I’ll be sure to share it, as it’s an incredibly valuable resource for educators.

On a related note, with so many people positioning themselves as “AI experts” and tech companies embedding AI into school platforms, AI literacy has become a buzzword. I saw you address this in an EdWeek article. Could you explain the distinction you make between AI literacy and simply learning to use AI tools?

[00:34:05] Benjamin: Absolutely. People often describe what I do as “AI literacy,” but I sometimes hesitate to use that term because many organizations promoting “AI literacy” are actually advocating for AI. They’ll gloss over real concerns, like bias, and instead focus on teaching users to be better “prompt engineers.” This approach is shallow—it doesn’t equip people to understand the technology deeply.

Real AI literacy involves developing a mental model of how these systems work. It means understanding biases, limitations, and social implications, not just mastering prompts. For example, when you type “low-income school” into an AI image generator, you’ll see a biased representation. That’s because these models have been trained on data that reflects societal biases, and using AI without awareness of this can perpetuate harmful stereotypes. My goal is to foster a deeper understanding of AI’s inner workings and its broader impact, so people can make informed decisions.

[00:39:02] Fonz: That’s a great point. There’s so much pressure to adopt AI, almost like you’re “falling behind” if you’re not using it. I had Dr. James Bauer on the show for episode 170, and he emphasized that it’s okay to feel reluctant about new tech. But there’s this argument that we must prepare students for an “AI-driven workforce,” and that not using AI is holding them back. How would you respond to that?

[00:40:20] Benjamin: First, I’d say no one knows the future. People claim we must use AI because it’s essential for future jobs, but that’s speculation. My co-author Paul Bruno and I touched on this in our paper. Secondly, there’s evidence suggesting that those who benefit most from AI are those already well-educated. They’re the ones who can combine their knowledge with AI tools effectively. So, ironically, AI may widen achievement gaps rather than close them.

Also, the adoption of AI isn’t as widespread among educators as social media might suggest. Many teachers and administrators are still cautious, and rightly so. Social media can make it seem like there’s universal enthusiasm, but in reality, many educators are proceeding carefully. I try to remind myself that Twitter isn’t real life.

[00:43:03] Fonz: Well said! It’s easy to feel pressured by social media, but it doesn’t reflect the whole picture. In K-12 and higher education, there’s definitely more caution. Jason Guglia talks a lot about the way AI is marketed to college students, with ads suggesting that AI can do their assignments for them. To me, that’s alarming. But I think it’s okay to be cautious and to move at your own pace with AI. Not using it doesn’t make you any less of a great teacher.

Now, as we wrap up, I know you’ve advocated for slowing down AI adoption. In your article, “Generative AI in Education: Another Mindless Mistake,” what guidelines or frameworks do you share in your workshops for educators considering AI tools?

[00:44:59] Benjamin: Good question. My approach is to first help people develop a mental model of human cognition, which takes time. Understanding how the mind works provides a foundation for understanding AI. When people grasp that AI is essentially a “next-word prediction” machine, they start to see its limitations.

In the “Education Hazards of Generative AI” document, my co-author and I tried to make research and citations accessible. I didn’t want to use traditional academic citations, as they can be confusing outside academia. Instead, we included titles of relevant papers so that anyone interested can explore further.

Ultimately, I encourage educators to think critically and independently. Don’t just accept AI because someone on TikTok says you should. Take time to understand it and make informed decisions.

[00:47:47] Fonz: Excellent advice. Thank you, Benjamin, for sharing your insights. This is a topic I’m passionate about, as it’s part of my doctoral research. Connecting with like-minded individuals like yourself helps me broaden my perspective. While some aspects of our views may differ, today’s conversation has highlighted the importance of open dialogue.

Before we wrap up, I always like to end the show with three questions. Are you ready?

[00:48:15] Benjamin: I’m ready!

[00:48:17] Fonz: Awesome. Question one: We know that every superhero has a weakness. In the current state of education, what would you say is your “Edu-kryptonite”?

[00:48:26] Benjamin: That’s a great question. I think my “kryptonite” would be the forces working to erode what makes humans unique—our ability to learn from one another and connect socially. Technologies like generative AI and personalized learning tools risk undermining this by pulling us apart rather than bringing us together. I’m here to push back against anything that disrupts that essential human connection in education.

[00:49:35] Fonz: Fantastic answer. Question two: Who is one person you’d love to trade places with for a day, and why?

[00:49:47] Benjamin: I gave this one a lot of thought! I decided I’d love to trade places with astronaut Sunita Williams, who’s currently in space on the International Space Station. She’s been up there for over a year, and I’d love to experience what it’s like to be in space, looking down at Earth. She seems like an absolute badass, and that perspective would be incredible.

[00:50:31] Fonz: That’s an amazing answer! I’d love to experience space too. Last question: If you could have a billboard with anything on it, what would it say and why?

[00:50:47] Benjamin: I think it would say something like, “All humans are teachers—embrace that responsibility.” I believe everyone has the capacity to teach and share knowledge. Education is about bonding through shared understanding, and I want people to recognize the nobility in that.

[00:51:44] Fonz: I love that. I’ve always believed that storytelling is one of the best ways to learn, and your answer really resonates with me. I think that’s one reason I enjoy podcasting so much—it’s my way of learning through others’ stories, and I get to share that with the world.

Benjamin, thank you so much for joining me today on this very special 300th episode. It’s been an honor to have you on as a guest, and I appreciate the incredible work you’re doing. And hey, it’s awesome to know you’re only five hours away—maybe one day we can meet up for some BBQ at Black’s!

[00:52:49] Benjamin: That sounds like a plan! I’d love that.

[00:52:52] Fonz: For our listeners, thank you, as always, for your continued support. It’s been an amazing four years of My EdTech Life, and this milestone of 300 episodes wouldn’t be possible without you. We’re committed to bringing you insightful conversations that inspire growth in the education space.

If you haven’t already, check out our website at myedtech.life, where you can listen to this episode and the other 299 episodes filled with knowledge nuggets to sprinkle onto your journey. Don’t forget to follow us on all socials @myedtechlife, and hop over to our YouTube channel to subscribe and keep up with all our content.

Thank you, as always, for your support. And remember, stay techie!


Benjamin Riley

Founder and CEO

Benjamin Riley is the founder of Cognitive Resonance, an organization dedicated to helping people understand human cognition and generative AI. He is a social entrepreneur who has spent nearly two decades working to improve education.