Summit Reflection: AI, Ethics, and Education

Over 300 attendees joined us for three days of inspiring presentations and engaging discussions at the 2024 Emerging Pedagogies Summit, which explored the theme of “Designing and Scaling Transformative Learning For All.” Through their work with lifelong learners, LILE staff regularly engage with both the challenges and opportunities these emerging pedagogies present. In this series of Summit reflections, they share their major takeaways from each session.

Ken Rogerson speaking during the 2024 Emerging Pedagogies Summit.

The last day of the Emerging Pedagogies Summit began with an insightful session on AI, ethics, and education featuring Emma Braaten, Director of Digital Learning at the Friday Institute for Educational Innovation; Ken Rogerson, Professor of the Practice at Duke University’s Sanford School of Public Policy; and moderator Aria Chernik, Assistant Vice Provost for Faculty Development and Applied Research at Learning Innovation and Lifetime Education (LILE). What stood out to me most about this session was the invaluable moment of honesty it provided, thanks to the panelists and the attendees who packed the room. The impetus for this moment, I recall, came from Braaten’s emphasis on transparency in her presentation, such as notifying the end user when AI has been used. A Duke professor followed up on this point during the Q&A, posing this question to the room: There is so much emphasis on watching students’ use of AI, but how much are we, as educators, willing to turn that lens on ourselves? How much are we willing to look inward and be honest about what we are doing?

Such a question underscores the complexities of AI in teaching and learning. I am not yet “AI literate” (to quote Remi Kalir, LILE’s Associate Director for Faculty Development and Applied Research, from a different event), and the question of ethics and morality seems so lucid at times, and yet, at other times, so hopelessly obscure. In thinking about AI and ethics, I found Rogerson’s emphasis on “sector” helpful: the idea that we “can’t regulate AI, we can only regulate AI in a sector.” Talking about AI in the context of business, for example, is different from talking about it in education, because the former is frequently focused on profit (and the industry is quite honest about it). Honesty is also important for educators because, as Chernik reminded us during the session, education is about human flourishing. We must be honest about AI’s effects, and perhaps that starts, again, with looking inward. How do we really feel about AI in education? If we are comfortable, what is the source of that comfort? If we are worried, where does that worry come from? What do we truly care about? Being honest is difficult, but I do know that what is good often consists in what is difficult, and I think that, as educators, we owe at least that much to our learners, who are doing their best to understand, navigate, and thrive in this bewildering world.

Copilot was used to copyedit the final-stage draft of this blog post.