Summit Reflection: Generative AI & Pedagogical Innovation

Over 300 attendees joined us for three days of inspiring presentations and engaging discussions at the 2024 Emerging Pedagogies Summit, which explored the theme of "Designing and Scaling Transformative Learning For All." Through their work with lifelong learners, LILE staff regularly engage with both the challenges and opportunities these emerging pedagogies present. In this series of Summit reflections, they share their major takeaways from each session.

On the first afternoon of the Summit, Carter Zenke, a Duke alum who has taught computer science at Harvard and is currently the Curriculum Product Manager at Hello World CS, led a fun yet thoughtful workshop on how educators can use AI to enhance their students' learning experience. Participants not only came out of the workshop with a better understanding of AI; they also left with some good ideas for how to start incorporating AI into their pedagogy.

Carter Zenke presenting at the 2024 Emerging Pedagogies Summit.

Zenke put us all at ease with his light-hearted approach to using AI. After a short ice-breaking activity, he explained the difference between AI, Generative AI, and a GPT (Generative Pre-trained Transformer). He then illustrated how a GPT on its own is not very useful as a teaching tool because it gives answers instead of helping the student figure out the solution on their own. For that reason, Zenke explained, educators must create pedagogical guardrails when using GPTs. This means not only tailoring a GPT, but also creating clear guidelines for how and when students may use AI tools.
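To make the idea of "tailoring a GPT" concrete, here is a minimal sketch of what a pedagogical guardrail can look like in practice: a system prompt that tells the model to coach rather than answer. This assumes the OpenAI Python client; the model name and prompt wording are illustrative, not the configuration used in the workshop or by CS50.ai.

```python
# A minimal sketch of pedagogical guardrails expressed as a system prompt.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the environment; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL_PROMPT = (
    "You are a patient teaching assistant. Never give the full solution. "
    "Instead, ask guiding questions, point to relevant concepts, and let "
    "the student work out the answer themselves."
)

def tutor_reply(student_question: str) -> str:
    """Send a student's question through the guardrail system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": GUARDRAIL_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("My loop never ends. Can you just fix my code?"))
```

The guardrail lives entirely in the system prompt here; a course-specific tutor would pair it with the usage guidelines Zenke described, so students know when the tool is fair game.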

Zenke gave an example of a GPT tutor for Harvard's Intro to Computer Science course (CS50). While students are forbidden from using general-purpose GPTs, they are allowed to use one created specifically for the course. CS50.ai is designed to help students learn the course materials without giving them the actual solutions. As Zenke pointed out, this AI creates a 1:1 tutor-to-student ratio, and unlike human tutors, CS50.ai is available around the clock and never gets impatient. You can learn more about this work in this article published by Zenke and his collaborators.

As we started the “work” of the workshop, Zenke first had us brainstorm some new learning experiences for our learners. I oversee Duke’s paralegal program, and I found myself imagining how I could train AI to create a law firm scenario where the AI would act as the supervising attorney or as a client being interviewed by the paralegal. 

Zenke then introduced prompts, system prompts, and Retrieval-Augmented Generation (RAG), and had us play with these to start building our learning environments and to test how the AI performed. After each step, we discussed our findings. Through this process, participants learned first-hand what AI was good at doing and, equally important, what it did poorly. To return to my paralegal example, AI would be useful for helping paralegals practice their interviewing skills, but it would do poorly in answering questions about legal ethics.
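For readers who want a feel for how those pieces fit together, here is a toy sketch of the RAG idea applied to my paralegal scenario: look up a relevant course note, then hand it to the model along with a role-playing system prompt. The course notes, keyword matching, model name, and prompt text are all hypothetical, and real RAG systems typically use embeddings and a vector store rather than this naive lookup.

```python
# A toy sketch of Retrieval-Augmented Generation (RAG): fetch the most
# relevant course snippet for a question and supply it to the model as
# context. The snippets, keyword matching, and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

COURSE_NOTES = {
    "client interview": "Checklist: confirm identity, explain the paralegal's "
                        "role, gather facts chronologically, note deadlines.",
    "conflict check": "Before intake, run the client and adverse parties "
                      "through the firm's conflict-of-interest database.",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval: return the note whose topic appears in the question."""
    for topic, note in COURSE_NOTES.items():
        if topic in question.lower():
            return note
    return ""

def rag_reply(question: str) -> str:
    """Answer using a role-play system prompt plus the retrieved course note."""
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a supervising attorney coaching a paralegal "
                        "trainee. Use the provided course notes; do not give "
                        "legal advice or opinions on legal ethics.\n\n"
                        f"Course notes: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(rag_reply("What should I cover in my first client interview?"))
```

Even in this simplified form, the division of labor mirrors what we practiced in the workshop: the retrieved notes ground the model in course material, while the system prompt keeps it in its coaching role.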

What I really loved about this workshop, and what I think made it such a success, was how Zenke helped us dream big. Participants were given the time and space to brainstorm new ideas before we even considered how AI could assist us with them. Playing with AI to further our own ideas made the process fun (and less intimidating), and we left the workshop feeling energized and excited for the rest of the Summit.

Perhaps the biggest lesson learned is that while AI is daunting and ever-evolving, it is, at the end of the day, just another pedagogical tool. It won’t on its own create a better learning environment, but when employed with an instructor’s creativity and careful guidance, AI can create exciting new avenues for active learning.