Recently, I had a conversation with Robert Duvall, Senior Lecturer of Computer Science, to discuss generative AI and teaching. This interview is part of a series of posts in which LILE speaks with instructors across the university to capture their insights into this emerging trend in teaching and learning.
Duvall primarily teaches upper-level undergraduate students in the Computer Science department at Duke. Some of his recent courses include COMPSCI 308: Advanced Software Design and Implementation, COMPSCI 290: Educational Technology Seminar, and COMPSCI 190: Game Programming.
Integrating generative AI into students’ assignments and instructors’ own work
Duvall mentioned that in his upper-level courses, students are encouraged to use AI tools and Large Language Models (LLMs) to enhance their projects rather than to replace their core elements. This requires teaching students how to write effective prompts and verify AI-generated results, which they can only do well if they understand the underlying concepts.
Duvall proposed an experimental course structure that strategically scaffolds the student experience. The first half of the curriculum would prohibit using LLMs, instead focusing on building essential competencies through traditional methods. The second half would require extensive LLM integration, asking students to hone their newly learned skills in order to harness the power of these transformative technologies.
This approach reflects a comprehensive pedagogy designed to ensure students understand the lifecycle of AI-assisted creation. By first establishing a grasp of fundamental design and development tenets, students would then learn to use LLMs with the critical thinking necessary to generate meaningful results.
Talking to students about AI
Duvall noted that students have much to learn about using AI and LLMs effectively, including the ethics and impact of their use. He believes students should be encouraged to use AI and LLMs responsibly and shown the results of their use on typical problems, allowing them to see what can be done, what limitations exist, and when to stop using them.
In his courses, students are required to cite AI tools and take responsibility for AI-generated content in their work.
Overall, Duvall focuses on talking to students about integrating AI in ways that enhance learning.
Advice for instructors exploring AI and teaching
Duvall recommended that instructors spend time trying AI tools to understand their capabilities and limitations rather than trying to keep up with every change in the rapidly evolving technology. Rather than focusing on cheating, he suggested that instructors find new ways to assess student learning and aim to create meaningful learning experiences that incorporate AI. For example, Computer Science faculty used to assess students solely on the code they wrote; they have since devised dozens of different ways to assess student learning.
Duvall emphasized the importance of updating a course’s learning objectives to focus on work that continues to be meaningful in an AI-enhanced environment, such as teaching students how to create effective prompts and evaluate AI-generated results.
Duvall intends to continue exploring and refining the use of AI in his courses and is open to collaborating with others to share insights and strategies.
Additional resource
Check out Robert Duvall’s talk Balancing Ethics with AI, in which he discusses how generative AI offers immense potential while raising crucial ethical, privacy, and equity concerns.