The Triangle AI Summit brought faculty, staff, and community members together to engage with the ethical, social, humanitarian, and environmental aspects of AI. After a lively day of discussion and learning, the final panel, moderated by Candis Watts Smith, brought the voices of four students in Duke’s Code+ program – Dara Ajiboy, Muke Akume, Ciaran Burr, and Roudy Mohamed – into the conversation.

How and why are these students using AI?
The panel opened with a question for the students: are you pro- or anti-AI? All four said they were pro-AI, though the ways they use it varied. All of the students have used AI to help them during the school year, but one was also critical of it, noting that it can be “misleading when it comes to academics and studying.” Two students described using AI outside of school as a convenient place to vent when feeling overwhelmed, or to search for answers to very personalized questions (e.g., if I leave my sweaty gym shoes in a closet for four hours, will they grow mold?).
Hearing directly from students that they are using AI for coursework can feel like confirmation of our biggest concerns. But it’s important to explore the nuance in these students’ use of AI. The students made it clear that they often turn to AI when they don’t know where else to go. Two students noted that while office hours are a resource, scheduling conflicts keep them from attending consistently. One student also pointed out that she turns to AI when the resources an instructor provides are scarce or too broad to be helpful (e.g., “here’s a whole book” vs. “look at these 10 pages”).
One student described how AI also provides a non-judgmental space to ask lots of questions and go down rabbit holes while receiving instant feedback and answers. He noted that asking too many questions can be perceived as annoying, so having a tool that provides answers without making the student feel bad is useful.

All of the students discussed how they want AI to support their knowledge building, not just help them regurgitate information. They recognize the limitations of AI: for example, it is nowhere near as good a conversation partner as another human because it can’t show sympathy or empathy. They also pointed out that AI is not very good at creative tasks, and that it will write code like an AI, not a human.
So even knowing these limitations, why might the students in some cases use AI simply to get answers? One student explained that if he did use AI this way, it would likely be for one of two reasons. First, it might be because answer-getting is the only form of assessment in a class, and students are not being challenged to think critically or problem-solve. Second, it might be because the value of a class and how it’s run has not been made clear to the students.
Takeaways for instructors
One of the most impactful statements of the panel came when Dara said that “American education systems need to evolve.” So what can instructors do to meet this moment?
It’s critical to recognize that this was a panel of four students in a coding program, who are primed to be thinking about and using AI. This panel is not representative of all students: you likely have some students who align with this panel and others who don’t. So the biggest takeaway should be to learn more about AI use in your specific context. Talk to your students and learn how they use AI. Start a dialogue in the context of your class to identify places where you might be able to provide extra support for students.
Relatedly, as AI continues to evolve, now is the time to review how your course is designed, what kinds of assessments you use, and how you frame the value of your course. Now more than ever, students need to know why they should care about your course and how learning in it will benefit them. Making this value clear helps build student buy-in and can reduce the use of AI, as can assessments that challenge students to think more critically and creatively.
It is also important to recognize that AI is here and that students have easy access to it. Even if you are not a fan of AI and don’t want your students using it in your class, you will need to address it in some way. Create an AI policy for your course and discuss it with your students, making clear when it is acceptable for students to use AI in your class and when it is not. If there are times when students can use AI in your course, provide them with appropriate resources and guidelines.
Finally, work to build trust with your students. Constantly chasing students down over concerns about cheating can create distrustful relationships with students who may have turned to AI only because they had nowhere else to go. As we heard from our panel, students are often aware of the limitations of AI, and some turn to it only as a last resort or because they don’t feel comfortable asking the same questions elsewhere. Policing and punishing these actions will push students further away from learning in your course.
Resources
- Duke’s AI Website
- Generative AI and Teaching at Duke – Guidance for Instructors
- Alternative Strategies for Assessment and Grading
- LILE’s Teaching Guides
- You can find a number of posts related to AI in teaching on LILE’s Blog page
- For further support, stop by our GenAI Office Hours on Zoom every Wednesday at 10am