

“Let’s Not
Know Together”
A Duke Portrait of Practice about
Generative AI in Writing 201

Welcome to CARADITE’s new Portraits series, which documents innovative teaching and learning at Duke University.
With Portraits, we are excited to share a collection of case studies distinguished by a central commitment: We will amplify human(e)-centered narratives of Duke students and instructors learning together through reciprocal curiosity and care, with all entries in our series honoring collective sense-making and growth. In alignment with CARADITE’s mission, this series will document “critical inquiries at the intersection of equity, education, technology, and society.” To that end, each Portraits entry will be produced in partnership with a Duke course, providing mutually reinforcing perspectives on how educators teach and how students learn. Within a given case, you may find interviews, learning artifacts, descriptions of activities, and—most importantly—student projects and written reflections. As CARADITE engages with the complex and emergent work of transformative education, we’ve decided that our Portraits series will not position students and their learning as objects of distanced study. Instead, we’re creating a platform that honors students’ voices and insights alongside the practical wisdom of their educators. Unlike in conventional academic research, students are our co-authors, and we’re humbled that those participating in our Portraits series have entrusted us to share their learning journeys.
CARADITE’s Portraits series begins with a topic that, for the past few years, has been impossible to avoid: the complicated—at times controversial, and still uncertain—relationship between generative artificial intelligence (AI) and student writing. ChatGPT and its large language model (LLM) doppelgangers have ushered in unparalleled disruption in higher education while summarily erasing the original written essay, or so go the pervasive narratives associated with this technology. As higher education communities have quickly acquainted themselves with AI, we have also wrestled publicly with plagiarism, surveillance, concerns about academic integrity and algorithmic inaccuracy, the principled refusal of AI in coursework on grounds of political ideology and linguistic homogeneity, and the principled adoption of AI on grounds of pedagogical inventiveness and students’ new literacies. As writing teacher John Warner recently noted in the introduction to his new book More Than Words: How to Think About Writing in the Age of AI:
Rather than seeing ChatGPT as a threat that will destroy things of value, we should be viewing it as an opportunity to reconsider exactly what we value and why we value those things… In my ongoing quest to make the experience of writing meaningful for students, for teachers, for those at work, and for those at play, I see ChatGPT as an ally. If ChatGPT can do something, then that thing probably doesn’t need to be done by a human being. It quite possibly doesn’t need to be done period. The challenge is to figure out where humans are necessary.1
We’re excited to introduce Dr. Jennifer Ahern-Dodson, Associate Professor of the Practice of Writing Studies at Duke’s Thompson Writing Program, and her students in Writing 201: The History of Writing Studies, whose experiences this past semester echo Warner’s call to thoughtfully reconsider what, and why, we value when we write, and learn, as humans. Though Ahern-Dodson began the course with limited knowledge of AI, she was explicit with her students about the productive value of curiosity and experimentation; as she told us, “I don’t have all the answers, either. Let’s not know together and figure it out.” And that they did, as we’ll read from student co-authors Connor Barritt (Trinity/2027), Amie Masemore (Trinity/2027), and Elizabeth Romage (Sanford/2026):
We were inspired by the openness of our classroom space—engaging with AI inquiry honestly and ethically motivated us to advocate for a similar openness in more of Duke’s academic spaces. With a newfound understanding of how AI will change the future of education and professional settings, we wanted to take action and become part of the Duke community’s conversations about AI.
What follows in our inaugural Portraits entry is a measured and inquisitive account of how generative AI came to complement “the humanness of the work” in one writing studies course. Here, hyperbolic claims about the capabilities of chatbots and algorithms are contextualized by on-the-ground artifacts and observations, including Ahern-Dodson’s AI policy and multiple examples of student writing. With this publication, we are making clear CARADITE’s social responsibility to uplift the voices of students and educators who, together, are making sense of how AI fits—if at all—into their respective and intertwined learning journeys.

A Portrait of Writing 201
Following the Fall 2024 semester, Dr. Jennifer Ahern-Dodson was interviewed by Dr. Aria Chernik, Duke’s Assistant Vice Provost for Faculty Development and Applied Research in Learning Innovation and Faculty Director of CARADITE, about the role of generative AI in Ahern-Dodson’s Writing 201 course.
Excerpts from this interview are featured in the following narrative, which was written by Aria Chernik, Laura Achenbaum, and Remi Kalir.
Generative AI Comes to Writing 201
Dr. Jennifer Ahern-Dodson is not easily surprised when it comes to the subject of teaching at Duke. She has been in the classroom for over 20 years and is currently an Associate Professor of the Practice of Writing Studies at the Thompson Writing Program. When she taught Writing 201: The History of Writing Studies, a foundational course for the new Thompson Writing Program writing minor, for the first time last fall, she expected her students to ask and answer questions like: Why do we write? For whom do we write? And why do writers get stuck? She did not expect to begin a journey with her students and campus leadership about how AI is affecting writing studies, or how this technology may shape disciplinary practices in the future.
Ahern-Dodson included an AI policy (see page 10) when she wrote the Writing 201 syllabus. On the first day of class, a student thanked her for including the policy, prompting Ahern-Dodson and her students to share curiosities and concerns about AI, writing, and academic inquiry. Recognizing that this preliminary conversation warranted further exploration, Ahern-Dodson thoughtfully added a unit on AI to the course; she also invited Dr. Yakut Gazi, Duke’s Vice Provost for Learning Innovation and Digital Education, to speak about AI as a guest expert. What happened next still amazes Ahern-Dodson.
The conversations that unfolded during and after Vice Provost Gazi’s visit to Writing 201 were, as Ahern-Dodson recalled, “about teaching and learning, and we covered a lot of ground about both possibility and uncertainty, and equity.” These conversations sparked students’ critical inquiries for the remainder of the semester. Ultimately, students in Ahern-Dodson’s course selected the topic of AI for their Sustained Inquiry Research Projects, the capstone project of the course. Summaries of three student projects are included in the final section of this case study.
Augmenting the Writer’s Toolkit
As a writing studies scholar, Ahern-Dodson helped students understand how AI fits within the complex history of technologies that have interfaced with writing and education. Debate about AI echoes prior “panics” associated with writing tools. For instance, Ahern-Dodson told her students about the moment the eraser became part of the pencil; at the time, that change was publicly questioned because some educators felt students needed the experience of crossing out their words as part of the writing process. According to Ahern-Dodson, the prevailing logic favored an approach to writing whereby “we can always see the history of what was written and the revision.”
The technologies that enable student writing have come a long way since erasable pencils replaced pens: “Fast forward,” Ahern-Dodson noted, and “now we have computers, and we have writing assistants with spell check and grammar check and style check. Now we’ve gone from assistance to an agent, right; and these AI tools are not assisting. I would ask it as a question, ‘Do they have their own agency?’” Unlike other tools for writing, AI has uniquely blurred the line between writing assistant and writing agent, with chatbots capable of generating original, albeit synthetic, content for students. Considering this reality, Ahern-Dodson asked her students: “What does it mean to create and to author?”
“Help Us to Create, Not Create for Us”
Throughout Writing 201, Ahern-Dodson and her students examined what it meant to create as an author and carefully considered how AI could help that process, not override it. Ahern-Dodson was disheartened by prominent deficit narratives suggesting that students’ AI-enabled writing assistance might be perceived as “cheating” or need to be “policed.” Instead, she was interested in exploring how writing and technology, including AI, might actually deepen human agency in writing practices.
“Does AI have the potential to diminish the humanness of the work?” Ahern-Dodson asked. “Yes, it does.” That is why, she recalled, “It’s on us to think about where we want to use it to add to our creativity, to add to our critical thinking.” To that end, Ahern-Dodson and her students interrogated how AI might intervene in writing as a process, not as a product. She elaborated:
“If we’re just product-oriented, there’s our agency out the window. But if we’re process-oriented, that’s a place students and I explored as having great potential. AI might help us reframe, rephrase, or just have multiple examples of a research question, or keywords… But, we still make writing decisions. We still discuss implications.”
Indeed, Writing 201 students experimented with how they could use AI to make the writing process more creative and to go deeper into their respective approaches. Such exploration helped students further engage with what Ahern-Dodson referred to as the “politics of writing,” typified by questions like, “What makes writing good? How should writing be taught? What are the key debates?” The relevance of these questions has been heightened by concerns about equity and bias in AI. As Ahern-Dodson put it: “We still have to look at the ways that biases are a part of this, even if we don’t want to make eye contact with it. There is potential to lose the ‘us-ness,’ the humanity, especially if we’re product focused.”
Throughout the course, Ahern-Dodson posited that two processes are fundamental to student writing. The first is what she described as curiosity or wondering, “that liminal in-between space of not knowing the answer.” The second is reflection, or the “so what, for you.” Both of these processes require students’ “frustration tolerance”; all writers will get stuck, she observed, especially if they regard their process as linear. Instead, Ahern-Dodson wanted students in the class to experience the writing process as iterative, more like “recursive conversational phases,” and encouraged collective critique about whether or not AI should be “a part of the conversation.”
Not Knowing, Together
“It’s follow the learners,” Ahern-Dodson noted, as she reflected on the role of conversation in both writing and her teaching; “Collaboration and collaborative learning is such a huge part of this.” Though she enjoyed helping students pursue and express their curiosities about AI during the course, Ahern-Dodson also acknowledged the limits of her own knowledge about AI. Because she was learning with them, she turned to Vice Provost Gazi. Ahern-Dodson recalled: “The class visit and conversation with Dr. Gazi taught me that AI literacy is not a threat. It’s about giving people the knowledge and skills to understand, use, and interact with AI, both responsibly and effectively.”
Ahern-Dodson’s course demonstrates the communal and creative exploration of AI in one instructor’s writing pedagogy and, subsequently, her students’ writing practices and products. As Ahern-Dodson remarked, “I don’t have all the answers, either. Let’s not know together and figure it out.”

Student Voices on GenAI:
How Duke Can Support Learning in the AI Era
By Connor Barritt, Amie Masemore, and Elizabeth Romage
In our Fall 2024 History of Writing Studies class, we explored the role of generative AI in writing, teaching, and learning processes. We were inspired by the openness of our classroom space—engaging with AI inquiry honestly and ethically motivated us to advocate for such openness in more of Duke’s academic spaces. With a newfound understanding of how AI will change the future of education and professional settings, we wanted to take action and become part of the Duke community’s conversations about AI.
We believe students and educators compose one academic community, and the incorporation of AI into learning is something we must navigate together. To collaborate on this effort, we believe it is important to understand both student and educator perspectives on learning in the new AI era. We would like to offer our viewpoints as students.

Faculty: Acknowledging AI
The emergence of AI has sparked concerns in higher education because of perceived threats to teaching and learning environments, including issues of equity3 and transparency.4 Teachers have argued for caution with AI because of the dependency and overreliance it might instill in students, concerns about plagiarism, user biases, the loss of human interaction, and other ethical considerations. However, generative AI tools like ChatGPT also have potential benefits for classroom teaching and instructional planning. Teachers should consider the strengths and possibilities along with the concerns about AI.5 We know many of them already do.
Students are already widely using GenAI.6 In this new AI era, we encourage teachers and educational leaders to acknowledge the growing use of this technology in classrooms. Students need faculty guidance and information because of the potential for misuse and unknowing plagiarism. Students may hesitate to ask questions about using AI for fear that teachers will judge them as lacking academic integrity or as being dishonest. Therefore, faculty should address student concerns by providing direction on what role AI can or cannot take in their classrooms. Rather than telling students to simply uphold community standards, teachers should establish “rules of the road” for using GenAI that include defining terms, setting parameters, and establishing possible functions in class. Providing these rules of the road on a syllabus will create the shared understanding necessary for a healthy learning environment for students and teachers. AI’s presence in education has transformed, and will continue to transform, the learning environment. We urge faculty to approach AI with both curiosity and caution, ensuring its use fosters both teacher growth and student success.
Peers: Open AI Exploration
We have spoken to many of our peers both at Duke and beyond about their experiences with artificial intelligence, and the reactions have ranged from completely embracing the technology to completely rejecting it. In class, outside of class, on breaks, in study groups—there are plenty of cases where the technology comes up. We have spoken to people who outright refuse to use AI and others who copy-paste the results from prompts onto discussion boards. We have even learned through classmates that ChatGPT is remarkably adept at answering physical chemistry questions, making it useful for double-checking our preparation for tests. Of course, AI could also be a resource for students to answer questions on tests, particularly take-home assignments. Clearly, there are ways to misuse this technology, but there are also ways to use it ethically and productively. Imagine you are taking a practice test where the answer key provides only answers without work or explanation. What if there is no answer key at all? AI can be a great tool for accessing potential explanations and answers, even if they only serve as catalysts for further investigation.
At the end of the day, AI is a tool that can be used and abused like any other. The internet already offers all sorts of services that blur the line—or even outright cross it—between original thought and plagiarism. Tools like Grammarly can revise more than just spelling and grammar; they can also analyze your tone and delivery. Textbooks shared across years may have their question answers leaked into Quizlet sets online. Paid services exist for homework answers and essay writing. Yet despite all of these issues, navigating the modern education landscape by ignoring the internet would be absurd. Generative AI is similar: there are ways to abuse it, yes, but there are plenty of ethical ways to use it that will not compromise the originality and authenticity of your ideas. All three of us were initially quite skeptical of AI, and we still are, but engaging with a class that openly discusses the use of this technology has greatly expanded our perspective on how AI can be used without compromising our intellectual honesty.
We would like to encourage our peers at Duke to engage in AI conversations with their professors and work with them to understand the line between ethical and unethical AI use in their classes. Even if we choose not to use AI on assignments, having guidelines on what is and is not acceptable makes navigating the tool much easier. Even a flat “no, you can’t use it” is a step towards a more comprehensive AI policy, which will only grow more important with each coming year. AI is a tool that’s here to stay, so having guidelines about its use will become just as important as guidelines about internet use or collaboration between students on assignments.
Duke: Supporting AI Inquiry
While we believe it is important for students to openly ask questions about AI, there is an important precursor to honest inquiry that needs to be considered: bias. From the student side, asking questions about AI takes courage because it inevitably means confronting the possibility of unknown bias in professors. For example, students who ask professors about AI use may be met with enthusiasm (e.g., “I have a policy!” or “I support its ethical use, and let’s talk about how.”) or with a skepticism that reinforces the negative stereotypes about AI—the ones that ultimately cause some students to hide their AI use. We have personally experienced the setbacks that occur when different understandings of AI erode trust or create misunderstandings between students and professors, leaving students worried about how their grades may be affected. We therefore hope Duke can help move students and faculty toward a shared framework for governing AI: its ethical incorporation to improve educational outcomes.
Students have expressed a variety of interests in learning about AI. One relates to employment: with growing conversations about AI facilitating, or even replacing, professional work across fields, we hope that building AI proficiency can prepare us to enter a workforce where AI will soon play an integral role. Another is learning ethical AI use. We have heard concerns that excluding AI from educational conversations misses an opportunity to teach students proper citation and integration, as well as frameworks for understanding the stigma of its association with academic dishonesty. We therefore seek guidance on how to properly use AI and assign credit so that it supports our educational advancement rather than replaces it. Finally, from roommates to classmates, our peers have expressed interest in learning more about student attitudes towards, and use of, AI. While we have many ideas about how AI should be integrated into our educational experience, we hope that as many Duke students as possible can be included in and benefit from conversations about AI. We would therefore welcome CARADITE conducting a research study to survey students on their attitudes, wants, and needs regarding AI at Duke.
By acknowledging, exploring, and supporting learning with AI, we believe Duke can better facilitate the intellectual growth and development of its community members. As students and educators collaborate to center open inquiry and ethical AI use, we look forward to the growth our Duke community will experience in the new AI era.

Sustained Inquiry Research Projects
In the second half of Writing 201, students chose a key question from the first half of the semester and developed an independent research project around it. The project helped writers connect their personal interests, major, or future career aspirations with the history of writing studies and with current and future contexts for their writing. As noted, Writing 201 invited Dr. Yakut Gazi, Vice Provost for Learning Innovation and Digital Education, to help students question the role of generative AI in higher education and consider how student agency can shape current and future conversations.
The following are summaries of three Sustained Inquiry Research Projects.

The Role of AI in the Future of Patient-Physician Interactions
By Connor Barritt
As a pre-med student, I have a natural concern about the ways in which Large Language Models (LLMs) like ChatGPT or Copilot may affect my future career in healthcare. My research delved into how AI may influence the future of in-person physician-patient communication. Prior to exploring this topic, I had assumed that while AI would grow to permeate areas of medicine such as surgery, diagnostics, and the interpretation of scans, it would have no place in the human side of medicine. My research suggests, however, that this area of medicine will not be immune to AI technology, but that this may not be a bad thing. There is a place for GenAI and LLMs like ChatGPT or Copilot in the physician-patient interaction that, counterintuitive as it may be, could serve to make medicine more human.
Many physicians already consult LLMs for advice on how to deliver information to their patients both with compassion and in a way patients understand. The place for LLMs in this interaction is not as a replacement for the physician but, instead, as a coach or an intermediary that helps the physician break down a complicated medical situation for a layperson in a clear and compassionate way. Many providers and patients—myself included—have a knee-jerk rejection of this idea as something straight out of Black Mirror. Robots advising humans how to express compassion? However, a physician’s ability to explain patient health issues simply and understandably, with support from LLMs, may improve the bedside manner of physicians who use them and, thus, improve patients’ experience with the healthcare system.

Empowering ESL Teachers with AI
By Amie Masemore
As an English Language Learner in high school, my mother faced proficiency challenges. The language assistance program she was placed in did not provide the support she needed, particularly with writing. When my mom entered the university system, the advanced and complex writing skills needed to succeed were overwhelming and difficult to master. The university she attended was neither ready nor adequately equipped to assist her in this challenge.
For my project, I wanted to research how tools such as ChatGPT could support learners and teachers in English as a Second Language (ESL) classrooms. I discovered that ChatGPT can help improve language skills, boost student confidence, and enhance writing abilities by addressing common ESL challenges like pronunciation, vocabulary, and grammar.7 However, some educators are understandably cautious about AI’s potential impact, expressing concerns that it might reduce student-teacher interaction, be biased against English learners, or simply be inaccurate. With these concerns in mind, I researched both the strengths and limitations of AI tools like ChatGPT in language education.
Based on my research, I recommend allowing teachers to decide how and to what extent AI is integrated into their classrooms. This personalized, teacher-driven approach enables educators to tailor AI use to their classroom needs, the learners they are working with, and their own skills, interests, and teaching philosophies.

What Would an AI Initiative in Duke’s Sanford School of Public Policy Look Like?
By Elizabeth Romage
Of all the topics that have surfaced in my conversations this academic year, AI has been the most prominent. On countless occasions, I have sat across from friends using AI to support their academic work, as well as heard fears about AI’s common association with academic dishonesty. I have also learned about educators’ perspectives—during conversations and FLUNCH catch-ups, educators have expressed that their lack of AI training has left them feeling unequipped to use AI meaningfully and to guide students’ AI usage. Together, these interactions reveal a collective curiosity surrounding AI: students and educators alike seek a starting point—an initial orientation and foundation—in AI use. But before I could help orient this curiosity, I wanted to become part of the Duke community’s AI conversation.
I first researched current AI efforts at Duke, and was intrigued to find guides and information from Duke communities like the Trinity College of Arts & Sciences and Learning Innovation & Lifetime Education (LILE). I knew these were resources I could share with the students and educators I conversed with, but I still wondered how to better support and even expand upon these existing initiatives. Then, LILE’s Vice Provost, Dr. Yakut Gazi, visited our Writing Studies class. She taught me that AI literacy is about giving people the knowledge and skills to understand, use, and interact with AI both responsibly and effectively. She inspired me to help others understand that and ignited a new question within me: What would it look like if AI were embedded into the Duke curriculum?
This led me to the Sanford community: my home base as a public policy student. Sanford describes itself as a “leader in public policy scholarship and education,”8 and I believe it is also a leader in curiosity. Yet as I canvassed Sanford’s mission statement, community values, and academic expectations, I noticed that AI is not currently acknowledged in the curriculum. Believing that Sanford can better support student learning and educator instruction by leveraging AI’s benefits in public policy studies, I created an AI Initiative to capture the curiosity expressed by students and educators in a field that is central to my interests. Two of the Initiative’s recommendations include creating an AI Studio where students can learn and practice AI integration and adding a statement of AI acknowledgement to Sanford’s Code of Conduct.
As I participate in research and continue learning about Sanford’s needs, I am excited by the opportunity to help bridge learning at Duke and AI. Through this Initiative, I hope to support the collective curiosity of students, educators, and the Sanford community.
Notes
1. Warner, J. (2025). More than words: How to think about writing in the age of AI. Basic Books.
2. Modern Language Association of America. (2025). How do I cite generative AI in MLA style? MLA Style Center. https://style.mla.org/citing-generative-ai/
3. Bjork, C. (2023, April 5). Don’t fret about students using ChatGPT to cheat – AI is a bigger threat to educational equality. The Conversation. https://theconversation.com/dont-fret-about-students-using-chatgpt-to-cheat-ai-is-a-bigger-threat-to-educational-equality-202842
4. Nature. (2023). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613, 612. https://doi.org/10.1038/d41586-023-00191-1
5. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460-474. https://doi.org/10.1080/14703297.2023.2195846
6. Coffey, L. (2024, June 25). A new digital divide: Student AI use surges, leaving faculty behind. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/06/25/digital-divide-students-surge-ahead
7. Ibrahim, K., & Kirkpatrick, R. (2024). Potentials and implications of ChatGPT for ESL writing instruction. International Review of Research in Open and Distributed Learning, 25(3), 394-409. https://doi.org/10.19173/irrodl.v25i3.7820
8. Duke University Sanford School of Public Policy. (2024). 2024 Impact Report. https://sanford.duke.edu/2024-impact-report/
