Featuring insights from Eileen Chow

Can machines replicate the full scope of human creativity and expression—whether in translating a poem, composing music, or telling a story? This is the kind of question Eileen Chengyin Chow, a Duke Associate Professor of the Practice in Chinese and Japanese Cultural Studies and one of the founding directors of Story Lab, finds herself exploring in the AI era. Yet she doesn’t settle for a binary yes or no.
Over the course of our conversation, she reframes the question entirely: perhaps it’s not about whether machines can match us, but whether the outputs of machines and humans are even comparable. Chow recalls that her colleague in Computer Science, Ron Parr, once remarked that large language models excel at replicating the normative because they reflect the data they are trained on, which is itself shaped by existing human input. Chow believes humans, by contrast, prize emotion, ambiguity, and uniqueness—qualities that resist replication.
It’s a perspective she finds echoed in the words of filmmaker Wong Kar-Wai, who, when asked about AI on the 25th anniversary of his film In the Mood for Love, replied:
“Technology is just a tool. AI can replicate, but can it yearn? Can an algorithm understand the weight of a glance between two people who can’t express their feelings? Can code capture the way memory distorts and reshapes our past?”
For Chow—a translator, literature professor, and steadfast humanist—this quote rings especially true. Her work sits at the intersection of language, culture, and narrative. And while she acknowledges the disruptive force of generative AI in education, she approaches it not with panic, but with principled curiosity.
Shifting the Framework
Chow believes that the rise of AI has pushed many educators into the uncomfortable role of surveillance. She senses that instructors—especially those who teach writing and analysis—often feel pressure to monitor how students are using (or misusing) these tools. The anxiety is real: What happens when ChatGPT can generate a passable first draft in seconds? What happens when translation apps claim to render poetry across languages?
But Chow cautions against approaching AI solely through the lens of enforcement. Instead, she urges a broader, more generous frame of mind. Is this disruption so different from those we’ve already faced—Google, Wikipedia, even spell check? “We’ve adapted before,” she reminds us. The question, in her view, is not just how to prevent misuse, but how to think with AI, around it, and beyond it.
Chow is also attuned to how this impacts students. She points out that “there is currently a lot of burden on students, and it is not fair. They should not have to fear being caught using a tool that they may or may not be using.” That kind of climate, she suggests, risks turning learning into a minefield of suspicion rather than a space for exploration.
Translation, Tone, and Technological Limits
Chow hasn’t used generative AI in her own writing or translation work—not out of disdain, but because she hasn’t yet felt the need or the draw. However, she is experimenting with how students might use it thoughtfully.
In her translation theory and praxis seminar, Chow incorporates AI translation tools not as shortcuts, but as objects of critique and discussion. She has students use these tools and then evaluate their output, especially in terms of tone and cultural context. She encourages them to compare how different languages fare in translation, noting that earlier machine translation systems like Google Translate often fell short in nuance when rendering Chinese compared to languages like English or French, a gap rooted in the disparity in the quantity of texts accessible online in each language.
To deepen this analysis, Chow introduces case studies of Asian authors who have used AI to translate their own novels into English. These examples offer fertile ground for discussion: students consider how tone, voice, and meaning can shift—or disappear—when filtered through machine translation. Exercises like these are intended to develop sharper judgment and help learners recognize that translation is as much about culture and context as it is about language.

Teaching the Act of Interpretation
One of Chow’s most effective pedagogical strategies is refreshingly analog: reading together in the classroom. She often brings in just a single page—sometimes a very short story or a poem, sometimes even an illustrated children’s book like Charlie Cook’s Favourite Book—and has students diagram its structure in real time. The exercise is simple, but powerfully instructive. For many students, it feels novel, even fun. More importantly, it forces them to slow down, pay attention, and make meaning on their own.
Poetry, in particular, becomes a tool for unlocking interpretation rather than avoiding it. Through close-reading exercises, she shows students that literary analysis isn’t a mysterious talent; it’s a skill they simply haven’t been taught well yet.
This, she believes, is one of the key reasons students turn to generative AI: not out of laziness or intent to cheat, but because they haven’t developed confidence in their interpretive skills. Rather than scolding them, she designs assignments in which AI confers no particular advantage. For Chow, reading and interpreting are not just academic exercises—they’re cognitive practices foundational to being human. If we outsource too much—our thinking, our translating, our storytelling—we risk losing the very habits that define us.
Ethics Without Fear
Chow doesn’t use the language of plagiarism in her AI conversations. She prefers to talk about the process. She pushes students to ask themselves hard questions, such as, “What happens when you give away the struggle of thinking? What do you lose when you shortcut the research or outsource the reading?”
She recalls an assignment in her Narrating Asia course a few years back. Students had two options for their second paper. One: input the prompt into an AI generator and annotate the output in Google Docs, checking and adding citations and noting where it succeeded or failed. Two: present a 5–7 minute oral paper, live. “Only a few chose the AI path,” she says. “But they learned a lot. They saw its hallucinations.”
Advice for Faculty: Collaboration over Control
So what does she recommend for other instructors, especially those feeling overwhelmed?
“If you’re lucky enough to have small classes,” she says, “make the AI conversation part of the course. Invite students into the process. Set parameters together. Ask them how we should use it, not just how they shouldn’t.”
And most importantly, model the behavior yourself: “If I use AI, I will cite it.” Chow also thinks instructors should show students how they themselves use these tools, because that level of transparency builds trust.
So, CAN a Machine Yearn?
In the end, we return to Wong Kar-Wai’s question: Can a machine yearn? For Chow, the answer is embedded in the very fabric of her teaching. Her trust in close reading, her insistence on cultural nuance, and her students’ encounters with flattened, soulless machine translations all echo the same belief: yearning is a human act.
In Chow’s classroom, students aren’t just learning to critique machine outputs; they’re learning to recognize and reclaim what makes us irreducibly human. AI may be able to replicate the norm, but it cannot long for meaning. And that, perhaps, is what will set us apart.
Interested in exploring AI in your own teaching?
LILE is here to support you. Whether you’re curious about course-integrated chatbots, want help developing AI assignments, or just want to talk through the ethics of AI in the classroom, please reach out to us at lile@duke.edu or attend AI Office Hours every Wednesday from 10 to 11 a.m. You can also find resources from Duke on teaching with AI, including LILE’s teaching guides on the AI at Duke website.