COVID-19 forced many educators to move tests and quizzes online, sparking concerns among instructors that students would take shortcuts when assessed remotely. In response, an arms race of EdTech solutions emerged: lockdown browsers, remote proctoring, and now AI detectors. The companies behind these tools continue to flourish, fueled by the rise of artificial intelligence and their promise to curb academic dishonesty.
However, these tools can unintentionally work against what instructors and students both want. They prioritize surveillance over trust, cultivate suspicion instead of curiosity, and reward compliance over learning. Cheating is a real challenge, but rather than placing the burden on instructors to police it, we can ask how teaching and learning environments might better engage students in ways that ease pressure on both sides.

The answer lies in authentic assessment, which prioritizes learning, builds trust, and calls on higher-order thinking.
Is AI even triggering cheating?
The good news is that, despite what some headlines may lead us to believe, there’s little evidence that GenAI has triggered a surge in cheating. Turnitin’s own figures, drawn from over 200 million assignments, show that 1 in 10 submissions contained “some” AI-produced text, while only 3 in 100 were “mostly” generated by AI. These numbers have held steady since 2023, not long after ChatGPT first hit the scene.
Researchers from the Stanford Graduate School of Education report that the share of students admitting to cheating in some capacity has stayed flat (about 60–70%) both before and after ChatGPT’s release. Contrary to popular belief, AI hasn’t produced a new cheating epidemic. What it has done is make a longstanding integrity issue more visible.
Findings from a study conducted by researchers at the University of the Basque Country in Spain further undermine the claim that tools like ChatGPT drive plagiarism. The study found that while using ChatGPT increases the risk of student cheating by a modest 3.9%, a lack of motivation and a “cheating culture” are stronger predictors of plagiarism than generative AI tools.
Tech crackdowns don’t deliver
Lockdown browsers may seem to provide a barrier, but they cannot guarantee test security or prevent determined students from finding workarounds, the most common of which is using a second device kept out of camera view.
AI detectors veer into even thornier territory. Multiple analyses have documented high false-positive rates, especially for multilingual writers and non-native English speakers, alongside false negatives on well-prompted AI text. Even OpenAI, the maker of ChatGPT, shut down its own AI detector because of its poor accuracy. Relying on these tools can breed inequity and mistrust between educators and their students.
Advocates of AI detection stress that these tools provide, at best, a signal of AI use, not proof, and should never be the sole basis for punitive measures.
The bottom line is that an enforcement-first approach to managing AI in the classroom is both unreliable and corrosive to relationship building. Authentic assessment offers a better path: it strengthens trust between instructors and their students while still effectively measuring content knowledge.
Authentic assessment = more engaging, harder to fake
Authentic assessment asks students to apply what they know in ways that prioritize critical thinking, creativity, collaboration, and communication over recall. It mirrors the ways people use knowledge in their daily lives by offering students learning opportunities that they find meaningful, especially when linked to real-world applications. Research shows that when academic work connects to students’ lives, they are more likely to find learning relevant and engaging. At its core, authentic assessment is a type of relational pedagogy that signals that instructors trust students as learners, and that trust is often reciprocated with deeper course engagement.

Here are some (though certainly not all) forms of assessment that both reduce the incentive to take shortcuts and engage students in relevant work:
- Performance tasks and projects – Ask students to build or analyze something that matters to them (a prototype, policy memo, data story, mini-exhibit, PSA, political campaign), with public-facing deliverables.
- Case studies and simulations – Give students context-rich problems with incomplete information that require justification.
- Guided investigations – Have students deeply explore scientific, cultural, or historical topics, and then showcase their understanding through oral presentations, extended writing, or similar in-depth projects.
- Oral defenses – Have students defend their explanations of choices, trade-offs, and revisions live.
- Process-centered work – Shift the focus away from the final product alone by attaching points to drafts and requiring draft logs, experiment notebooks, and/or version histories (all with brief metacognitive reflections).
- Digital portfolios – Allow students to present cumulative evidence of growth, evaluated against annotated, standards-aligned rubrics.
👉 Want to learn more about authentic assessment? Check out LILE’s page on alternative strategies for assessment and grading.
Authentic assessment alongside AI
You may be wondering how authentic assessment fits with the rise of generative AI. The good news is that it not only works in the AI era, but it can also make your assignments more AI-resilient.
Start by emphasizing the learning process over final products. Rather than grading only the polished, final submission, give meaningful weight to the iterative steps involved—drafting, receiving feedback, and revising. You may pair a take-home draft with a brief, in-class oral check-in or a low-stakes writing assignment that asks students to explain their choices or connect their outline to course concepts.
In addition, try to anchor your assignments in local data, lived experiences, or recent class discussions. These details are more authentic, make the work more engaging, and are harder for AI tools to fabricate convincingly.

You have the discretion to define whether, when, and how generative AI may be used in your courses. Your instinct may be to ban AI completely from your curriculum, but you don’t have to choose between “ban it” and “anything goes.”
Instead, clearly outline what is permitted at each level of the process. You can use the AI Assessment Scale to help you determine the appropriate level. You should also show students examples of how you expect them to attribute ideas or language originating from an AI system. The important thing is to be explicit about your policy and citation expectations.
👉 Use our guidance on Artificial Intelligence Policies: Guidelines and Considerations.
When you decide AI is permitted, find ways for students to make their cognitive work visible. Try having students write an AI Declaration Statement, which includes:
- Which AI tool(s) they used and how they used them in the process of completing their assessment (e.g., brainstorming, outlining, code debugging, creating citations).
- The prompts they used.
- A critique of any limitations or inaccuracies they encountered.
- A clear explanation of how they incorporated the AI-generated material into their assignment.
Practical steps you can take right away
You don’t need to redesign your entire course overnight. Start small this semester by making incremental changes. Here are a few practical ways you can begin:
- Ensure your AI policy is transparent and clear.
- Talk openly with your students about AI (the opportunities and limitations it presents, how you use it in your work, and your expectations for responsible use).
- Break up a single assignment into sections that clearly specify when and how students can use AI, drawing on tools such as the AI Assessment Scale.
- Redesign one high-stakes assessment so that it emphasizes the learning process over the final product and offers students low-stakes practice opportunities.
- Connect your assignments to authentic contexts, cases, and problems that mirror the real world whenever possible.
- Build reflection into more assignments to make the process itself more visible. Ask students to explain what they tried, what they changed, and what they learned.
- Require students to submit an AI Declaration Statement with an upcoming assignment.
Closing Thoughts
High-stakes finals, browser-locked multiple-choice quizzes, and generic term papers are increasingly untenable in an AI-driven world. Now is the moment to shift your practice toward assessments that value authenticity, emphasize process, and prioritize learning. Doing so allows you to move beyond the anxiety of “catching cheaters” while enabling your students to engage more deeply with your course content.
We also recognize that implementing authentic assessment can feel more challenging in large-enrollment and content-heavy courses. While not impossible, it requires rethinking course structures and support systems. If you are feeling overwhelmed, experiment gradually by adapting strategies that feel feasible for your particular teaching context.

Need support?
If you’d like to talk through your assignments or build out authentic assessments with guidance:
- Schedule a one-on-one consultation with a Teaching Consultant: lile@duke.edu
- Attend drop-in online office hours:
  - Mondays, 1–3 pm
  - Wednesdays, 10–11 am (AI focus)
  - Thursdays, 10 am–12 pm
  - Zoom: duke.zoom.us/my/dukelearninginnovation
