There’s a great deal of discussion among faculty in higher education about ChatGPT and other Artificial Intelligence (AI) tools. Faculty are concerned about students using this technology to generate essays and other work in their classes.
Learning Innovation has created a “conversation starter” blog post where we asked ChatGPT about these issues and developed our responses to what AI had to say. This post is a more personal view, highlighting some of my own thoughts as a consultant about what the disruption of AI means for faculty and students.
We’ve faced technological challenges like this before. As AI develops, faculty responses to what it can and can’t do will be a moving target, much as they were with the emergence of the World Wide Web and student use of online resources, or with the rise of commercial online “paper banks” where students could purchase a pre-written term paper on a given topic. Before the Web, faculty mainly worried about students sharing exams or papers within their own university; the Web opened the floodgates to readily accessible ways of sharing information.
Some time ago, facing the challenge of easy online searches and paid “paper banks,” faculty and administrators at Duke discussed whether to license plagiarism detection software for the campus. The decision was made not to license plagiarism detection tools, for two main reasons.
Plagiarism detection services work by collecting papers from students and using those submissions as a database for detecting plagiarism within or across institutions. It was felt, at the time, that these services raised serious issues about students’ copyright in their work and about student privacy: by requiring students to submit papers to such a service, faculty would be forcing students to give up their legal copyright to their work and to have that work stored on outside commercial services in perpetuity.
Even though some services give faculty or institutions an “opt out” from the submission and retention of papers at their school or within a class, the general consensus among Duke administrators and faculty was that using these services sent the wrong message to students about individual responsibility and the Duke Honor Code, and encouraged heavy-handed “surveillance” tools to monitor students.
The recommendations on combating plagiarism that emerged from these discussions, and what Duke Learning Innovation continues to recommend, were to make assessments more authentic and robust. By engaging students in original research, using non-traditional assessments such as presentations or long-term projects, or asking students for more in-depth analysis and debate, faculty can move students beyond repeating content that is easily accessible on the web and toward deeper engagement with the course content and goals.
ChatGPT and other AI tools present a more difficult challenge. AI services take a prompt or question and generate, on the fly and within a few seconds, a natural-language response drawn from large bodies of research, articles, and web pages from a variety of sources. Each response is unique, since the AI “learns” and changes its approach as more inquiries are made and more data is used to generate answers.
The only technology solution that could detect student use of AI with enough certainty for a faculty member to press an Honor Code case would be for the entity offering an AI service to save every answer it generates and allow faculty to search student papers against those responses. Given the number of services emerging and the enormous volume of answers generated automatically every day, such a service seems impractical. OpenAI, the company behind ChatGPT, has discussed adding digital watermarks (specific arrangements of words that form a pattern invisible to the end user) to the text it generates, but it’s unclear whether these watermarks would hold up to significant editing of the AI-produced text.
As a teaching consultant, I’ve put a great deal of thought into how to respond to AI and have discussed it with others in my field and with faculty I know here at Duke and at other universities. I’ve also spent the past few weeks running some typical essay questions in my areas of subject-matter expertise through ChatGPT to see what it could come up with.
I noticed some patterns in what ChatGPT currently can and cannot do that could be used to rethink assignments in some classes. Of course, the approach will differ by subject area, whether the class covers a specific historical era or belongs to another discipline entirely, such as computer science.
One of my areas of interest is film and media history. I entered several simple essay prompts that might be used in film history and analysis classes. For the most part, ChatGPT could generate a convincing and largely correct answer when asked about something that is widely written about and discussed in these types of classes. The answers were at the level of a basic Wikipedia article. Typical questions asked it to discuss the themes of or controversies around well-known films, such as Citizen Kane, or to compare themes in the work of well-known directors.
ChatGPT either refused to provide an answer or was evasive when I asked it to be more specific or to analyze a controversial or ambiguous topic. It can regurgitate factual information that is widely known (again, the cursory sort of information found in a Wikipedia article). Where it fails is in analysis: taking a position on a controversial topic and offering evidence for it. It also has difficulty with more obscure topics, where the information sources it draws on are thin.
The furor around AI and student work reminded me of one of the most challenging papers I had to write in a high school Social Studies class in the early 80s. For a final paper, I had to argue whether I thought Oswald acted alone in the JFK assassination, drawing on primary and secondary sources. It’s still an ambiguous question debated today, with trustworthy works by historians sitting alongside less well-supported material. What my teacher was looking for was my analysis of the evidence I could find and why that evidence was valid support for my view. Based on my look at AI so far, I don’t think ChatGPT could come up with a convincing analysis of the topic, given the sheer amount of information available and the ambiguity over sources and evidence that a student would have to sort out.
As a consultant, my advice to faculty is to actually run some of your questions through ChatGPT or another AI service and see what kind of answers come back. If it produces a convincing answer, rethink your questions to be more specific to the content and discussions in your class, to require more in-depth analysis, or to tackle more ambiguous questions. Assignments on more obscure or local topics are another way to discourage the use of AI for easy answers.
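For faculty (or the instructional technologists who support them) who want to test a batch of prompts rather than pasting them one at a time into the ChatGPT web interface, a short script can do the same experiment in bulk. This is only a minimal sketch, not an officially supported tool: it assumes you have installed OpenAI’s Python library, set an API key in the OPENAI_API_KEY environment variable, and replaced the illustrative prompts with your own exam or essay questions.

```python
# A quick, hypothetical way to run several of your own essay prompts
# through OpenAI's API and compare the answers side by side.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Replace these illustrative prompts with questions from your own class.
prompts = [
    "Discuss the major themes and controversies surrounding Citizen Kane.",
    "Compare how two well-known directors treat a shared theme, citing specific films.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content)
    print("-" * 60)
```

Whether you use a script like this or simply type questions into the chat window, the point is the same: see for yourself how convincing the answers are before deciding whether an assignment needs to change.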
There is no single technology solution to student plagiarism or to students’ use of AI tools in their work. Some high schools and institutions are blocking student access to ChatGPT and other AI services. However, we all know that some students will be determined and savvy enough to bypass these technical obstacles.
Some faculty I’ve talked with see AI as an opportunity and want to be up-front about it with students: showing them these resources and their limitations to demonstrate that AI might provide a starting point for a question, but can’t deliver a simple, pre-packaged response for the kind of work students will be doing in a given class. Faculty might also have students critique AI-generated material as part of a class activity. This is similar to how many faculty now approach Wikipedia and other web-based information resources in their classes.
After discussions with colleagues in Learning Innovation, we feel that solutions and responses to AI are going to have to emerge from faculty conversations about this challenge and may vary depending on the discipline. We also feel this will need to be an ongoing dialogue, involving faculty, students, and Duke administrators. We’re looking forward to talking more with faculty about how they see AI influencing their work.
As educators, we can never stand still; we have to adapt and respond to new challenges presented by emerging technologies, social trends, and current events. AI is just one of many new trends we will have to think about in the decade ahead.
Randy A. Riddle is a Senior Teaching Consultant in Duke Learning Innovation, primarily serving Social Sciences faculty. He has a degree in Public and Applied History from Appalachian State University, is an independent documentary filmmaker and researcher, and has worked at Learning Innovation (and its predecessor, the Center for Instructional Technology) since 2000.