I recently attended All Things Open AI, a two-day conference for AI practitioners and users held at Durham’s historic Carolina Theatre. As a Durham resident and frequent patron of the Carolina Theatre, I was intrigued to see the city’s beloved cultural landmark transformed into a bustling tech hub. The experience offered an invaluable opportunity to step outside my comfort zone in higher education and academia to learn from a diverse group of industry professionals—from software engineers and data scientists to business leaders—all while enjoying some complimentary popcorn. Here are my key takeaways:
We Need to Be Strategic About AI
A session I found particularly helpful was “You Don’t Need an AI Strategy, But You Do Need to Be Strategic About AI,” presented by Jessica Hall, Chief Growth Officer at OpsCanvas. While Hall’s presentation was aimed at businesses, I found her ideas just as relevant to the conversations we have been having about AI in education. Essentially, Hall emphasized the importance of knowing what our most critical priorities and objectives are, and then figuring out how AI can serve as a means to those ends rather than an end in itself. A good AI use case, according to Hall, promises “clear value for business and customer . . . and has to be done with AI.” These criteria translate readily to an educational context: a good AI use case is one that presents clear value for the learner (or for the learner and the instructor) and has to be done with AI. I believe the AI learning tools I have been exploring with my colleagues on the Learning Experience Design team—specifically for asynchronous online courses with limited instructor availability and real-time interactivity—fall into this category, since AI is filling a gap left by traditional methods and has the potential to greatly enhance the learning experience. This session also reaffirmed for me that if we cannot come up with such good use cases—and, to take it a step further, cannot see where AI actually helps us reach our goals—then we should be disciplined enough to ditch AI for the sake of the learners we care about. A business leader would certainly do the same with a poorly thought-out business idea.
We Should Keep Things We Want to Do
One of the references Hall used to explain what makes a good AI use case was Cassie Kozyrkov’s drunk island analogy, i.e., that you should offload to AI the kind of work you wouldn’t mind a bunch of drunk people on an island doing for you: “What repetitive drudgery would you offload?” This was a recurring theme throughout the conference. For example, the six-hour AIOS AI SuperUser Bootcamp led by Mark Hinkle and Melanie McLaughlin provided a comprehensive introduction to AI tools, features, and techniques, with the underlying assumption that they were mainly for doing things we either did not want to do or did not want to spend time doing, whether creating slide decks or responding to emails. This led me to a simple conclusion—one I think most people would agree with—that we shouldn’t offload to AI the things we enjoy doing or care about doing well (for, as Kozyrkov reminds us in her analogy, the people on the island are drunk and not to be completely trusted). As a learning experience designer, for instance, I like working closely with faculty to create assessment materials that are intentional and intellectually stimulating for the learner, and I hope to keep that work for myself. Extending this insight to our broader mission as educators, it seems that now more than ever, we need to create learning environments where learning doesn’t feel like “repetitive drudgery” that learners want to offload, but like a process of discovery and growth that brings them joy.
We Should Be Open to Learning
One of the highlights of the conference was simply sharing space with and hearing from individuals whose paths might not otherwise have crossed mine. The wealth of knowledge and expertise surrounding me didn’t feel intimidating; instead, it inspired a radical sense of openness and receptivity to new information, and a desire to learn. My last takeaway, then, is simply that we should be open to learning—from different people, about new things, and in ways that may or may not change us. While this conference did not resolve my concerns about AI’s harms or environmental impact, it was illuminating to see how professionals across multiple industries and sectors, who are shaping the world, are thinking about and working with AI. Participating in this conference made AI less abstract and more concrete, and helped me develop a more balanced understanding of its capabilities and limitations.
Claude 3.7 Sonnet was used to copyedit this blog post.