Several LILE team members attended the joint Society-Centered AI Summit and Responsible AI Symposium hosted by the Society-Centered AI Initiative at Duke and the Duke Artificial Intelligence Master of Engineering program from February 28 to March 3.
The goal of the event was to bring together industry and academic leaders, researchers, and students to discuss emerging trends in society-centered and responsible AI. Featuring industry and academic keynote talks, spotlight research talks, and a poster session, the event was an opportunity to explore the myriad ways in which AI will influence human behavior and how social factors will shape the future of AI technology.
Showcasing an Ethical Approach to Evaluating Pedagogical AI

Four LILE staff participated in the poster session, sharing how they developed and implemented a platform-agnostic, values-centered evaluation framework to assess AI tools for pedagogy.
The presenters were:
- Maria Kunath, Learning Experience Designer
- Catherine Lee-Cates, Learning Experience Designer
- Michael Hudson, Learning Experience Designer
- Hannah Rogers, Learning Experience Designer
Grey Reavis, Academic Innovation Research Specialist, also contributed to the poster.
Early results from a pilot of the evaluation framework show that the process successfully promoted trustworthy and transparent uses of pedagogy-focused AI. The following excerpt from their poster proposal illustrates the importance of learner-centered approaches to AI development and evaluation:
“Pedagogical AI tools exist in a complex ecosystem across industry, academia, and learners. As educators continue to explore responsible AI development in this context, it is critical to keep the human learner at the center. By designing and enacting a values-centered evaluation framework and inviting diverse stakeholders—most critically learners—into this effort, our work demonstrates how responsible and ethical AI development in education may be achieved. As AI becomes further entwined with learning, our human-centered and transparent process emphasizes the importance of creating connections with learners and, ultimately, reestablishing trust in education.”
LILE Reflections
Some of the LILE staff who attended the event share their reflections below. These takeaways highlight not only the complexity of responsible AI but also the critical role that educators and learning designers have in shaping its future.
Sarah Wike, Teaching Consultant
One of the most compelling sessions for me was presented by student hackathon winners, who had just 48 hours to develop a benchmark for society-centered AI. Their project centered on AI that prioritizes the collective well-being of society by promoting factual consistency, mitigating misinformation, and minimizing confirmation bias.
They discovered that AI outputs varied based on perceived demographics—for example, a persona based on a Duke student and one representing a Pennsylvania coal miner received different answers to the same query about climate change. This highlighted the risk of reinforcing echo chambers and factual fragmentation.
Their proposal aimed to ensure factual consistency across user contexts, aligning with broader efforts to develop AI that prioritizes societal well-being over engagement-driven algorithms. It left me hopeful that we can avoid the divisiveness that has plagued social media.
Michael Greene, Senior Director for Learning Innovation Systems
I was very proud to see our team members presenting their work alongside so many others; it was a great reminder that many skill sets are necessary to responsibly embrace and build AI systems. That was my biggest takeaway of the day: seeing computer scientists and researchers from other disciplines in discourse with designers, philosophers, ethicists, and lawyers illustrates a powerful way forward and leverages an aspect of what makes Duke great: interdisciplinarity.
Hannah Rogers, Learning Experience Designer
Throughout the first two days of the symposium, the question of responsibility was raised again and again by scholars across the disciplines — in how responsible AI is defined, who is responsible, and to whom the responsible must be held accountable. In seeing the diverse research on how to develop, evaluate, and use (or thoughtfully reject) AI, I took away three overarching points:
- We must center values in how we engage with AI — and ensure those values are being enacted in actual implementation.
- While we may not all be developers or academics, every discipline and every person with lived experience is a part of how we can gain an understanding of responsible AI.
- As keynote speaker Neil S. Gaikwad, Assistant Professor at UNC-Chapel Hill, said, we need to approach all aspects of design with intellectual humility.
At LILE, we have the great power and responsibility to think about the intersection between AI and education. And so we must work with the educational community we serve to ensure our use of AI meets their needs while centering our own values.
Catherine Lee-Cates, Learning Experience Designer
A key takeaway for me was the necessity of an active, robust, sustained, and comprehensive investment in ethics. I appreciated the effort by many keynote speakers to outline this endeavor, in particular Neil Gaikwad’s “five pillars of responsible AI education” that encompassed foundational training in moral philosophies and theories as well as AI governance and policy and professional ethics. Walter Sinnott-Armstrong and Jana Schaich Borg also offered practical steps that organizations could take, such as creating moral AI KPIs and developing moral AI training for all career stages. As a learning experience designer, I am excited about possibly working on these trainings and meaningfully contributing to responsible AI education.
Another important takeaway was the need to keep pursuing the question that seems so difficult to answer: how to reconcile society-centered AI with AI’s impact on the environment. As Sinnott-Armstrong remarked in a Q&A, “we can’t have any benefits of AI if our environment is destroyed and we can’t live here anymore.” Society and nature are not separate but thoroughly intertwined, which means environmental considerations in AI discussions cannot be an afterthought. As a learning experience designer working with Duke’s Office of Climate and Sustainability and Duke Environment, I am looking forward to seeing how leaders across Duke rise to this challenge to advance not only social but ecological well-being, and to doing my part by raising awareness through thoughtful learning experiences.
Join Us at the Triangle AI Summit
If you are looking for more ways to engage with AI at Duke, join us at the upcoming Triangle AI Summit on May 30 at the Washington Duke Inn. Open to all members of North Carolina’s Triangle community, the Triangle AI Summit will be a dynamic gathering of faculty, staff, and community members from across the region designed to engage with the evolving landscape of artificial intelligence.
Educators of all grade levels are invited to submit a proposal about their innovative AI projects in educational settings for the Teaching with AI Showcase at the Triangle AI Summit. Presentations should include live demos or slideshows showcasing AI applications in classrooms or other teaching venues and their impact on student learning or academic administrative efficiency. The deadline for proposals is April 4; see the full call for proposals for more information.