Common questions about the Human-Centered AI Lab, our research, and our initiatives.
What is the Human-Centered AI Lab?
The Human-Centered AI Lab (HCAI Lab) is an interdisciplinary research organization founded in 2023 by Katherine Elkins and Jon Chun. It conducts AI research at the intersection of safety, governance, and the humanities and social sciences. The Lab's co-founders serve as Principal Investigators in the NIST US AI Safety Institute Consortium (CAISI), representing the Modern Language Association, and lead the Schmidt Sciences HAVI Archival Intelligence project. They also co-created the world's first human-centered AI curriculum at Kenyon College in 2016.
What is human-centered AI?
Human-centered AI is an approach to artificial intelligence research and design that prioritizes human values, needs, and oversight. It integrates perspectives from the humanities, social sciences, and ethics alongside technical AI development. The Human-Centered AI Lab was founded on the principle that AI safety and governance require deep humanistic expertise — not just technical solutions. Elkins and Chun's curriculum, established in 2016, was the first to embed computational AI methods within a liberal arts framework of ethics, governance, and humanistic inquiry.
What is SentimentArcs?
SentimentArcs is an open-source methodology and toolkit created by Jon Chun for diachronic sentiment analysis in full-length literary narratives and film. It is the first large-ensemble computational methodology for surfacing emotional arcs in long-form texts. The methodology was published in Katherine Elkins's book The Shapes of Stories: Sentiment Analysis for Narrative (Cambridge University Press, 2022). Student and faculty research using SentimentArcs and related computational tools has been downloaded over 95,000 times from more than 4,000 institutions across 198 countries via the Digital Kenyon repository.
What is the NIST US AI Safety Institute Consortium (CAISI)?
The NIST US AI Safety Institute Consortium (CAISI) is a consortium convened by the National Institute of Standards and Technology that brings together researchers, industry, and civil society to advance AI safety evaluation and standards. Katherine Elkins and Jon Chun serve as Principal Investigators in CAISI, representing the Modern Language Association, the largest scholarly organization in language and literature. Their work focuses on LLM evaluation, specifically how large language models process negation, prohibition, and persuasion, and contributes to safety frameworks for autonomous AI systems.
What is the Schmidt Sciences HAVI program?
The Schmidt Sciences Humanities and AI Virtual Institute (HAVI) funds interdisciplinary research at the intersection of the humanities and artificial intelligence. The Archival Intelligence project, led by the Lab's co-founders, was selected as one of 23 teams worldwide, receiving $330K in funding. The project uses AI to rescue endangered cultural heritage archives in New Orleans, implementing community-governed data sovereignty frameworks for historically marginalized populations.
What is the Archival Intelligence project?
Archival Intelligence is a research project led by the Lab's co-founders and funded with $330K from Schmidt Sciences HAVI. It uses AI to rescue endangered cultural heritage archives in New Orleans, with a focus on community-governed data sovereignty for historically marginalized populations. The interdisciplinary team was one of 23 selected worldwide. The project is hosted at archivalintelligenceai.org.
Who founded the Human-Centered AI Lab?
The Human-Centered AI Lab was founded by Katherine Elkins and Jon Chun. Elkins is an AI safety researcher and author of The Shapes of Stories: Sentiment Analysis for Narrative (Cambridge University Press, 2022) and Philosophical Approaches to Proust's In Search of Lost Time (Oxford University Press, 2022). Chun is an AI research scientist who created SentimentArcs and co-founded SafeWeb, which Symantec acquired for $26M. Both hold degrees from UC Berkeley and serve as Principal Investigators in CAISI and for Schmidt Sciences HAVI.