Interdisciplinary Research Organization

Human-Centered AI Lab

The Human-Centered AI Lab is an interdisciplinary research organization founded in 2023 by Katherine Elkins and Jon Chun, who co-created the world's first human-centered AI curriculum at Kenyon College in 2016. The Lab enables fast, flexible collaboration between AI researchers and domain experts across institutions, disciplines, and sectors — filling gaps that traditional academic structures cannot. Elkins and Chun are Principal Investigators in the NIST US AI Safety Institute Consortium (CAISI), representing the 25,000-member Modern Language Association, and recipients of a Schmidt Sciences HAVI grant for the Archival Intelligence project in New Orleans.

01

AI Safety & Governance Research

LLM red-team and blue-team testing, ethical auditing, multi-agent system evaluation, and comparative AI regulation. Our research informs AI safety conversations at NIST, UNESCO, and beyond.

02

Interdisciplinary Collaboration

An umbrella for distributed teams of diverse experts to secure funding and collaborate on AI research across geographic, institutional, and disciplinary boundaries.

03

Open-Source & Public Interest

All research tools and methodologies are open-source. Student research mentored through our programs has been downloaded over 95,000 times by 4,000+ institutions worldwide.

95k+
Research downloads from 4,000+ institutions in 198 countries
300+
Student research projects mentored since 2016
61%
Women in our AI curriculum · 13% Black · 11% Latinx
22
Papers published or under review at ICML, FAccT, UAI, CogSci & more

What we do

The Human-Centered AI Lab (HCAI Lab) is an interdisciplinary research organization founded in 2023 by Katherine Elkins and Jon Chun. The Lab conducts AI research at the intersection of safety, governance, and the humanities and social sciences. It provides an institutional umbrella for distributed teams of experts to secure funding and collaborate on human-centered AI research in the public interest.

Elkins and Chun co-created the world's first human-centered AI curriculum in 2016 at Kenyon College, integrating computational methods with ethics, governance, and humanistic inquiry. The curriculum has trained over 300 students — 90% from non-STEM backgrounds, 61% women, 13% Black, and 11% Latinx — whose research has been downloaded over 95,000 times by more than 4,000 institutions across 198 countries.

The Lab's principal investigators serve in the NIST US AI Safety Institute Consortium (CAISI), representing the 25,000-member Modern Language Association, where their work focuses on how large language models process negation, prohibition, and persuasion. They are also Principal Investigators for the Schmidt Sciences Humanities and AI Virtual Institute (HAVI), leading the Archival Intelligence project — a $330K initiative to rescue endangered cultural heritage archives in New Orleans using AI with community-governed data sovereignty.

Additional affiliations include UNESCO (MONDIACULT expert consultation on AI and cultural heritage), the Notre Dame–IBM Technology Ethics Lab (multi-agent behavioral simulation of judicial decision-making), the Meta Open Innovation AI Research Community, Bloomberg (AI Strategy course), and OpenAI (Education Guild, Higher Education Forum).

Where AI meets human systems

Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing — always grounding technical methods in humanistic depth.

AI Safety & Security

LLM Evaluation & Red-Teaming

Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our work on how language models process negation, prohibition, and persuasion informs safety evaluation as part of the NIST US AI Safety Institute Consortium (CAISI).

Governance & Regulation

Comparative AI Policy

Comparative global AI regulation (EU, China, US). Open-source AI policy analysis. Behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored policy paper with International Public AI.

Computational Social Science

Multi-Agent Behavioral Simulation

Multi-agent simulation of high-stakes human decisions including judicial recidivism prediction. Over 90 model/reasoning combinations benchmarked. Funded through Notre Dame–IBM Tech Ethics Lab.

Archival Intelligence

Schmidt Sciences HAVI Project

Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for Schmidt Sciences Humanities and AI Virtual Institute funding ($330K).

Affective AI & Narrative

SentimentArcs & Multimodal Analysis

Open-source methodology for diachronic sentiment analysis in text and film. Created the first computational methodology for surfacing emotional arcs in full-length literary narratives. Student research using the methodology has been downloaded from institutions in 198 countries.

Education

Human-Centered AI Curriculum

World's first interdisciplinary AI curriculum (est. 2016) integrating computational methods with ethics, governance, and humanistic inquiry. 90% non-STEM students, majority women and underrepresented groups. Celebrating 50 years of the host IPHS program.

Funded research, global reach

Our work is supported by federal agencies, philanthropic foundations, international organizations, and leading technology companies.

NIST · US AI Safety Institute (CAISI)
Principal Investigators

Representing the 25,000-member Modern Language Association in the AI Safety Institute Consortium. Contributing LLM evaluation expertise with a focus on linguistic edge cases and ethical frameworks.

Schmidt Sciences · HAVI
Principal Investigators

$330K grant. One of 23 teams worldwide. AI-powered archival intelligence for endangered cultural heritage in New Orleans.

Notre Dame–IBM Tech Ethics Lab
Co-Principal Investigators

"How Well Can GenAI Predict Human Behavior?" Multi-agent judicial decision-making and behavioral prediction research.

UNESCO
AI & Cultural Heritage

MONDIACULT expert consultation. Advising on AI frameworks for multilingual cultural preservation and digital heritage.

OpenAI
Higher Education Forum

Selected as an Education Guild speaker. Presented computational humanities research at the OpenAI Forum, San Francisco, October 2025.

Bloomberg
AI Strategy Course

Created an AI Strategy course for Bloomberg's professional education platform, designed to help organizations integrate AI into their workflows.

Who we are

The Human-Centered AI Lab brings together researchers, policymakers, and technologists across institutions.

Co-Founder · Principal Investigator

Professor of Humanities & Comparative Literature, Faculty in Computing, Kenyon College. Director of the Integrated Program in Humane Studies (IPHS). Author, The Shapes of Stories (Cambridge UP, 2022) and Philosophical Approaches to Proust's In Search of Lost Time (OUP, 2022). Ph.D., UC Berkeley. PI in the NIST US AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Keynotes at OpenAI, UNESCO, Weill Cornell Medicine-Qatar, and RALLY Innovation.

Co-Founder · Director · Principal Investigator

AI research scientist, Kenyon College. Co-creator of the first human-centered AI curriculum. Created SentimentArcs, the first large-ensemble methodology for diachronic sentiment analysis. ICML 2024 oral presentation (top 2%). PI in the NIST US AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Co-founded SafeWeb ($26M acquisition by Symantec; first In-Q-Tel security investment). UC Berkeley EECS; MS, UT Austin. Two US patents.

Board Member

Founder and Director, The Helix Center for Interdisciplinary Investigation. Clinical Professor of Psychiatry, Weill Cornell Medical College. Training and Supervising Psychoanalyst, New York Psychoanalytic Institute.

Board Member

Bruce R. Kuniholm Distinguished Professor of History and Public Policy, Duke University. Sanford School of Public Policy.

Advisor

Product Manager, eBay (generative AI for consumer fashion). Founder of Yakera.com, a crowdfunding and cash-transfer platform serving Latin America.

Recent coverage

Christian Science Monitor · Feb 2026
Katherine Elkins quoted on AI safety research and the role of humanities in AI governance.
NPR / WOSU · Feb 2026
Coverage of the Schmidt Sciences archival intelligence project in New Orleans.
Forbes · Nov 2025
Feature on the human-centered AI program at Kenyon College.
Bloomberg · 2024
AI Strategy course created for Bloomberg's professional education platform on integrating AI into organizational workflows.

Common questions

What is the Human-Centered AI Lab?
The Human-Centered AI Lab (HCAI Lab) is an interdisciplinary research organization founded in 2023 by Katherine Elkins and Jon Chun. It conducts AI research at the intersection of safety, governance, and the humanities and social sciences. The Lab's principal investigators serve in the NIST US AI Safety Institute Consortium (CAISI), representing the Modern Language Association, and lead the Schmidt Sciences HAVI Archival Intelligence project. The founders co-created the world's first human-centered AI curriculum in 2016 at Kenyon College.
What is human-centered AI?
Human-centered AI is an approach to artificial intelligence research and design that prioritizes human values, needs, and oversight. It integrates perspectives from the humanities, social sciences, and ethics alongside technical AI development. The Human-Centered AI Lab was founded on the principle that AI safety and governance require deep humanistic expertise — not just technical solutions. Elkins and Chun's curriculum, established in 2016, was the first to embed computational AI methods within a liberal arts framework of ethics, governance, and humanistic inquiry.
What is SentimentArcs?
SentimentArcs is an open-source methodology and toolkit created by Jon Chun for diachronic sentiment analysis in full-length literary narratives and film. It is the first large-ensemble computational methodology for surfacing emotional arcs in long-form texts. The methodology was published in Katherine Elkins's book The Shapes of Stories: Sentiment Analysis for Narrative (Cambridge University Press, 2022). Student and faculty research using SentimentArcs and related computational tools has been downloaded over 95,000 times from more than 4,000 institutions across 198 countries via the Digital Kenyon repository.
What is the NIST AI Safety Institute Consortium (CAISI)?
The NIST US AI Safety Institute Consortium (CAISI) is a consortium convened by the National Institute of Standards and Technology that brings together researchers, industry, and civil society to advance AI safety evaluation and standards. Katherine Elkins and Jon Chun serve as Principal Investigators in CAISI, representing the 25,000-member Modern Language Association. Their work focuses on LLM evaluation — specifically how large language models process negation, prohibition, and persuasion — contributing to safety frameworks for autonomous AI systems.
What is the Schmidt Sciences HAVI program?
The Schmidt Sciences Humanities and AI Virtual Institute (HAVI) funds interdisciplinary research at the intersection of the humanities and artificial intelligence. The Human-Centered AI Lab's Archival Intelligence project was selected as one of 23 teams worldwide, receiving $330K in funding. The project uses AI to rescue endangered cultural heritage archives in New Orleans, implementing community-governed data sovereignty frameworks for historically marginalized populations.
What is the Archival Intelligence project?
Archival Intelligence is a research project led by the Human-Centered AI Lab and funded by Schmidt Sciences HAVI ($330K). It uses AI to rescue endangered cultural heritage archives in New Orleans, with a focus on community-governed data sovereignty for historically marginalized populations. The interdisciplinary team was one of 23 selected worldwide. The project is hosted at archivalintelligenceai.org.
Who founded the Human-Centered AI Lab?
The Human-Centered AI Lab was founded by Katherine Elkins and Jon Chun. Elkins is Professor of Humanities and Comparative Literature and Director of the Integrated Program in Humane Studies (IPHS) at Kenyon College. She is the author of The Shapes of Stories (Cambridge UP, 2022) and Philosophical Approaches to Proust's In Search of Lost Time (OUP, 2022). Chun is an AI research scientist at Kenyon College who created SentimentArcs and co-founded SafeWeb ($26M acquisition by Symantec). Both hold degrees from UC Berkeley and serve as Principal Investigators in CAISI and for Schmidt Sciences HAVI.

Propose research

We welcome proposals for interdisciplinary AI research collaborations. The Human-Centered AI Lab provides an umbrella for distributed teams of experts to secure funding and collaborate on human-centered AI research in the public interest.

[email protected]