Nonprofit Research Organization

Human-Centered AI Lab

We conduct interdisciplinary AI research at the intersection of safety, governance, and the humanities and social sciences. Our nonprofit enables fast, flexible collaboration between AI researchers and domain experts across institutions, disciplines, and sectors, filling gaps that traditional academic structures leave open. Founded by researchers who co-created the world's first human-centered AI curriculum in 2016, we are Principal Investigators in the NIST AI Safety Institute Consortium and recipients of a Schmidt Sciences HAVI grant.

01

AI Safety & Governance Research

LLM red- and blue-team testing, ethical auditing, multi-agent system evaluation, and comparative AI regulation. Our research contributes to AI safety conversations at NIST, UNESCO, and beyond.

02

Interdisciplinary Collaboration

An umbrella for distributed teams of diverse experts to secure funding and collaborate on AI research across geographic, institutional, and disciplinary boundaries.

03

Open-Source & Public Interest

All research tools and methodologies are open-source. Student research mentored through our programs has been downloaded over 95,000 times by 4,000+ institutions worldwide.

95k+
Research downloads from 4,000+ institutions in 198 countries
300+
Student research projects mentored since 2016
61%
Women in our AI curriculum · 13% Black · 11% Latinx
22
Papers published or under review at ICML, FAccT, UAI, CogSci & more

Where AI meets human systems

Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing — always grounding technical methods in humanistic depth.

AI Safety & Security

LLM Evaluation & Red-Teaming

Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our work on how language models process negation, prohibition, and persuasion informs safety evaluation as part of the NIST US AI Safety Institute Consortium (CAISI).

Governance & Regulation

Comparative AI Policy

Comparative global AI regulation (EU, China, US). Open-source AI policy analysis. Behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored policy paper with International Public AI.

Computational Social Science

Multi-Agent Behavioral Simulation

Multi-agent simulation of high-stakes human decisions including judicial recidivism prediction. Over 90 model/reasoning combinations benchmarked. Funded through Notre Dame–IBM Tech Ethics Lab.

Archival Intelligence

Schmidt Sciences HAVI Project

Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for Schmidt Sciences Humanities and AI Virtual Institute funding ($330K).

Affective AI & Narrative

SentimentArcs & Multimodal Analysis

Open-source methodology for diachronic sentiment analysis of text and film. Created the first computational methodology for surfacing emotional arcs in full-length literary narratives. Adopted globally, with 95,000+ downloads.

Education

Human-Centered AI Curriculum

World's first interdisciplinary AI curriculum (est. 2016) integrating computational methods with ethics, governance, and humanistic inquiry. 90% non-STEM students, majority women and underrepresented groups. Celebrating 50 years of the host IPHS program.

Funded research, global reach

Our work is supported by federal agencies, philanthropic foundations, international organizations, and leading technology companies.

NIST · US AI Safety Institute (CAISI)
Principal Investigators

Representing the 25,000-member Modern Language Association in the AI Safety Institute Consortium. Contributing LLM evaluation expertise with focus on linguistic edge cases and ethical frameworks.

Schmidt Sciences · HAVI
Principal Investigators

$330K grant. One of 23 teams worldwide. AI-powered archival intelligence for endangered cultural heritage in New Orleans.

Notre Dame–IBM Tech Ethics Lab
Co-Principal Investigators

"How Well Can GenAI Predict Human Behavior?" Multi-agent judicial decision-making and behavioral prediction research.

UNESCO
AI & Cultural Heritage

MONDIACULT expert consultation. Advising on AI frameworks for multilingual cultural preservation and digital heritage.

OpenAI
Higher Education Forum

Selected Education Guild speaker. Presented computational humanities research at OpenAI Forum, San Francisco, October 2025.

Bloomberg
AI Strategy Course

Created an AI Strategy course for Bloomberg's professional education platform, designed to help organizations integrate AI into their workflows.

Who we are

The Human-Centered AI Lab brings together researchers, policymakers, and technologists across institutions.

Co-Founder · Principal Investigator

Professor of Humanities & Comparative Literature, Faculty in Computing, Kenyon College. Director of the Integrated Program in Humane Studies (IPHS). Author, The Shapes of Stories (Cambridge UP, 2022) and Philosophical Approaches to Proust's In Search of Lost Time (OUP, 2022). Ph.D., UC Berkeley. PI in the NIST AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Keynotes at OpenAI, UNESCO, Weill Cornell Medicine-Qatar, RALLY Innovation.

Co-Founder · Director · Principal Investigator

AI research scientist, Kenyon College. Co-creator of the first human-centered AI curriculum. Created SentimentArcs, the first large-ensemble methodology for diachronic sentiment analysis. ICML 2024 oral presentation (top 2%). PI in the NIST AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Co-founded SafeWeb ($26M acquisition by Symantec; first In-Q-Tel security investment). UC Berkeley EECS, UT Austin MS. Two US patents.

Board Member

Founder and Director, The Helix Center for Interdisciplinary Investigation. Clinical Professor of Psychiatry, Weill Cornell Medical College. Training and Supervising Psychoanalyst, New York Psychoanalytic Institute.

Board Member

Bruce R. Kuniholm Distinguished Professor of History and Public Policy, Duke University. Sanford School of Public Policy.

Advisor

Product Manager, eBay (Generative AI for consumer fashion). Founder of Yakera.com, crowdfunding and cash transfer platform across Latin America.

Recent coverage

Christian Science Monitor · Feb 2026
Katherine Elkins quoted on AI safety research and the role of humanities in AI governance.
NPR / WOSU · Feb 2026
Coverage of the Schmidt Sciences archival intelligence project in New Orleans.
Forbes · Nov 2025
Forbes feature on the human-centered AI program at Kenyon College.
Bloomberg · 2024
AI Strategy Course
Professional education course created for Bloomberg on integrating AI into organizational workflows.

Propose research

We welcome proposals for interdisciplinary AI research collaborations. The Human-Centered AI Lab provides an umbrella for distributed teams of experts to secure funding and collaborate on human-centered AI research in the public interest.

[email protected]