501(c)(3) Nonprofit Research Organization

Human-Centered AI Lab

We conduct interdisciplinary AI research at the intersection of safety, governance, and the humanities and social sciences. Our nonprofit enables fast, flexible collaboration between AI researchers and domain experts across institutions, disciplines, and sectors, filling gaps that traditional academic structures cannot. Founded by researchers who co-created the world's first human-centered AI curriculum in 2016, we currently serve as Principal Investigators on projects with NIST CAISI and Schmidt Sciences.

01
AI Safety & Governance Research
LLM red/blue team testing, ethical auditing, multi-agent system evaluation, and comparative AI regulation. Evidence-based research informing policy at NIST, UNESCO, and beyond.
02
Interdisciplinary Collaboration
An umbrella for distributed teams of diverse experts to secure funding and collaborate on AI research across geographic, institutional, and disciplinary boundaries.
03
Open-Source & Public Interest
All research tools and methodologies are open-source. Student research mentored through our programs has been downloaded over 95,000 times by 4,000+ institutions worldwide.

95k+
Research downloads from 4,000+ institutions in 150+ countries
300+
Student research projects mentored since 2016
61%
Women in our AI curriculum · 13% Black · 11% Latinx
22
Papers published or under review at ICML, FAccT, UAI, CogSci & more

Research

Where AI meets human systems

AI Safety & Security
LLM Evaluation & Red-Teaming
Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Active research under NIST CAISI.
Governance & Regulation
Comparative AI Policy
Comparative global AI regulation (EU, China, US), including open-source AI policy. Behavioral prediction and ethical auditing of LLM decision-making systems.
Computational Social Science
Multi-Agent Behavioral Simulation
Multi-agent simulation of high-stakes human decisions, including judicial recidivism prediction. 90+ model/reasoning combinations benchmarked. Notre Dame–IBM Tech Ethics Lab.
Archival Intelligence
Schmidt Sciences HAVI Project
Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams worldwide.
Affective AI & Narrative
SentimentArcs & Multimodal Analysis
Open-source methodology for diachronic sentiment analysis of text and film. Created the first computational methodology for surfacing emotional arcs in narrative. Adopted globally.
Education
Human-Centered AI Curriculum
World's first interdisciplinary AI curriculum (est. 2016) integrating computational methods with ethics, governance, and humanistic inquiry. 90% non-STEM students, majority women and underrepresented groups.

Grants & Affiliations

Funded research, global reach

NIST · US AI Safety Institute (CAISI)
Principal Investigators
Representing the 25,000-member Modern Language Association. LLM red/blue team testing and safety evaluation frameworks.
Schmidt Sciences · HAVI
Principal Investigators
$330K award. One of 23 teams selected worldwide. AI-powered archival intelligence for endangered cultural heritage in New Orleans.
Notre Dame–IBM Tech Ethics Lab
Co-Principal Investigators
"How Well Can GenAI Predict Human Behavior?" Multi-agent judicial decision-making and behavioral prediction.
UNESCO
AI & Cultural Heritage
MONDIACULT expert consultation. Advising on AI frameworks for multilingual cultural preservation and digital heritage.
OpenAI
Higher Education Forum
Selected Education Guild speaker. Presented computational humanities research at OpenAI Forum, San Francisco, October 2025.
Bloomberg
AI Strategy Course
Created an AI Strategy course for Bloomberg's professional education platform, designed to integrate AI into professional workflows.

People

Who we are

Founding Researchers
Co-Founder · PI
Professor of Humanities & Comparative Literature, Faculty in Computing, Kenyon College. Director of IPHS. Author, The Shapes of Stories (Cambridge UP) and Proust's In Search of Lost Time (OUP). Berkeley PhD. NEH Teaching Professor. PI for NIST CAISI and Schmidt Sciences. Keynotes at OpenAI, UNESCO, Weill Cornell Medicine-Qatar, Concordia, RALLY Innovation.
Co-Founder · Director · PI
AI research scientist. Co-creator of the first human-centered AI curriculum. Created SentimentArcs. ICML 2024 oral presentation (top 2%). PI for NIST CAISI and Schmidt Sciences. Co-founded SafeWeb ($26M acquisition by Symantec; first In-Q-Tel security investment). UC Berkeley EECS, UT Austin MS. Two US patents.
Board of Directors
Board Member
Founder and Director, The Helix Center. Clinical Professor of Psychiatry, Weill Cornell Medical College. Training and Supervising Psychoanalyst, New York Psychoanalytic Institute.
Board Member
Bruce R. Kuniholm Distinguished Professor of History and Public Policy, Duke University. Professor of History and Professor in the Sanford School of Public Policy.
Officers
Officer
Product Manager at eBay, driving Generative AI for consumer fashion. Founder of Yakera.com, a crowdfunding and cash transfer platform serving Latin America.
Elizabeth Bonaudi
Officer
Former educator with experience in local government and community engagement.

In the Press

Recent coverage

NPR / WOSU · Feb 2026
Coverage of the Schmidt Sciences archival intelligence project in New Orleans.
Forbes · Nov 2025
Forbes feature on the human-centered AI program at Kenyon College.
Bloomberg · 2024
AI Strategy Course
Professional education course created for Bloomberg on integrating AI into organizational workflows.

Collaborate

Propose research

info@humancenteredailab.org

We welcome proposals for interdisciplinary AI research collaborations. The Human-Centered AI Lab provides an umbrella for distributed teams of experts to secure funding and collaborate on human-centered AI research in the public interest.