We conduct interdisciplinary AI research at the intersection of safety, governance, and the humanities and social sciences. Our nonprofit enables fast, flexible collaboration between AI researchers and domain experts across institutions, disciplines, and sectors — filling gaps that traditional academic structures cannot. Founded by researchers who co-created the world's first human-centered AI curriculum in 2016, we are Principal Investigators in the NIST AI Safety Institute Consortium and recipients of a Schmidt Sciences HAVI grant.
LLM red/blue team testing, ethical auditing, multi-agent system evaluation, and comparative AI regulation. Our research contributes to AI safety conversations at NIST, UNESCO, and beyond.
An umbrella for distributed teams of diverse experts to secure funding and collaborate on AI research across geographic, institutional, and disciplinary boundaries.
All research tools and methodologies are open-source. Research by students mentored through our programs has been downloaded more than 95,000 times by 4,000+ institutions worldwide.
Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing — always grounding technical methods in humanistic depth.
Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our work on how language models process negation, prohibition, and persuasion informs safety evaluation as part of the NIST AI Safety Institute Consortium (CAISI).
Comparative global AI regulation (EU, China, US). Open-source AI policy analysis. Behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored policy paper with International Public AI.
Multi-agent simulation of high-stakes human decisions including judicial recidivism prediction. Over 90 model/reasoning combinations benchmarked. Funded through Notre Dame–IBM Tech Ethics Lab.
Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for Schmidt Sciences Humanities and AI Virtual Institute funding ($330K).
Open-source methodology for diachronic sentiment analysis in text and film. Created the first computational methodology for surfacing the emotional arcs of full-length literary narratives. Adopted globally with 95,000+ downloads.
World's first interdisciplinary AI curriculum (est. 2016) integrating computational methods with ethics, governance, and humanistic inquiry. 90% non-STEM students, majority women and underrepresented groups. Celebrating 50 years of the host IPHS program.
Our work is supported by federal agencies, philanthropic foundations, international organizations, and leading technology companies.
Representing the 25,000-member Modern Language Association in the AI Safety Institute Consortium. Contributing LLM evaluation expertise with focus on linguistic edge cases and ethical frameworks.
$330K grant. One of 23 teams worldwide. AI-powered archival intelligence for endangered cultural heritage in New Orleans.
"How Well Can GenAI Predict Human Behavior?" Multi-agent judicial decision-making and behavioral prediction research.
MONDIACULT expert consultation. Advising on AI frameworks for multilingual cultural preservation and digital heritage.
Selected Education Guild speaker. Presented computational humanities research at OpenAI Forum, San Francisco, October 2025.
Created AI Strategy course for Bloomberg's professional education platform. Designed to integrate AI into organizational workflows.
The Human-Centered AI Lab brings together researchers, policymakers, and technologists across institutions.
Professor of Humanities & Comparative Literature, Faculty in Computing, Kenyon College. Director of the Integrated Program in Humane Studies (IPHS). Author, The Shapes of Stories (Cambridge UP, 2022) and Philosophical Approaches to Proust's In Search of Lost Time (OUP, 2022). Ph.D., UC Berkeley. PI in the NIST AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Keynotes at OpenAI, UNESCO, Weill Cornell Medicine-Qatar, RALLY Innovation.
AI research scientist, Kenyon College. Co-creator of the first human-centered AI curriculum. Created SentimentArcs, the first large-ensemble methodology for diachronic sentiment analysis. ICML 2024 oral presentation (top 2%). PI in the NIST AI Safety Institute Consortium (CAISI) and for Schmidt Sciences HAVI. Co-founded SafeWeb ($26M acquisition by Symantec; first In-Q-Tel security investment). UC Berkeley EECS, UT Austin MS. Two US patents.
Founder and Director, The Helix Center for Interdisciplinary Investigation. Clinical Professor of Psychiatry, Weill Cornell Medical College. Training and Supervising Psychoanalyst, New York Psychoanalytic Institute.
Bruce R. Kuniholm Distinguished Professor of History and Public Policy, Duke University. Sanford School of Public Policy.
Product Manager, eBay (Generative AI for consumer fashion). Founder of Yakera.com, a crowdfunding and cash-transfer platform serving Latin America.
We welcome proposals for interdisciplinary AI research collaborations. The Human-Centered AI Lab provides an umbrella for distributed teams of experts to secure funding and collaborate on human-centered AI research in the public interest.
[email protected]