Where AI meets human systems

Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing, always grounding technical methods in humanistic inquiry.

AI Safety & Security

LLM Evaluation & Red-Teaming

Syntactic-framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our work on how language models process negation, prohibition, and persuasion informs safety evaluation as part of the consortium of the NIST Center for AI Standards and Innovation (CAISI), formerly the US AI Safety Institute.

Governance & Regulation

Comparative AI Policy

Comparative analysis of AI regulation across the EU, China, and the US; open-source AI policy analysis; and behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored a policy paper with International Public AI.

Computational Social Science

Multi-Agent Behavioral Simulation

Multi-agent simulation of high-stakes human decisions, including judicial recidivism prediction, benchmarked across more than 90 model/reasoning combinations. Funded through the Notre Dame–IBM Tech Ethics Lab.

Archival Intelligence

Schmidt Sciences HAVI Project

Rescuing endangered New Orleans heritage archives with AI while building community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for $330K in Schmidt Sciences Humanities and AI Virtual Institute (HAVI) funding.

Affective AI & Narrative

SentimentArcs & Multimodal Analysis

Open-source methodology for diachronic sentiment analysis of text and film, and the first computational method for surfacing emotional arcs in full-length literary narratives. Adopted worldwide, with 95,000+ downloads.

Education

Human-Centered AI Curriculum

The world's first interdisciplinary AI curriculum (est. 2016), integrating computational methods with ethics, governance, and humanistic inquiry. Enrollment is 90% non-STEM, with a majority of women and students from underrepresented groups. Now celebrating 50 years of the host IPHS program.