Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing — always grounding technical methods in humanistic depth.
Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our work on how language models process negation, prohibition, and persuasion informs safety evaluation as part of the NIST US AI Safety Institute Consortium (CAISI).
Comparative global AI regulation (EU, China, US). Open-source AI policy analysis. Behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored policy paper with International Public AI.
Multi-agent simulation of high-stakes human decisions including judicial recidivism prediction. Over 90 model/reasoning combinations benchmarked. Funded through Notre Dame–IBM Tech Ethics Lab.
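The grid-style benchmarking described above can be sketched as follows. This is a minimal, hypothetical illustration only: the model names, strategy labels, and `run_trial` stub are placeholders, not the project's actual systems or API.

```python
# Hedged sketch: benchmark every model x reasoning-strategy combination
# on a shared set of decision cases, as one might for the 90+
# combinations described above. All identifiers are placeholders.
from itertools import product

MODELS = ["model_a", "model_b", "model_c"]             # placeholder model IDs
STRATEGIES = ["direct", "chain_of_thought", "debate"]  # placeholder strategies

def run_trial(model, strategy, case):
    """Stand-in for a real model call; returns a dummy binary decision."""
    return hash((model, strategy, case)) % 2

def benchmark(cases):
    # Collect one prediction per case for every (model, strategy) pair.
    results = {}
    for model, strategy in product(MODELS, STRATEGIES):
        results[(model, strategy)] = [run_trial(model, strategy, c) for c in cases]
    return results
```

In a real study, `run_trial` would wrap an API call and the predictions would be compared against ground-truth outcomes; the grid structure itself is the point here.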
Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for Schmidt Sciences Humanities and AI Virtual Institute funding ($330K).
Open-source methodology for diachronic sentiment analysis in text and film. Created the first computational methodology for surfacing the emotional arc of full-length literary narratives. Adopted globally with 95,000+ downloads.
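The core idea behind surfacing an emotional arc can be sketched in a few lines: score the text sequentially with a valence lexicon, then smooth the scores so the narrative-level trajectory emerges. This is an illustrative toy, not the published methodology; the tiny lexicon and window size are assumptions for demonstration.

```python
# Minimal sketch of diachronic sentiment analysis: score each sentence
# against a tiny illustrative valence lexicon (hypothetical, not the
# project's actual lexicon), then apply a moving average so local noise
# cancels and the emotional arc becomes visible.

LEXICON = {"joy": 1.0, "love": 1.0, "hope": 0.5,
           "loss": -1.0, "grief": -1.0, "fear": -0.5}

def sentence_score(sentence):
    # Sum the valence of each known word; unknown words score 0.
    words = sentence.lower().split()
    return sum(LEXICON.get(w.strip(".,!?;:"), 0.0) for w in words)

def emotional_arc(sentences, window=3):
    # Trailing moving average over per-sentence scores.
    scores = [sentence_score(s) for s in sentences]
    arc = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1): i + 1]
        arc.append(sum(chunk) / len(chunk))
    return arc
```

For a full novel, the same pipeline runs over thousands of sentences, and the smoothed curve is what gets plotted as the narrative's emotional arc.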
World's first interdisciplinary AI curriculum (est. 2016) integrating computational methods with ethics, governance, and humanistic inquiry. 90% non-STEM students, majority women and underrepresented groups. Celebrating 50 years of the host IPHS program.