Where AI meets human systems

Our research spans AI safety evaluation, governance and regulation, computational social science, cultural heritage preservation, and affective computing — always grounding technical methods in humanistic depth.

AI Safety & Security

LLM Evaluation & Red-Teaming

Syntactic framing vulnerabilities, threat modeling for multi-agent architectures, and evaluation frameworks for autonomous AI systems. Our co-founders' research on how language models process negation, prohibition, and persuasion informs safety evaluation through their participation in the NIST Center for AI Standards and Innovation (CAISI).

Governance & Regulation

Comparative AI Policy

Comparative analysis of global AI regulation (EU, China, US). Open-source AI policy analysis. Behavioral prediction and ethical auditing of LLM decision-making systems. Co-authored policy paper with International Public AI.

Computational Social Science

Multi-Agent Behavioral Simulation

Multi-agent simulation of high-stakes human decisions, including judicial recidivism prediction. More than 90 model/reasoning combinations benchmarked. Funded through the Notre Dame–IBM Tech Ethics Lab.
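Benchmarking a grid of model/reasoning combinations amounts to sweeping the cross product of configurations over a shared case set and scoring each cell. The sketch below is a hypothetical harness, not the lab's actual code: the model names, reasoning modes, and the `predict` stub are all illustrative placeholders.

```python
# Hypothetical sketch of benchmarking a grid of model/reasoning
# configurations on a binary prediction task (e.g. recidivism labels).
# MODELS, REASONING, and predict() are illustrative stand-ins, not the
# project's real configuration.
from itertools import product

MODELS = ["model-a", "model-b", "model-c"]
REASONING = ["direct", "chain-of-thought", "debate"]

def predict(model, mode, case):
    # Placeholder: a real harness would query the model/reasoning
    # combination here and parse its answer.
    return case["label"]

def benchmark(cases):
    """Score every (model, reasoning) cell by accuracy on the cases."""
    results = {}
    for model, mode in product(MODELS, REASONING):
        correct = sum(predict(model, mode, c) == c["label"] for c in cases)
        results[(model, mode)] = correct / len(cases)
    return results
```

With 3 models and 3 reasoning modes the grid has 9 cells; scaling the two lists is how a sweep reaches 90+ combinations.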

Archival Intelligence

Schmidt Sciences HAVI Project

Rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty for historically marginalized populations. One of 23 teams selected worldwide for Schmidt Sciences Humanities and AI Virtual Institute funding ($330K).

Affective AI & Narrative

SentimentArcs & Multimodal Analysis

Open-source methodology for diachronic sentiment analysis of text and film. Created the first computational methodology for surfacing emotional arcs in full-length literary narratives. Student research built on the methodology has been downloaded at institutions in 198 countries.
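The diachronic idea behind a sentiment arc can be sketched in a few lines: score successive narrative segments, then smooth so the long-run arc dominates local noise. This is a minimal illustration only; SentimentArcs itself ensembles many sentiment models, and the tiny lexicon here is a made-up example.

```python
# Minimal illustration of a diachronic sentiment arc (NOT the
# SentimentArcs ensemble pipeline): score equal-sized word windows
# with a tiny example lexicon, then smooth with a moving average.

TINY_LEXICON = {"joy": 1.0, "love": 1.0, "hope": 0.5,
                "fear": -1.0, "loss": -1.0, "grief": -1.0}

def segment_scores(text, n_segments=10):
    """Split the text into equal word windows and average lexicon hits."""
    words = text.lower().split()
    size = max(1, len(words) // n_segments)
    scores = []
    for i in range(0, len(words), size):
        window = words[i:i + size]
        hits = [TINY_LEXICON[w] for w in window if w in TINY_LEXICON]
        scores.append(sum(hits) / len(hits) if hits else 0.0)
    return scores

def smooth(scores, k=3):
    """Moving average over k neighbors so the arc, not noise, shows."""
    half = k // 2
    out = []
    for i in range(len(scores)):
        window = scores[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out
```

Plotting `smooth(segment_scores(novel_text))` against segment index yields the emotional arc of the narrative over time.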

Foundational Work

Human-Centered AI: From Curriculum to Framework

Our 2023 paper "The Crisis of Artificial Intelligence: A New Digital Humanities Curriculum for Human-Centered AI" (International Journal of Humanities and Arts Computing) established the intellectual framework for integrating computational methods with ethics, governance, and humanistic inquiry — and the evidence base for why this integration matters for AI safety and public benefit.