
Mission

Our mission is to facilitate efficient collaboration on interdisciplinary AI between individual researchers and domain experts separated by geographic, organizational, doctrinal, and legal divisions. We focus on human-centered AI topics like safety, bias, explainability, ethics, and policy grounded in careful experimentation and expert interpretation. Our goal is to enable fast, focused, and flexible research funding and collaboration overlooked by traditional institutional research structures.

History

The idea of the Human-Centered AI Lab was born out of a close collaboration between a former Silicon Valley entrepreneur and an accomplished humanities academic.

The rise of deep neural networks, punctuated by the 2012 ImageNet breakthrough in computer vision, set AI on its current trajectory of relentless, immense, and unforeseen progress. We realized AI was economically escaping the cycle of AI winters and springs, and that technological progress would increasingly present humanity with ‘big questions’ that could only be answered by domain experts working together with AI researchers.

After several years of research, planning, and presentations, in 2016 we announced the first course in our human-centered AI curriculum: “Programming Humanity”. It seemed that AI, more than any other technology, fit the McLuhan/Culkin observation that “We shape our tools and thereafter our tools shape us.” Our human-centered AI curriculum is based upon interdisciplinary project-based research across virtually every department. As of 2023, we have mentored over 300 projects that have been downloaded 40k times in over 160 countries worldwide by leading institutions like Stanford, Berkeley, MIT, CMU, Princeton, Cambridge, Oxford, and the Chinese Academy of Social Sciences. Our approach to collaborative research brings diverse insights to human-centered AI, with student contributors who are 90% non-STEM, 61% women, 13% Black, and 11% Latine, among other new perspectives.

Our founders’ current research focuses on Narrative, Affective AI, eXplainable AI, ethical auditing of LLMs, synthetic benchmarks, and multi-agent autonomous systems. We have published in leading journals, presented at leading conferences, and consulted with industry and safety committees such as NIST and Meta.

Motivation

The Human-Centered AI Lab was created after the founding researchers lost several valuable opportunities to advance AI safety and seek research grant support. In January 2024 alone, we had to decline an invitation to join the White House/NIST AI Safety Institute and were unable to apply for grants from Meta, OpenAI, and the National Endowment for the Humanities.

This is because many grants are accessible only to organizations, not individuals. Some academic institutions still do not support AI research, require months or years of planning, and are not designed to handle collaboration across disciplines, across institutions, or in partnership with industry experts. The Human-Centered AI Lab acts as an umbrella organization for teams of diverse experts to assemble, collectively apply for grants, and collaborate on human-centered AI research.

Founding Researchers

Jon Chun brings broad technical expertise with undergraduate, graduate, and post-graduate studies in EECS, Biomedical Engineering/Cognitive Science, and Medicine. He has published research and patents on semiconductors, computer security, gene therapy, medical informatics, and AI. Before selling his last startup to the world’s largest computer security company, he worked in FinTech, HealthTech, InsurTech, and other industries in the US, Asia, and Latin America.

Katherine Elkins has won both research and teaching awards and has published in an unusually diverse range of leading journals and presses spanning Literature, Narrative, and Philosophy. Her books include “Proust’s ‘In Search of Lost Time:’ Philosophical Perspectives” (Oxford UP) and “The Shapes of Stories: Sentiment Analysis for Narrative” (Cambridge UP). In the latter she demonstrates how to leverage the latest AI to explore the unique shape of a story.

Officers

Katherine L. Elkins

  • Co-Founder, world’s first Human-Centered AI curriculum and Colab
  • Director, Integrated Program for Humane Studies (2023-2024 sabbatical)
  • Academic, author, and researcher published on Literature, Narrative, Philosophy, AI, Regulation, and more

Raul Romero

  • Product Manager, eBay
  • Driving Generative AI for consumer fashion
  • Founder of Yakera.com, a crowdfunding and cash transfer platform across Latin America

Elizabeth Bonaudi

  • Realtor ABR, e-Merge Real Estate
  • Former Educator
  • Local Government/Community Engagement

Board of Directors

Jon Chun

  • Co-Founder, world’s first Human-Centered AI curriculum and Colab
  • Former Silicon Valley Entrepreneur and Fortune 500 Director of Development
  • Integrated Program for Humane Studies, Kenyon College

Edward Nersessian

  • The Helix Center, Founder/Director
  • Clinical Professor of Psychiatry, Weill-Cornell Medical College
  • Training and Supervising Psychoanalyst, New York Psychoanalytic Institute

Jennifer Siegel

  • Duke University
  • Bruce R. Kuniholm Distinguished Professor of History and Public Policy
  • Professor in the Sanford School of Public Policy
  • Professor of History

Research Grants

An umbrella non-profit under which interdisciplinary teams of distributed experts can secure funding.

Interdisciplinary Collaboration

A means of collaboration across geography, institutions, and academic silos, and with diverse experts in industry.

Open-Source, Education & Public Service

Collaborative AI research in the public interest, including safety, bias, explainability, ethics, and public policy, shared via open source to inform higher education and the public more broadly.