Here is a sampling of the growing number of initiatives, grants, and interdisciplinary AI research opportunities that promote collaboration across academic, government, commercial, and non-profit organizations. Some of these focus on interdisciplinary research across disciplines within academia while others concentrate on collaborations between organizations and even individual researchers.

The Human Centered AI Lab shares many of these goals and participates in some of these collaborations. However, we occupy a unique and complementary space in this expanding ecosystem as an entirely virtual, minimal, and all-volunteer organization. We bring almost a decade of experience from creating the world’s first human-centered AI curriculum, launching a successful interdisciplinary research Colab, and publishing some of the first AI Digital Humanities research. We have an unusually diverse humanistic perspective to contribute to questions of AI safety.

  • We bridge the C.P. Snow cultural divide between STEM and non-STEM using computational AI Digital Humanities without relegating either to a junior partner or ceremonial role. This is a delicate balance to maintain against a strong natural bias towards STEM. We are unusual in that most of our AI research starts with broad, top-down research questions driven by humanists rather than narrow, bottom-up research driven by technologists.

  • As a minimal, virtual organization we are extremely agile and can react to fast-moving opportunities, new breakthroughs, and pivots in AI research that can take larger organizations months or even years. For example, one recent AI safety speed grant had an application period of less than two months, while college centers can take years to establish. This mismatch between the slow institutional pace of academic organizations and the lightning pace of progress in AI research leaves many potentially fruitful collaborations unrealized under traditional structures.

  • Unlike many academic institutions, we are not subject to restrictions on collaboration. For example, many grants require the Principal Investigator (PI) to be an institutional faculty member or an affiliated scholar/professor of practice (statuses that come with other restrictions like administrative approval and/or fees). We also carry no Facilities & Administrative (F&A) overhead, which can consume over 50% of a grant and is disallowed by some non-profit/industry AI grants.

  • Larger bureaucracies have more organizational, political, and ideological divisions, which become harder to overcome with scale. Higher education also faces additional challenges like unsustainable tuition growth, the demographic decline of college-age students, departmental competition for zero-sum or shrinking budgets, the crisis and decline in the humanities, etc. All of these make consensus building, risk-taking, and collaboration exceptionally difficult.

  • Since our research and curriculum are built around humanistic ‘big questions’ about technology rather than technology itself, we draw perspectives and voices from virtually every discipline. The student researchers we supervise are 90% non-STEM, 61% women, 13% Black, and 11% Latine and other underrepresented groups. As of January 2024, this mentored research has been downloaded over 40,000 times in over 160 countries worldwide, including by leading institutions like Stanford, Berkeley, MIT, CMU, Princeton, Cambridge, Oxford, and the Chinese Academy of Social Sciences.

The Human Centered AI Lab was founded in January 2024. In that month alone we lost four exceptional AI safety research opportunities as individual researchers, as mentors of original student research, and as representatives of our institution. Networking with peers at both big R1 universities and small liberal arts colleges revealed that many of these structural and cultural obstacles exist, to varying degrees, throughout academia.

Although a growing number of colleges and universities are beginning to support interdisciplinary research, collaboration with industry, professors of practice, and human-centered AI research, they remain the minority. Academia typically preserves a very traditional culture organized around siloed disciplines. The Human Centered AI Lab is a resource that enables domain experts and AI researchers to overcome these barriers where they exist and to collaborate more freely, flexibly, and quickly in interdisciplinary AI safety research.