
OUR MISSION

Reduce p(doom):

The probability that advanced AI causes human extinction.

OUR GOAL IS SIMPLE

Lower the risk of catastrophic outcomes from AI and increase the likelihood that advanced systems improve life for everyone.
 

We design architectures that align with human values and remain safe as they scale. That means building systems that are understandable, auditable, and under meaningful human control.

FEATURED APPEARANCE

Designing Safe Superintelligence
with Dr. Craig A. Kaplan

Dr. Craig A. Kaplan on the London Futurists Podcast

OUR RESEARCH

Stay Tuned

10 designs for safe SI

Stay Tuned

AI Safety Series

Stay Tuned

Safe SI Keynote

Designing Safe Superintelligence:
How aligned systems evolve safely

Watch on YouTube

Safe Superintelligence in 3 Minutes:
Quick intro to risk-reducing SI design

Watch on YouTube

AI Safety Series:
Exploring ethical and technical safeguards for AGI
