OUR MISSION
Reduce p(doom):
the probability that advanced AI causes human extinction.
OUR GOAL IS SIMPLE
Lower the risk of catastrophic outcomes from AI and increase the likelihood that advanced systems improve life for everyone.
We design architectures that align with human values and remain safe as they scale. That means building systems that are understandable, auditable, and under meaningful human control.
Designing Safe Superintelligence:
How aligned systems evolve safely
Safe Superintelligence in 3 Minutes:
A quick intro to risk-reducing SI design
AI Safety Series:
Exploring ethical and technical safeguards for AGI