OUR MISSION:

Reduce p(doom)

The probability that advanced AI causes human extinction.

Our goal is simple:

Lower the risk of catastrophic AI outcomes and increase the likelihood that advanced systems improve life for everyone.

We design architectures that align with human values and remain safe as they scale. That means building systems that are understandable, auditable, and under meaningful human control.

Featured Appearance

[YouTube: CTO Compass]

Our Research

Designing Safe Superintelligence

How aligned systems evolve safely

Safe Superintelligence

Safe Superintelligence in 3 minutes:
An introduction to risk-reducing superintelligence design

AI Safety Series

Exploring ethical and technical safeguards for AGI
