
OUR MISSION

Reduce p(doom)
Increase p(zoom)

"i" information icon that pops up text

p(doom) is the probability that advanced AI makes humanity extinct; p(zoom) is the probability that a safe SuperIntelligence is the first SuperIntelligence developed.

SuperIntelligence could save us - or end us. Design is everything.

Craig Kaplan and London Futurists Podcast

A discussion on designing Safe Superintelligence, featuring Dr. Craig A. Kaplan.

OUR GOAL IS SIMPLE

Lower the chances AI wipes us out. Raise the chances it makes life better for everyone.

We design systems that stay aligned with human values and actually make things safer as they get smarter.

Expected Value of Lives Saved

Our best estimate of the number of lives that may be saved as a result of visits to this site so far.

...

OUR RESEARCH

Stay Tuned

10 Designs for Safe SI

Stay Tuned

AI Safety Series

Stay Tuned

Safe SI Keynote

Designing Safe Superintelligence:
How aligned systems evolve safely


Safe Superintelligence in 3 Minutes:
Quick intro to risk-reducing SI design


AI Safety Series:
Exploring ethical and technical safeguards for AGI
