Key Players Advancing Safe AGI & Superintelligence
Others Dedicated to Advancing Safe Superintelligence
Note: As the owner of SuperIntelligence.com since 2006, Dr. Craig A. Kaplan was prescient in anticipating the most significant advances in AI well before such figures as Nick Bostrom, who published Superintelligence in 2014, or Ilya Sutskever, who founded Safe Superintelligence Inc. in 2024.
Individuals
Nick Bostrom is a professor and former director of the Future of Humanity Institute at Oxford University and the founder of the Macrostrategy Research Initiative. Bostrom believes that advances in AI may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." He views this prospect as a major source of both opportunity and existential risk. He is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002), Superintelligence: Paths, Dangers, Strategies (2014), and Deep Utopia: Life and Meaning in a Solved World (2024).
Non-Profits
The Centre for Effective Altruism promotes effective altruism, a framework and research field that encourages people to combine compassion and care with evidence and reason to find the most effective ways to help others. The Centre's site hosts a variety of forums, with topics including AI safety and superintelligence.
AI pioneer Stuart Russell, along with others inspired by effective altruism, founded the Center for Human-Compatible AI (CHAI) at UC Berkeley. This research institute promotes a new AI development paradigm centered on advancing human values. CHAI's mission is to develop the conceptual and technical frameworks needed to steer AI research toward systems that are provably beneficial to humanity.
Companies
Anthropic: Founded in 2021 by former OpenAI researchers, Anthropic focuses on AI safety and reliability. The company develops the Claude family of large language models, emphasizing interpretable and steerable AI systems.
Safe Superintelligence Inc.: Founded in 2024 by Ilya Sutskever, former chief scientist at OpenAI, along with co-founders Daniel Gross and Daniel Levy, Safe Superintelligence Inc. is dedicated to developing superintelligent AI systems with safety as its primary focus. The company has raised more than $1 billion to advance its mission.
Google DeepMind: Acquired by Google in 2014, DeepMind aims to "solve intelligence" and use it to address global challenges. A dedicated safety team researches topics such as robustness and alignment to ensure the company's AI systems are beneficial and safe.
In the News: Recent Developments in AI and Superintelligence