
GLOBAL AI & SUPERINTELLIGENCE RESEARCH

In addition to our own work, we review and analyze research from leading experts, non-profits, and organizations advancing AI, AGI, and SuperIntelligence. This section highlights influential studies and safety initiatives shaping the future of AI development. Stay informed by exploring current research and contributing to the global effort to ensure AI remains safe and beneficial.


Dr. Craig A. Kaplan has worked in SuperIntelligence research and system design since long before these topics entered mainstream discussion. As the owner of SuperIntelligence.com since 2006, he recognized early the urgent need for safe, human-aligned AI systems, a mission that continues to guide the work presented here.

In the News: AI and Superintelligence Around the World

The 2,000-year-old debate that reveals AI’s biggest problem
Silicon Valley is racing to build a god — without understanding what makes a good one.


AI 'less regulated than sandwiches' and no tech firm has an AI superintelligence safety plan, study finds

Eight leading AI companies, including OpenAI, Meta, Anthropic, and DeepSeek, do not have credible plans to prevent catastrophic AI risks, a new study shows.


AI companies are failing existential safety tests. That's not slowing them down

A sweeping safety review shows leading AI companies advancing toward superhuman systems without guardrails to stop catastrophic failure.


Thousands sign petition calling for a ban on AI "superintelligence"

More than 28,000 people have now signed an online petition calling for a ban on the development of AI "superintelligence." The list includes hundreds of public figures and several prominent AI pioneers. Anthony Aguirre, one of the organizers of the petition, joins "The Daily Report" to discuss.


The Hard-Luck Case For AGI And AI Superintelligence As An Extinction-Level Event

A great deal of research is underway to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the more distant possibility of achieving artificial superintelligence (ASI).


Global Call for AI Red Lines
Over 200 leaders, including Anthropic’s CISO, Nobel laureate Geoffrey Hinton, and other prominent AI researchers and policy thinkers, have signed a new call demanding enforceable restrictions by 2026 on high-risk capabilities such as self-replication, impersonation, and autonomous weaponization. The message is clear: without shared global norms, alignment cannot scale. Guardrails aren’t optional; they’re overdue.


Bay Area researchers argue that tech industry is 'careening toward disaster'
A new book by Yudkowsky and Soares warns that current AI development paths could lead to human extinction. Others challenge that framing: Vox’s "AI will kill everyone is not an argument. It’s a worldview" explores competing narratives of doom, optimism, and systemic risk. These tensions shape which AI policies and research directions gain traction.

Amodei on AI: "There's a 25% chance that things go really, really badly"
Anthropic’s CEO Dario Amodei reiterates his “p‑doom” estimate: a 25% probability that AI development could lead to catastrophic outcomes, even extinction.

Individuals

Nick Bostrom is a professor and former director of the Future of Humanity Institute at Oxford University and the founder of the Macrostrategy Research Initiative. Bostrom believes that advances in AI may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". He views this as a major source of opportunities and existential risks. He is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002), Superintelligence: Paths, Dangers, Strategies (2014), and Deep Utopia: Life and Meaning in a Solved World (2024).

Non-Profits

The Centre for Effective Altruism promotes effective altruism, a framework and research field that encourages people to combine compassion and care with evidence and reason to find the most effective ways to help others. Its site offers a variety of forums, with topics that include AI safety.

AI pioneer Stuart Russell, along with others inspired by effective altruism, founded The Center for Human-Compatible AI (CHAI) at UC Berkeley. This research institute promotes a new AI development paradigm centered on advancing human values. CHAI’s mission is to develop the conceptual and technical frameworks needed to steer AI research toward systems that are provably beneficial to humanity. 

Foresight Institute: a non-profit advancing frontier biotech, neurotech, nanotech, and AI for the benefit of life. It was co-founded by Eric Drexler and Christine Peterson in 1986 on a vision of great futures enabled by powerful technologies.

Companies

Anthropic: Founded in 2021 by former OpenAI members, Anthropic focuses on AI safety and reliability. They have developed the Claude family of large language models, emphasizing the creation of interpretable and steerable AI systems.

Safe Superintelligence Inc.: Founded in 2024 by Ilya Sutskever, former chief scientist at OpenAI, along with co-founder Daniel Gross, Safe Superintelligence Inc. is committed to developing superintelligent AI systems with a primary focus on safety. The company has raised significant funding to advance its mission.
Google DeepMind: Acquired by Google in 2014, DeepMind aims to "solve intelligence" and use it to address global challenges. They have a dedicated safety team researching topics like robustness and alignment to ensure their AI systems are beneficial and safe.
