GLOBAL AI & SUPERINTELLIGENCE RESEARCH

In addition to our own work, we review and analyze research from leading experts, non-profits, and organizations advancing AI, AGI, and SuperIntelligence. This section highlights influential studies and safety initiatives shaping the future of AI development. Stay informed by exploring current research and contributing to the global effort to ensure AI remains safe and beneficial.

Dr. Craig A. Kaplan has worked in SuperIntelligence research and system design since long before these topics entered mainstream discussion. As the owner of SuperIntelligence.com since 2006, he recognized early on the urgent need for safe, human-aligned AI systems, a mission that continues to guide the work presented here.

In the News: AI and Superintelligence Around the World

The Future of AI
Anthropic CEO Dario Amodei says AI autonomy could spark a beneficial ‘intelligence explosion’ – or mark the moment humans lose control.

The Guardian — Key takeaways from the 2026 International AI Safety Report

A major global AI safety report, chaired by Yoshua Bengio, warns about rapid capability growth, deepfakes, and emerging autonomy in AI agents, while noting that systems remain limited in long-term autonomous tasks.

Why are experts sounding the alarm on AI risks?

AI is advancing in rapid and unpredictable ways, but there is no shared framework to keep it in check, experts say.

Individuals

Yoshua Bengio
Currently the world’s most cited AI researcher, Bengio has moved away from pure capabilities research to become a global leader in AI governance. In early 2026, he led the second International AI Safety Report, which warned that reasoning models are becoming more autonomous and harder to monitor.

Geoffrey Hinton
Since winning the 2024 Nobel Prize in Physics, Hinton has used his platform almost exclusively to warn about the existential risks of AI. He is frequently the most prominent voice calling for a slowdown in AGI development.

Demis Hassabis
Following his 2024 Nobel Prize in Chemistry (for AlphaFold), Hassabis remains the primary architect of Google's AGI strategy. In 2026, he is championing "automated scientific laboratories," where AI doesn't just chat but actually conducts physical experiments.

Non-Profits

Stuart Russell leads the Center for Human-Compatible AI (CHAI) at UC Berkeley, an institute dedicated to developing systems that are provably beneficial to humanity. Grounded in the principles of Effective Altruism, CHAI focuses on technical frameworks that align AI behavior with human values and prevent the risks of misaligned goal-seeking. Russell remains a leading voice in the "Singularity" debates, actively shaping AGI regulation and policy in the US and EU.

The International Association for Safe & Ethical AI (IASEAI) is an independent, nonprofit organization founded to address the risks and opportunities posed by rapid advances in AI.

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. CAIS believes that artificial intelligence has the potential to profoundly benefit the world, provided it is developed and used safely.

The Centre for Effective Altruism remains the primary steward of the Effective Altruism movement. Although the movement has navigated significant reputational challenges in recent years (largely following the FTX collapse in late 2022), by 2026 it has emerged as a more professionalized and specialized community, with particular depth in AI safety.

Companies

Anthropic: Founded in 2021 by former OpenAI employees, Anthropic focuses on AI safety and reliability. They have developed the Claude family of large language models, emphasizing the creation of interpretable and steerable AI systems.

Google DeepMind: Acquired by Google in 2014, DeepMind aims to "solve intelligence" and use it to address global challenges. They have a dedicated safety team researching topics like robustness and alignment to ensure their AI systems are beneficial and safe.
