GLOBAL AI & SUPERINTELLIGENCE RESEARCH
In addition to our own work, we review and analyze research from leading experts, non-profits, and organizations advancing AI, AGI, and SuperIntelligence. This section highlights influential studies and safety initiatives shaping the future of AI development. Stay informed by exploring current research and contributing to the global effort to ensure AI remains safe and beneficial.
Dr. Craig A. Kaplan has worked in SuperIntelligence research and system design since long before these topics entered mainstream discussion. As the owner of SuperIntelligence.com since 2006, he recognized early the urgent need for safe, human-aligned AI systems, a mission that continues to guide the work presented here.
In the News: AI and Superintelligence Around the World
The Future of AI
Anthropic CEO Dario Amodei says AI autonomy could spark a beneficial ‘intelligence explosion’ – or be the moment humans lose control.
The Guardian — Key takeaways from the 2026 International AI Safety Report
A major global AI safety report, chaired by Yoshua Bengio, warns about rapid capability growth, deepfakes, and emerging autonomy in AI agents, while noting that systems remain limited in long-term autonomous tasks.
Why are experts sounding the alarm on AI risks?
AI is advancing in rapid and unpredictable ways, but there is no joint framework to keep it in check, experts say.
