GLOBAL AI & SUPERINTELLIGENCE RESEARCH
In addition to our own work, we review and analyze research from leading experts, non-profits, and organizations advancing AI, AGI, and SuperIntelligence. This section highlights influential studies and safety initiatives shaping the future of AI development. Stay informed by exploring current research and contributing to the global effort to ensure AI remains safe and beneficial.
Dr. Craig A. Kaplan has worked in SuperIntelligence research and system design since long before these topics entered mainstream discussion. As the owner of SuperIntelligence.com since 2006, he recognized early on the urgent need for safe, human-aligned AI systems, a mission that continues to guide the work presented here.
In the News: AI and Superintelligence Around the World
Thousands sign petition calling for ban on AI "superintelligence"
More than 28,000 people have now signed an online petition calling for a ban on the development of AI "superintelligence." The list includes hundreds of public figures and several prominent AI pioneers. Anthony Aguirre, one of the organizers of the petition, joins "The Daily Report" to discuss.
The Hard-Luck Case For AGI And AI Superintelligence As An Extinction-Level Event
There is a great deal of research underway to further advance AI. The general goal is to reach artificial general intelligence (AGI), or perhaps even the more distant possibility of artificial superintelligence (ASI).
Global Call for AI Red Lines
Over 200 leaders, including Anthropic’s CISO, Nobel laureate Geoffrey Hinton, and other leading AI researchers and policy thinkers, have signed a new call that demands enforceable restrictions by 2026 on high-risk capabilities like self-replication, impersonation, and autonomous weaponization. The message is clear: without shared global norms, alignment cannot scale. Guardrails aren’t optional; they’re overdue!
Bay Area researchers argue that the tech industry is 'careening toward disaster'
A new book by Yudkowsky and Soares warns that current AI development paths could lead to human extinction. Others challenge that framing: Vox's "'AI will kill everyone' is not an argument. It's a worldview" explores competing narratives of doom, optimism, and systemic risk. These tensions shape which AI policies and research directions gain traction.
Amodei on AI: "There's a 25% chance that things go really, really badly"
Anthropic CEO Dario Amodei reiterates his "p-doom" estimate: a 25% probability that AI development could lead to catastrophic outcomes, up to and including extinction.
Zuckerberg states that Meta will invest massively, even at the risk of wasted spending, to avoid falling behind in the race to superintelligence.
