Dr. Craig A. Kaplan has distilled three decades of research on intelligent systems, spanning AI, AGI, SuperIntelligence, and AI safety, into ten comprehensive white papers.
The goal is to provide AI researchers, developers, and anyone concerned about AI safety with critical insights and solutions. AI affects everyone, and the window for designing safe and aligned systems is rapidly closing. SuperIntelligence.com is therefore making these designs available to those committed to their responsible and ethical use.
In addition to our own work, we track research from leading experts, non-profits, and organizations working in AI, AGI, and SuperIntelligence. This section highlights groundbreaking studies and safety initiatives that are shaping the future of AI development.
Stay informed by exploring the latest research and contributing to the global effort toward safe and beneficial AI.