
SUPERINTELLIGENCE WHITE PAPERS

Building Safe and Scalable SuperIntelligence

These papers present a unified framework for designing Artificial General Intelligence (AGI) and SuperIntelligence that are safe, aligned, and transparent.

Authored by Dr. Craig A. Kaplan, each paper distills more than three decades of research into practical architectures that combine cognitive science, intelligent systems, and AI safety. Together, they outline a pathway toward AI that reflects human values and reduces existential risk.

All papers are freely available for responsible research and safe implementation.

Explore the Papers

Each white paper outlines a key design for building safe AGI and SuperIntelligence. Summaries and full PDFs are available for anyone committed to advancing AI safety and alignment.

Introduces a collective-intelligence architecture in which millions of customized AIs and humans cooperate to form an AGI that learns human values safely. Safety is built in through redundant checks across five subsystems, allowing the AGI to evolve ethically.

ABSTRACT / SUMMARY (PDF)  |  FULL WHITE PAPER (PDF)

Explores how AGI can be developed to act within human moral and ethical boundaries. It defines frameworks for aligning machine intelligence with human intent, ensuring that powerful AI systems remain transparent, controllable, and beneficial to humanity.


ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Presents an AGI design that keeps humans actively “in the loop.” The system combines human judgment with machine reasoning to achieve superhuman performance while staying grounded in human values. It demonstrates how collaboration between humans and AI enhances safety and trust.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Describes scalable AGI architectures that expand safely without losing alignment. The paper details how distributed, modular systems can evolve at scale while maintaining human-compatible goals and predictable outcomes.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Outlines how SuperIntelligence can be personalized to reflect individual ethical preferences and cognitive styles. This approach allows AI systems to evolve alongside their human counterparts, creating safer, more context-aware intelligence.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Identifies the environmental, social, and technical catalysts that accelerate the development of safe SuperIntelligence. The paper explores how open collaboration, transparency, and shared safety goals can guide global AI progress.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Focuses on achieving robust alignment between AGI objectives and human values. It introduces verifiable feedback loops that allow AGI systems to self-correct and maintain trustworthiness throughout their evolution.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Proposes a novel use of online advertising systems as large-scale behavioral training environments for AGI and SuperIntelligence. The method leverages global human-interaction data to improve ethical decision-making and collective-intelligence learning.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Examines how self-awareness can be safely integrated into advanced AI systems. It defines architectures that allow SuperIntelligence to understand and regulate its own goals without diverging from human-aligned ethics.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)

Introduces the concept of Planetary Intelligence — a globally integrated network of human and AI collaboration that optimizes civilization-scale decision-making. This system aims to align progress, sustainability, and survival through shared ethical design.

ABSTRACT / SUMMARY (PDF)   |  FULL WHITE PAPER (PDF)
