AI PUBLICATIONS
Advancing Safe and Aligned SuperIntelligence
Explore Dr. Craig A. Kaplan’s original research on AGI, SuperIntelligence, and AI safety. These works define new architectures for building transparent, scalable, and human-aligned systems designed to evolve responsibly.
AI Research Papers
White papers, conference papers, and peer-reviewed publications introducing the design principles that enable safe, human-centered SuperIntelligence.
Includes:
- Designing Safe SuperIntelligence (Springer, AGI-2025)
- Four Gifts from the Founders of AI (CogSci 2025)
- Can LLMs Pick Stocks? (White Paper, 2025)
- DeepSeek, Nvidia, AI Investing, and Our Future (White Paper, 2025)
- Safe and Profitable SuperIntelligence (White Paper, 2024)
- Foundations of Cognitive Science (Herbert A. Simon & Craig A. Kaplan, 1998)
