AI VIDEOS: AI SAFETY SERIES

An overview of AI, AGI, and Superintelligence

A 26-episode documentary series hosted by Dr. Craig A. Kaplan, focused on AI ethics, Artificial General Intelligence (AGI), and Superintelligence (ASI). Each episode introduces a key question, from "What is AI?" to "Can we regulate or align superintelligence?", offering clear explanations of the issues that will shape the future of technology.

The series highlights perspectives from leading voices in AI, including experts connected to Google, OpenAI, Anthropic, Nvidia, and other pioneering organizations. By combining expert insights with accessible storytelling, it helps viewers understand why AI safety is not only a technical challenge but also a global responsibility.

At Superintelligence.com, this series complements ongoing research and patents on safe AGI by providing a narrative introduction to alignment, regulation, and responsible development. Whether you’re new to the field or already engaged, these episodes offer an accessible starting point for exploring one of today’s most urgent questions: How can we make AI safe?

1. What is AI? (03:07)
2. How dangerous is AI? (02:25)
3. Can we regulate AI? (03:07)
4. Can we program AI to be safe? (03:18)
5. Can we lock AI up? (03:27)
6. What is the alignment problem? (03:18)
7. How do we solve the AI alignment problem? (03:17)
8. How do we make AI safer? (02:16)
9. Can we increase the odds of human survival with AI? (02:38)
10. How to avoid extinction by AI (03:31)
11. Should we slow down AI development? (02:25)
12. What is the fastest and safest path to AGI? (02:55)
13. How do we build safe AGI? (03:28)
14. What is Collective Intelligence in relation to safe AGI? (03:13)
15. Does Collective Intelligence work? (03:32)
16. How do we build an AGI network? (03:33)
17. What is a human collective intelligence network? (03:09)
18. AI: What is a problem-solving framework? (03:57)
19. What are customized AI agents? (03:04)
20. How do Advanced AI Agents learn? (03:53)
21. Can we train AI to be saintly? (02:59)
22. How does the AGI network learn? (03:49)
23. What makes the AGI network safe? (03:24)
24. More detail on how to build safe AGI (03:11)
25. Summary of AI safety (03:28)
26. Now is the time to make AI safe! (03:42)