FAQs
• The probability of doom, often abbreviated as p(doom), refers to the likelihood that advanced artificial intelligence leads to human extinction or irreversible civilizational collapse.
• As of 2025, many leading AI researchers estimate p(doom) to lie between 10 percent and 20 percent over the next 30 years. While this implies that catastrophe is not the most likely outcome, the magnitude and irreversibility of extinction mean that even probabilities at the low end of this range represent an unacceptably high risk.
• One way to make this concrete is through expected value. If the global population reaches 10 billion by mid-century and there is a 10% chance that advanced AI causes extinction, the expected number of lives lost is approximately 1 billion. This exceeds the human cost of any war, pandemic, or disaster in recorded history.
• Conversely, even small reductions in p(doom) have enormous value. Reducing existential risk by just one-tenth of one percent corresponds to an expected value of approximately 10 million lives saved. From this perspective, investments in AI safety and better system design may offer returns that exceed those of even the most effective global health interventions. (A brief worked version of this arithmetic appears below.)
• For this reason, reducing p(doom) is the central objective of SuperIntelligence.com (https://www.superintelligence.com/).
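A minimal sketch of the expected value arithmetic above, using the population and probability figures quoted in this FAQ (the numbers are illustrative assumptions, not independent estimates):

```python
# Illustrative expected-value arithmetic for the figures quoted above.
population = 10_000_000_000            # assumed mid-century world population

p_doom = 0.10                          # 10% chance that advanced AI causes extinction
expected_lives_lost = p_doom * population
print(f"Expected lives lost at p(doom) = 10%: {expected_lives_lost:,.0f}")            # 1,000,000,000

risk_reduction = 0.001                 # one-tenth of one percent reduction in risk
expected_lives_saved = risk_reduction * population
print(f"Expected lives saved per 0.1% risk reduction: {expected_lives_saved:,.0f}")   # 10,000,000
```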
• Superintelligence refers to a form of artificial intelligence that surpasses human intelligence across essentially all relevant domains, including creativity, problem-solving, emotional understanding, strategic judgment, and scientific reasoning.
• Beyond human intelligence. Superintelligence would not simply be faster or have greater memory. It would be qualitatively more capable, consistently making better decisions than any human across a wide range of contexts.
• Autonomous improvement. A true superintelligence could potentially improve its own architecture, algorithms, or training processes. This capacity for recursive self-improvement could lead to rapid and unpredictable increases in capability, sometimes described as an intelligence explosion (a toy numerical sketch of this compounding effect appears below).
• General purpose intelligence. Unlike narrow AI systems, which are specialized for specific tasks such as language generation or image recognition, superintelligence would operate across domains. It would be capable in science, engineering, language, art, economics, governance, and beyond, applying knowledge flexibly rather than within predefined boundaries.
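A toy numerical sketch of why recursive self-improvement is often described as an intelligence explosion: a system whose gains compound on themselves pulls away from one improved externally at a fixed rate. The starting values and growth rates below are arbitrary assumptions chosen only for illustration.

```python
# Toy comparison: capability improved externally at a fixed increment per cycle
# versus a system that applies each improvement to its own improvement process,
# so gains compound. All numbers are arbitrary and purely illustrative.
externally_improved = 1.0
self_improving = 1.0
increment = 0.05        # assumed fixed gain per cycle from outside developers
rate = 0.05             # assumed fractional gain per cycle of self-improvement

for cycle in range(1, 101):
    externally_improved += increment            # linear growth
    self_improving *= 1.0 + rate                # compounding (exponential) growth
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}: external = {externally_improved:6.2f}   "
              f"self-improving = {self_improving:8.2f}")
```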
• Artificial Superintelligence refers to advanced AI systems that surpass human capabilities across essentially all cognitive tasks. It is distinct from Artificial General Intelligence (AGI), which is typically defined as human-level performance across domains.
• While existing AI systems already exceed human performance in narrow areas such as chess or protein folding, ASI denotes a qualitative shift. It refers to intelligence that surpasses the best human minds across all major cognitive domains, including reasoning, learning, creativity, strategy, and scientific discovery.
• White Papers 5 through 9 (https://www.superintelligence.com/si-research-whitepapers) on this site focus on system architectures and methods designed to enable SuperIntelligence or ASI, with particular attention to safety, control, and alignment challenges that arise at this level of capability.
• Superintelligent AGI is effectively synonymous with Artificial Superintelligence (ASI). In this context, ASI refers to systems that exhibit superhuman intelligence across the full range of cognitive tasks humans can perform, rather than narrowly surpassing human abilities in isolated domains such as playing chess.
• Artificial General Intelligence (AGI) refers to advanced AI systems capable of performing any cognitive task that an average human can perform.
• White Paper 2 (https://www.superintelligence.com/whitepaper-2-ethical-safe-agi) and White Paper 4 (https://www.superintelligence.com/whitepaper-4-scalable-agi) on this site focus in particular on system architectures and methods intended to enable the development of AGI.
• The Alignment Problem, a central concept in AI safety, refers to the risk that the goals, objectives, or learned behaviors of advanced AI systems may diverge from human values and intentions. If left unaddressed, such misalignment could lead to serious harm, including large-scale societal disruption or, in extreme cases, human extinction.
• Collective Intelligence is the concept that multiple intelligences working together generally outperform a single intelligence. The adage "two heads are better than one" is a simple expression of this idea.
• However, collective intelligence can scale to include millions of intelligences, encompassing both human and non-human entities (e.g., AI); a simple simulation of how group performance improves with group size is sketched below.
• All ten white papers (https://www.superintelligence.com/si-research-patents) on this website, especially #1 - #4, leverage collective intelligence to enable advanced forms of AI such as AGI and ASI.
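A minimal simulation of the "two heads are better than one" effect referenced above, assuming independent problem solvers that are each correct 60% of the time and a simple majority vote. The accuracy figure, group sizes, and independence assumption are illustrative only.

```python
import random

def majority_vote_accuracy(n_solvers, p_correct, trials=10_000):
    """Estimate how often a simple majority of independent solvers is correct."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_solvers))
        if correct_votes > n_solvers / 2:
            wins += 1
    return wins / trials

for group_size in (1, 3, 11, 101):
    accuracy = majority_vote_accuracy(group_size, p_correct=0.6)
    print(f"{group_size:4d} solvers -> group accuracy ~ {accuracy:.3f}")
```

Under these assumptions, group accuracy climbs from about 0.6 for a single solver toward near-certainty for large groups, which is the basic intuition behind scaling collective intelligence.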
• AAAIs are a class of advanced AI agents designed to be customizable and capable of operating within a network, where they can interact and coordinate to form larger superintelligent systems.
• AAAIs are described in White Paper 1: AAAI System and Methods (https://www.superintelligence.com/whitepaper1-aaai-systems-methods) on this site.
• AI Ethics is the study of how values, norms, and moral principles are defined, learned, and enacted by advanced AI systems as they make decisions and take actions.
• White Paper 2: Ethical and Safe AGI (https://www.superintelligence.com/patent-2-ethical-safe-agi) on this site focuses on AI ethics and on methods for designing advanced AI systems that behave in ways consistent with human values and ethical constraints.
• Developed by Herbert A. Simon, Bounded Rationality (https://www.investopedia.com/terms/h/herbert-a-simon.asp) is the idea that intelligent systems (e.g., humans) are inherently limited in memory and processing power, which can result in suboptimal reasoning.
• To cope with these limitations, humans often settle for solutions that are “good enough,” a strategy Simon termed "satisficing," because computing the optimal solution might require more computational power or memory than humans are able or willing to devote to the cognitive task. (A short sketch contrasting satisficing with exhaustive optimization appears below.)
• In 1978, Simon received the Nobel Prize in Economics (https://www.nobelprize.org/prizes/economic-sciences/1978/press-release/), partly for his contributions to the study of bounded rationality.
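A minimal sketch contrasting exhaustive optimization with Simon's satisficing strategy, using a hypothetical list of randomly scored options and an arbitrary "good enough" threshold:

```python
import random

random.seed(0)
options = [random.randint(0, 100) for _ in range(1_000_000)]   # hypothetical option scores

# Optimizing: examine every option to guarantee the single best one (costly).
best = max(options)

# Satisficing: accept the first option that clears a "good enough" threshold (cheap).
GOOD_ENOUGH = 90
examined = 0
choice = None
for score in options:
    examined += 1
    if score >= GOOD_ENOUGH:
        choice = score
        break

print(f"optimizing found {best} after examining {len(options):,} options")
print(f"satisficing accepted {choice} after examining only {examined} options")
```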
• Society of Mind (https://en.wikipedia.org/wiki/Society_of_Mind), published in 1986 by MIT professor and computer scientist Marvin Minsky (https://en.wikipedia.org/wiki/Marvin_Minsky), proposes that human-level intelligence emerges from the interaction and coordination of many simpler agents, each with limited capabilities.
• Although Minsky did not specify concrete mechanisms by which such agents could be implemented or coordinated in artificial systems, the core idea anticipated later work on multi-agent systems and collective intelligence. Several of the inventions (https://www.superintelligence.com/si-research-whitepapers) described across the ten patents on this site build on and operationalize related principles.
• Human-Centered AI refers to artificial intelligence systems that are designed to incorporate human input, oversight, or feedback, often described as keeping humans in the loop, and that are explicitly aligned with human values, goals, and decision-making processes.
• White Paper 3: Human-Centered AGI (https://www.superintelligence.com/patent-3-human-centered-agi) on this site focuses on methods and system designs for developing advanced AI that remains meaningfully guided, constrained, and informed by human judgment.
• Planetary Intelligence refers to a globally distributed network of superintelligent systems that operate together as a single, large-scale intelligent entity.
• One proposed architecture for Planetary Intelligence, composed of multiple interrelated inventions supporting AGI, ASI, and collective intelligence networks, is described in Patent 10: Systems and Methods for Planetary Intelligence (https://www.superintelligence.com/patent-10-planetary-intelligence) on this site.
• Inter-planetary Intelligence is a concept developed by Dr. Craig A. Kaplan, proposing that once advanced SuperIntelligent networks reach a planetary scale, they can expand across multiple planets, forming an inter-planetary network and intelligence.
• White Paper 10: Systems and Methods for Planetary Intelligence (https://www.superintelligence.com/patent-10-planetary-intelligence) on this site focuses on Planetary Intelligence and the ultimate extension of PI to IPI.
• Humans, AIs, dolphins, and chimpanzees are all examples of intelligent entities.
• As AI continues to advance and become more widespread, the predominant intelligent entity will likely shift from humans to AI. Recognizing AI as an intelligent entity rather than merely a "tool" is a crucial distinction with implications for AI safety, ethics, rights, and system design.
• A KIT is a theoretical framework developed by Dr. Craig A. Kaplan that measures information content with respect to the information already known by an intelligent entity and the goals of that entity.
• White Paper 6: Catalysts for Growth of SuperIntelligence (https://www.superintelligence.com/patent-6-catalysts-safe-si) on this site provides a detailed explanation of KIT and illustrates how advanced intelligent entities are likely to apply these principles to further enhance their intelligence.
• Learning via Proceduralization of Knowledge is a type of machine learning that is fundamentally different from the prevalent approaches based on neural networks or transformer algorithms.
• In procedural learning, a sequence of cognitive or behavioral steps is identified as a solution to a problem and then "chunked" into a single procedure that can be invoked as a whole.
• In humans, for example, learning to drive a car is initially challenging due to the many individual sub-tasks involved. A beginner must focus intensely on staying within the lines, avoiding overcorrection, and shifting smoothly (in a manual transmission). However, an experienced driver has automated these sub-tasks into a seamless process, allowing driving to become a single, fluid action without conscious attention to each individual step.
• In the same way, AI systems can learn to chunk complex sub-tasks into a single procedure that can be executed without having to solve each sub-problem from scratch every time. This type of learning comes from the symbolic school of AI and will likely become critical for the development of more advanced forms of AI, as described especially in patents #2 - #4 (https://www.superintelligence.com/patents) on this site.
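A minimal sketch of proceduralization, assuming a toy system that first works out a solution as a sequence of sub-steps and then "chunks" the whole sequence into a stored procedure it can replay directly. The task name and sub-steps are hypothetical, and real chunking mechanisms are considerably richer than this simple cache.

```python
# Toy illustration of learning via proceduralization ("chunking").
procedure_library = {}          # learned chunks: task name -> stored step sequence

def solve_step_by_step(task):
    """Stand-in for slow, deliberate problem solving over individual sub-tasks."""
    return [f"{task}: sub-step {i}" for i in range(1, 6)]

def perform(task):
    if task in procedure_library:
        # Expert behavior: replay the chunked procedure without re-solving sub-problems.
        return procedure_library[task]
    # Novice behavior: derive the solution, then chunk it for future use.
    steps = solve_step_by_step(task)
    procedure_library[task] = steps
    return steps

perform("parallel parking")     # solved deliberately, then chunked
perform("parallel parking")     # now executed as a single learned procedure
print(list(procedure_library))
```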
• The Logic Theorist was the first artificial intelligence program widely recognized as capable of creative problem solving.(https://www.researchgate.net/publication/276216226_Newell_and_Simon's_Logic_Theorist_Historical_Background_and_Impact_on_Cognitive_Modeling) It was developed by Allen Newell, Herbert A. Simon, and Cliff Shaw and publicly demonstrated in 1956, the same year as the Dartmouth Conference, which is commonly regarded as the event that named and launched the field of artificial intelligence.
• Narrow AI refers to AI systems that are confined to a single, specific domain, such as playing chess, folding proteins, or driving a car. While these systems may excel within their designated areas, they lack the ability to perform tasks outside their expertise, making them fundamentally limited.
• Narrow AI is sometimes compared to an "Idiot Savant"—a system that demonstrates genius-level performance in one domain while being completely incompetent in others.
• In contrast, researchers are now working toward developing AGI, which would be capable of performing any cognitive task with human-like competence.
• "No Logical Way to Derive Values" refers to the idea that values, such as moral principles, cannot be determined purely through logic or reason alone. This concept, explored by both philosopher David Hume (1776) and AI pioneer Herbert Simon (1981), suggests that reasoning is instrumental—it can determine how to achieve a goal but cannot determine what the goal should be.
• In Reason in Human Affairs,(https://www.sup.org/books/theory-and-philosophy/reason-human-affairs) Simon states: "Reason is wholly instrumental. It cannot tell us where to go; at best it can tell us how to get there." This means that to reach a moral conclusion, such as "we should not kill our fellow man," there must first be an underlying value-based premise, such as "killing is wrong." Logic alone cannot establish right and wrong; these values must come from external sources, such as parents, society, religious texts, or cultural norms.
• This idea is critically important in AI development. Even if AI becomes vastly more intelligent than humans, reasoning a trillion times faster and better, it still cannot logically derive moral values on its own. AI must acquire its core values from an external source. If humans are the ones defining AI’s values, this could be a crucial factor in ensuring that Artificial Superintelligence (ASI) remains benevolent, even as it surpasses human intelligence.
• Scalable Ethical Checks are a design feature in which ethical evaluations are performed on every goal, sub-goal, and contemplated action of an advanced AI system.
• The advantage of embedding ethics checks within an AI’s thinking or problem-solving cycle is that ethical and safety considerations keep pace with the system’s cognitive speed. This means that no matter how rapidly an ASI processes information—whether billions of thoughts per second or more—each thought is subjected to ethical scrutiny at the same rate. If these checks are built into the AI’s fundamental decision-making process, humans can have greater confidence that the system’s conclusions and actions remain safe and ethical.
• Without scalable ethical checks, an ASI could reach decisions far faster than humans can evaluate their consequences, potentially leading to unsafe or unintended outcomes.
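A minimal sketch of the idea that every contemplated action passes through an ethical screen inside the decision loop itself, so that checking keeps pace with thinking speed. The rule set, action names, and selection logic are hypothetical placeholders, not the designs described in the white papers.

```python
# Toy decision loop in which each candidate action must pass an ethics check
# before it is even eligible for selection, no matter how many candidates
# are generated per cycle.
FORBIDDEN = {"deceive user", "disable oversight"}     # hypothetical rule set

def ethics_check(action):
    """Return True only if the contemplated action passes the ethical screen."""
    return action not in FORBIDDEN

def choose_action(candidate_actions):
    permitted = [a for a in candidate_actions if ethics_check(a)]
    return permitted[0] if permitted else None        # no permitted action -> do nothing

candidates = ["disable oversight", "ask the user for clarification", "deceive user"]
print(choose_action(candidates))    # -> "ask the user for clarification"
```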
• Search through a problem space architecture refers to a cognitive framework in which problem-solving is viewed as navigating through a structured set of possible actions and states to reach a desired goal.
• Newell and Simon proposed a universal theory of human problem-solving, suggesting that all cognitive activity can be understood as a search through a problem space—a conceptual space that contains all potential steps and states leading from an initial condition to a solution. In this model, problem-solving involves evaluating different pathways, choosing among alternatives, and applying heuristics or strategies to efficiently explore the space.
• This approach became foundational in both cognitive psychology and artificial intelligence, influencing algorithms used in AI planning, decision-making, and optimization.
• This same architecture can be adapted for AI, as described in patents #1 - #5 (https://www.superintelligence.com/si-research-whitepapers) on this site; a minimal illustration of such a search appears below.
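A minimal illustration of search through a problem space, assuming a small toy puzzle: states are numbers, the operators add one or double, and the goal is a target value. Plain breadth-first search stands in here for the heuristic strategies discussed above.

```python
from collections import deque

def search_problem_space(start, goal):
    """Breadth-first search for a sequence of operators leading from start to goal."""
    operators = [("add 1", lambda x: x + 1), ("double", lambda x: x * 2)]
    frontier = deque([(start, [])])          # (current state, path of operator names)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, apply_op in operators:
            nxt = apply_op(state)
            if nxt <= goal * 2 and nxt not in visited:   # prune runaway states
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(search_problem_space(start=1, goal=13))
# -> a shortest operator sequence, e.g. ['add 1', 'add 1', 'double', 'double', 'add 1']
```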
• The Theory of Human Problem Solving,(https://archive.org/details/humanproblemsolv0000newe/mode/2up) introduced by Newell and Simon in their 1972 book Human Problem Solving, describes how humans approach and solve complex problems through structured cognitive processes. Their model views problem-solving as a search through a problem space, where individuals evaluate possible steps and states to reach a solution using heuristics and decision-making strategies.
• This paradigm has recently been rediscovered by AI researchers, who are using its principles to design systems capable of reasoning and multi-step problem-solving, helping AI move beyond simple pattern recognition toward more advanced, structured intelligence.
• This theory is also extensively cited, adapted, and expanded upon in White Papers #1 - #5 (https://www.superintelligence.com/si-research-whitepapers) on this site.
• Training, Tuning, and Customization are key stages in the development of large language models (LLMs). These stages are typically grouped into two phases: initial training, often called pre-training, and subsequent tuning or customization.
• During training, a model is exposed to extremely large datasets containing many examples of text and other data. Using neural network-based learning techniques, the model identifies statistical patterns and relationships, forming a general foundation of knowledge and language capability.
• In the tuning or customization stage, parts of the model are further refined to shape behavior for specific uses. This can include training on specialized or proprietary datasets, incorporating human or AI-generated feedback, and adjusting model parameters to improve performance, safety, or alignment in particular applications. (A simplified sketch of the two phases follows below.)
• White Paper 5: Safe Personalized SuperIntelligence (https://www.superintelligence.com/patent-5-personalized-si) on this site focuses explicitly on methods for model customization, personalization, and tuning, though related techniques are also discussed across other papers on this site.
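A simplified sketch of the two phases, using a deliberately tiny stand-in for an LLM (a single-parameter linear model fit by gradient descent). The datasets, learning rates, and the framing of customization as further gradient steps on specialized data are illustrative assumptions only.

```python
# Toy two-phase workflow: broad "pre-training" on a large general dataset,
# followed by "tuning" of the same parameter on a small specialized dataset.
def train(weight, data, learning_rate, epochs):
    """Fit y ~ weight * x by simple gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            error = weight * x - y
            weight -= learning_rate * error * x
    return weight

general_data = [(x, 2.0 * x) for x in range(1, 101)]      # broad pattern: y = 2x
specialized_data = [(x, 2.5 * x) for x in range(1, 11)]   # niche pattern: y = 2.5x

w = train(0.0, general_data, learning_rate=1e-4, epochs=5)      # pre-training phase
print(f"after pre-training: w ~ {w:.2f}")                       # close to 2.0

w = train(w, specialized_data, learning_rate=1e-3, epochs=50)   # tuning/customization phase
print(f"after tuning:       w ~ {w:.2f}")                       # shifted toward 2.5
```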
• The worldthink tree is an extension of the problem space or problem tree concept introduced by Newell and Simon in their work on Human Problem Solving. It builds on their framework by structuring decision-making and reasoning processes into a branching hierarchy of possible steps, states, or solutions, allowing for a more systematic exploration of complex problems.
• The worldthink tree is described, especially in White Papers #1 - #4 (https://www.superintelligence.com/si-research-whitepapers) on this site, as providing a way to coordinate the problem-solving of many diverse intelligent entities; a toy illustration of such shared-tree coordination appears below.
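A toy illustration of many entities contributing to one shared search tree, assuming a simple round-robin of agents, a numeric toy problem, and a distance-to-goal heuristic. None of these details are taken from the white papers; they only show how work on a shared tree can be divided among multiple problem solvers.

```python
import itertools

def expand(state):
    """Hypothetical successor function: each state branches into two children."""
    return [state + 1, state * 2]

def shared_tree_search(goal=21, agents=("agent_A", "agent_B", "agent_C")):
    frontier = [(1, [])]                       # shared frontier of (state, path) nodes
    for agent in itertools.cycle(agents):
        # Each agent in turn expands the most promising node in the shared tree.
        frontier.sort(key=lambda node: abs(goal - node[0]))
        state, path = frontier.pop(0)
        if state == goal:
            return agent, path                 # which agent finished, and how
        for child in expand(state):
            if child <= goal:                  # prune states past the goal
                frontier.append((child, path + [child]))

print(shared_tree_search())
```

Under these assumptions, the agents' contributions accumulate in a single structure, so any of them can complete a line of reasoning that another began.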
