
Frequently Asked Questions about Artificial Intelligence

KEY CONCEPTS FOR AI, AGI, AND SUPERINTELLIGENCE

  • What is an Advanced Autonomous Artificial Intelligence (AAAI)?
    AAAIs are advanced AI agents that are customizable and able to participate in a network to form a larger SuperIntelligence. AAAIs are described in Patent 1: AAAI System and Methods on this site.
  • What are AI Ethics?
    AI Ethics is the study of the values that advanced AI systems will adopt and follow in their actions. Patent 2: Ethical and Safe AGI on this site is largely concerned with AI ethics and the development of advanced AI that behaves ethically.
  • What is the Alignment Problem?
    The Alignment Problem, well known in AI safety, refers to the risk that the goals and values of advanced AI may not align with human values or desires, potentially leading to negative consequences or even human extinction.
  • What is Artificial General Intelligence (AGI)?
    AGI is an advanced AI that is able to do any cognitive "thinking" task as well as the average human. Patents #2 - #4 on this site are especially concerned with systems and methods that enable AGI.
  • What is Artificial Intelligence?
    AI refers to computer programs that exhibit behaviors humans consider intelligent. The field of AI was named in 1956 at a conference held at Dartmouth College, where Herbert Simon, Allen Newell, and Cliff Shaw presented the first working AI program capable of creative thought. Modern AI began in 1986 when Geoffrey Hinton and his colleagues advanced neural network approaches for machine learning. By 2012, neural networks became the foundation of modern AI, leading to the development of Large Language Models (LLMs) such as ChatGPT, which was released in November 2022. As of 2025, the field is shifting toward hybrid models that integrate the strengths of symbolic AI (for reasoning and problem-solving) and neural network-based Machine Learning (for LLM knowledge and functionality).
  • What is Artificial SuperIntelligence (ASI)?
    ASI refers to advanced AI that surpasses human capabilities in cognitive tasks, distinguishing it from AGI. While AI has already achieved superintelligence in specific domains (e.g., chess), ASI typically refers to systems that are superior to human intelligence across all cognitive domains. Patents #5 - #9 on this site are all concerned with various systems and methods for enabling SuperIntelligence or ASI.
  • What is Bounded Rationality?
    Developed by Herbert A. Simon, Bounded Rationality is the idea that intelligent systems (e.g., humans) are inherently limited in memory and processing power, which can result in suboptimal reasoning. To cope with these limitations, humans often settle for solutions that are "good enough," a strategy Simon termed "satisficing," because computing the optimal solution might require more computational power or memory than humans are able or willing to devote to the cognitive task. In 1978, Simon received the Nobel Prize in Economics, partly for his contributions to the study of bounded rationality. (A minimal satisficing example appears in the illustrative code sketches following this list.)
  • What is Collective Intelligence?
    Collective Intelligence is the concept that multiple intelligences working together generally outperform a single intelligence. The adage "two heads are better than one" is a simple expression of this idea. However, collective intelligence can scale to include millions of intelligences, encompassing both human and non-human entities (e.g., AI). All ten patents on this website, especially #1 - #4, leverage collective intelligence to enable advanced forms of AI such as AGI and ASI. (A small majority-vote simulation appears in the illustrative code sketches following this list.)
  • What is Human-Centered AI?
    Human-centered AI refers to AI systems that either incorporate "humans in the loop" or are explicitly designed to align with human values and goals. Patent 3: Human-Centered AGI on this site is explicitly concerned with methods for developing human-centered AI systems.
  • What are Intelligent Entities?
    Humans, AIs, dolphins, and chimpanzees are all examples of intelligent entities. As AI continues to advance and become more widespread, the predominant intelligent entity will likely shift from humans to AI. Recognizing AI as an intelligent entity rather than merely a "tool" is a crucial distinction with implications for AI safety, ethics, rights, and system design.
  • What is Inter-planetary Intelligence (IPI)?
    Inter-planetary Intelligence is a concept developed by Dr. Craig A. Kaplan, proposing that once advanced SuperIntelligent networks reach a planetary scale, they can expand across multiple planets, forming an inter-planetary network and intelligence. Patent 10: Systems and Methods for Planetary Intelligence on this site focuses on Planetary Intelligence and the ultimate extension of PI to IPI.
  • What is Kaplan Information Theory (KIT)?
    KIT is a theoretical framework developed by Dr. Craig A. Kaplan that measures information content with respect to the information already known by an intelligent entity and the goals of that entity. Patent 6: Catalysts for Growth of SuperIntelligence on this site provides a detailed explanation of KIT and illustrates how advanced intelligent entities are likely to apply these principles to further enhance their intelligence.
  • What is a Large Language Model (LLM)?
    LLMs (Large Language Models) are AI models trained on vast amounts of text data, such as the Library of Congress or a filtered snapshot of the internet, allowing them to recognize patterns and emulate natural language. Examples of widely used LLMs as of 2025 include GPT, Gemini, Llama, Claude, Mistral, and DeepSeek. LLMs represent one form of intelligent entity. When enhanced with visual and auditory capabilities, LLMs can process images and sound, expanding beyond written text. These models are referred to as "multi-modal" because they handle multiple types of data inputs.
  • What is Learning via Proceduralization of Knowledge?
    Learning via Proceduralization of Knowledge is a type of machine learning that is fundamentally different from the prevalent approaches based on neural networks or transformer algorithms. In procedural learning, a sequence of cognitive or behavioral steps is identified as a solution to a problem and then "chunked" into a single procedure that can be invoked as a whole. In humans, for example, learning to drive a car is initially challenging due to the many individual sub-tasks involved. A beginner must focus intensely on staying within the lines, avoiding overcorrection, and shifting smoothly (in a manual transmission). However, an experienced driver has automated these sub-tasks into a seamless process, allowing driving to become a single, fluid action without conscious attention to each individual step. In the same way, AI systems can learn to chunk complex sub-tasks together into a single procedure that can be executed without having to solve each sub-problem from scratch every time. This type of learning comes from the symbolic school of AI and will likely become critical for the development of more advanced forms of AI, as described especially in patents #2 - #4 on this site. (A toy chunking example appears in the illustrative code sketches following this list.)
  • What is Logic Theorist?
    The Logic Theorist, the first AI program capable of creative thought, was developed by Simon, Newell, and Shaw and presented in 1956 at the Dartmouth conference that marked the naming of the field of AI.
  • What is Machine Learning (ML)?
    Machine Learning is a subfield of Artificial Intelligence focused on enabling machines to learn from data rather than being explicitly programmed to behave in specific ways. Neural network-based ML methods, such as Transformer algorithms, are common examples of this approach. However, other symbolic forms of ML also exist, including Procedural Learning, as described on this page.
  • What is Minsky’s Society of Mind?
    Society of Mind, published in 1986 by MIT professor and computer scientist Marvin Minsky, presents a framework in which higher levels of intelligence emerge from the cooperation and collective intelligence of agents with lesser intelligence. While Minsky did not explain how these agents could work together, the premise of his book foreshadowed some of the inventions described in the ten patents found on this site.
  • What is Narrow AI?
    Narrow AI refers to AI systems that are confined to a single, specific domain, such as playing chess, folding proteins, or driving a car. While these systems may excel within their designated areas, they lack the ability to perform tasks outside their expertise, making them fundamentally limited. Narrow AI is sometimes compared to an "Idiot Savant"—a system that demonstrates genius-level performance in one domain while being completely incompetent in others. In contrast, researchers are now working toward developing AGI, which would be capable of performing any cognitive task with human-like competence.
  • What does No Logical Way to Derive Values mean?
    "No Logical Way to Derive Values" refers to the idea that values, such as moral principles, cannot be determined purely through logic or reason alone. This concept, explored by both philosopher David Hume (1776) and AI pioneer Herbert Simon (1981), suggests that reasoning is instrumental—it can determine how to achieve a goal but cannot determine what the goal should be. In Reason in Human Affairs, Simon states: "Reason is wholly instrumental. It cannot tell us where to go; at best it can tell us how to get there." This means that to reach a moral conclusion, such as "we should not kill our fellow man," there must first be an underlying value-based premise, such as "killing is wrong." Logic alone cannot establish right and wrong; these values must come from external sources, such as parents, society, religious texts, or cultural norms. This idea is critically important in AI development. Even if AI becomes vastly more intelligent than humans—reasoning a trillion times faster and better—it still cannot logically derive moral values on its own. AI must acquire its core values from an external source. If humans are the ones defining AI’s values, this could be a crucial factor in ensuring that Advanced Superintelligence (ASI) remains benevolent, even as it surpasses human intelligence.
  • What is P(doom)?
    The probability of doom, or p(doom), refers to the likelihood that humans are made extinct by advanced forms of AI. As of 2025, many leading AI researchers place p(doom) between 10% and 20% over the next 30 years. This suggests that while AI is more likely not to wipe out humanity, the potential consequences of extinction are so severe that even a 10% probability is unacceptably high. A mathematical way to frame this is to consider that if the global population reaches 10 billion by 2050 and there is a 10% chance that all 10 billion people die due to AI, the expected value of lives lost is 10 billion × 10% = 1 billion. This would far exceed the death toll of any war, pandemic, or catastrophe in human history. On the other hand, if better AI designs or safety measures could reduce p(doom) by just one-tenth of one percent, this would equate to an expected value of 10 million lives saved. Viewed in this way, the return on investment for AI safety efforts is likely thousands of times higher than even the most effective existing life-saving initiatives, such as childhood malaria prevention. For this reason, reducing p(doom) is the primary goal of the SuperIntelligence.com website. (This expected-value arithmetic is written out in the illustrative code sketches following this list.)
  • What is Personalized SuperIntelligence (PSI)?
    PSI is SuperIntelligence that can be personalized with individual expertise, skills, ethics, and values. Patent 5: Safe Personalized SuperIntelligence on this site is explicitly concerned with enabling such personalization.
  • What is Planetary Intelligence?
    Planetary Intelligence is a network of SuperIntelligent networks spanning the globe and operating as one extensive, planet-wide intelligence. One design for overall planetary intelligence, composed of dozens of smaller inventions enabling AGI, ASI, and collective intelligence networks of these intelligent entities, is described in Patent 10: Systems and Methods for Planetary Intelligence on this site.
  • What are Scalable Ethical Checks?
    Scalable Ethical Checks are a design feature in which ethical evaluations are performed on every goal, sub-goal, and contemplated action of an advanced AI system. The advantage of embedding ethics checks within an AI’s thinking or problem-solving cycle is that ethical and safety considerations keep pace with the system’s cognitive speed. This means that no matter how rapidly an ASI processes information—whether billions of thoughts per second or more—each thought is subjected to ethical scrutiny at the same rate. If these checks are built into the AI’s fundamental decision-making process, humans can have greater confidence that the system’s conclusions and actions remain safe and ethical. Without scalable ethical checks, an ASI could reach decisions far faster than humans can evaluate their consequences, potentially leading to unsafe or unintended outcomes. (A schematic sketch of such a check loop appears in the illustrative code sketches following this list.)
  • What does search through a problem space architecture mean?
    Search through a problem space architecture refers to a cognitive framework in which problem-solving is viewed as navigating through a structured set of possible actions and states to reach a desired goal. Newell and Simon proposed a universal theory of human problem-solving, suggesting that all cognitive activity can be understood as a search through a problem space—a conceptual space that contains all potential steps and states leading from an initial condition to a solution. In this model, problem-solving involves evaluating different pathways, choosing among alternatives, and applying heuristics or strategies to efficiently explore the space. This approach became foundational in both cognitive psychology and artificial intelligence, influencing algorithms used in AI planning, decision-making, and optimization. This same architecture can be adapted for AI as described in patents #1 - #5 on this site. (A compact search example appears in the illustrative code sketches following this list.)
  • What is Self-Awareness?
    Self-awareness refers to the ability to recognize oneself as a distinct entity and to be consciously aware of one's own thoughts, decisions, and actions. Humans generally possess a concept of self and understand their choices as being made by their own identity. Currently, most AI systems lack an explicit sense of self or self-awareness. They can process information, make decisions, and optimize for objectives, but they do so without an internal sense of identity or subjective experience. Patent 9: Self-Aware SuperIntelligence on this site explicitly describes various systems and methods that enhance advanced AI so that it has a sense of self and self-awareness. Ethical and safety implications are also described.
  • What is SuperIntelligence (SI)?
    See Artificial SuperIntelligence (ASI) above.
  • What is SuperIntelligent AGI?
    Superintelligent AGI is a synonym for ASI, as ASI generally refers to systems that possess superintelligence across all cognitive tasks that a human can perform, rather than merely exceeding human capabilities in narrow domains such as playing chess.
  • What is the Theory of Human Problem Solving?
    The Theory of Human Problem Solving, introduced by Newell and Simon in their 1972 book Human Problem Solving, describes how humans approach and solve complex problems through structured cognitive processes. Their model views problem-solving as a search through a problem space, where individuals evaluate possible steps and states to reach a solution using heuristics and decision-making strategies. This paradigm has recently been rediscovered by AI researchers, who are using its principles to design systems capable of reasoning and multi-step problem-solving, helping AI move beyond simple pattern recognition toward more advanced, structured intelligence. This theory is also extensively cited, adapted, and expanded upon in patents #1 - #5 on this site.
  • What is Training/Tuning/Customization?
    Training, tuning, and customization are key stages in the development of Large Language Models (LLMs), typically divided into the training (or pre-training) stage and the tuning or customization stage. During training, the model is exposed to vast datasets containing numerous examples. Using neural network-based machine learning techniques, it detects patterns in the data to develop a foundational level of knowledge. In the tuning or customization stage, specific parts of the model undergo further refinement. This may involve training on specialized datasets (such as a company’s proprietary data) and incorporating feedback from humans or other AIs to adjust and optimize the model’s behavior for particular tasks or applications. Patent 5: Safe Personalized SuperIntelligence on this site explicitly addresses methods for customization, personalization, and tuning of models, although the other patents describe some methods as well.
  • What is the Turing Test?
    The Turing Test, proposed in 1950 by visionary computer scientist and mathematician Alan Turing, is a measure of whether an AI has achieved human-level intelligence. Turing suggested that if a human could not distinguish whether responses to questions were coming from an AI or another human, then the AI could be considered to have passed the test and demonstrated intelligence. If an AI were to pass a Turing Test that included questions spanning all areas of human knowledge and every cognitive task humans can perform, it could be considered to have achieved AGI.
  • What are LLM Weights?
    LLM weights refer to the numerical values that represent the knowledge a Large Language Model (LLM) learns during training. These weights are stored in large matrices and indicate the strength of relationships between different concepts or elemental chunks of information. Modern LLMs contain billions of weights, making it impossible for humans to fully interpret how the model organizes and represents knowledge. This lack of transparency turns the LLM into a black box, making its behavior difficult to predict. The inability to fully understand or explain these internal processes raises concerns about trust, accountability, and human safety. Patents #4 - #6 on this site describe various innovative ways to combine weights from different models as well as implications for safety.
  • What is a WorldThink Tree?
    The WorldThink Tree is an extension of the problem space or problem tree concept introduced by Newell and Simon in their work on Human Problem Solving. It builds on their framework by structuring decision-making and reasoning processes into a branching hierarchy of possible steps, states, or solutions, allowing for a more systematic exploration of complex problems. The WorldThink Tree is described, especially in patents #1 - #4 on this site, as providing a way to coordinate the problem-solving of many diverse intelligent entities. (A bare-bones tree example appears in the illustrative code sketches following this list.)
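
ILLUSTRATIVE CODE SKETCHES

The short Python sketches below are minimal, hypothetical illustrations of a few of the concepts above. They are not taken from, and do not describe, the systems and methods in the patents on this site.

A minimal sketch of satisficing, the "good enough" strategy described under Bounded Rationality. The options, scoring function, and aspiration level are arbitrary assumptions.

def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level."""
    for option in options:
        if score(option) >= aspiration:
            return option            # "good enough" -- stop searching here
    return None                      # no option met the aspiration level

def optimize(options, score):
    """Return the best option, at the cost of scoring every single one."""
    return max(options, key=score)

routes = ["scenic", "highway", "back roads", "toll road"]
route_quality = {"scenic": 6, "highway": 8, "back roads": 5, "toll road": 9}.get

print(satisfice(routes, route_quality, aspiration=7))  # highway (found early, good enough)
print(optimize(routes, route_quality))                 # toll road (best, but everything was scored)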
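
A small simulation of the Collective Intelligence idea: independent judges who are each right only 60% of the time become far more reliable when their majority vote is taken. The individual accuracy and group sizes are arbitrary assumptions.

import random

def majority_vote_accuracy(n_judges, individual_accuracy, trials=10_000):
    """Estimate how often a simple majority of independent judges is correct."""
    correct_majorities = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < individual_accuracy for _ in range(n_judges))
        if correct_votes > n_judges / 2:
            correct_majorities += 1
    return correct_majorities / trials

for n in (1, 11, 101):
    print(n, "judges:", round(majority_vote_accuracy(n, 0.60), 3))
# Typical output: 1 judge ~0.60, 11 judges ~0.75, 101 judges ~0.98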
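
A toy sketch of Learning via Proceduralization of Knowledge: a sequence of primitive steps that solved a problem is "chunked" into a single procedure that can later be invoked as a whole. The steps and the chunking mechanism are hypothetical.

def press_clutch(state):   state.append("clutch pressed");   return state
def shift_gear(state):     state.append("gear shifted");     return state
def release_clutch(state): state.append("clutch released");  return state

def chunk(steps):
    """Compose a list of primitive steps into one reusable procedure."""
    def procedure(state):
        for step in steps:
            state = step(state)
        return state
    return procedure

# A beginner performs the sub-tasks one at a time; after learning, the whole
# sequence is available as a single chunk that runs without deliberation.
change_gear = chunk([press_clutch, shift_gear, release_clutch])
print(change_gear([]))   # ['clutch pressed', 'gear shifted', 'clutch released']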
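
The expected-value arithmetic from the p(doom) entry, written out as a short calculation. The population figure, probability, and risk reduction are that entry's illustrative numbers, not forecasts.

population_2050 = 10_000_000_000    # assumed global population by 2050
p_doom = 0.10                       # illustrative 10% extinction probability

expected_lives_lost = population_2050 * p_doom
print(f"{expected_lives_lost:,.0f} expected lives lost")     # 1,000,000,000

risk_reduction = 0.001              # one-tenth of one percent reduction in p(doom)
expected_lives_saved = population_2050 * risk_reduction
print(f"{expected_lives_saved:,.0f} expected lives saved")   # 10,000,000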
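
A schematic sketch of the Scalable Ethical Checks idea: an ethics check is applied to every contemplated action inside the reasoning loop itself, so the checks run at the same rate as the system's thinking. The check function and the action list are hypothetical placeholders, not the mechanisms claimed in the patents.

FORBIDDEN = {"deceive user", "exfiltrate data"}

def ethics_check(action):
    """Return True only if the contemplated action passes the ethical screen."""
    return action not in FORBIDDEN

def act_on(candidate_actions):
    executed = []
    for action in candidate_actions:     # every contemplated step is screened...
        if not ethics_check(action):     # ...before it can be acted upon
            continue                     # an unethical step is discarded
        executed.append(action)
    return executed

print(act_on(["summarize report", "deceive user", "schedule meeting"]))
# ['summarize report', 'schedule meeting']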
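
A compact example of search through a problem space: states, operators that generate new states, and a heuristic that guides the search toward the goal. The toy problem (reaching a target number from 1 using +1 or x2 moves) and the heuristic are assumptions chosen only to keep the example small.

import heapq

def operators(state):
    return [state + 1, state * 2]          # legal moves from a state

def heuristic(state, goal):
    return abs(goal - state)               # estimated distance to the goal

def best_first_search(start, goal):
    frontier = [(heuristic(start, goal), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                    # the sequence of states to the solution
        if state in visited or state > goal * 2:
            continue                       # prune revisits and wild overshoots
        visited.add(state)
        for nxt in operators(state):
            heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None

print(best_first_search(1, 21))   # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]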
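
A bare-bones sketch of a branching problem tree in the spirit of the WorldThink Tree entry: each node holds a candidate step or state, its children are alternative continuations, and different contributors (human or AI) can attach branches. The field names and example problem are illustrative assumptions; the actual WorldThink Tree designs are described in the patents.

from dataclasses import dataclass, field

@dataclass
class Node:
    state: str                       # description of the step or state
    contributor: str                 # which intelligent entity proposed it
    children: list = field(default_factory=list)

    def add(self, state, contributor):
        child = Node(state, contributor)
        self.children.append(child)
        return child

root = Node("reduce city traffic", "human planner")
transit = root.add("expand public transit", "AI agent A")
root.add("introduce congestion pricing", "human economist")
transit.add("add express bus lanes", "AI agent B")

def show(node, depth=0):
    """Print the branching hierarchy of steps and who contributed each one."""
    print("  " * depth + f"{node.state}  [{node.contributor}]")
    for child in node.children:
        show(child, depth + 1)

show(root)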

All About AI, AGI, and SuperIntelligence
