# Understanding AI, AGI, and ASI
## What is AI (Artificial Intelligence)?
Artificial Intelligence (AI) is the broad field of creating machines and software that can perform tasks typically requiring human intelligence. Today's AI systems -- often called Artificial Narrow Intelligence (ANI) -- are specialists: they excel at one specific task (playing chess, recognizing faces, translating languages, generating text) but cannot transfer that skill to unrelated domains. Every AI system you use today, from Siri to GPT-4 to self-driving cars, is narrow AI. It is powerful within its domain but fundamentally limited -- a chess engine cannot write poetry, and a language model cannot physically navigate a room.
## What is AGI (Artificial General Intelligence)?
Artificial General Intelligence (AGI) refers to AI systems that match or exceed human-level cognitive abilities across virtually all intellectual tasks -- learning, reasoning, problem-solving, perception, creativity, and social understanding. Unlike narrow AI, an AGI system could teach itself a new discipline, transfer knowledge between domains, handle novel situations it was never trained on, and understand context the way humans do. AGI does not yet exist, but its pursuit drives some of the most ambitious research programs in the field (OpenAI, DeepMind, Anthropic, xAI, Meta, SSI). Some leading researchers estimate arrival between 2027 and 2035, though timelines remain highly uncertain.
## What is ASI (Artificial Superintelligence)?
Artificial Superintelligence (ASI), also called Super AI, is a hypothetical system whose intelligence surpasses the most gifted human minds in every domain -- scientific creativity, social skills, strategic reasoning, and general wisdom. Philosopher Nick Bostrom defines it as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." ASI could emerge from recursive self-improvement cycles (an "intelligence explosion"), where an AI that can improve its own design rapidly surpasses human-level capabilities. Key concerns include the control problem (keeping ASI aligned with human values), goal misalignment (unintended optimization targets), and the potential for a technological singularity -- a point beyond which human civilization is fundamentally and unpredictably transformed.
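The "intelligence explosion" dynamic described above can be illustrated with a toy model: if each generation's improvement scales with current capability, growth becomes super-exponential. This is a minimal sketch for intuition only -- the `coupling` constant and the growth rule are arbitrary assumptions, not a forecast.

```python
# Toy model of recursive self-improvement ("intelligence explosion").
# Assumption: the per-generation improvement factor scales with current
# capability, c_{n+1} = c_n * (1 + coupling * c_n), which yields
# super-exponential growth. Purely illustrative.

def self_improvement(capability: float = 1.0,
                     coupling: float = 0.1,
                     generations: int = 10) -> list[float]:
    """Return capability after each self-improvement generation."""
    history = [capability]
    for _ in range(generations):
        # The system's redesign gets better as the system gets smarter.
        capability *= 1 + coupling * capability
        history.append(capability)
    return history

trajectory = self_improvement()
print([round(c, 2) for c in trajectory])
```

Note how the increments themselves grow each generation: that feedback loop, not any single jump, is what the term "explosion" refers to.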
## AI vs AGI vs ASI -- The Complete Comparison
| Dimension | ANI (Narrow AI) | AGI (General Intelligence) | ASI (Superintelligence) |
|---|---|---|---|
| Definition | AI that excels at a single, specific task or narrow domain | AI with human-level cognitive abilities across all intellectual tasks | AI that vastly surpasses the best human minds in every domain |
| Intelligence Scope | Single domain only | All human cognitive domains | All domains, far beyond human capacity |
| Learning | Trained on specific datasets for specific tasks; cannot learn new domains without retraining | Can learn any new domain autonomously, transfer knowledge across fields | Can learn instantly, discover entirely new fields of knowledge humans haven't conceived |
| Reasoning | Pattern matching and statistical inference within trained domain | Human-like reasoning, abstraction, common sense, and causal understanding | Reasoning capabilities incomprehensible to humans; solves problems we cannot even formulate |
| Creativity | Can remix and recombine patterns from training data | Genuine novel creativity comparable to the best human minds | Creates entirely new paradigms of science, art, and mathematics |
| Self-Awareness | None -- no understanding of its own existence | Potentially self-aware; debated whether consciousness is required | Likely self-aware; may possess forms of consciousness beyond human understanding |
| Adaptability | Brittle -- fails on out-of-distribution inputs | Robust generalization to novel situations, like humans | Adapts to any environment or challenge, including ones humans cannot survive or comprehend |
| Autonomy | Requires human oversight, goals, and guardrails | Can set its own goals, plan long-term, and act independently | Fully autonomous; may pursue goals humans cannot predict or understand |
| Physical Capability | Software only, or narrow robotics (e.g., robotic arm) | Could operate any physical system, robot, or interface | Could design and build its own hardware, infrastructure, or physical embodiment |
| Current Examples | ChatGPT, AlphaFold, DALL-E, Tesla Autopilot, Siri, Google Search | None yet -- frontier models (GPT-4, Claude, Gemini) show early sparks but remain narrow | None -- purely theoretical |
| Status | Exists today -- deployed at massive scale | In active development -- billions invested, estimated 2027-2035 | Theoretical -- may follow AGI within years or decades |
| Key Risk | Job displacement, bias, misuse, deepfakes | Misalignment, economic disruption, power concentration, loss of human agency | Existential risk, intelligence explosion, loss of human control, civilizational transformation |
| Who's Building It | Every tech company | OpenAI, DeepMind, Anthropic, Meta, xAI, SSI, Alibaba, DeepSeek | Safe Superintelligence Inc. (SSI), theoretical research at MIRI, FHI, CHAI |
| Key Benchmark | Task-specific (ImageNet, SQuAD, HumanEval) | ARC-AGI, GPQA, Humanity's Last Exam, SWE-bench, FrontierMath | No benchmarks exist -- by definition, ASI exceeds all human-designed tests |
## The Journey: ANI --> AGI --> ASI
```text
 We Are Here
     |
     v
+---------+        +----------+        +----------+
|   ANI   | -----> |   AGI    | -----> |   ASI    |
| (Today) |        | (2027-   |        | (After   |
| Narrow  |        |  2035?)  |        |  AGI)    |
| Task-   |        | Human-   |        | Beyond   |
| Specific|        | Level    |        | Human    |
+---------+        +----------+        +----------+
 ChatGPT           No system           Theoretical
 AlphaFold         exists yet          "Intelligence
 DALL-E            GPT-4 shows          Explosion"
 Autopilot         early sparks         Singularity?
```
## Where Are We Now? (2026)
The AI field is in a remarkable transition period. Here's what the current landscape looks like:
| Signal | What It Means |
|---|---|
| Gemini 2.5 Pro tops LMArena, 18.8% on Humanity's Last Exam | Google's thinking model leads reasoning, math, and code benchmarks; the frontier keeps advancing |
| Llama 4 (Scout, Maverick, Behemoth) ships natively multimodal MoE | Meta's open-weight models match GPT-4o; Behemoth 288B teacher outperforms GPT-4.5 on STEM |
| Meta Muse Spark scores 58% on Humanity's Last Exam | First model from Meta Superintelligence Labs: visual chain-of-thought, multi-agent orchestration, "personal superintelligence" vision |
| o1, o3, DeepSeek-R1 use chain-of-thought reasoning | Test-time compute scaling is a new paradigm -- models that "think longer" perform better |
| Gemini Robotics 1.5 VLA model powers physical agents | DeepMind's vision-language-action model controls diverse robots with generality, dexterity, and agentic reasoning |
| ARC-AGI scores remain <65% (humans score ~85%) | Core fluid reasoning and abstraction remain unsolved -- the gap to AGI is real |
| Autonomous coding agents (OpenHands, Devin, SWE-agent) resolve real GitHub issues | Agents are achieving narrow AGI-like performance in software engineering |
| Safe Superintelligence Inc. valued at $30B+ after raising billions | Ilya Sutskever (ex-OpenAI chief scientist) is betting everything on a straight shot to ASI |
| AI Safety Summits held at Bletchley Park, Seoul, Paris | Governments worldwide are treating AGI/ASI risk as a top-tier policy issue |
| Scaling debate intensifies | Some argue scaling alone leads to AGI; others say fundamental breakthroughs are needed |
## State of the Field: Key Metrics (2025-2026)
Quantitative signals tracking the pace of progress toward AGI/ASI. These metrics matter because the AGI race is fundamentally a story of scaling compute, shrinking costs, and the widening gap between capability and safety.
| Metric | Current Data | Why It Matters for AGI/ASI | Source |
|---|---|---|---|
| Training compute growth | Frontier model training compute grows ~4x/year; GPT-4 used ~10^25 FLOP | At this rate, models trained on 10^28 FLOP (1000x GPT-4) arrive by 2027-2028 -- potentially AGI-relevant | Epoch AI |
| Inference-time compute (test-time scaling) | o1/o3/R1 spend 10-100x more compute at inference via chain-of-thought | A new scaling axis: "thinking longer" improves reasoning without retraining, opening the door to unbounded intelligence at inference | Paper |
| Cost of intelligence | GPT-4-level inference cost dropped ~240x in 18 months (via distillation + efficiency) | Intelligence becomes a commodity; makes autonomous agent swarms economically viable | AI Index 2025 |
| Safety-to-capability ratio | ~2% of AI publications focus on safety; safety research funding is <5% of capability spending | The capability-safety gap is widening -- alignment research may not keep pace with the transition to AGI | AI Index 2025 |
| Benchmark saturation | MMLU: 90%+ (saturated), GPQA: 75%+ (approaching), ARC-AGI: <65% (unsolved), HLE: <60% | Easy benchmarks are saturated; hard reasoning and novel problem-solving remain the gap to AGI | Various benchmark papers |
| AI investment | $110B+ private AI investment in 2024; SSI alone valued at $30B+ | Capital is flooding into AGI -- the question is whether money alone can buy general intelligence | AI Index 2025 |
| Energy at scale | Frontier training runs now consume 50-100 GWh; next-gen data centers planned at 1-5 GW | Energy and cooling become the physical bottleneck for scaling to AGI -- multiple nuclear-powered data centers announced | Industry reports |
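The compute-growth row above implies a simple extrapolation worth making explicit. With the table's figures (a GPT-4-scale baseline of ~10^25 FLOP and ~4x growth per year), the time to reach a 1000x-GPT-4 run is just a logarithm:

```python
import math

# Extrapolate frontier training compute using the figures from the
# table above: baseline ~1e25 FLOP (GPT-4 scale) growing ~4x per year.
baseline_flop = 1e25
growth_per_year = 4.0
target_flop = 1e28  # 1000x the GPT-4 baseline

# Solve baseline * growth^t = target for t.
years_needed = math.log(target_flop / baseline_flop, growth_per_year)
print(f"{years_needed:.1f} years at 4x/year")
```

Since log base 4 of 1000 is roughly 5, a 2023-era baseline lands the 10^28 FLOP threshold around 2028, consistent with the table's 2027-2028 window.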
## Google DeepMind's Levels of AGI Framework (2023)
Use this framework to orient yourself: every resource in this repo can be placed on this ladder. We are currently between Levels 1 and 3 on narrow tasks, with no system reaching Level 3 across general domains.
| Level | Name | Description | Current Status | Example Systems |
|---|---|---|---|---|
| 0 | No AI | Software with no AI capability | Ubiquitous | Calculator, compiler, simple scripts |
| 1 | Emerging | Equal to or somewhat better than an unskilled human | Most current LLMs | ChatGPT, Llama 3, Gemma, Mistral |
| 2 | Competent | At least 50th percentile of skilled adults | Frontier models on select tasks | GPT-4, Gemini 2.5 Pro, Claude 3.5 (coding, writing, analysis) |
| 3 | Expert | At least 90th percentile of skilled adults | Narrow domains only | AlphaFold (protein structure), o1/R1 (math competitions), Devin/OpenHands (SWE-bench) |
| 4 | Virtuoso | At least 99th percentile of skilled adults | Not yet achieved across general tasks | -- |
| 5 | Superhuman (ASI) | Outperforms 100% of humans in all tasks | Theoretical -- the ASI threshold | See Recursive Self-Improvement |
Source: Levels of AGI: Operationalizing Progress on the Path to AGI -- Morris et al., Google DeepMind (2023)
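At its core, the ladder above is a mapping from performance percentile (relative to skilled adults) to a level. A minimal sketch of that mapping, using the thresholds from the table (the function name and return shape are illustrative choices, not from the paper):

```python
def agi_level(percentile: float) -> tuple[int, str]:
    """Map performance percentile among skilled adults to a level.

    Thresholds follow the table above; Level 5 (Superhuman) is reserved
    for outperforming 100% of humans, so only percentile 100 maps there.
    Sub-50th-percentile systems fall into Level 1 (Emerging) here,
    a simplification of the framework's fuzzier lower boundary.
    """
    thresholds = [
        (100, 5, "Superhuman (ASI)"),
        (99, 4, "Virtuoso"),
        (90, 3, "Expert"),
        (50, 2, "Competent"),
        (0, 1, "Emerging"),
    ]
    for cutoff, level, name in thresholds:
        if percentile >= cutoff:
            return level, name
    return 0, "No AI"

print(agi_level(92))  # (3, 'Expert')
print(agi_level(55))  # (2, 'Competent')
```

By this mapping, AlphaFold-style systems sit at Level 3 within their narrow domain, which is exactly the narrow-vs-general distinction the framework tracks along its other axis.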