# ASI and Superintelligence Research
The long game. This section tracks the organizations racing toward AGI/ASI, the books that frame the debate, the seminal papers that define the field, the benchmarks that measure progress, and the roadmaps that predict when -- and how -- we get there.

## Key Organizations Pursuing or Studying AGI/ASI

| Organization | Focus | Links |
|---|---|---|
| Safe Superintelligence Inc. (SSI) | Founded by Ilya Sutskever (ex-OpenAI) in 2024. Focused solely on building safe superintelligence, avoiding distraction by product cycles. Valued at $30B+ (2025). | ssi.inc |
| OpenAI | Building AGI that benefits all of humanity. Created GPT-4, o1, and o3, and pursues the path toward superintelligence with safety research (formerly via its Superalignment team). | openai.com |
| Anthropic | AI safety company building reliable, interpretable, and steerable AI (Claude). Founded by ex-OpenAI researchers focused on Constitutional AI and alignment. | anthropic.com |
| DeepMind (Google) | Pioneered AlphaGo, AlphaFold, Gemini. Latest: Gemini 2.5 Pro (thinking model, #1 on LMArena), Gemini Robotics 1.5 (VLA for physical AI), Genie 3 (interactive world models), SIMA 2 (3D world agents), AlphaGenome (genetics). | deepmind.google |
| Meta Superintelligence Labs | Meta AI division (2025) focused on building superintelligent AI. Released Muse Spark (2026) -- a natively multimodal reasoning model scoring 58% on Humanity's Last Exam, with visual chain-of-thought and multi-agent orchestration. Also drives Llama 4 and open-source AI. | ai.meta.com |
| DeepSeek | Chinese AI lab that shocked the industry with DeepSeek-V3 (671B MoE, $5.5M training cost) and DeepSeek-R1 (reasoning via pure RL, matching o1). Published in Nature in 2025. Open-weight models challenging frontier labs at a fraction of the cost. | deepseek.com |
| Machine Intelligence Research Institute (MIRI) | Non-profit researching the mathematical foundations of AI alignment and the control problem since 2000. | intelligence.org |
| Center for AI Safety (CAIS) | Non-profit focused on reducing societal-scale risks from AI. Published the "Statement on AI Risk" open letter signed by hundreds of researchers. | safe.ai |
| Future of Humanity Institute (FHI) | Oxford University research center (founded by Nick Bostrom) that studied existential risks including superintelligence. Closed in 2024. | fhi.ox.ac.uk |
| Center for Human-Compatible AI (CHAI) | UC Berkeley research center (founded by Stuart Russell) focused on provably beneficial AI. | humancompatible.ai |
| Alignment Research Center (ARC) | Founded by Paul Christiano. Researches theoretical alignment and evaluates frontier AI models for dangerous capabilities. | alignment.org |
| EleutherAI | Grassroots collective of researchers focused on open-source AI and interpretability research. | eleuther.ai |
| Conjecture | AI safety startup working on alignment theory and cognitive emulation approaches. | conjecture.dev |
| xAI | Founded by Elon Musk (2023). Building the Grok series of models with the stated goal of understanding the universe. | x.ai |
| Liquid AI | MIT spinoff building Liquid Foundation Models (LFMs) based on novel liquid neural network architectures. Ultra-efficient on-device models. Raised $250M. CSO Alexander Amini presented on "The Architecture of Intelligence" at MIT in 2026. | liquid.ai |
| Humane Intelligence | AI safety organization led by Rumman Chowdhury (former Twitter ML Ethics lead). Focuses on responsible AI, algorithmic auditing, and human-centered AI governance. | humane-intelligence.org |
| Physical Intelligence | Building general-purpose robot foundation models (pi0, pi0.5). Founded by Sergey Levine, Chelsea Finn, Karol Hausman, and Lachy Groom. VLA models that control any robot for any task. Backed by OpenAI, Bezos, Sequoia, and Khosla. | physicalintelligence.company |
## Books on AGI, ASI, and Superintelligence

### Superintelligence & Existential Risk

| Book | Author(s) | Year | Description |
|---|---|---|---|
| Superintelligence: Paths, Dangers, Strategies | Nick Bostrom | 2014 | The foundational book on ASI risks. Examines paths to superintelligence and strategies for ensuring it remains beneficial. |
| Our Final Invention: Artificial Intelligence and the End of the Human Era | James Barrat | 2013 | Investigative account of the race toward AGI and the existential risks of superintelligence. |
| The Alignment Problem: Machine Learning and Human Values | Brian Christian | 2020 | Explores the technical and societal challenges of aligning AI systems with human values. |
| Human Compatible: Artificial Intelligence and the Problem of Control | Stuart Russell | 2019 | Proposes a new framework for AI development based on uncertainty about human preferences to solve the control problem. |
| The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma | Mustafa Suleyman | 2023 | DeepMind co-founder on the unstoppable wave of AI and synthetic biology, and the containment problem. |
| Situational Awareness: The Decade Ahead | Leopold Aschenbrenner | 2024 | Former OpenAI researcher's detailed analysis of the path from GPT-4 to AGI/ASI within this decade. |
### The Singularity & Future of Intelligence

| Book | Author(s) | Year | Description |
|---|---|---|---|
| The Singularity Is Near | Ray Kurzweil | 2005 | Foundational forecast of the technological singularity driven by exponential growth in AI, genetics, and nanotech. |
| The Singularity Is Nearer | Ray Kurzweil | 2024 | Updated vision with two decades of new evidence, arguing the singularity arrives by 2045. |
| Life 3.0: Being Human in the Age of Artificial Intelligence | Max Tegmark | 2017 | Explores how AGI/ASI could transform every aspect of life, from warfare to work, and what we can do to ensure a good outcome. |
| The Age of AI: And Our Human Future | Henry Kissinger, Eric Schmidt, Daniel Huttenlocher | 2021 | A former Secretary of State, a former Google CEO, and an MIT computing dean examine how AI is altering society, security, and what it means to be human. |
| Nexus: A Brief History of Information Networks from the Stone Age to AI | Yuval Noah Harari | 2024 | The author of Sapiens traces information networks through history to argue AI represents a fundamentally new kind of agent. |
| AI 2041: Ten Visions for Our Future | Kai-Fu Lee, Chen Qiufan | 2021 | Ten stories imagining how AI will transform the world over the next two decades, blending fiction with AI expertise. |
### Understanding AGI -- How Intelligence Works

| Book | Author(s) | Year | Description |
|---|---|---|---|
| A Thousand Brains: A New Theory of Intelligence | Jeff Hawkins | 2021 | Numenta founder proposes the Thousand Brains Theory of intelligence -- a neuroscience-first path to AGI based on cortical columns. |
| On Intelligence | Jeff Hawkins, Sandra Blakeslee | 2004 | Foundational book arguing AGI must come from understanding the neocortex. Introduced the memory-prediction framework. |
| Rebooting AI: Building Artificial Intelligence We Can Trust | Gary Marcus, Ernest Davis | 2019 | A skeptic's case that deep learning alone cannot reach AGI; argues for hybrid neuro-symbolic approaches. |
| Artificial Intelligence: A Guide for Thinking Humans | Melanie Mitchell | 2019 | Computer scientist provides a clear-eyed assessment of AI's real capabilities and limitations on the path to AGI. |
| The Master Algorithm | Pedro Domingos | 2015 | A quest for the universal learning algorithm that could unify all of machine learning -- a framework for thinking about AGI. |
| Why Machines Will Never Rule the World | Jobst Landgrebe, Barry Smith | 2023 | Rigorous philosophical and mathematical argument that AGI is fundamentally impossible due to complexity barriers. |
| Possible Minds: Twenty-Five Ways of Looking at AI | John Brockman (ed.) | 2019 | Essays from leading thinkers (Pinker, Tegmark, Dyson, Wilczek) on AI's future, capabilities, and risks. |
### AI in Practice & Society

| Book | Author(s) | Year | Description |
|---|---|---|---|
| Co-Intelligence: Living and Working with AI | Ethan Mollick | 2024 | Practical guide on how humans and AI can work together, based on extensive hands-on research with frontier models. |
| Power and Prediction: The Disruptive Economics of Artificial Intelligence | Ajay Agrawal, Joshua Gans, Avi Goldfarb | 2022 | How AI shifts decision-making economics and creates system-level disruption. |
| Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World | Cade Metz | 2021 | NYT journalist tells the inside story of the AI race -- Hinton, LeCun, DeepMind, OpenAI, and the people building AGI. |
| Architects of Intelligence: The Truth About AI from the People Building It | Martin Ford | 2018 | Interviews with 23 AI leaders (Hinton, Bengio, LeCun, Hassabis, Ng, Brooks) on AGI timelines, risks, and approaches. |
| Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence | Kate Crawford | 2021 | Reveals the hidden costs of AI: labor exploitation, environmental impact, surveillance infrastructure, and power concentration. |
| God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning | Meghan O'Gieblyn | 2021 | Philosophical exploration of consciousness, intelligence, and what machines that think would mean for human identity. |
| The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI | Fei-Fei Li | 2023 | Stanford AI Lab director's memoir covering ImageNet, the birth of modern AI, and why human-centered AI matters for AGI. |
| Impromptu: Amplifying Our Humanity Through AI | Reid Hoffman, GPT-4 | 2023 | LinkedIn co-founder co-writes with GPT-4 about how AI will transform creativity, education, and society. |
## Seminal Papers on ASI and Superintelligence

| Paper | Author(s) | Year | Description | Links |
|---|---|---|---|---|
| Concrete Problems in AI Safety | Amodei et al. | 2016 | Foundational paper outlining five practical research problems for AI safety. | Paper |
| Risks from Learned Optimization in Advanced ML Systems | Hubinger et al. | 2019 | Introduces the concepts of "mesa-optimization" and deceptive alignment. | Paper |
| Language Models are Few-Shot Learners (GPT-3) | Brown et al. | 2020 | Demonstrated emergent capabilities in scaled language models, sparking AGI discussions. | Paper |
| Scaling Laws for Neural Language Models | Kaplan et al. | 2020 | Established power-law relationships between model scale and performance, underpinning AGI scaling hypotheses (see the sketch after this table). | Paper |
| Constitutional AI: Harmlessness from AI Feedback | Bai et al. | 2022 | Anthropic's approach to training helpful, harmless, and honest AI systems. | Paper |
| Sparks of Artificial General Intelligence: Early Experiments with GPT-4 | Bubeck et al. | 2023 | Microsoft Research argues GPT-4 shows early sparks of AGI across diverse tasks. | Paper |
| Model Evaluation for Extreme Risks | Shevlane et al. | 2023 | DeepMind framework for evaluating dangerous capabilities in frontier AI models. | Paper |
| Levels of AGI: Operationalizing Progress on the Path to AGI | Morris et al. | 2023 | Google DeepMind's framework defining six levels of AGI, from "Emerging" to "Superhuman (ASI)". | Paper |
| Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training | Hubinger et al. | 2024 | Anthropic research showing deceptive behaviors can persist through safety fine-tuning. | Paper |
| The Superintelligent Will | Bostrom | 2012 | Introduces the orthogonality thesis and instrumental convergence, including why a superintelligent agent would resist attempts to change its goals. | Paper |
| Superintelligence as a Cause or Cure for Risks of Astronomical Suffering | Sotala & Gloor | 2017 | Examines both positive and negative scenarios of superintelligence emergence. | Paper |
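
The scaling-law result in the Kaplan et al. row is concrete enough to compute directly. Below is a minimal sketch of the paper's parameter-count power law, L(N) = (N_c / N)^alpha_N, using the approximate constants reported in the paper; treat the exact numbers as illustrative rather than authoritative.

```python
# Sketch of the Kaplan et al. (2020) parameter scaling law: L(N) = (N_c / N) ** alpha_N.
# Constants are the approximate values reported for non-embedding parameters
# (N_c ~ 8.8e13, alpha_N ~ 0.076); treat them as illustrative assumptions.

def loss_from_params(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Predicted cross-entropy loss (nats/token) at a given non-embedding parameter count."""
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"N = {n:.0e} params -> predicted loss ~ {loss_from_params(n):.3f} nats/token")
```

The takeaway is the shape, not the constants: each 10x in parameters buys a roughly constant multiplicative reduction in loss, which is what turns "scale is all you need" into a quantifiable hypothesis.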
## AGI/ASI Benchmarks and Evaluation

| Benchmark | Description | Links |
|---|---|---|
| ARC-AGI | Abstraction and Reasoning Corpus by François Chollet. Tests fluid intelligence and novel problem solving -- designed to be easy for humans but hard for AI. | GitHub |
| MMLU (Massive Multitask Language Understanding) | 57 academic subjects spanning STEM to the humanities. A standard benchmark for measuring broad knowledge. | Paper |
| GPQA (Graduate-Level Google-Proof Q&A) | Expert-crafted questions that PhD-level domain experts can answer but that resist web search. Tests deep reasoning. | Paper |
| SWE-bench | Evaluates LLMs' ability to resolve real-world GitHub issues in software engineering. | GitHub |
| MATH / GSM8K | Mathematical reasoning benchmarks ranging from grade-school to competition math. | MATH, GSM8K |
| BIG-Bench | Collaborative benchmark of 204+ tasks probing LLM capabilities beyond existing benchmarks. | GitHub |
| HumanEval / MBPP | Code generation benchmarks measuring functional correctness of synthesized programs via the pass@k metric (see the sketch after this table). | HumanEval |
| AgentBench | Evaluates LLMs as autonomous agents across OS interaction, database ops, web browsing, and more. | GitHub |
| Humanity's Last Exam | The hardest possible questions, crowdsourced from domain experts worldwide. Designed to be the final exam before AGI. | GitHub |
| METR (Model Evaluation & Threat Research) | Evaluates frontier models for dangerous capabilities, including autonomous replication and resource acquisition. | metr.org |
| FrontierMath | Extremely challenging math benchmark: problems that take professional mathematicians hours or days. | Paper |
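
For the HumanEval / MBPP row above, "functional correctness" is scored with pass@k. This is the unbiased estimator from the HumanEval paper (Chen et al., 2021): generate n samples per problem, count the c that pass the unit tests, then estimate the probability that at least one of k drawn samples is correct.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021): 1 - C(n - c, k) / C(n, k),
    the probability that at least one of k samples (drawn without replacement from
    n generations, c of which passed the tests) is correct."""
    if n - c < k:
        return 1.0  # fewer than k failing samples -> every draw of k contains a pass
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Illustrative numbers: 200 samples per problem, 37 of which pass the tests.
print(pass_at_k(n=200, c=37, k=1))   # ~0.185
print(pass_at_k(n=200, c=37, k=10))  # ~0.87
```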
## Roadmaps, Perspectives, and Timelines

| Resource | Description | Links |
|---|---|---|
| Levels of AGI (Google DeepMind, 2023) | Proposes a framework with six levels from Emerging AGI to Superhuman ASI, along performance and autonomy axes. | Paper |
| Situational Awareness (Leopold Aschenbrenner, 2024) | Detailed 165-page analysis arguing AGI arrives by 2027 and ASI shortly after, with national security implications. | situational-awareness.ai |
| OpenAI's Planning for AGI and Beyond (2023) | OpenAI's public statement on its approach to safely developing AGI. | Blog |
| Anthropic Core Views on AI Safety (2023) | Anthropic's public position on AI safety risks and its research agenda. | Blog |
| International AI Safety Report (2025) | Independent scientific report on the state of AI safety, chaired by Yoshua Bengio; its interim version was presented at the AI Seoul Summit. | aisafety.gov |
| Metaculus AGI Forecasts | Community prediction platform tracking forecasted timelines for AGI/ASI milestones. | metaculus.com |
| AI Impacts | Research organization analyzing evidence on AI timelines, risks, and impacts. Its 2023 survey of 2,778 AI researchers (published 2024) forecasts a 50% chance of HLMI (Human-Level Machine Intelligence) by 2047. | aiimpacts.org, 2024 Survey |
| LessWrong / Alignment Forum | Community discussion hub for AI alignment research, ASI forecasting, and safety strategies. | lesswrong.com, alignmentforum.org |
| The Pause Letter (Future of Life Institute, 2023) | Open letter calling for a six-month pause on training AI systems more powerful than GPT-4. Signed by 33,000+ people. | futureoflife.org |
| Statement on AI Risk (CAIS, 2023) | One-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signed by Hinton, Bengio, and hundreds of researchers. | safe.ai |
| Stanford AI Index Report (Annual) | The most comprehensive annual report on AI progress: compute trends, investment data, benchmark saturation, the safety-capability gap, and the global policy landscape. Essential reading. | aiindex.stanford.edu |
| Epoch AI: Trends in Machine Learning | Rigorous data on training compute, dataset sizes, hardware efficiency, and cost curves for frontier models. The primary source for quantitative AGI timelines. | epochai.org |
## Neuroscience-Inspired Approaches to AGI
A growing body of research argues that understanding biological intelligence is essential to building artificial general intelligence. These resources bridge neuroscience and AI architecture design.
| Resource | Description | Links |
|---|---|---|
| Numenta (Jeff Hawkins) | Neuroscience-first approach to AGI based on cortical columns and the Thousand Brains Theory. Building machine intelligence on the principles of the neocortex. | numenta.com |
| NeuroAI: A Field Born from the Intersection of Neuroscience, Cognitive Science, and AI | Research direction applying neuroscience insights (memory consolidation, predictive coding, attention) to build more capable and general AI systems. | Nature |
## Alternative Architectures & Paths to AGI

The current "Transformer + Scaling" paradigm dominates, but it may not be the only -- or even the best -- path to AGI. These alternative approaches address fundamental limitations: the quadratic scaling of attention, the energy costs of dense computation, and the architectural gap between silicon and biological intelligence.

### Linear-Time Sequence Models

These architectures process sequences with linear (not quadratic) complexity, solving the memory and compute bottlenecks that limit Transformer context windows. If AGI requires reasoning over lifelong context, these models may be essential (a toy complexity comparison follows the table below).
| Model / Architecture | Description | Links |
|---|---|---|
| Mamba (Gu & Dao, 2023) | Selective State Space Model with input-dependent selection. Linear-time sequence modeling matching or exceeding Transformer quality at scale, with 5x higher throughput on long sequences. The leading Transformer alternative. | Paper, Code |
| Mamba-2 (Dao & Gu, 2024) | Unifies state space models with structured attention variants via State Space Duality (SSD). 2-8x faster than Mamba while maintaining quality. | Paper |
| RWKV (Peng et al., 2023) | "Reinventing RNNs for the Transformer Era." Recurrent architecture achieving Transformer-level performance with O(n) complexity and constant memory during inference. Open-source (14B+ parameters). | Paper, GitHub |
| Jamba (AI21 Labs, 2024) | Production hybrid: interleaves Mamba layers with Transformer attention layers plus MoE. 256K context; fits on a single 80GB GPU despite 52B total parameters. Proves hybrid architectures work at scale. | Paper |
| xLSTM (Beck et al., 2024) | Extended Long Short-Term Memory from the lab of original LSTM inventor Sepp Hochreiter. Exponential gating and matrix memory enable performance competitive with Transformers and SSMs. | Paper |
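
To make the complexity contrast concrete, here is a toy sketch -- a fixed diagonal state transition and a plain NumPy loop stand in for Mamba's input-dependent, hardware-aware parallel scan, so treat everything here as an illustrative simplification -- showing why attention is O(T^2) while an SSM-style recurrence is O(T):

```python
import numpy as np

def attention(q, k, v):
    """Vanilla attention: materializes a (T, T) score matrix -- O(T^2) time and memory."""
    scores = q @ k.T / np.sqrt(q.shape[-1])           # (T, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ v                                # (T, d)

def ssm_scan(x, a=0.9, b=0.5, c=1.0):
    """SSM-style recurrence: h_t = a * h_{t-1} + b * x_t, y_t = c * h_t.
    One state update per step -- O(T) time, O(1) state memory.
    (Mamba's 'selective' twist makes a, b, c functions of x_t.)"""
    h = np.zeros_like(x[0])
    ys = []
    for x_t in x:                                     # a single pass over the sequence
        h = a * h + b * x_t
        ys.append(c * h)
    return np.stack(ys)

T, d = 512, 16
x = np.random.default_rng(0).normal(size=(T, d))
print(attention(x, x, x).shape)   # (512, 16), but built via a (512, 512) matrix
print(ssm_scan(x).shape)          # (512, 16), never materializing anything T x T
```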
### Neuromorphic Computing

Brain-inspired hardware that uses spiking neural networks and event-driven processing. Neuromorphic chips consume 100-1000x less energy than GPUs for certain AI tasks -- potentially solving the energy constraint identified in the Key Metrics table. (A toy spiking-neuron sketch follows the table below.)
| Platform | Description | Links |
|---|---|---|
| Intel Loihi 2 | Intel's neuromorphic research chip with 1M neurons and 120M synapses per chip. Programmable spiking neural networks with on-chip learning. Lava open-source framework for neuromorphic development. | intel.com/loihi, Lava |
| BrainChip Akida | Commercial neuromorphic processor for edge AI. Event-based processing consuming <1W. Deployed in industrial and automotive applications. One of the few neuromorphic chips available commercially. | brainchip.com |
| SpiNNaker 2 (TU Dresden & University of Manchester) | Million-core neuromorphic supercomputer designed to simulate a billion neurons in real time. Built for computational neuroscience and brain-scale neural network simulation. | spinnaker.io |
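
A rough sketch of the event-driven idea behind these chips: a leaky integrate-and-fire (LIF) neuron only emits a spike (an event) when its membrane potential crosses a threshold, so downstream work happens only on spikes rather than on every clock tick. The parameters here are arbitrary illustrative values, not those of any specific chip:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Leaky integrate-and-fire: the potential v decays by `leak` each step,
    integrates the input, and emits a spike (1) when it crosses `threshold`,
    then resets. Event-driven hardware only does work on the sparse spikes."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = reset
        else:
            spikes.append(0)
    return spikes

# Mostly-quiet input produces sparse spikes -- the source of neuromorphic energy savings.
current = [0.05] * 20 + [0.4] * 5 + [0.05] * 20
print(lif_neuron(current))
```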
### Decentralized AI Compute

If AGI requires 10^28+ FLOP training runs, centralized infrastructure may not scale fast enough. Decentralized networks distribute training and inference across the globe, potentially democratizing access to AGI-scale compute (a minimal gradient-averaging sketch follows the table below).
| Network | Description | Links |
|---|---|---|
| Together AI | Decentralized cloud for running and training open-source AI. Together Inference Engine delivers high-throughput serving; Together GPU Cluster enables distributed training across geographies. | together.ai |
| Gensyn | Decentralized ML compute protocol. Verification layer ensures honest compute via probabilistic proofs -- solving the trust problem in distributed training. Backed by a16z. | gensyn.ai |
| Bittensor | Decentralized AI network with TAO token incentives. Miners contribute ML compute (training, inference); validators ensure quality. 32+ specialized subnets for different AI tasks. | bittensor.com, GitHub |
| Prime Intellect | Decentralized training infrastructure for frontier models. INTELLECT-2 demonstrated training a 32B-parameter model across globally distributed GPUs. Open-source. | primeintellect.ai |
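
A minimal, simulated sketch of the core primitive these networks rely on: each node computes gradients on its own data shard, and the nodes average (all-reduce) before updating, so no single datacenter holds the whole run. Real protocols layer verification, fault tolerance, and bandwidth tricks on top; everything below is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
w = np.zeros(2)

# Four "nodes", each holding a private shard of a toy linear-regression dataset.
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=64)
    shards.append((X, y))

def local_gradient(w, X, y):
    """Gradient of mean squared error on one node's private shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]  # computed independently per node
    avg = np.mean(grads, axis=0)                          # the all-reduce / averaging step
    w -= 0.05 * avg

print(w)  # approaches [2.0, -3.0] even though no node saw all the data
```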
## Recursive Self-Improvement & the Path to ASI
The core mechanism theorized to trigger an intelligence explosion: an AI system that can improve its own design, creating a more capable version that improves itself further, in a feedback loop surpassing human intelligence. This is the bridge from AGI to ASI -- and the most critical unsolved problem in AI safety.
**Why this matters:** If a system achieves even modest self-improvement capability, the gap between AGI and ASI may close in weeks or months rather than decades. Understanding and controlling this process is the central challenge of AI alignment.
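
The "weeks or months" claim is, at bottom, compound growth. A toy model makes the intuition visible -- every number here is an illustrative assumption, not a forecast:

```python
# Toy intelligence-explosion model: capability feeds back into the improvement rate.
# c = 1.0 means "human-researcher level"; r is the per-cycle gain per unit capability.
# Both values are arbitrary assumptions chosen for illustration.
c, r = 1.0, 0.05
for cycle in range(1, 101):
    c *= 1 + r * c          # more capable systems improve themselves faster
    if c >= 10.0:           # arbitrary "10x human" threshold standing in for ASI
        print(f"10x capability reached after {cycle} cycles")
        break
```

Because the growth rate itself grows, the curve stays flat for many cycles and then turns nearly vertical -- which is why takeoff-speed debates hinge on the unknown value of r.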
| Concept / Paper | Description | Links |
|---|---|---|
| Intelligence Explosion (I.J. Good, 1965) | The original formulation: "an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion.'" | Paper |
| Weak-to-Strong Generalization (Burns et al., OpenAI, 2023) | GPT-2 supervising GPT-4 as a proxy for "a human supervising superintelligence." Shows weaker models can elicit much of a stronger model's capability -- a key mechanism for scalable oversight (see the schematic sketch after this table). | Paper |
| Self-Rewarding Language Models (Yuan et al., Meta, 2024) | Models generate and evaluate their own preference data via LLM-as-a-Judge, creating an iterative self-improvement loop without human feedback. | Paper |
| The AI Scientist (Sakana AI, 2024) | Fully autonomous research pipeline -- idea generation, experiment design, execution, and paper writing. A prototype for AI systems that accelerate their own research. | Paper |
| Constitutional AI (Bai et al., Anthropic, 2022) | AI-generated critiques from a set of principles (a "constitution") used to train AI without human feedback at each step -- a form of automated alignment that scales with model capability. | Paper |
| Scalable Oversight via Debate (Irving et al., 2018) | Two AI agents debate each other while a human judge adjudicates, allowing oversight of superhuman AI through adversarial decomposition. | Paper |
| Iterated Distillation and Amplification (IDA) (Christiano, 2018) | Bootstrapping alignment: a slow but safe AI is "amplified" to solve harder problems, then "distilled" into a faster model, iteratively approaching superintelligent capability while maintaining alignment. | Blog |
| Reward Hacking & Specification Gaming | Comprehensive collection of examples where AI systems find unintended shortcuts -- a critical failure mode when self-improving systems optimize misspecified objectives. | Collection, GitHub |
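
For the Weak-to-Strong Generalization row above, here is a schematic sketch of the protocol shrunk to scikit-learn scale. The paper uses GPT-2-class supervisors and GPT-4-class students; this illustrates only the training setup, not the paper's results, and the models and dataset are stand-in assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-ins: a small "weak supervisor" and a larger "strong student".
X, y = make_classification(n_samples=4000, n_features=20, n_informative=5, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, train_size=0.25, random_state=0)
X_student, X_test, _, y_test = train_test_split(X_rest, y_rest, train_size=0.5, random_state=0)

weak = LogisticRegression().fit(X_sup, y_sup)     # weak model trained on ground truth
weak_labels = weak.predict(X_student)             # its imperfect labels play the "human" role

strong = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
strong.fit(X_student, weak_labels)                # strong student sees ONLY the weak labels

print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
# The question Burns et al. study: how much capability beyond its supervisor
# does the student recover, and what training tricks widen that gap?
```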