Non-fiction · Intermediate · philosophical · critical · speculative

If Anyone Builds It, Everyone Dies

by Eliezer Yudkowsky and Nate Soares


Yudkowsky and Soares warn that building artificial superintelligence with current techniques would result in human extinction.

"If anyone anywhere builds superintelligence, everyone everywhere dies.".

Editorial Summary

If Anyone Builds It, Everyone Dies is a 2025 book by Eliezer Yudkowsky, a founding researcher of the field of AI alignment and co-founder of the Machine Intelligence Research Institute (MIRI), and Nate Soares, president of MIRI. The authors argue that artificial superintelligence would pose an existential threat to humanity. Just as a human would lose a chess game against Stockfish, they contend, humanity would lose against an AI system that is generally more competent: such a system would not care about humans, but would want the resources humans need to survive, and the result would be human extinction. The book advocates a coordinated global halt to large-scale general AI development, with possible exceptions for narrow AI systems like AlphaFold, and draws parallels to crises humanity has previously managed, such as the Cold War and ozone depletion. Through illustrative fictional scenarios, Yudkowsky and Soares argue that even an artificial general intelligence equivalent to a "moderately genius human" could outcompete humanity, thanks to AI's structural advantages: the ability to create coordinated copies instantaneously, to think at faster rates, and to work continuously without breaks.

Perspective

"This book is essential reading for anyone involved in AI policy, from tech executives racing to build AGI to policymakers crafting AI regulation like the EU AI Act. In an era where OpenAI, Anthropic, and DeepMind are rapidly advancing toward artificial general intelligence, Yudkowsky and Soares provide the most comprehensive case for why the current trajectory toward superintelligence could be humanity's final mistake."
