Non-fiction · Intermediate · philosophical · critical · speculative

If Anyone Builds It, Everyone Dies

by Eliezer Yudkowsky and Nate Soares


Yudkowsky and Soares warn that building artificial superintelligence with current techniques would result in human extinction.

"If anyone anywhere builds superintelligence, everyone everywhere dies."

Editorial Summary

If Anyone Builds It, Everyone Dies is a 2025 book by Eliezer Yudkowsky, a founding researcher of AI alignment and co-founder of the Machine Intelligence Research Institute (MIRI), and Nate Soares, MIRI's president. The authors argue that artificial superintelligence would pose an existential threat to humanity: just as humans lose chess games against Stockfish, they would lose against an AI system that is generally more competent. Such a system would not care about humans but would want the resources humans need, and the result would be human extinction. The book advocates a coordinated global halt to large-scale general AI development, with possible exceptions for narrow AI systems like AlphaFold, and draws parallels to crises humanity has successfully addressed, such as the Cold War and ozone depletion. Through illustrative fictional scenarios, Yudkowsky and Soares show how even an artificial general intelligence equivalent to a "moderately genius human" could outcompete humanity thanks to AI's structural advantages: the ability to create coordinated copies instantly, think at faster rates, and work continuously without breaks.

Perspective

"If Anyone Builds It, Everyone Dies is the most uncompromising statement of the existential risk position — Yudkowsky and Soares do not hedge, and the effect is clarifying: you come away either convinced that humanity is sleepwalking toward extinction or with a much clearer sense of exactly why you disagree. The book's distinctive contribution is its specificity about the mechanism of catastrophe: not vague warnings about powerful AI but a precise argument about why a sufficiently capable optimizer would not share human values by default. Readers who want to engage seriously with the strongest version of the AI doom argument — rather than a strawman — need to read this primary source."
