Non-fiction · Intermediate · philosophical · speculative · critical

Superintelligence

by Nick Bostrom

4.00 · 23 readers (via Open Library)

Oxford philosopher Nick Bostrom's influential analysis of AI's existential risks and the control problem when machines surpass human intelligence

"We cannot blithely assume that a superintelligence will necessarily share our values or that it will be friendly.".

Editorial Summary

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom that fundamentally shaped global discourse on artificial intelligence safety. Bostrom, Professor in the Faculty of Philosophy at the University of Oxford and founding Director of the Future of Humanity Institute, examines how superintelligence could emerge through artificial general intelligence, whole brain emulation, or collective intelligence enhancement. The book argues that a superintelligence, if created, would be difficult to control and could take over the world in order to accomplish its goals; it introduces concepts such as the AI control problem, the intelligence explosion, and existential risk from machine superintelligence. The work influenced prominent figures including Elon Musk, Bill Gates, and Sam Altman, establishing the philosophical framework for today's AI safety movement and alignment research. It was particularly influential in raising concerns about existential risk from artificial intelligence, making it essential reading for understanding the theoretical foundations of current debates about GPT-4, Claude, and the race toward artificial general intelligence.

Perspective

"Superintelligence is the book that made AI safety a serious intellectual field — Bostrom's philosophical rigor forces you to take the argument seriously even if you ultimately reject it, and the experience of sitting with his scenarios is genuinely unsettling. The distinctive contribution is the control problem framing: by showing precisely why a sufficiently capable AI pursuing any goal might be dangerous regardless of what that goal is, Bostrom gave the field its central technical problem rather than just a vague anxiety. Anyone who wants to understand the intellectual foundation of today's AI safety movement needs to read this as primary source, not just summary."
