In a very readable way, Max Tegmark, a professor of physics at MIT, explains the changes, opportunities and risks resulting from the rapid development of artificial-intelligence technologies. Using the story of a fictitious Omega team, he also shows how risks can arise even from the originally purely positive intentions behind such a technology.

The book begins with a story. The so-called Omega team, a specialized, secret department of a large, wealthy corporation (the reader can insert a famous firm of their choice), is tasked with developing an artificial general intelligence called Prometheus. Prometheus is meant to covertly aid humankind and gradually resolve its greatest problems without people feeling that they are being controlled.

Gradually, the story takes us through the thoughts and work of the Omega team, which tries to anticipate and prevent every conceivable risk arising from Prometheus's operation. The team is all too aware of what could happen should such a system gain autonomous control over humankind.

I couldn't believe how inventively the Omega team reacted not only to risks I had thought of myself, but also to many others I hadn't even been aware of. Their approach is almost paranoid, but their work is successful. The world begins to change for the better, albeit gradually. Everything is described with credible arguments and realistic scenarios, for instance through fictitious companies, media outlets or patrons. Prometheus has no problem obtaining funding. It struck me that this reads like a guideline that could easily be put to use.

At exactly this moment, Max Tegmark interrupts the story and starts to guide the reader through the wider context and principles of how artificial intelligence functions and makes decisions. He doesn't stay on the surface, and he doesn't shy away from basic physics and philosophy. He gradually covers various views on fundamental philosophical dilemmas, for instance those related to self-driving cars, the replacement of jobs by automation, and the search for a new meaning of life.

His discourse on the formulation of artificial-intelligence objectives seemed essential to me. From an originally simple, positive goal, any sufficiently powerful artificial general intelligence system can derive an endless series of subsidiary goals that technically lead toward fulfilling the original objective, but under completely different conditions that need not include the survival of humankind.

In the conclusion of his book, Max Tegmark offers alternative scenarios for his story, speculating about the various ways things could fail. And again I was surprised by both the inventiveness and the credibility of these scenarios. Can a super-intelligent artificial-intelligence system free itself from confinement designed by a less intelligent human?

I can recommend the book to anyone who lacks the patience for more technical or overly philosophical reading but is willing to think more thoroughly about the future of humankind in the era of artificial general intelligence. You will really enjoy it. It is also available as an audiobook on Audible.

Regards, Petr Šrámek