“The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI.” –Max Tegmark, in Life 3.0
Reprinted with permission of the publisher, my review of Max Tegmark’s new book, from the November/December issue of Age of Robots.
Full issue available for download here.
LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE, by Max Tegmark. ©2017, a Borzoi Book published by Alfred A. Knopf, New York, 364 pp. Review by Mark Sackler.
Max Tegmark is not one to shy away from bold scientific pronouncements. The MIT cosmologist and physics professor is perhaps best known for his taxonomy of a four-level multiverse—some levels of which are predicted by certain theories, but none of which have been proven to exist. In his previous book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, he offered the astounding conjecture that the whole of reality may be nothing more than pure mathematics.
So what, if anything, makes Life 3.0: Being Human in the Age of Artificial Intelligence different? Unlike a universe of multiverses, or of pure mathematics, it deals with issues that are right in front of our faces. And his taxonomy of Life 1.0, 2.0 and 3.0 is not a mere conjecture that can't yet—or might never—be tested. Artificial intelligence is happening right in front of us, and we have a multiplicity of issues to deal with while we can still control it. Even as Stephen Hawking and Elon Musk shout loudly about the potential dangers of artificial intelligence, and many working AI researchers counter that those dangers are overblown and distorted, Tegmark is doing something to bridge hype and reality. Or at least he's trying to. The problem is that there is no consensus, even among the experts. He provides the reader with a wide range of scenarios. Many are not pretty—from a corporation using advanced AI to control global markets and ultimately governments, to a runaway AI that discards human intervention and rules the world itself. And yet, he asserts, every scenario he presents is viewed as a real possibility by at least one actual expert.
The ultimate answer is that we don't know. Tegmark is not so much warning against AI's development—which is probably impossible to stop—as advising us of its challenges, opportunities and dangers. He knows that the experts don't really know, and neither does he. But he's not afraid to present bold scenarios to awaken our awareness. He sums it up best in Chapter 5, "Intelligence Explosion":
The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we’ve spent this chapter exploring a broad spectrum of scenarios. I’ve attempted to be quite inclusive, spanning the full range of speculations I’ve seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control. I think it’s wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.
Tegmark makes it clear that, for all the unknowns, we need to proceed with caution. Bold conjectures and scenarios sometimes turn into realities. And some of these potential realities are not where we want to go. Decisions we make about machine intelligence in the next few decades will go a long way toward deciding the future of humanity—our evolution, or even our continued existence. He goes on to present possible scenarios for what we might look like in 10,000 and even 1 billion years. It's fascinating, but mind-numbing. We simply might not be able to control any of it.
You can follow Seeking Delphi and me on Facebook and Twitter.