Book Review: Life 3.0, Being Human In The Age of Artificial Intelligence, by Max Tegmark

“The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI.”–Max Tegmark, in Life 3.0

 

Reprinted with permission of the publisher, my review of Max Tegmark’s new book, from the November/December issue of Age of Robots.

Full issue available for download here.

LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE Max Tegmark ©2017, Borzoi Book published by Alfred A. Knopf, New York, 364p. Review by Mark Sackler

 

Max Tegmark is not one to shy away from bold scientific pronouncements. The MIT cosmologist and physics professor is perhaps best known for his taxonomy of a four-level multiverse—some levels of which are predicted by certain theories, but none of which have been proven to exist. In his previous book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, he offered the astounding conjecture that the whole of reality may be nothing more than pure mathematics.

So what, if anything, makes Life 3.0: Being Human in the Age of Artificial Intelligence different? Unlike a universe of multiverses, or of pure mathematics, it deals with issues that are right in front of our faces. And his taxonomy of Life 1.0, 2.0 and 3.0 is not a mere conjecture that can't yet—or might never—be tested. Artificial intelligence is happening right in front of us, and we have a multiplicity of issues to deal with while we can still control it. Even as Stephen Hawking and Elon Musk shout loudly about the potential dangers of artificial intelligence, and many working AI researchers counter that the dangers are overblown and distorted, Tegmark is doing something to bridge hype and reality. Or at least, he's trying to. The problem is, there is no consensus even among the experts. He provides the reader with a wide range of scenarios. Many are not pretty—from a corporation using advanced AI to control global markets and ultimately governments, to a runaway AI that discards human intervention to rule the world itself. And yet, he asserts, every scenario he presents has actual experts who believe it possible.

The ultimate answer is, we don’t know. Tegmark is not so much warning against its development—it’s probably impossible to stop—as he is advising about its challenges, opportunities and dangers. He knows that the experts don’t really know, and neither does he. But he’s not afraid to present bold scenarios to awaken our awareness. He sums it up best in Chapter 5, Intelligence Explosion:

The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we've spent this chapter exploring a broad spectrum of scenarios. I've attempted to be quite inclusive, spanning the full range of speculations I've seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control. I think it's wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.

Tegmark makes it clear that, for all the unknowns, we need to proceed with caution. Bold conjectures and scenarios sometimes turn into realities. And some of these potential realities are not where we want to go. Decisions we make about machine intelligence in the next few decades will go a long way toward deciding the future of humanity—our evolution or even our continued existence. He goes on to present possible scenarios for what we might look like in 10,000 and even 1 billion years. It's fascinating, but mind-numbing. We simply might not be able to control any of it.

You can follow Seeking Delphi and me on Facebook and Twitter.

Podcast #11: Will Artificial Intelligence Kill Your Job?

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”–Eliezer Yudkowsky

One of the hottest topics in foresight today is artificial intelligence. And while many of the most visible forward thinkers have been stressing over potential existential threats to all of humanity, there is a more mundane threat to all of us: our world of work. As automation on the assembly line replaces more and more unskilled labor jobs, there looms the threat of artificial intelligence taking on skilled, professional jobs. Will A.I. kill your job? Create a new one for you? Both? Neither? While the media is full of pessimism on this account, at least one prominent futurist is cautiously optimistic. Author, speaker and blogger Ian Pearson of Futurizon thinks that, at least in the short term, A.I. will create more jobs than it kills. I talk to him about these views, as well as the longer-range existential effects of A.I., in this week's Seeking Delphi podcast.

Links to relevant stories appear after the audio file and embedded YouTube video below.  A reminder that Seeking Delphi is available on iTunes, and has a channel on YouTube.  You can also follow us on Facebook.


Ian Pearson


Podcast #11: Will Artificial Intelligence Kill Your Job?

 

YouTube slide show of Episode #11

Ian Pearson’s blog post on A.I. and the future of work

News items:

Elon Musk’s Tesla to produce electric semi and pickup truck

European Space Agency warns on orbiting debris

Michael Abrash says full AR still 5-10 years away

Steve Wozniak on Google, Apple, and Facebook in 2075

Subscribe to Seeking Delphi on iTunes 

Subscribe on YouTube

Follow Seeking Delphi on Facebook @SeekingDelphi

Follow me on Twitter @MarkSackler