Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark

“The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI.”–Max Tegmark, in Life 3.0

 

Reprinted with permission of the publisher, my review of Max Tegmark’s new book, from the November/December issue of Age of Robots.

Full issue available for download here.

LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE, by Max Tegmark. ©2017, a Borzoi Book published by Alfred A. Knopf, New York, 364 pp. Review by Mark Sackler

 

Max Tegmark is not one to shy away from bold scientific pronouncements. The MIT cosmologist and physics professor is perhaps best known for his taxonomy of a four-level multiverse—some levels of which are predicted by certain theories, but none of which have been proven to exist. In his previous book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, he offered the astounding conjecture that the whole of reality may be nothing more than pure mathematics.

So what, if anything, makes Life 3.0: Being Human in the Age of Artificial Intelligence different? Unlike a universe of multiverses, or of pure mathematics, it deals with issues that are right in front of our faces. And his taxonomy of Life 1.0, 2.0 and 3.0 is not a mere conjecture that can't yet—or might never—be tested. Artificial intelligence is happening now, and we have a multiplicity of issues to deal with while we can still control it. Even as Stephen Hawking and Elon Musk shout loudly about the potential dangers of artificial intelligence, and many actual AI researchers counter that the dangers are overblown and distorted, Tegmark is doing something to bridge hype and reality. Or at least he's trying to. The problem is, there is no consensus even among the experts. He provides the reader with a wide range of scenarios. Many are not pretty—from a corporation using advanced AI to control global markets and ultimately governments, to a runaway AI that discards human intervention to rule the world itself. And yet, he asserts, every scenario he presents has actual expert believers in its possibility.

The ultimate answer is, we don't know. Tegmark is not so much warning against AI's development—it's probably impossible to stop—as he is advising about its challenges, opportunities and dangers. He knows that the experts don't really know, and neither does he. But he's not afraid to present bold scenarios to awaken our awareness. He sums it up best in Chapter 5, "Intelligence Explosion":

The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we've spent this chapter exploring a broad spectrum of scenarios. I've attempted to be quite inclusive, spanning the full range of speculations I've seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control. I think it's wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.

Tegmark makes it clear that, for all the unknowns, we need to proceed with caution. Bold conjectures and scenarios sometimes turn into realities. And some of these potential realities are not where we want to go. Decisions we make about machine intelligence in the next few decades will go a long way toward deciding the future of humanity—our evolution, or even our continued existence. He goes on to present possible scenarios for what we might look like in 10,000 and even 1 billion years. It's fascinating, but mind-numbing. We simply might not be able to control any of it.

You can follow Seeking Delphi and me on Facebook and Twitter.

Podcast #12: Artificial Emotional Intelligence

“Your intellect may be confused, but your emotions will never lie to you.”–Roger Ebert

In episode #11, futurist Ian Pearson spoke to his assertion that artificial intelligence will create jobs. One of the main reasons for this, he believes, will be the need to provide an emotional human interface between A.I. and its intended beneficiaries, be they patients, consumers, or business clients. But the field of affective computing is rapidly developing artificial intelligence that can read and respond to human emotion—systems with emotional intelligence of their own. In episode #12, I talk with author Richard Yonck. His new book, Heart of the Machine, provides a comprehensive overview of the current state of development in emotional A.I., while offering cogent scenarios projecting where it might lead us in the future.

Links to relevant stories appear after the audio file and embedded YouTube video below.  A reminder that Seeking Delphi is available on iTunes and PlayerFM,  and has a channel on YouTube.  You can also follow us on Facebook.


Podcast #12: Artificial Emotional Intelligence

 

YouTube slide show of Episode #12

Richard Yonck’s background on Intelligent-Future.com

Heart of The Machine on Amazon and Barnes and Noble.

Ray Kurzweil’s review of Heart of The Machine in the New York Times.

News items:

Atlanta sets goal to run on 100% renewable energy by 2035.

SpaceX plans to begin launching a global network of internet-providing satellites in 2019

University of Houston Master of Science in Foresight web page

Subscribe to Seeking Delphi on iTunes 

Subscribe on YouTube

Follow Seeking Delphi on Facebook @SeekingDelphi

Follow me on Twitter @MarkSackler


Podcast #8: Inventing The Local Future

“The best way to predict your future is to create it.”–Abraham Lincoln

“Think globally, act locally.”–Variously attributed

If you've never heard the phrase "think globally, act locally," you've probably been living under a rock. Its origin is murky, but the concept is best attributed to Scottish town planner Patrick Geddes and his 1915 book, Cities in Evolution. A century later, Neil Richardson and Rick Smyre have written the 21st-century blueprint for Communities of the Future in their 2016 volume, Preparing for a World That Doesn't Exist–Yet. In my Seeking Delphi podcast interview with Neil Richardson, we discuss many of the bold ideas in the book, including the authors' call for enabling what they call a "second enlightenment." We also discuss three key terms the authors coined—master capacity builder, polycentric democracy and creative molecular economy. Previous podcast episodes of Seeking Delphi have showcased technological quantum leaps that have the potential to cause radical upheaval of civilization. Richardson and Smyre point the way for small to medium organizations and communities to deal with it—to embrace it, use it, and grow with it. A means to invent the local future.

Links to relevant stories and organizations appear after the audio file and embedded YouTube video below.  A reminder that Seeking Delphi is available on iTunes, and has a channel on YouTube.  You can also follow us on Facebook.  The YouTube video of Robot’s Delight is embedded below.


Episode #8: Inventing The Local Future 28:50


(YouTube slideshow)

Preparing For A World That Doesn’t Exist–Yet, on Amazon and Barnes & Noble

Emergent Action

Communities of The Future

European biocomputing project

India, China and Japan to increase coal usage through the 2020s.

Facebook anti-suicide project


Subscribe to Seeking Delphi on iTunes 

Subscribe on YouTube

Follow Seeking Delphi on Facebook @SeekingDelphi

Follow me on Twitter @MarkSackler

Podcast #6, Technology: The Good, The Bad and The Existential.

“We’ve arranged a civilization in which most crucial elements profoundly depend on science and technology.”–Carl Sagan

Here Be Dragons: Science, Technology and the Future of Humanity
by Olle Häggström

Technology. We certainly do depend on it. It does great things for us, but it can also annoy us and, indeed, has the potential to do us outright harm. In this episode of Seeking Delphi, I talk to author Olle Häggström about some of the existential risks that technology may pose to humanity. His book, Here Be Dragons, is a thorough examination of a wide-ranging inventory of potential dangers, from the ones we currently know and worry about (climate change, nuclear war), to the ones that might yet be (bioterrorism, nanotechnology, artificial intelligence), and the ones Hollywood fantasizes about (alien invasion). Olle is a professor of mathematics at Chalmers University of Technology in Göteborg, Sweden. I called him there to conduct the interview for this episode.

Links to relevant stories appear after the audio file and embedded YouTube video below.  A reminder that Seeking Delphi is available on iTunes, and has a channel on YouTube.  You can also follow us on Facebook.


Episode #6, Technology: The Good, The Bad, and The Existential  25:41

(YouTube slideshow)

Bigelow Aerospace plans to orbit a lunar space station by 2020.

Blue Origin planning a lunar delivery service, a la Amazon.

Lawrence Berkeley lab doubles the number of materials potentially useful for solar fuels

Volkswagen unveils Sedric, its entry into the self-driving vehicle market. (It looks like a breadbox on wheels.)

Subscribe to Seeking Delphi on iTunes 

Subscribe on YouTube

Follow Seeking Delphi on Facebook @SeekingDelphi

Follow me on Twitter @MarkSackler

R.I.P.–Alvin Toffler

“The future always comes too fast and in the wrong order.”–Alvin Toffler

Alvin Toffler

The world lost its foremost futurist in the past week,  a man who was one of my heroes.   Alvin Toffler taught the world how to think about the future some 45 years ago.  It’s a lesson the world should relearn.   I read Future Shock way back in 1973–and have been thinking about it–and the future–ever since.

The quote above describes the cause of the disease–the human psychological malady–he called future shock. He made me think about the implications of a future that comes too fast and too hard for most people to comprehend or tolerate. It made me think about the dangers of thinking improperly about the future–or avoiding the thought of it at all. I'll go into detail on these issues–and the potential remedies thereof–in future posts. In the meantime, I take off my virtual digital hat to the man who just may have been the foremost futurist of all time.

Writing in the New York Times on July 6, Farhad Manjoo lays out clearly and concisely why Toffler’s ideas are so relevant today.  I highly urge you to read this piece, and to read Future Shock if you’ve never done so.  I intend to reread it now.  We have never needed foresight more than we do today.

For (mostly) lighter fare,  visit my other blog,  The Millennium Conjectures.

 

Welcome

"Never predict anything, especially the future."–Casey Stengel

 The Ol’ Perfessor knew what he was talking about.   Well, maybe he didn’t, but the advice is sage nonetheless.  It is notoriously difficult to predict anything in the future with consistent accuracy.  So why in the world would anyone want to become a futurist?  Why bother?  Well, to be blunt, that is exactly why!  Ignoring the opportunities and dangers of the future is what I like to call The Ostrich Syndrome.  Go ahead, hide your head in the sand.  The future is not going to go away;  it will get here.  And if we can’t predict it, there are certainly ways to prepare for it.  To prevent bad outcomes, or at least make them less likely.  To create good outcomes, or at least make them more likely.  And to be  better prepared to deal with whatever does come.

The sad fact is, we live in a short-term-oriented society with a short attention span. So what is the antidote to this malady? More thoughtful foresight. We have everything to gain and nothing to lose. Kurt Vonnegut compared science fiction writers like himself to the proverbial canary in the mine shaft, warning of weak danger signals before others perceive them. That's what futurists do, though those weak signals can herald opportunities as well as dangers as the world changes. That's what I aim to do with the rest of my life. I've enrolled in the University of Houston's Masters in Foresight program. I'm adding a foresight element to a friend's existing market research business. I'm becoming an advocate for taking a longer view of everything. Economics. Education. Environment. Government. You name it. This is my second blog, aptly named Seeking Delphi after the famed Oracle of Delphi. We can't predict the future, but we can anticipate the possibilities, avoid the catastrophes (or some of them) and create the opportunities.

See the about page for my background, and see the link below for a book review I published in 1999 in the Reed Elsevier journal Futures.   It provides a very succinct view of my personal philosophy on how we should view the future.    Here goes something.  See you tomorrow and beyond…

sackler review F31 April 1999

