News of The Future This Week: September 9, 2018

“A.I. doesn’t trust us, either.”–Rana el Kaliouby, CEO, Affectiva

 

Get ready for all A.I., all the time.  It’s the lion’s share of the news this week.  After all, Rana el Kaliouby says, among other things, that it should ultimately be pervasive.  In this week’s tech press, it pretty much is.  I did throw in a space story, if only for accent–and maybe to appease those who’ve had enough of machine intelligence.

While you’re reading about all this week’s future-related news, don’t forget that you can subscribe to Seeking Delphi™ podcasts on iTunes, PlayerFM, or YouTube (audio with slide show), and you can also follow us on Twitter and Facebook.

Artificial Intelligence–The second Affectiva Emotion AI Summit, held this week in Boston, focused on the theme Trust in AI. And it featured Rana el Kaliouby’s bold assertion that appears at the top of this page. (Link to video highlights of last year’s summit available at the bottom of the page).

Kai-Fu Lee, former president of Google China, had some words of warning for the U.S.  He says that China will overtake America in A.I. within five years.

Almost on cue, Peter Diamandis published a review of Lee’s new book, A.I. Superpowers.  It outlines what Lee defines as four distinct waves of A.I., and what it means to control each of them.

–A tech story with Elon Musk? No way!  Mr. Impossible said this week that his Neuralink company will “soon” announce a product that will link your mind directly to a computer; he believes this link will be necessary to maintain control of A.I.  There is a reason soon is in quotes.

–Residents of Norfolk, England, may be a bit nervous about the prospects of local police catching anyone who burglarizes their home.  It seems the local bobbies are using an algorithm to determine if they should even bother to investigate.

–One area where A.I. could really prove to be a boon is in drug development.  Anything that could cut into soaring pharmaceutical R&D costs would be welcome, as the Diamandis Tech Blog reports.

Artist’s conception: reusable space plane.

Space commerce–Hot on the heels of a Japanese university and a construction company announcing a partnership to begin space elevator experiments, another Japanese firm has announced a target of 2023 for the launch of a reusable space plane.

Highlight video from the first Emotion AI Summit, September 13, 2017

2018 Emotion AI Summit

“What will kill us first, artificial intelligence or natural stupidity?”–Habib Haddad

 

Do you trust A.I.?   No?  May I ask why not?

Self-driving car crashes, you say? Automation job-killing apocalypse? A complete takeover and destruction of humanity by rogue super-A.I.?

Well, consider this missive, from Affectiva co-founder and CEO Rana el Kaliouby:

“A.I. doesn’t trust us, either.”

Rana el Kaliouby addressing the second Affectiva Emotion AI Summit.

She made this astounding statement in her keynote address at the second Emotion AI Summit, held in Boston, Massachusetts, on September 6.  Trust in A.I. was the theme of this year’s meeting, and with good reason.  The meeting covered ethical and trust issues in A.I. in areas as diverse as autonomous vehicles, product marketing, and education.

Since last year’s inaugural summit, which Affectiva held at the iconic MIT Media Lab, the news has been full of not-so-encouraging stories about a possible dark future of A.I.  More than one economic pundit has predicted a massive kill-off of jobs by smart automated systems.  Elon Musk and, until his recent death, Stephen Hawking have been all over the media with warnings of an A.I. doomsday.

So, what’s with el Kaliouby’s position?  As the CEO of perhaps the foremost producer of emotion-savvy A.I. software, she obviously has motive to persuade us to trust A.I.  But why wouldn’t it trust us?

Perhaps the statement was hyperbole.  She explained it as the need for A.I. to trust that it is getting good input from us, so it can make the right decisions.  But until we have sentient, general A.I., it might better be interpreted another way: to trust A.I., we first must trust ourselves to provide the right programming and input for it.  As one presenter put it, the goal should not be to create good A.I., but A.I. that does good.

In her closing address, el Kaliouby put forth what she called a three-part contract with A.I.  Trust—mutually—is the first part.  We trust it and it trusts us.  The second part is pervasiveness.  She feels it needs to ultimately encompass virtually all our experience.  And third, it needs to be ethical; this assumes we can define what that is.

But perhaps the most telling comment came from a member of a panel of venture capitalists who discussed investing in A.I.

When asked what excites him the most and what scares him the most about A.I., Habib Haddad, of E14 Fund, said his greatest worry is, “What will kill us first, artificial intelligence or natural stupidity?”
