“The promise of autonomous vehicles is great.”–Dan Lipinski
“My opinion is that it’s a bridge too far to go to fully autonomous vehicles.”–Elon Musk
Wait–what? The man who thinks he can send humans on a one-way trip to colonize Mars within 10 years thinks fully autonomous vehicles are out of our reach? The Elon Musk quote above is from 2013. I would be surprised if he still feels that way–but who knows?
Segue to this morning, at the Intelligent Future interactive track at SXSW 2018 in Austin, TX. Nobody on the panel entitled “Who takes the wheel on self-driving car safety” suggested we won’t get there. But there was plenty of caution on how, how fast, and how far we go in doing so.
Most notable were comments by Andrew Reimer of MIT. He foresaw a gap of 50-100 years before fully autonomous cars–no human intervention–take over the lion’s share of driving, globally. His issues were not just technical; they included trust, complexity, infrastructure and good old-fashioned habit. He maintained that manual driving would probably never completely go away, citing the example of high-end sports car owners who want the enjoyment of driving.
“It might just be hobbyists,” he said, but made it clear that in some shape or form, the human factor is likely to survive for a very long time.
As for the issue of safety, Cathy Chase of Advocates for Highway and Auto Safety cited three critical areas of consideration for making self-driving cars safe. The first is a morass of no fewer than 400 different laws that could be enacted–now–to make all driving safer. As an example, she mentioned automatic emergency braking. Today it’s found mostly as a feature in semi-autonomous luxury vehicles. To make it as standard as seat belts would require federal regulation. Second is the need for a shift in public attitudes; there needs to be reassurance. A majority of the public–at least in the US–does not yet trust self-driving cars.** The third is to avoid issue amnesia. In a rush to mainstream autonomous driving, Congress could pass enabling laws prematurely, before all technical and regulatory issues are resolved.
**My two cents on the issue of trust. To better understand why there is mistrust, consider the cognitive bias that Nobel economics laureate Daniel Kahneman calls “what you see is all there is,” or WYSIATI. When one Tesla on Autopilot is involved in a fatal crash, it makes major national headlines. It’s right in front of us. Yet over 100 people in the US die every day in auto accidents caused by human error. Unless a celebrity is involved, none of them make news beyond their local area. Nobody pays attention unless they are directly affected. Statistically, at some point, self-driving vehicles are likely to be far safer than human-driven ones. But as long as the autonomous accidents make the big news, the public may not perceive them as safe.
A session on “Quantum Computing: Science Fiction to Science Fact” was somewhat misnamed. While the history of its theoretical origins was recounted by D-Wave’s Bo Ewald, the session really focused on the current trends and developments leading toward a 10-year or so future horizon.
Ewald recounted how iconic physicist Richard Feynman first imagined quantum computing in 1981, published the first paper on it in 1982, and gave a talk on it at Los Alamos in 1983. Ewald was head of computing at Los Alamos in 1983 and met Feynman at that talk. Sheldon Cooper, eat your heart out.
Ewald repeated this story for me in a brief interview which should be available as part of a Seeking Delphi™ minicast later this evening. I also asked him about the notion that we really don’t know for sure everything that quantum computing will be able to do. He agreed.
“For the past ten years, most of the discussion has been about quantum cryptography,” he said. “This has nothing to do with what Feynman was talking about. He was interested in modeling nature.” He cited materials science and system optimization as areas of great promise for the future.