Robots Just Need a Hug — and Your Car Keys

It’s a beautiful spring day, with flowers blooming and baseball games about to start. And none of that will matter after the robots we’ve made cast us aside.

Why would I think something so depressing? How could I not, when pop culture increasingly tells us to fear the machines we build, even as tech companies keep making those machines more capable and more autonomous?

Photo: From Ex Machina

Remember the idea of the self-aware machine that served as a loyal sidekick — like R2-D2? Today, that concept feels positively old-fashioned.

High-tech anxiety

The latest exhibit of AI anxiety is the film Ex Machina, a smart and deeply unsettling thriller about the interactions of a tech billionaire, one of his experiments in building artificial intelligence in humanoid form, and the young employee he asks to judge whether that creation can pass as human in conversation.

The movie does a strikingly good job of keeping it unclear who among those three parties has seen through the others.

The underlying message — that our techno-enthusiasm is leading us to drive beyond our headlights — reminded me of a similar but far less subtle bit of techno-dystopia, Dave Eggers’s 2013 novel The Circle.

That heavy-handed book — imagine if Ayn Rand decided that she resented silicon as much as socialism — doesn’t involve any humanoid robots. But the totality of the software and sensors marshaled by its fictional, metastasized fusion of Facebook and Google has a similar effect on human independence.

(The Circle also offers much the same view of the leadership of giant tech firms: Both the Circle of that book and Ex Machina’s BlueBook are run by supremely self-confident men who treat their fellow humans as programmable subroutines. The plausibility of such a portrayal should be a separate worry for the tech industry.)

Even some of the most public advocates of the possibilities of technology have suggested that we’re rushing into artificial intelligence: Inventor Elon Musk and physicist Stephen Hawking separately warned last year that AI could spell the doom of humanity.

You can drive my car

I had a great many hours to think about the merits of robots’ replacing humans during some quality time on Interstate 95 and the New Jersey Turnpike before and after a family get-together for Easter.

We completed our journeys without damage or injury, but I saw more than enough bad driving to be reminded that people are not always the best operators of the machines they create.

The government’s numbers bear that out. In 2013, 32,719 people died in car and truck crashes in the United States. Things have been worse — the death rate per vehicle mile traveled was about five times higher in 1965 — but they’re still bad.

Driverless cars should be able to do much better. They can’t drive drunk, tired, or angry. They come programmed to obey the known laws of physics instead of making up new ones on the spot. They don’t have hands with which to check smartphones.

And autonomous vehicles are almost here, not just in the form of Google’s well-publicized development of self-driving cars without even a steering wheel. Last week, a car running Delphi’s self-driving system completed a cross-country drive from San Francisco to New York without incident and with human intervention confined to driving on city streets.

A lesser sort of robot-driving intelligence already lurks behind the dashboards of high-end vehicles, using sensors and software to automate an increasing portion of highway driving.

I do not favor robots taking away human autonomy, as in those Hollywood scripts. But on I-95, the Turnpike, or the Beltway? Yes, please.

Can we learn to stop worrying?

The first time a driverless car injures somebody, through action or inaction, we will not hear the end of it in the news — just as a few deadly and frightening incidents of unintended acceleration got far more coverage than everyday, human-inflicted crashes.

And in the meantime, we may never hear the end of predictions of doom at the hands of our robot overlords.

“The seeds of alarm land on the most thoroughly fertilized ground of our imagination,” mused veteran futurist Joel Garreau. “Change in the 21st century is so great it feels like the ground is moving beneath our feet. At such times, rational primates look for something solid to hang on to.”

Garreau, my former Washington Post colleague and now a “professor of law, culture, and values” at Arizona State University’s Sandra Day O’Connor College of Law, has been thinking about these things since at least R2-D2’s onscreen debut.

Garreau likes to talk about how too much of this conversation focuses on either the hell scenario (runaway technology reduces all life to gray goo) or the heaven scenario (we reach the singularity of artificial intelligence transcending our own and then find ourselves watched over by machines of loving grace).

The safer and more likely bet is on the “prevail” scenario. As Garreau wrote to me on Saturday: “Newly connected, ornery, cantankerous, surprising humans create astonishing, bottom-up flock-like solutions, as we’ve done for millennia.”

That’s a scenario I can get behind too. But maybe it wouldn’t hurt to avoid making robots that look too much like humans and to ensure that each comes with an effective and clearly labeled off switch.

Email Rob at [email protected]; follow him on Twitter at @robpegoraro.