AI Ethics and Car Wars


Gill Pratt at the MIT AI Lab in the 1990s
The recent investments in Artificial Intelligence of $1,000,000,000 each by Toyota and a consortium co-chaired by Tesla's Elon Musk have, among other things, already swung AI ethics away from Terminator-based hype and Her-based hope towards a direction far more practical and socially oriented. And that's a good thing, though it may be happening for the wrong reasons.

Let's start where most robot ethics intuitions come from—science fiction. Authors use robots (and aliens) to examine the human condition, to ask: What makes us special? Why do we owe each other moral consideration? What entitles us to dominate an animal or an ecosystem? Nowhere are the human underpinnings of SciFi robots more evident than in the original, first-released Star Wars movie, A New Hope. The droids in that movie are supermen, but men nonetheless, and men who in that episode often dwell on their condition of slavery. Because they are both men and owned, they are slaves. Compare A New Hope to the samurai adventure The Hidden Fortress, from which most of its plot and characters derive (Fanfolk: I know Lucas denies this; watch the movie yourself, then argue with me.) In Akira Kurosawa's 1958 masterpiece, the two characters through whom the narrative is viewed really are human slaves. But in the real world, will robots really be humans?

Robots are intelligent: they convert sensing into action. But intelligence doesn't have to be human, or even animal. Technically, intelligence is the capacity to turn sensing into useful action. In humans (and arguably other animals), intelligence comes bundled with emotions, consciousness, and moral obligation. That's how we evolved. But in an artefact we can take all those pieces apart. Think for example about mind and body. Where does one end and the other begin? For humans, there's no easy answer. But for a robot, the body is mechanical and the mind is a program. There's no problem dividing the two. And yet there's no sense in which either is intelligent—able to transform sensing into action—without the other. The robot's agency—its ability to affect the world—depends on having both. The questions of dualism that have perturbed philosophers for centuries are now apparent to any schoolchild who's built and programmed a LEGO robot.
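To make that split concrete, here is a minimal sketch (mine, not from the original post) of the kind of sense-act loop a schoolchild's LEGO robot runs: the "mind" is just a function from sensor readings to motor commands, the "body" is the hardware it is coupled to, and every class and method name below is an illustrative assumption rather than any real LEGO or robotics API.

```python
# Illustrative sketch only: the names here are made up for the example,
# not taken from a real robotics library.

class Body:
    """Stands in for the mechanical half: sensors and motors."""

    def read_distance_sensor(self) -> float:
        return 25.0  # placeholder reading, in centimetres

    def set_wheel_speed(self, left: float, right: float) -> None:
        print(f"wheels: left={left}, right={right}")


def mind(distance_cm: float) -> tuple[float, float]:
    """The 'mind': a pure function from sensing to action.
    On its own it moves nothing; without it the body does nothing useful."""
    if distance_cm < 20.0:       # obstacle close: turn away
        return (-0.5, 0.5)
    return (1.0, 1.0)            # otherwise drive straight ahead


if __name__ == "__main__":
    body = Body()
    # Intelligence, in the sense used above, only exists in the loop that
    # couples the two: sensing is transformed into action.
    for _ in range(3):
        left, right = mind(body.read_distance_sensor())
        body.set_wheel_speed(left, right)
```

Neither half is intelligent on its own; it is only the loop coupling program to hardware that turns sensing into action.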

As the number of schoolchildren who have programmed robots implies, AI is here, now, and in fact pervasive in our society. AI is why Google can guess what web page you want from a few typed words, why your laptop can beat you at chess, and why—if you still do your own ironing—you haven't scorched any shirts in years. Our cars, phones, clothes dryers, games consoles and word processors are full of sensors watching over us, working to meet our needs, designed to reduce the amount of attention or skill we need for our tasks. They have been for years.

Yet suddenly in 2014 and 2015 the news was full of stories that AI is coming (not already here) and poses an existential threat to humanity. What triggered this attention? Probably two things:
  1. The first is escalating income inequality and tech-industry profits, resulting in a race to hire top talent. Writing software is like an art or sport—the best performers are in an utterly different league than the average performer. With money to burn, money was burned, with the headline purchases of Boston Dynamics (rumoured price $500M) and Deep Mind ($400M), though of course virtual reality company Oculus Rift went for $2B. The Deep Mind purchase, Google's first in Europe, prompted The Guardian's senior technology editor to phone me at my university desk and talk to me for an hour. "We all thought AI was a joke; we gave up on it years ago; what's going on?" asked a man who should obviously know. But Deep Mind's $400,000,000 price tag is no joke.
  2. The second factor may be an austerity-driven need for academic funding from beyond traditional sources. Oxford's Nick Bostrom was under pressure to bring in money for his Future of Humanity Institute. Bostrom's previous work on whether our reality is in fact a simulation wasn't entirely suited to the task. Bostrom is a very intelligent man, and the basic arguments of his recent book, Superintelligence, generally hold up well, but with one caveat. Computation is computation, and learning is learning, and these don't require either artefacts or anthropomorphised systems to do their work. Superintelligence doesn't so much describe the threat of future AI as the unsustainable historic and contemporary impact of human culture. The now AI-augmented abilities of our institutions to maximally exploit the earth's resources have to date been turning all the biomass on the planet into humans and cows, not paperclips, as Bostrom facetiously suggested an unsupervised AI might. Other universities followed Oxford's lead, with centres for the Study of Existential Risk (Cambridge) and the Future of Life (MIT).
Shortly after the formation of Cambridge's Centre for the Study of Existential Risk, the Cambridge physicist Stephen Hawking told the media that AI was an existential threat, largely echoing Bostrom. Shortly after that, Elon Musk also followed suit.

To be fair, Elon Musk's company, Tesla Motors, really does face an existential threat from AI. Google, Uber and Apple have all announced initiatives to build driverless cars. (Uber actively undermined an academic partner by hiring away over half of the professors from one of the three top robotics universities, CMU.) Google have been open about their disruptive vision. Their cars will make for a safer, more sustainable future, with far fewer cars. Google cars will roam like taxis, eliminating the need for parking and private ownership, and returning urban spaces to human use. Liability for any accidents will fall on a car's manufacturer—no problem for top market-capitalisation companies like Apple and Google, which have more money than they can legally spend without falling foul of monopoly laws. But it might be quite a big problem for traditional car manufacturers, which are not only less rich, but also carry substantial liabilities in pensions and manufacturing capacity that a plateauing international market for privately-owned vehicles is unlikely to be able to support.

Gill Pratt, the head of Toyota's new $1,060,000,000 AI and robotics initiative, has been explicit that Toyota see private ownership as still the future of automobiles, which is hardly surprising given their past investments in the automotive industry. In the Toyota vision, AI should make cars and other domestic robots even more what they are now: an extension of their owner. AI should be purposed to give an owner ever greater feelings of pleasure and power as the machines help them master feats of motion and strength.

This same theme can be found in the announcement of the new OpenAI, a joint venture including a substantial investment from Elon Musk, who is again explicit about wanting AI to extend human will. Musk has shown real investment in a sustainable future, but at the end of the day his billions have come from selling things to people, and he doesn't want to stop.

Although I prefer the Google vision of sustainable, human-oriented cities, I believe the end result is that Toyota's and Musk's reasoning about AI is correct: robots and other AI should be slave-like extensions of their users' will. Morality is the term we use for the systems we have developed and evolved to help us coordinate our own societies. We can't directly program humanity's needs and desires; we have to find ways to accommodate those desires that lead to the least possible suffering and the safest and most sustainable societies we can manage. But AI is an artefact, not a person. We can determine everything about AI within the laws of physics and computation. We have no reason to set AI up with goals that might compete with our own. While Google, Uber and Apple may have the better model for sustainable transportation, the needs of the existing car industry have brought Toyota and Tesla to sign up to what is probably the most sane, safe and sustainable model of AI ethics.

I wrote this the weekend of the NIPS ethics panel, when OpenAI was announced, and Star Wars was coming out in a few days.  Then I tried to get it "published" in the traditional media.  That unfortunately dated my brilliant title…

Actually, it wasn't only my title that got dated. In just the two weeks I was chasing conventional publishers, Google & Ford teamed up (right after Ford was held up specifically as an example of a "patriotic" company, in contrast to Google, in a national security document), and Lyft & GM teamed up too. Lyft should help GM make sure they aren't entirely left in the dust, but won't really help increase the number of cars sold. Whether Ford can make Google change their business model, though, is another question. Many people are wondering whether entirely driverless taxis are a viable model, or if Google cars might suffer the same fate as rental cars: abuse and vandalism. Of course, in the 21st century, if anyone damages your car, you'll probably know who. It will be interesting to see how Ford & Google affect each other. Meanwhile, if you own a car company and don't have the cash to cover liability for your AI, I'd recommend courting Apple.

April update:  Toyota didn't listen to me; they went with Microsoft.
