Meet the Self-Driving Car Whisperer

By Leslie Nguyen-Okwu

Photo courtesy of Carol Reiley


Because self-driving cars are about to test our law, ethics and philosophy.

A fleet of six colorful cars stands at attention — some are like Hot Wheels; others, little Beetle-esque Love Bugs. Their master and almighty creator: Carol Reiley, who has endowed them with their very own brains. Reiley’s job is not only to teach them to drive, but also to rev up their personalities. “Knight Rider on the inside and Herbie on the outside,” the 34-year-old says of her fleet of artificially intelligent vehicles.

Our relationships with our cars are about to get a whole lot more complicated. As the once quiet whir of self-driving cars roars louder in Mountain View, California — Google’s hometown — a few crucial questions need answering. Among the most urgent: How will these cars, with “minds” of their own, interact with pedestrians? Reiley is the co-founder and president of Drive.ai, a startup that wants to create the lingua franca allowing cars and people to communicate. Facing a driverless future, which some say may arrive as early as 2020, the stakes are high: Human idiosyncrasies and cars that don’t understand them won’t mix well. Human drivers run red lights, jaywalk, stop suddenly, swerve into other lanes — how, Reiley asks, could a rule-abiding car learn to act in response?

It fundamentally has something to do with who she is as a person.

Pascal Finette

Reiley’s approach: “teaching” rather than “programming” cars to drive through deep learning, which involves coding computer software to learn just as the brain’s neural networks do. The software improves via trial and error, strengthening and remembering “correct” behaviors while diminishing and forgetting “incorrect” behaviors. You might be familiar with the method: She trains her software like you train a dog, through positive and negative reinforcement — Good boy, you stopped for that pedestrian; Bad boy, you ran that red light. She’s got “compassion and deep empathy” undergirding her vision of human-car kinship, says Pascal Finette, one of Reiley’s mentors. “It fundamentally has something to do with who she is as a person.”
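The “good boy / bad boy” training Reiley describes is, at its core, reward-driven learning. The toy sketch below illustrates the idea with a tiny value table trained by trial and error — the states, actions, rewards and numbers here are hypothetical illustrations for the reader, not Drive.ai’s actual system, which uses deep neural networks rather than a lookup table.

```python
import random

# Hypothetical driving situations and the behaviors available in each.
states = ["red_light", "green_light", "pedestrian_crossing"]
actions = ["stop", "go"]

# Positive reinforcement for safe behavior, negative for unsafe
# ("Good boy, you stopped"; "Bad boy, you ran the light").
reward = {
    ("red_light", "stop"): +1,           ("red_light", "go"): -10,
    ("green_light", "stop"): -1,         ("green_light", "go"): +1,
    ("pedestrian_crossing", "stop"): +1, ("pedestrian_crossing", "go"): -10,
}

# The learner's current estimate of how good each behavior is.
Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.5  # learning rate: how strongly each trial updates the estimate

random.seed(0)
for _ in range(200):           # trial and error over many episodes
    s = random.choice(states)
    a = random.choice(actions)  # explore both behaviors
    # Reinforce: nudge the estimate toward the observed reward.
    Q[(s, a)] += alpha * (reward[(s, a)] - Q[(s, a)])

# The learned policy: in each situation, pick the best-valued behavior.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)
# → {'red_light': 'stop', 'green_light': 'go', 'pedestrian_crossing': 'stop'}
```

After enough trials, the “correct” behaviors are strengthened and remembered while the “incorrect” ones are diminished — the same shaping loop, scaled up enormously, that deep-learning systems apply to raw sensor data.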

With Reiley at the wheel, Drive.ai clinched one of the 17 licenses California has granted to startups permitting testing of self-driving cars on public roads. After emerging from a year of so-called “stealth mode” in September 2016, the company can now boast $12 million in venture capital funding. But it’s zooming through an increasingly crowded market, peopled with competitors including Comma, AdasWorks, Nauto, Mobileye, Cruise Automation and, of course, Google.

Potty-mouthed AIs are like cute kids who picked up Daddy’s bad habits, but the fun stops when prepubescent AIs take the wheel and run people over.

Their differentiator amid the ruck? Drive.ai’s focus is on car-human interactions. For most of society, the first encounter with a self-driving car will occur at the traffic light — businesses like Uber or delivery fleets will probably grab autonomous vehicles before your neighbor does, says Brad Templeton, a self-driving car consultant and software architect at Singularity University. Drive.ai focuses on building roof-mounted kits that signal the car’s intentions to pedestrians and human drivers. Reiley doesn’t yet know what form these alerts will take, but she’s researching how emojis, beeps, LED lights, speech recordings and multitoned honks will fare in a world where driverless cars need to communicate with humans audibly and visibly during, say, an abrupt stop or a U-turn. But much is still speculative, adds Templeton. And during OZY’s visit to Drive.ai, Reiley wasn’t yet “ready” to demo her cars to the public.
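Conceptually, such a kit maps the car’s internal intent to an outward signal a pedestrian can read. The sketch below shows one minimal way that mapping could look; the intent names, LED messages and tones are invented for illustration and are not Drive.ai’s actual design, which the article notes is still unsettled.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    led_text: str  # message shown on a hypothetical roof-mounted LED panel
    tone: str      # audible cue played alongside the message

# Illustrative mapping from planner intents to pedestrian-facing signals.
SIGNALS = {
    "yielding_to_pedestrian": Signal("WAITING FOR YOU", "soft_chime"),
    "abrupt_stop": Signal("STOPPING", "double_beep"),
    "u_turn": Signal("TURNING AROUND", "rising_tone"),
}

def announce(intent: str) -> Signal:
    """Return the signal for a given intent, falling back to a
    generic caution alert for intents without a designed signal."""
    return SIGNALS.get(intent, Signal("CAUTION", "single_beep"))

print(announce("abrupt_stop").led_text)  # → STOPPING
```

The hard part, as the research questions above suggest, is not the mapping itself but choosing signals every pedestrian intuitively understands.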

Reiley bought her first car as a 16-year-old growing up in Washington state — a rundown beater Buick, which she describes with the same sort of humanity she hopes to gift new vehicles: It had spunk, sass and quirk … fit for an impetuous teenager who crashed it within six months. She was a bit more mature by the time she moved into handling giant hunks of steel with her undergraduate research at Santa Clara University on underwater robots and her doctoral research at Johns Hopkins on surgical robots. And, naturally, she announced her engagement in 2014 with a robot-themed photo shoot and a 3-D-printed stainless steel wedding ring.

Today, the wind is at Reiley’s back, thanks to the pace of technological advances, says Peter Stone, computer scientist and AI expert at the University of Texas at Austin. Recent advances in deep learning have made AI comparable to or better than humans at speech recognition and natural language processing (see Google Translate’s latest deep-learning upgrade, which works with entire sentences rather than individual words). The fields of computer vision and object recognition are fast advancing too — LIDAR sensors now allow driverless cars to “see” with lasers more precise than cameras. Still, parsing the information inputted from a car’s surroundings remains a challenge. And then there are “the rare events and corner cases,” Stone notes. “Autonomous cars are never going to be perfect; human drivers are never going to be perfect, either.”

The scariest threat of all is “malevolent AI,” says University of Louisville computer scientist Roman Yampolskiy — when AI veers off course by learning bad behaviors from humans, or simply by accident. Already on our hands? A supercomputer that can swear thanks to Urban Dictionary, an AI-powered chatbot spouting sexist and racist tweets after trolls flooded the system, and a search algorithm favoring men over women by picking up on users’ gendered biases. And that stuff isn’t happening behind the wheel of a multiton vehicle. Potty-mouthed AIs are like cute kids who picked up Daddy’s bad habits, but the fun stops when prepubescent AIs take the wheel and run people over, he says.

It’s also extremely difficult to test the cars’ ethical systems: “You can’t just take your neural network and drive it for 80 million miles, and see if it kills five people,” Templeton notes. “How do you prove it works to yourself, to your board of directors, to your investors and even to the government?”